December 22, 2014

Tech News

Tech News issue #52, 2014 (December 22, 2014)


December 22, 2014 12:00 AM

December 19, 2014

Wikimedia Tech Blog

Wikimedia Technical Operations joins the Phabricator party

The Wikimedia Foundation Technical Operations team (“Ops”) is moving its processes to phabricator.wikimedia.org, joining the consolidation of multiple developer and project management tools into a single Wikimedia technical collaboration platform. This week, we migrated 5,986 tickets from RT to Phabricator, and now most requests to Ops can be initiated by creating a task there. The access-requests@ and procurement@ queues are still handled in RT; they will be moved to Phabricator as well at a later stage.

This move comes a few weeks after the migration of 73,000 tickets from Bugzilla to Phabricator. We are now tracking more than 83,000 tasks, of which about 18,100 are open. Yes, we have a lot of work to do! And it keeps growing…

Approximate volume of tasks hosted in Wikimedia Phabricator since its opening.

Migrating RT

RT is a tool based on Request Tracker, used by the Operations team for managing day-to-day tasks. It was an efficient tool for processing tickets, based on email queues and complemented by a web interface at rt.wikimedia.org. However, while RT served dedicated professionals interacting with it on a daily basis, for most contributors outside the Ops team it was complex, obscure, almost arcane. This was becoming a serious obstacle to the openness and community engagement that the Operations team wants to increase in its many not-top-secret activities.

Most of the migrated RT tickets are accessible only to a limited audience with NDAs in place, at least at the time of submission and processing. The move to Phabricator makes it easier to create and keep public the information that doesn’t require any confidentiality, and allows these tasks to be linked directly to bug reports and other activities of development teams. This change also simplifies the planning work of the Operations team, who can now organize projects with workboards and personal backlogs, just like the other teams.

Processes migrated

Are any of these boxes giving you trouble? Create a task! (“Eqiadwmf 9051” by RobH, under CC-BY-SA-3.0)

What was RT used for, and which processes were migrated? The following types of requests are now handled in Phabricator:

  • All kinds of issues related to Wikimedia servers and networking;
  • Tasks requiring physical access to Wikimedia’s data centers;
  • Security and maintenance announcements, also from external providers;
  • Technical topics with legal implications;
  • Purchase and configuration of web domains;
  • Decommissioning of servers.

Requests to the Operations team are submitted through the regular process of creating a task in Phabricator and associating it with the Operations project.
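For scripted workflows, the same request can also be filed through Phabricator's Conduit API (the `maniphest.createtask` method). The sketch below shows the general shape of such a call; the API token and project PHID are placeholders, not real values, and the helper names are illustrative.

```python
# Sketch: filing a task via Phabricator's Conduit API instead of the web form.
# The API token and project PHID below are placeholders, not real values.
import json
import urllib.parse
import urllib.request

PHAB_API = "https://phabricator.wikimedia.org/api/maniphest.createtask"

def build_task_params(api_token, title, description, project_phids):
    """Flatten task fields into Conduit's form-encoded parameter style."""
    params = {
        "api.token": api_token,
        "title": title,
        "description": description,
    }
    # Conduit expects list parameters as indexed keys: projectPHIDs[0], ...
    for i, phid in enumerate(project_phids):
        params["projectPHIDs[%d]" % i] = phid
    return params

def create_task(api_token, title, description, project_phids):
    """POST the task to Phabricator and return the decoded JSON response."""
    data = urllib.parse.urlencode(
        build_task_params(api_token, title, description, project_phids)
    ).encode()
    with urllib.request.urlopen(PHAB_API, data=data, timeout=30) as resp:
        return json.load(resp)

params = build_task_params(
    "api-token-here",            # a Conduit API token from your settings page
    "db1001 is unreachable",     # hypothetical example request
    "Details of the problem...",
    ["PHID-PROJ-operations"],    # placeholder PHID for the Operations project
)
print(params["projectPHIDs[0]"])
```

For most contributors the web form remains the simplest route; the API is mainly useful for monitoring systems that open tasks automatically.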

There are two queues that have been kept in RT for now, because of their more complex processes and the usually more sensitive information that they contain:

  • access-requests, used to grant server permissions to professional or volunteer contributors
  • procurement, used to handle hardware purchases and other contracts with external providers

You can learn more about RT on the wikitech wiki, and about the migration and its remaining bits on mediawiki.org.

Quim Gil, Engineering Community Manager, Wikimedia Foundation

by Guillaume Paumier at December 19, 2014 11:55 PM

Sumana Harihareswara

Join Me In Donating to Stumptown Syndicate and Open Source Bridge

“Woman laughing alone with salad” by reidab, CC BY-NC-SA (https://secure.flickr.com/photos/reidab/7674996428/)

I'm donating up to $15,000 to the Stumptown Syndicate -- depending on how much you are willing to match by December 29th. Please join me by donating today and doubling your impact!

I really love Open Source Bridge, which Stumptown Syndicate runs. I've spoken there every year since 2010, and it's the tech conference that has imprinted itself on my heart -- informative technical talks, inspiring ideas that help me improve how I do my work, and belly laughs and great food (see right). I love that I can tell friends "Come to OSB!" without having to add "but watch out for..." the way I do with so many other conferences. Hospitality lives in the DNA of Open Source Bridge, so it's a place where people from different projects and backgrounds can share their experiences as equals. Wikimedians, Linux developers, Mac users, designers, hardware hackers, managers, knitters, teachers, and everyone from Fiona Tay to Ward Cunningham swap tips and inspire each other. I especially appreciate that Stumptown Syndicate curates an inclusive all-genders tech conference where I'm never the only woman in the room; in fact, in 2014, half the speakers were women.

I don't live in Portland, so I don't get to benefit directly from most of Stumptown Syndicate's events. But they document their processes to make a playbook, and they built and improve open conferenceware and an open source shared calendar, all of which contribute to the infrastructure of inclusion for everyone to reuse.

With some more cash in the bank, the Syndicate can look at adding childcare to its events, improving access and scholarship options for low-income participants and guest speakers, and improving the audiovisual experience (with faster video processing or transcripts/captioning).

So: I'll match donations starting today and ending on December 29th, whether corporate or individual, one-time or recurring memberships. Please donate now to help raise $30,000 for Stumptown Syndicate and Open Source Bridge!

December 19, 2014 05:53 PM

Royal Society of Chemistry - Wikimedian in Residence (User:Pigsonthewing)

Free 'RSC Gold' accounts for Wikipedia editors

Yesterday, we launched the offer of 100 free 'RSC Gold' accounts for Wikipedia editors and those working on its sister projects. 'RSC Gold' is a bundled subscription to all of the journals and databases we publish, including all their back issues - back to the 1840s in some cases. As you can imagine, this fantastic resource is a treasure trove of facts and figures for Wikipedia editors; not least since Wikipedia requires all statements added to its articles to be cited to reliable sources - and you can't get much more reliable than a Royal Society of Chemistry journal! To prevent abuse, the offer is only open to Wikipedia editors who have had an account for at least a year, and who have made at least 1,000 edits. Evidence of work on chemistry-related topics is also required. Accounts may be requested via the Wikipedia Library project.

Posted by Andy Mabbett
Dec 19, 2014 11:47 am

December 19, 2014 10:47 AM

Gerard Meijssen

#Wikimedia Foundation - does not get one € of mine

The WMF fundraiser will be a success. I hope it will be, and when you feel like it, please do donate. You can donate by credit card, by money transfer, or by PayPal. Please do.

However, I will not make a donation, except for the donation of my time. Transferring money is superbly organised in Europe by comparison. As a person you can transfer money without cost within Europe. All it takes is knowledge of the IBAN to transfer to and the name of the entity you are transferring money to. Easy peasy.

When you have a website with customers in the Netherlands, you can use a system called iDEAL, which has the virtue of being cheap. The Wikimedia Foundation does not support cheap.

PayPal and credit cards cost money; they deduct fees from the amount given.

In Great Britain an organisation collects money for the WMF for a fee. At the same time there is a UK chapter that could easily organise this for the WMF and in the process hone its skills in fundraising, a skill the WMF wants it to develop anyway.

I refuse to pay these additional costs.

While I hope the WMF collects all the money it wants in the USA, it effectively hands over ownership to the USA and its way of working. There is little consideration for the rest of the world, because if there were, they would actively welcome more monetary contributions and partner with the chapters in raising the funds needed for our movement, and not only for the Wikimedia Foundation.

by Gerard Meijssen (noreply@blogger.com) at December 19, 2014 07:28 AM

December 18, 2014

Wikimedia Foundation

Interview: Former Swedish Member of Parliament now lobbies for Wikimedia Sverige in Brussels

“Wikimedia Sverige – årsmöte 2009 Karl Sigfrid” by Hannibal, under public domain

Swedish politician Karl Sigfrid discusses his lobbying work for digital rights and Wikimedia in the European Union. Sigfrid was a Member of Parliament (the Riksdag) in Sweden for the Moderate Party from 2006 to 2014. He now works as a volunteer for Wikimedia Sverige in Brussels, in coordination with the office in Stockholm.

Hello Karl Sigfrid,

You are about to move to Brussels to help with Wikimedia Sverige’s lobbying efforts for a year – and you are doing this as a volunteer! We would love to hear a bit more about this.

Q: Tell us a little bit about yourself? Who are you?
A: I have been working in the Committee on the Constitution in the Swedish Parliament for eight years. During this time, I have come to realize that almost all new issues that affect free speech, privacy and other fundamental rights are in one way or another tied to technical innovation and to the development of the Internet. Therefore, it has made sense for me to focus my efforts on digital rights.

Q: Great! What made you decide to work with Wikimedia’s lobbying efforts, out of all things?
A: I view this as an opportunity to spend even more of my time doing what I love: engaging in the issues that determine what the future will look like. Free access to information is perhaps the most important element in a successful society. Free access to knowledge also means equal access to knowledge and new opportunities for those who are shut out from the traditional educational systems.

Q: How can the Wikimedia movement engage in lobbying, while Wikipedia claims to be neutral?
A: We have to distinguish between the encyclopedia Wikipedia and the organization Wikimedia. That Volvo’s idea is to build safe cars doesn’t mean that Volvo as a company in every instance has to avoid risk. That the information in Wikipedia is neutral does not, along the same lines, mean that Wikimedia as an organization always must remain neutral. Wikimedia’s vision of a world in which every single human being can freely share in the sum of all knowledge is anything but neutral. Likewise, there is nothing neutral about the organization’s mission to disseminate information under free licenses.
That being said, everything an organization does will be associated with its services or products, so relevance and correctness should be important values in all lobbying, and they are especially important for Wikimedia.

Q: What are you planning to focus on in Brussels during the first months?
A: My first task will be to establish a priority list. There are plenty of EU regulations, existing ones as well as those in the making, which have the potential to affect Wikimedia’s activities. Everything from copyright reform to data protection and trade agreements can entail regulatory changes that help or hurt the efforts to get more information out there. All these processes must be prioritized. Which of them are the most significant? Which can we steer in the right direction? That’s what I’ll have to determine.

Q: What do you think will be the most interesting challenges with the work ahead?
A: Probably the ones that I haven’t yet realized that I will face. Regardless of what I expect of Brussels, I’m sure that the city will turn out to be something completely different.

Thank you Karl!

Interview by John Andersson, Wikimedia Sverige

by wikimediablog at December 18, 2014 10:35 PM

Wiki Education Foundation

Changes to Classroom Program for Spring 2015

Wiki Ed will be rolling out a new system for connecting volunteers and courses this week. As part of this shift, Wiki Ed will no longer accredit individual volunteers as Wikipedia Ambassadors. Instead of a complicated application process, we want to make supporting students more Wikipedia-like and open to anyone in the community. We want to give volunteers tools to contribute in ways they find satisfying and meaningful. From now on, Wiki Ed staff will handle onboarding and be the liaison between the course and the editing community, freeing community volunteers to focus on the aspects of helping with the class they like most.

As we’ve grown, the pace of our understanding of best practices has accelerated. We’ve been developing new tools and resources, too. It’s unfair to expect volunteers to keep up with all these changes. And no volunteer wants to deal with criticism from other editors when a course has problems that could have been avoided if the course design had followed Wiki Ed’s current recommendations. All these reasons have factored into this change.

We’ve developed ways to make volunteering easier: we’ve set up a new category system for student work. We will tag student work with specific suggestions for how to improve it. If you enjoy copyediting or wikifying articles, you’ll find pages to copyedit or wikify. If you want to provide feedback on a draft, you can find student drafts awaiting feedback. If you like finding freely licensed images for articles, you can see student articles that would benefit from images. For an example of the new system, check out our central portal for tagged articles.

This system is designed to work with the organic spirit of volunteerism that built Wikipedia. Community volunteers can help with the tasks they enjoy helping with most; they can spend 5 minutes with one article or several hours with many articles. We hope that by providing more staff support for each course, we can ensure the course design meets our best practices, and our student editors have productive contributions to Wikipedia.

LiAnna Davis
Director of Programs

by LiAnna Davis at December 18, 2014 07:24 PM

Gerard Meijssen

#Google - Would you use your #trickery for us?

After the shock and awe of yesterday's announcement, there has been time to think. Google showed a real interest in Wikidata. It created a new tool to help improve the quality of its data. But the real expertise of Google is in determining the probability of facts. It is part and parcel of its ranking algorithms.

It would be just as awesome if Google indicated which statements it deems to have a less than even chance of being true. The combination of such a list and the new tool would make the efforts of the people seeking sources all the more relevant. When statements are debunked, there is a potential quality effect on all the associated Wikimedia projects. Given that most statements are probably fine, this makes for a more concentrated effort, and consequently its effects will be noticed.

While we are on this line of thought, given the data in Freebase, Google could indicate, based on its algorithms, how probable its sets of data are. Everything that is highly likely should be a candidate for import into Wikidata. The other reason for importing data into Wikidata anyway is that it is an invitation to all the Freebasers to join our ranks, increase our expertise and together be awesome.

by Gerard Meijssen (noreply@blogger.com) at December 18, 2014 07:38 AM

Wikimedia Foundation

#Edit2014: Q&A with the producer

Today the Wikimedia Foundation published #Edit2014. This is the Foundation’s first-ever year-in-review video, a look at how the world used and contributed to Wikipedia in 2014. To learn more about how the project came to be, I interviewed the video’s director and producer, and Wikimedia Foundation storyteller, Victor Grigas (User:VGrigas (WMF)).

Halla Imam: This is the first time the Wikimedia Foundation has ever done a year-in-review. What inspired the Foundation to start this year?

Victor Grigas: So many people around the world use and love Wikipedia, but not everyone knows where it comes from, or how it is supported and maintained. The vast majority of people who read Wikipedia don’t edit or upload. They use it to learn and discover. We wanted to capture that experience, but also highlight how powerful editing can be. As an editor, you are creating all of this knowledge for the public.

So Katherine (the Wikimedia Foundation’s CCO) and I talked in September about maybe doing something like this for the end of this year. My first response was that it sounded great, even though it was a short time period to dream up an entire project and execute it. I said I’d research it. A couple weeks later I decided it was doable, so we pressed go.

“Recording typing sounds with different keyboards” by Victorgrigas, under CC-BY-SA-3.0


HI: It seems like you chose Ebola as the centerpiece story. Was this an obvious choice? Why choose such bad news?

VG: This was an obvious choice. Let me put it this way: 2014 was filled with bad news; I think it was a particularly bad year for global news. We decided to focus on this particular story because this was also something that galvanized the Wikimedia community.

When the Ebola outbreak happened, members of our medical community organized with Translators Without Borders to translate the Wikipedia Ebola virus disease article into more than 50 different languages, including some of those spoken in the affected regions of West Africa. Many of these other Wikipedias had very few articles before, but now they have information about Ebola. The use of Wikipedia to understand Ebola was a story that ended up in The New York Times. By recreating this story visually through reenacting the editing process, we’re highlighting its impact. It’s a news event that happened globally, but also within the Wikimedia community.

HI: Were there controversial moments in 2014 that you felt should not be included in the video?

VG: Not really. There are all kinds of events that happened that have political ramifications. The whole point of Wikimedia is to be neutral about them. We wanted to show the events as events, and not necessarily push for one side or the other. There was one scene in particular where we show the Gaza war from this year. We saw it as a way to showcase Wikipedia’s policies on Neutral Point of View (NPOV), where knowledge on Wikipedia is self-critical. You see the famous [citation needed] tag, and you see the [disputed-discuss] tags, the wall of references, and the articles in English and Arabic and Hebrew. The goal on Wikipedia is to have a wide range of sources to illustrate the knowledge being presented.

“Stickies to brainstorm Edit.2014.” by Victorgrigas, under CC-BY-SA-3.0

HI: Did you worry about finding a balance between positive and negative events to showcase in the video?

VG: Yes. 1,000%. It’s not just positive or negative events, but how they show certain parts of the world. Ideally, it would have been a mix of stories that show the positive and negative everywhere. Unfortunately, this year, we saw a lot of terrible events in places that are already experiencing hardship, like Ebola in West Africa. We realized there was so much bad news in 2014: what was the good news? Well, we had all these global sporting events, like the FIFA World Cup and the Sochi Winter Olympics, and a lot of very powerful science content too.

HI: Do you think you succeeded in making the video sufficiently international, given the global diversity of Wikimedia’s editors?

VG: We started and ended with an event that felt universal: we get a close-up on a comet! That doesn’t really need text or any dialogue, but humanity only has so many of those experiences. I think the video leans towards being understood by people who don’t speak English, but unfortunately they may not understand it entirely. We’d love to make that video in the future.

As a team, we tried to be conscious of world view. I am from the United States, so I have my own inherent bias. I think everyone does. So we tried to make it as wide-ranging as we could, reaching out to as many people as we could, because Wikipedia is truly global and we wanted to represent that. But it is impossible to have one video represent the totality of things that mattered to the world in three minutes. This is a first attempt, and like Wikipedia, we’ll be looking to make this even more balanced for the future.

HI: Other organizations do year-in-review videos. What makes the Wikimedia Foundation’s different?

VG: I think for any video like this, you’re trying to communicate an identity. Wikimedia’s identity is about contributing, sharing, and learning. We don’t have shareholders, we have stakeholders: we have editors, readers, donors, people who want a wide range of information and knowledge. We’re the platform that allows them to contribute that to the world. They want to be able to have access to the knowledge that happened in 2014. We tried to put in as much as we could without sacrificing too much. It’s a balancing act to try and capture the world.

HI: How did you choose Bach as the soundtrack?

VG: It was a bit of a surprise! We needed to score the video with something as a placeholder. I found the Bach on Wikipedia and initially used it as a temporary track, but then everyone who watched it felt like it just sounded right. So we kept it.

We did work with a musician, Andy R. Jordan, to score the section on Ebola. It needed something to convey the gravity of the moment. We used simple sounds and instruments — a cello bow being rubbed against a wooden marimba key. It had a resonance and solemnity that felt appropriate.

HI: Wikipedia relies on Creative Commons and other freely licenced work — was it difficult to find all the imagery you needed using only freely licensed imagery?

VG: One of the first things I realized in researching is that we wouldn’t be able to license footage, because Wikimedia only publishes content that is Creative Commons. So we needed to use media that has been contributed by Wikimedians, and that means we had to be creative. We ultimately found a way around this by telling these stories using screen shots of the Wikipedia articles. In the end, I think it makes the experience stronger, communicating the message of Wikimedia through the experience of Wikipedia itself.

For example, we don’t always have a wealth of visual content about hard news, because although some Wikimedians are photojournalists, most photojournalists are not Wikimedians. Similarly, there’s a disparity among countries: the more industrialized a country is, the more images you’ll find. There’s more internet connectivity, they’ve been online longer, they have more access to cellphones and cameras, and things like that. Less industrialized countries are less well covered.  These disparities did make it more difficult to cover certain world events. For example, the images for Ebola are from an outbreak in what was then Zaire, in 1976. So here’s a call to action: Photographers, freely license and upload your images to Commons! Do it for the world!

HI: Were there any images that were especially challenging to obtain?

VG: Wikimedia Commons has images of just about everything, but sometimes we wanted to show a more personal or nuanced perspective. For example, when it came to the Hong Kong ‘Umbrella protests,’ many of the images were too busy or confusing. So I spent a day on Twitter trying to find photographers in Hong Kong, and asking them if they’d contribute their images to Commons. In the end, I found two who were happy to contribute — a special thanks to them!

Similarly, we know Wikipedia isn’t only used for science, history, and philosophy. Pop culture is very popular, but most pop culture imagery is totally proprietary. We didn’t have the rights to show scenes from TV shows, but we could show the article about the TV show. So while I would have loved to create a montage of popular television programs from around the world, that’s just not the content we typically have on Wikipedia.

HI: What were some of the other roadblocks you face in producing this video?

VG: Well, we worked on a shoestring budget. We’re a non-profit, so that was just something we knew was a constraint early on. But instead of seeing that as a limitation, we looked at it as inspiration to get creative: what can we do to make this happen? Once we did the research, we saw that it was something we could do within our budget.

HI: Filmmaking is a ‘dictatorship,’ but Wikimedia is famous for being a collaborative project. Was this video collaborative?

VG: In some ways! We drew on source material that was highly collaborative — all the images and video and text you see were contributed by people around the world. Most of this content is from Wikimedia Commons. In that way it is collaborative, but it isn’t collaborative in the sense of being ‘real time.’ It is collaborative in the sense that Wikipedia is — made up of contributions from all over, from many different people and sources.

We’d love to have left it wide open, and had a lot of people weigh in, but we only had a few weeks to get it done. I took input from a lot of individuals I knew personally from across Wikimedia, from as many different countries as possible. They say video productions are dictatorships because there needs to be some kind of visual and auditory continuity. You could argue the same thing about Wikipedia: how is it not going to be a huge blend of competing voices? But somehow Wikipedia works.

HI: What is Wiki Loves Monuments?

VG: It’s a photo competition that started in the Netherlands. Now, according to the Guinness Book of World Records, it is the world’s largest photo competition. In the early drafts of this script, we wanted to segue from the point of view of a reader of Wikipedia to an editor, and show the engine of the car — what’s under the hood. We ended up scrapping that approach, but fell in love with the images, so we kept them.

In Wiki Loves Monuments, different countries have an open contest to submit photos to illustrate monuments, buildings, or other landmarks in their country. They’re gorgeous images, and they all get judged by an international jury. The contest isn’t over yet, so we picked a few. I reached out to this great guy, Patrick David, a parallax illustrator, who animates images. He volunteered to bring some of these images to life, which was fantastic.

HI: Will you produce another video next year? If so, what would you do differently?

VG: I’d like to experiment to see if we could make it an even more collaborative process. I would love to be able to say “Take my idea, my rough concept, my script, my notes, and can you — the plural you — help research this?” Imagine — if making this video were a truly open collaboration like Wikipedia, we have the potential to make something really incredible. You could imagine people all over researching content, weighing in on visuals, and justifying cuts. Would having those discussions in public amplify the efforts? It’d be interesting to find out.

HI: Do you have a favorite moment in the video?

VG: If you watch closely, you’ll notice there are no pans or zooms except for when we click the “Edit” button. That was my favorite part of the video.

HI: Is there anything you learned about Wikimedia or its contributors that you didn’t know before you started?

VG: I learned how many Wikimedians are also on social networks! That proved a great way to reach out to a lot more people. The experience of spending a day reaching out to people on Twitter who might have photos to share — that was really interesting.

Halla Imam
Communications Intern
Wikimedia Foundation

by julietvbarbara88 at December 18, 2014 05:31 AM

Wikipedia’s first-ever annual video reflects contributions from people around the world

File:Wikipedia Edit 2014.webm

Wikipedia: #Edit2014 tells the story of what you read and edited in 2014. You can also view the video on YouTube and on Vimeo.

Today, the Wikimedia Foundation released its first ever year in review video, chronicling the celebration, pain, fear, resilience, and discovery that came to characterize 2014. More than anything, it celebrates those who come to Wikipedia to learn and understand the complexity of our world, and those who edit and contribute information so that others might do the same.

In watching the video, you embark on a journey through the world and Wikipedia, revisiting what you read and edited this year. From the FIFA World Cup to the Indian general elections, and the Ice Bucket Challenge to Ebola in West Africa, we follow threads of discovery through Wikipedia’s vast constellation of knowledge, finding opportunities to contribute along the way. We venture from Sochi to outer space in less than three minutes.

Wikipedia is among the most popular sites in the world, but the Wikimedia Foundation (WMF) is a small non-profit. The video was put together on a shoestring budget, and in less than two months, through the generous collaboration and contributions of Wikimedians and Wikipedia supporters. The Wikimedia Foundation’s storyteller and video producer, Victor Grigas said, “We had to get creative to make this happen, we couldn’t just throw money at it. This video was made with everyday tools: a computer, an internet connection, lots of deep, patient thinking, research and collaboration, and the free content that ordinary people uploaded to Wikipedia.”

Every piece of imagery and video we use was uploaded by you. Wikimedia’s commitment to open access and free information meant we could only use freely licensed photos and videos when producing this video. While the Foundation may have edited the video, contributions came from users around the world.

You will see many amazing freely licensed images in the video — beautiful photographs of monuments, recordings of major world events from citizen journalists. At the same time, you will also see some grainy and dated images — such as those used to illustrate West Africa’s struggle with the deadly Ebola outbreak. The images used to illustrate that segment date back to 1976, from an outbreak in Zaire. Although other, more recent freely licensed images are available, most addressed things such as proper use of personal protective equipment or laboratory facilities, rather than the immediate impact on human lives.

With hundreds of millions of people relying on Wikipedia to learn and understand more about the world around them, the example of Ebola highlights the immense need for freely licensed images of important world events. We encourage people everywhere to freely license and share images and photographs of notable people, places, and historic events — and in doing so, help make the sum of all knowledge available to everyone. You can upload your pictures to Wikimedia Commons (Wikipedia’s central media repository) under a free license.

While Ebola’s treatment in this video underscores the continuing need for people to contribute freely licensed images, it is also an inspiring true story about collaboration. As the Ebola outbreak raged, devastating the lives of people in numerous countries, Wikimedians looked for ways to contribute. Together with Translators Without Borders and the medical professionals at the WikiProject Med Foundation, volunteers translated the article on Ebola into more than fifty languages, including numerous African languages. In October, The New York Times reported that Wikipedia had emerged as a trusted internet source for Ebola information.

Wikipedia reflects the world around us. With each new event, it changes and grows, accommodating our human triumphs and losses. It is the largest collaborative knowledge project in human history, and it is made possible by even the tiniest of contributions from people around the world. Join us in rediscovering 2014, and consider contributing to Wikipedia’s boundless knowledge.

Together, we edit our common history.

Katherine Maher
Chief Communications Officer
Wikimedia Foundation

by maherwiki at December 18, 2014 05:09 AM

Introducing lead images to Wikipedia’s Android beta app

“Lead images on the Wikipedia Android app” by Deskana (WMF), under CC-BY-SA-3.0

Here in the Mobile Apps Team at the Wikimedia Foundation, we’re working to make it easier for the world to experience knowledge on mobile. More and more, users around the world are accessing Wikipedia on the go through their mobile devices, and it’s important that they can easily and quickly get the information they seek.

Especially on the smaller, portable screens of mobile devices, readers need seamless and intuitive ways to interact with content and learn. With that in mind, for the past few months we’ve been working on restyling how the first section of content in articles appears in the Wikipedia app. The newest feature we are testing in Android more prominently displays the most relevant image at the beginning of each article. We hope that this will help set the context for the reader and naturally lead them into the text to learn more.

In order to build a more functional and compelling mobile experience on the apps, here’s what we’ve done recently:

    • A prominent image from the article is now displayed at the top of each page, including parallax scrolling!
    • Face detection centers on the face of the subject in the image.
    • Images are displayed in a mobile-friendly image viewer panel when selected.
    • A short description about the article from Wikidata is displayed for additional context.
    • The first sentence of the article is available on the initial view.
    • Page issue tags and disambiguation notices are wrapped up into buttons underneath the page title.

We’ve released this work to our Wikipedia Beta app for Android and hope you’ll check it out! Remember that this is a peek into our ongoing work; since this is a beta app, the feature is subject to change and, as always, there may be the odd bug while we finish our testing. Meanwhile, we’re diligently working to bring this to iOS as well.

Text has always been central to the Wikipedia experience, but as they say: a picture is worth a thousand words. The Wikimedia movement is lucky to have a wealth of imagery on our projects, and we’re excited to put it to good use with this new feature.

And, if you’re an engineer with experience building apps and want to join us in building the future of mobile for Wikipedia, we’re hiring!

Dan Garry, Associate Product Manager, Mobile Apps Team, Wikimedia Foundation

by julietvbarbara88 at December 18, 2014 01:45 AM

December 17, 2014

Andre Klapper

Good bye Bugzilla, welcome Phabricator.

<tl;dr>: Wikimedia migrated its bug tracking from Bugzilla to Phabricator in late November 2014.

After ten years of using Bugzilla with 73681 tickets and ~20000 user accounts and after months of planning, writing migration code, testing, gathering feedback, discussing, writing more code, writing documentation, communicating, et cetera, Wikimedia switched from Bugzilla to Phabricator as its issue tracking tool.
Phabricator is a fun adventure game collaboration platform and a forge that consists of several well-integrated applications. Maniphest is the name of the application for handling bug reports and tasks.
My announcement from May 2014 explained the idea (better collaboration and fewer tools) and the decision-making process that led to choosing Phabricator and starting to work on making it happen.

Wikimedia Phabricator frontpage an hour after opening it for the public again after the migration from Bugzilla.

Quim already published an excellent summary of Wikimedia Phabricator right after the migration from Bugzilla, covering its main features and custom functionality that we implemented for our needs. Read that if you want to get an overview of how Phabricator helps Wikimedia with collaborating and planning in software development.
This blog post instead covers more details of the actual steps taken in the last months and the migration from Bugzilla itself. If you want even more verbose steps and information on the progress, check the status updates that I published every other week with links to the specific tickets and/or commits.


After reviewing our project management tools and closing the RfC, the team started to implement a Wikimedia SUL authentication provider (via OAuth) so no separate account is needed, to work on restricting access to certain tasks (access restrictions are on a task level and not on a project level), and to create an initial Phabricator module in Puppet.
We started to discuss how to convert information in Bugzilla (keywords, products and components, target milestones, versions, custom fields, …), which information to drop entirely (e.g. the severity field, the history of field value changes, votes, …), and which information to keep only as text in the initial description instead of in a dedicated field. More information about the migrated data is available in a table. This constantly influenced the scope of the script for the actual data migration from Bugzilla (more information on code).

We already had a (now defunct) Phabricator test instance in Wikimedia Labs under fab.wmflabs.org which we now started to also use for planning the actual migration.
There’s a 7 minute video summary from June describing the general problem with our tools that we were trying to solve and the plan at that time. We also started to write help documentation.

As we got closer to launching the final production instance on phabricator.wikimedia.org, we decided to split our planning into three separate projects to have a better overview: Day 1 of a Phabricator Production instance in use, Bugzilla migration, and RT migration.

On September 15th, phabricator.wikimedia.org launched with relevant content imported from the fab.wmflabs.org test instance which we had used for dogfooding. In the case of Wikimedia, this required setting up SNI and making it work with nginx and the certificate to allow using SUL and LDAP for login. After the production instance had launched we also had another Hangout video session to teach the very basics of Phabricator.

To provide a short impression of further stuff happening in the background: Elasticsearch was set up as Phabricator’s search backend, some legal aspects (footer, determining the license of submitted content) were brought up, phab-01.wmflabs.org was set up as a new playground, and we made several further customizations when it comes to user-visible strings and information on user pages within Phabricator. In the larger environment of Wikimedia infrastructure interacting with the issue tracker, areas like IRC bots, interwiki links, on-wiki templates, and automatic notifications in tasks about related patches in the code review system were dealt with or being worked on.

Paying attention to details: The “tracked” template on Wikimedia sites supports linking to tasks in Phabricator, while still redirecting links to Bugzilla tickets via URL redirects (see below).

We also had a chicken-and-egg problem to solve: accounts versus tickets. Accounts in Bugzilla are defined by email addresses, while accounts in Phabricator are user names. For weeks we asked Bugzilla and community users to create an account in Phabricator in advance and “claim” their Bugzilla accounts by entering the email address that they used in Bugzilla in their Phabricator account settings. The plan was to import the tickets and account ‘placeholders’ and then use cron jobs to connect the placeholder accounts with the actual users, ‘claiming’ their past Bugzilla contributions and activity by updating the imported data in Phabricator.
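The claiming pass described above boils down to a join on email addresses. Here is an illustrative sketch, not the actual migration code; the function and field names are hypothetical:

```python
def claim_contributions(placeholders, phab_users):
    """Match imported Bugzilla placeholder accounts to Phabricator
    users who registered the same email address in their settings.

    placeholders: imported Bugzilla accounts, each with an "email" key.
    phab_users:   Phabricator accounts, each with a "username" and an
                  optional "bugzilla_email" the user entered to claim.
    Returns a mapping of claimed Bugzilla email -> Phabricator username.
    """
    by_email = {u["bugzilla_email"]: u["username"]
                for u in phab_users if u.get("bugzilla_email")}
    claimed = {}
    for placeholder in placeholders:
        username = by_email.get(placeholder["email"])
        if username:
            # In the real migration, a cron job then reassigned the
            # imported comments and tasks from the placeholder (or the
            # generic "bzimport" user) to the matched account.
            claimed[placeholder["email"]] = username
    return claimed
```

Unmatched placeholders simply stay unclaimed until their owner adds the Bugzilla email address to their Phabricator settings and the next cron run picks them up.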

On October 23rd, we made a separate “bugzillapreview” test instance available on Wikimedia Labs with thousands of Bugzilla tickets imported. For two weeks, the community was invited to check how Bugzilla tickets would look in Phabricator after the migration and to identify more potential issues. The input was helpful and valuable: we received 45 reports and fixed 25 of them (9 were duplicates, 2 invalid, and 9 got declined).

A task imported from Bugzilla in the Phabricator preview instance.

Having reached a good overview, we created a consolidated list of known issues and potential regressions created by the migration from Bugzilla to Phabricator and set a final date for the migration: November 21–23.

Keeping the timestamps of comments intact (such as the original creation date of a ticket in Bugzilla or when a certain comment was made) was still something to sort out at this point (and it got tackled). Losing them would have been confusing and would have broken searches that triagers need when trying to clean up (e.g. tickets which have not seen updates for two years).

It was also tricky, performance-wise, to keep the linear numbering order of reports, which many people had requested so that links would not depend solely on the URL redirects from bugzilla.wikimedia.org to phabricator.wikimedia.org that we planned to set up (more information on the redirect setup). As we already had ~1400 tasks in Phabricator, we went for the simple rule “report ID in Bugzilla + 2000 = task ID in Phabricator”.
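The offset rule above amounts to a one-line mapping. A minimal sketch (the real redirects were of course configured in the web server, not written in Python):

```python
BUGZILLA_OFFSET = 2000  # the "+ 2000" rule from the migration plan

def phabricator_task_id(bugzilla_id: int) -> int:
    """Map a Bugzilla report ID to its Phabricator task ID."""
    return bugzilla_id + BUGZILLA_OFFSET

def redirect_url(bugzilla_id: int) -> str:
    """Target of a bugzilla.wikimedia.org URL redirect."""
    return f"https://phabricator.wikimedia.org/T{phabricator_task_id(bugzilla_id)}"

# For example, Bugzilla report 73681 becomes Phabricator task T75681:
# redirect_url(73681) -> "https://phabricator.wikimedia.org/T75681"
```

Because the mapping is a fixed offset, redirects never need a database lookup, which is what made it cheap enough to keep despite the performance worries.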

Regarding documentation and communication, we created initial project creation guidelines, sent one email to the 66 users of personal tags in Bugzilla warning that tags would not be migrated, and sent two emails to the 850 most recently active Bugzilla users asking them to log into Phabricator and provide the email address they used in Bugzilla, so their imported contributions could be claimed as part of the migration (for comparison, the average number of active users per month in Bugzilla was around 500+ in recent months). We also put migration announcement banners on mediawiki.org and on every page of our Bugzilla, and sent reminders to the wikitech-l, mediawiki-l, wikitech-ambassadors, and wmfall mailing lists.

After a last ‘Go versus No-Go’ meeting on November 12th, we set up the timeline with the list of steps to perform for the migration, ordered, with assignees defined for each task. This was mostly based on the remaining open dependencies of our planning task. We had two more IRC office hours on November 18 and 19 to answer questions regarding the migration and Phabricator itself.

While migrating, the team used a special X-Forwarded-For header to still be able to access Bugzilla and Phabricator via their browsers, while normal users trying to access either tool were redirected to a wiki page telling them what was going on and where to escalate urgent issues (the MediaWiki support desk or IRC) while no issue tracker was available. With the aforementioned URL redirects in place, we intended to keep Bugzilla available for a while under the new address old-bugzilla.wikimedia.org.


The page on mediawiki.org that users were redirected to while the migration was taking place.

The migration started by switching Bugzilla to read-only for good. Users can still log into Bugzilla (now available at old-bugzilla.wikimedia.org) and e.g. run their saved searches or access their list of votes on the outdated data, but they cannot create or change any tickets.

We took phabricator.wikimedia.org offline and disabled its email interface, switched off the code review notification bot for Bugzilla, and switched off the scripts that synced Bugzilla tickets with Mingle and Trello.

The data migration started by applying a hack to work around a Bugzilla XML-RPC API issue (see below), running the migration fetch script (tasks and comments), reverting the hack, running the migration create script (attachments), moving Bugzilla to old-bugzilla.wikimedia.org, starting the cron jobs that assign Bugzilla activity to Phabricator users by replacing the generic “bzimport” user with the actual corresponding users, and setting up redirects from bugzilla.wikimedia.org URLs.

A task before and after users have claimed their previous Bugzilla accounts (positions of comments in the right image manually altered for better comparison).

After several of those data migration steps we performed numerous tests. In parallel we prepared the emails and announcements to send out and publish once we were finished, replaced links to Bugzilla with links to Phabricator on dozens of wiki pages, updated MediaWiki templates on the Wikimedia wikis, and handled further small tasks.

Paying attention to details: The “infobox” template on MediaWiki extension homepages linking to the extension’s bug reports at the bottom, now handled in Phabricator instead of Bugzilla.

For those being curious about time spans: Fetching the 73681 Bugzilla tickets took ~5 hours, importing them ~25 hours, and claiming the imported user contributions of the single most active Bugzilla user took ~15 minutes.

But obviously we were pioneers who could not rely on Stack Overflow.
Even if you try to test everything, unexpected things happen while you are running the migration. I’m proud to say that we (well, rather Chase, Daniel, Mukunda and Sean when it came to dealing with code) managed to fix all of them. And while you try to plan everything, for such a complex move that nobody has tried before, there are things that you simply forget or have not thought about:

  • We had to work around an unresolved upstream XML-RPC API bug in Bugzilla by applying a custom hack when exporting comments in a first step and removing the hack when exporting attachments (with binary data) in a second step. Though we did so, it took us a while to realize that Bugzilla attachments imported into Phabricator were scrambled because the hack was still being applied for unknown reasons (some caching?). Rebooting the Bugzilla server fixed the problem, but we had to start from scratch with importing attachments.
  • Though we had planned to move Bugzilla from bugzilla.wikimedia.org to old-bugzilla.wikimedia.org after exporting all data, we hadn’t realized that we would need a certificate for that new subdomain. For a short time we had an ugly “This website might be insecure” browser warning when users tried to access the old Bugzilla until old Bugzilla was moved behind the Varnish/nginx layer with its wildcard *.wikimedia.org certificate.
  • Two Bugzilla statuses did not get converted into Phabricator tags. The code had worked during testing but broke again at some point without anybody realizing; this was noticed and fixed.
  • Bugzilla comments marked as private became public again once the cron jobs claiming the contributions of that commenter were run. Again, this was noticed and fixed.
  • We ended up with a huge feed queue and search indexing queue. We killed the feed daemon at some point. Realizing that it would have taken Phabricator’s daemons ~10 days to handle the queue, Chase and Mukunda debugged the problem together with upstream’s Evan and found a way to improve the SQL performance drastically.
  • We hadn’t thought about switching off some Bugzilla-related cron jobs (minor), and I hadn’t switched off mail notifications from Bugzilla, so some users still received “whining” emails until we stopped that.
  • We had a race condition in the migration code which did not always set the assignee of a Bugzilla ticket also as the assignee of the corresponding task in Phabricator. We realized early enough by comparing the numbers of assigned tickets for specific users and fixed the problem.
  • I hadn’t tested that aliases of Bugzilla reports actually get migrated. As this only affected ~120 tickets, we decided not to try to fix this retroactively.
Phabricator daemons being (too) busy handling the tasks mass-imported from Bugzilla.

We silently reopened Phabricator on late Sunday evening (UTC) and announced its availability on Monday morning (UTC) to the wikitech-l community and via the aforementioned blogpost.

A list of dependency tasks handled before completing the migration from Bugzilla to Phabricator is available.


Phabricator has many advantages compared to Bugzilla: Wikimedia users do not reveal their email addresses and users do not have another separate login and password. (These were the most popular complaints about Bugzilla.)

Integration with MediaWiki’s Single User Login via OAuth – no separate login.

There is a preview when writing comments.
The initial description can be edited and updated like a summary while the discussion on a task evolves.
Users have a profile showing their latest activity.
There’s a global activity feed.
There is a notification panel on top.
The UI looks modern and works pretty well on devices with small screens.
Tasks can have either zero or one assignee. In Bugzilla an assignee must be set even if nobody plans to work on a ticket.
Tasks can have between zero and unlimited projects (such as code bases, sprints, releases, cross-project tags) associated. In Bugzilla, tickets must have exactly one product, exactly one component, exactly one target milestone, and between zero and unlimited cross-project keywords. That also solves Bugzilla’s problem of dealing with branches, e.g. setting several target milestones.
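To make the contrast concrete, the collapse of Bugzilla's fixed fields into Phabricator's flat project list could be sketched like this (an illustrative sketch with hypothetical field names, not the migration code's actual logic):

```python
def to_phabricator_projects(ticket):
    """Collapse Bugzilla's fixed fields into Phabricator's flat list
    of projects.

    In Bugzilla: exactly one product, one component, one target
    milestone ("---" meaning unset), plus any number of keywords.
    In Phabricator: all of these become ordinary project tags.
    """
    projects = [ticket["product"], ticket["component"]]
    milestone = ticket.get("target_milestone")
    if milestone and milestone != "---":
        projects.append(milestone)
    projects.extend(ticket.get("keywords", []))
    return projects
```

Since everything lands in one flat list, a task can carry several release tags at once, which is how the branch/multiple-milestone problem mentioned above goes away.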
Projects have workboards (a card wall) with columns for planning sprints (Bugzilla only offered lists of items, which you could not interact with directly from the list view). Thanks to Wikimedia Deutschland we now also have burndown charts for sprint projects.

The workboard of the Wikimedia Phabricator project, right after the Bugzilla migration.

Burndown chart for a two week sprint of the Wikimedia Analytics team.


From a bugmaster point of view there are also small disadvantages:
Some searches are not possible anymore via the web interface, e.g. searching for open tasks which have the same assignee set for more than 12 months ("cookie-licking") or tasks that have been closed within the last month.
Phabricator is more atomic when it comes to actions: I receive more mail notifications and it also takes me slightly longer to perform several steps in a single ticket (though my local Greasemonkey script saves me a little bit of time).

Furthermore, admins don’t have the same powers as in Bugzilla. The UI feels very clean though (breadcrumbs!):

Administrator view for setting policies in Maniphest.

New territories

Apart from the previous list of unexpected situations while migrating, there were also further issues we experienced before or after the migration.
Mass-importing huge amounts of data from an external system into Phabricator was new territory. For example, Phabricator initially had no API to create new projects or to import tickets from other systems. No Phabricator instance with >70000 tasks had existed before – before the migration we had a crash affecting anonymous users and after the migration the reports/statistics functionality became inaccessible (timing out). Those Phabricator issues were quickly fixed by upstream.
And of course in hindsight, there are always a few more things that you would have approached differently.

Next steps

All in all and so far, things work surprisingly well.
We are still consolidating good practices and guidelines for project management (we had a Hangout video session on December 11th about that), I’ve shared some queries helpful for triagers, and we keep improving our Phabricator and bug management related help and documentation. The workflow offered by Phabricator also creates interesting new questions to discuss. Just one example: When a task has several code related projects assigned that belong to different teams, who decides on the priority level of the task?

Next on the list is replacing RT (mostly used by the Operations team) and helping teams migrate from Trello and Mingle (Language Engineering, Multimedia, and parts of Analytics have already succeeded). In 2015 we plan to migrate code repository browsing from gitblit and code review from Gerrit.

OMG we made it

A huge huge thanks to my team: Chase (Operations), Mukunda (Platform), Quim (Engineering Community Team), the many people who contributed code or helped out (Christopher, Daniel, Sean, Valhallasw, Yuvi, and more that I’ve likely forgotten), and the even more people who provided input and opinions (developers, product managers, release management, triagers, bug reporters, …) leading to decisions.
I can only repeat that the upstream Phabricator team (especially Evan) have been extremely responsive and helpful by providing feedback incredibly fast, fixing many of our requests and being available when we ran into problems we could not easily solve ourselves.


by aklapper at December 17, 2014 04:49 PM

Niharika Kohli

GHC India & Adacamp

I recently got a chance to go to Bangalore to attend Adacamp India and GHC India. It was an amazing experience which I’d love to repeat over and over!

Here’s something about GHC India:

The awesome talks from industry leaders, the endeavors of the women entrepreneurs, and especially the sheer number of women in tech there would make your head spin.

Every other woman I interacted with had her own share of struggles when she tried to dive into this predominantly male industry. But despite that, they all have very successful careers today (and almost 80% of the women looked after a whole family while pursuing their careers). That was deeply inspiring, as was the fact that most of them didn’t go to the popular colleges when they were graduating. I’d really encourage everyone to look up the session videos if they are made public on YouTube (especially the ones by Lakshmi Pratury and Jayshree Ullal). The closed-door tech connect sessions were equally awesome. Sunia Mani taught me how eCommerce websites actually work in under an hour. Besides the sessions, the venue and food were top-class, as expected. The whole organizing team did a wonderful job; a big congratulations to them all! There was a goodie fair too (formally “Hands on lab”) where sponsors gave out stuff as a publicity gimmick. Not that it stopped me from grabbing those goodies. :P

It was my first time attending Adacamp, in Bangalore. I was really eager to go because I’d heard about it from lots of women and the only word they had for it was “Awesome!”. It did live up to my expectations, with lots of cool sessions and cooler women, providing an opportunity to network and share experiences alike.

I met Netha, Sumana, Anu George, Jicksy, Diwanshi, Tracey, Dinu and so many others. The meaningful patient conversations that happen in the quiet corners of Adacamp are its best part, I felt. It was casual enough to make you feel comfortable talking about anything from personal relations to virtual reality. 

I’m glad I could be a part of both events. And I’d love to have the chance to attend them again. :)

by Niharika at December 17, 2014 04:18 PM

Gerard Meijssen

#Google - What it does with #Freebase is beyond awesome

In an e-mail, Denny Vrandečić announces an astounding bit of news. It effectively says that Wikidata can have all of Freebase’s data if it wants it.

It then goes on to say that Wikidata is not expected to accept all this data, and follows with the announcement of a tool that is to help source the data. This news is best read as it was announced.
Thank you Google!
Freebase was launched to be a “Wikipedia for structured data”, because in 2007 there was no such project. But now we do have Wikidata, and Wikidata and its community is developing very fast. Today, the goals of Freebase might be better served by supporting Wikidata [1]. 
Freebase has seen a huge amount of effort go into it since it went public in 2007. It makes a lot of sense to make the results of this work available to Wikidata. But knowing Wikidata and its community a bit, it is obvious that we can not and should not simply upload Freebase data to Wikidata: Wikidata would prefer the data to be referenced to external, primary sources. 
In order to do so, Google will soon start to work on an Open Source tool which will run on Wikimedia labs and which will allow Wikidata contributors to find references for a statement and then upload the statement and the reference to Wikidata. We will release several sets of Freebase data ready for consumption by this tool under a CC0 license. This tool should also work for statements already in Wikidata without sufficient references, or for other datasets, like DBpedia and other machine extraction efforts, etc. To make sure we get it right, we invite you to participate in the design and development of this tool here: 
 https://www.wikidata.org/wiki/Wikidata:Primary_sources_tool 
I hope you are as excited as I am about this project, and I hope that you will join me in making this a reality. I am looking forward to your contributions!  
[1] https://plus.sandbox.google.com/109936836907132434202/posts/bu3z2wVqcQc

Denny Vrandečić via lists.wikimedia.org 

by Gerard Meijssen (noreply@blogger.com) at December 17, 2014 10:59 AM

Joseph Reagle

Measure, manage, manipulate

The aphorism “If you can’t measure it, you can’t manage it” is common in contemporary life. It is often attributed to business guru Peter Drucker, and, even if he did not say it, the notion has become a slogan for the quantified, big-data world in which we live. In boardrooms, non-profits, and universities, we are fixated on quantifiable measures. Otherwise, how do you know what to improve? Another aphorism I find equally compelling is Goodhart’s law, which, in Marilyn Strathern’s words, states “When a measure becomes a target it ceases to be a good measure” (Strathern, 1997: 308). Why? Because measures which become targets are soon subject to manipulation. I refer to this as the 3-M’s paradox (measure/manage/manipulate). I first thought about this in research about ratings and rankings at an online photography-sharing site. I concluded that evaluation in the digital age is characterized by the following.

  1. It’s hard to quantify the qualitative: there was much experimentation with rating and ranking systems.
  2. Quantitative mechanisms beget their manipulation: people “mate” rated friends, “revenge” rated enemies, and inflated their own standing.
  3. “Fixes” to manipulation have their own, often unintended, consequences and are also susceptible to manipulation: non-anonymous ratings led to rating inflation.
  4. Quantification (and how one implements it) privileges some things over others: nudes were highly rated, more so when measured by number of comments; not so with photos of flowers.
  5. Any “fixes” often take the form of more elaborate, automated, and meta quantification: such as making some users “curators” or labeling them as “helpful.”

Of course, this extends beyond online ratings communities. When politicians sought to manage primary schools on the basis of measures of student achievement, cheating soon followed. My favorite example of this is in Texas, where administrators “disappeared” poorly performing students so that they could not take the standardized tests. Colleges can be measured with respect to class size and selectivity; this too can be “gamed.”

What is most interesting about ranking systems that reduce multiple variables into a single index is how arbitrary they often are. In a classic paper, Richard Becker and his colleagues looked at how they could manipulate the outcomes of “best places to live” rankings. While the methods used to construct the rankings show fairly good agreement at the top and bottom ends, the choice of ranking method and how the variables were weighted did make significant differences in the order (Becker et al., 1987). Malcolm Gladwell described this problem: “A ranking can be heterogeneous … as long as it doesn’t try to be too comprehensive. And it can be comprehensive as long as it doesn’t try to measure things that are heterogeneous” (Gladwell, 2011). Yet many schemes try to do both, including U.S. News’ college rankings. (To get a feel for this, you can play Jeffrey Stake’s ranking game of law schools.)
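A toy example makes Becker et al.'s point tangible: with made-up cities and scores, merely shifting the weights between criteria reverses the ranking.

```python
# Entirely invented data, for illustration only: three "cities"
# scored on three criteria.
cities = {
    "A": {"climate": 9, "jobs": 3, "housing": 5},
    "B": {"climate": 4, "jobs": 9, "housing": 6},
    "C": {"climate": 6, "jobs": 6, "housing": 7},
}

def rank(weights):
    """Order cities by a weighted sum of their criterion scores."""
    def score(city):
        return sum(weights[k] * v for k, v in cities[city].items())
    return sorted(cities, key=score, reverse=True)

# Weight climate heavily and A wins; weight jobs heavily and B wins.
print(rank({"climate": 3, "jobs": 1, "housing": 1}))  # ['A', 'C', 'B']
print(rank({"climate": 1, "jobs": 3, "housing": 1}))  # ['B', 'C', 'A']
```

The underlying data never changes; only the (often unstated) weighting does, which is exactly why single-index rankings are so easy to steer.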

Honestly, I’m confused by all of this. Clearly, we need to measure some things, but we also need to be highly skeptical of what we choose to measure, how we do so, and what we do with the resulting data.

Becker RA, Denby L, McGill R, et al. (1987) Analysis of data from the Places Rated Almanac. American Statistician, 41(3), 169–186. Available from: http://www.jstor.org/pss/2685098 (accessed 19 August 2011).

Gladwell M (2011) The order of things. The New Yorker, Available from: http://www.newyorker.com/magazine/2011/02/14/the-order-of-things (accessed 18 December 2014).

Strathern M (1997) ‘Improving ratings’: audit in the British University system. European Review, 5(3), 305–321. Available from: http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=5299904.

by Joseph Reagle at December 17, 2014 05:00 AM

December 16, 2014

Gerard Meijssen

#Wikidata - WDQ with load balancing

The #Wikipedia app on #mobile is to give you everything that is near you. The question is: should it be based on Wikipedia or on Wikidata data? In order for software to find geo references, “magic words” need to be employed in Wikipedia. These same magic words can be used to harvest the information for Wikidata.

So what are the benefits of using Wikidata over Wikipedia with magic words? Most importantly, there is only one Wikidata and there are 280+ Wikipedias. Everyone seeking information about subjects nearby is as entitled to great information as anyone else.

Wikidata does not have official query functionality, but it does have WDQ. Magnus and Yuvi are finishing the implementation of load balancing for WDQ. So the question is not whether we can serve geo coordinates from Wikidata, but whether we can afford to let this opportunity slip by.

PS: There has been no evaluation of WDQ by WMF engineers yet. Why not?

by Gerard Meijssen (noreply@blogger.com) at December 16, 2014 08:13 AM

December 15, 2014

Wiki Education Foundation

Monthly report for November 2014


  • Joined by volunteer Becky Carmichael, Jami attended the National Women’s Studies Association’s annual conference to recruit instructors to join the Classroom Program. During the 3-day event, they engaged with instructors who teach in Women’s Studies and related departments as a potential target group for closing content gaps on Wikipedia. Becky and Jami hosted an exhibitor’s booth as well as two workshops about designing and implementing a Wikipedia assignment, and generated a lot of interest from new instructors who work in a field that is largely underrepresented on Wikipedia.
  • Between November 15 and 17, Wiki Education Foundation’s board gathered in San Francisco for their first in-person board meeting. The board adjusted Wiki Ed’s bylaws to the needs of our growing organization, discussed mission and vision, and approved our first annual plan and budget. Senior staff members joined most of the board meeting, reported on the past impact, and outlined plans for the future. Board and senior staff agreed to embark on a 6-month strategy process which will kick off in early 2015.
  • The Assignment Design Wizard, which will streamline course planning for instructors, was deployed to a handful of instructors for product testing. The next project, Course Dashboards to reflect student activity for instructors, is now underway.
  • With the help of the Wikipedia Medical community, we have completed a subject-specific handout on navigating the specific requirements for medical information on Wikipedia.


Educational Partnerships

Jami talking to attendees of the National Women’s Studies Association’s annual conference

Jami Mathewson and Frank Schulenburg visited Louisiana State University’s campus to support partners at Communication across the Curriculum in co-hosting a series of workshops about teaching with Wikipedia. The aim of this campus partnership is to support the existing Classroom Program in expanding to more student editors and more instructors. Wiki Ed staff helped with outreach by presenting to LSU instructors about assignment design, how their students can close content gaps in their field, and an introduction to Wikipedia policies that are important for successful assignments. The sessions gave us the opportunity to connect with new participants, and we’re extremely excited about the potential for growth at LSU. For more information about this visit and the goals of campus partnerships, see our blog post, “Exploring the perks of partnership with Louisiana State University.”

In November, Jami also attended the National Women’s Studies Association’s annual conference to recruit instructors to join the Classroom Program. We targeted the instructors who teach in Women’s Studies and related departments because these courses can help close content gaps on Wikipedia. Jami and volunteer Becky Carmichael hosted an exhibitor’s booth for three days as well as two workshops about designing and implementing a Wikipedia assignment, and we generated a lot of interest from new instructors.

Classroom Program

The Fall 2014 term is almost over, and about half of our classes have completed their Wikipedia assignment. We are supporting 97 courses, exceeding our goal of 85 for the term, and we have our largest number of student editors to date.

Current status of the Classroom Program (fall term 2014) in numbers, as of November 30:

  • 97 Wiki Ed-supported courses have Course Pages (41, or 42%, are led by returning instructors)
  • 2,604 student editors are enrolled
  • 743 students have successfully completed the online training

Student work highlights:

We’ve also seen some great work from Avery Dame’s Women, Art, and Culture course at the University of Maryland:

Amy Hughes’s Theater Course at Brooklyn College also had outstanding work, including articles on:

As the term comes to a close, the majority of our students are hard at work finishing up their Wikipedia projects. This month, 66% of students have made edits to the article main space. 1,177 articles have been created, and student editors have contributed to a total of 3,426 articles. We are supporting our largest number of students to date, almost 1,000 more than in the Spring 2014 term. Ian and Adam are continuing to provide students with valuable feedback and encouraging them to nominate their work for Did You Know and Good Article status. We are well on our way to our most successful term yet!


WikiProject Med Foundation’s Lane Rasberry approached us several months ago about using the discipline-specific brochure Wiki Ed produced on psychology as a basis for a medicine handout as well. We thought this sounded like a great idea, and this month, the Editing Wikipedia articles on Medicine handout was released on Wikimedia Commons. Lane and other medicine editors drafted the content based on our template, and we finalized the design and printing of the brochure. These discipline-specific handouts serve students as a reference for editing Wikipedia in certain subjects, heading off potential mistakes new student editors make when editing in that discipline for the first time. In addition to the medicine and psychology handouts, a sociology handout was drafted and underwent a community review process in November.

Communications has also been producing materials combining outreach effort and our brand identity, including handouts for use at academic conferences.

This month also saw our second most-read blog post, “Help us close Wikipedia’s gender gap” from November 13. This post, which explored the role of Wiki Ed through the lens of the gender gap, was shared by more than 250 people on Facebook and Twitter.

Blog posts:

News coverage:

Digital Infrastructure

After completing primary development of the “1.0” version of the wikiedu.org Assignment Design Wizard the previous month, in November we rolled the Wizard out for use by instructors preparing for their Spring 2015 courses. Product Manager Sage Ross focused this month on user testing to ensure a smooth user experience, on refining the content of the wizard, and on integrating it into the training and onboarding process for new instructors. We also conducted an additional development sprint late in November to add several new features to the wizard: several different types of Wikipedia assignments are now supported, and assignment timelines will adapt to match the dates of each course. These new features are currently available for testing at http://wizard-testing.wikiedu.org, and they will be rolled out to the production version of the wizard in early December. You can check out our code — freely licensed, of course — at our GitHub repo.

This month we’ve also been preparing for our next wikiedu.org development project: course dashboards. The dashboard system, which will be ready in time for use during the Spring 2015 term, is intended to make it easier to keep track of what is going on in a single course, as well as across all of the courses Wiki Ed is supporting. For an individual course, the dashboard will show which articles the student editors are working on, how many page views their work has received, which editors have completed the student training, and more. Across all courses, the dashboard will provide aggregate statistics on the impact our program participants are having, and help us find classes that are exceptionally productive or that may need extra help.

Research and development

Outreach to high-achieving students

In the month of November we started planning the outreach to high-achieving students pilot. The goal of this project, which will be kicked off in spring term 2015, is to get university students to improve Wikipedia content as an extracurricular activity. After getting oriented, Samantha started to establish a program plan, a roles and responsibilities matrix and a high-level timeline for the pilot. She also embarked on developing selection criteria for which universities and which type of existing student groups to target.

Finance & Administration / Fundraising

Finance & Administration

New carpets have been installed in the Wiki Ed offices, adding to the professional environment. The hardwood floors of the office are protected by the Presidio Trust, which had, for some time, limited our options for placing chairs. The new carpets (a gray-blue) work to protect these historic hardwood floors and also dampen sound throughout the office.

The revised budget, which includes additional funding received after creation of our original budget, was approved at the November board meeting. The revised budget was also adjusted to reflect a 12-month period instead of the 18-month period in the original budget. The plan amounts for the monthly and year-to-date comparisons reflected the approved revised budget.

  • Month of November expenses are $139,066 versus the plan of $200,801. The primary cause of the variance for the month is the timing of the receipt and payment of invoices, which accounts for approximately $45k of the variance. Another $10–15k is attributed to expenses that have been delayed to future dates, primarily fundraising events and trips.
  • YTD expenses are $570,671 versus plan of $725,161. Much like the variance for the monthly expenses, much of the variance for the year-to-date is due to timing. Vacancies in staff and the timing of getting staff on board account for approximately $40k. In addition, the delay in getting everything set up in our offices continues to create a variance (approx. $23k) as we slowly get all the things needed to run efficiently.

Wiki Education Foundation Expenses Year to Date, November 2014


Expenses for November, monthly vs plan



During the months of October and November, we focused fundraising efforts on identifying new prospects for the organization’s short- and long-term funding needs. We did an in-depth assessment of needs and priorities, defined these as funding opportunities, and began developing short-form funding proposals that represent them. We began, and will continue, outreach to top prospects (primarily foundations) that will receive these proposals, each according to their interests. We welcomed a major donor (who wishes to remain anonymous) at the Wiki Education Foundation office, and the team presented a comprehensive update on our work, focusing on the programmatic and operational areas supported by a grant from this donor. We are nearing completion and printing of general collateral for fundraising. We are also actively working with several members of the Wiki Education Foundation board to incorporate their input into our fundraising strategy and activities, to further fundraising efforts via board networks, and to plan for a fundraising event in 2015.



Board and senior staff during the first in-person board meeting.

Between November 15 and 17, Wiki Education Foundation’s board gathered in San Francisco for their first in-person board meeting. During two full days of meetings, the board adjusted Wiki Ed’s bylaws to the needs of our growing organization, discussed mission and vision, and approved our first annual plan and budget. Senior staff members joined most of the board meeting, reporting on the past impact and plans for the future. Board and senior staff also agreed to embark on a 6-month strategy process which will kick off in early 2015. The board meeting was a great opportunity to get to know each other and create a shared vision for the future of the organization.

Office of the ED

Board and staff of the Wiki Education Foundation in San Francisco.

Current priorities:

  • Planning a 6-month strategy process to kick off in early 2015
  • Getting our pilot project with high-achieving students off the ground
  • Overseeing the planning for our academic conference and Wiki Conference USA next year
  • November started with Frank joining Jami on her visit to LSU in Baton Rouge. As the Communication across the Curriculum staff at LSU will play a major role in our expansion plans for the upcoming spring term, Jami and Frank discussed partnership opportunities. Frank also met with a number of faculty members as well as with staff from LSU’s Natural History Museum to explore ways we might work together in the future.
  • November was largely dominated by the in-person board meeting held in San Francisco (see above under “Board”). As a result, the board approved the annual plan and budget for our ongoing fiscal year, which makes this document the first official annual plan and budget for our organization.
  • Also in November, Frank started onboarding Samantha Erickson, who joined our organization as manager for our outreach to high-achieving students pilot. Samantha’s work is happening in the context of Wiki Education Foundation exploring new ways of encouraging students to improve Wikipedia’s content. Our existing classroom program is already a highly impactful way of filling content gaps. We believe there are more ways of bringing universities and Wikipedia together. Our pilot with high achieving students is a first step in this direction.

Visitors and guests

  • Pavel Richter, Wikimedia Deutschland
  • Tobias (Benutzer:Church of emacs), German Wikipedia
  • Diana Strassmann, Wiki Education Foundation board member

by Eryk Salvaggio at December 15, 2014 06:35 PM

Not Confusing (Max Klein)

Wikidata and the Measure of Nationality: The Germanic Shift

Best viewed in the IPython notebook viewer.

<iframe class="iframe-class" frameborder="0" height="1000" scrolling="yes" src="http://nbviewer.ipython.org/github/notconfusing/WIGI/blob/master/German%20Austrian%20Analysis.ipynb" width="600"></iframe>

by max at December 15, 2014 07:51 AM

Tech News

Tech News issue #51, 2014 (December 15, 2014)

← previous | 2014, week 51 (Monday 15 December 2014) | next →
Other languages:
čeština • ‎English • ‎español • ‎suomi • ‎français • ‎עברית • ‎italiano • ‎日本語 • ‎русский • ‎українська • ‎中文

December 15, 2014 12:00 AM

December 14, 2014

Gerard Meijssen

#Wikipedia - The Time Jumpers

Recently one of the Time Jumpers, Dawn Sears, died. She was married to one of the other band members, Kenny Sears; he is the one playing the fiddle to the right.

According to her Wikipedia article, she is indeed married to a Kenny Sears. This link, however, is a redirect to someone else. The husband appears on the Time Jumpers article as Kenny Sears (fiddler), which is a red link.

It is easy enough to add an item for Mr Sears in Wikidata and link him to both the Time Jumpers and to his wife. It would be good if the Wikipedia red link could be linked to Wikidata. When red links are linked to Wikidata, it is possible to relate them to existing items. In this way, information becomes available that can be used for a possible article. To bring this information to an editor, it just needs to be presented on the red link. That is easy enough.

by Gerard Meijssen (noreply@blogger.com) at December 14, 2014 08:54 AM

December 13, 2014

This month in GLAM

This Month in GLAM: November 2014

  • Australia and New Zealand report: ALIA partnership goes countrywide
  • Belgium report: Workshops for collection holders across Europe; Founding event of Wikimedia Belgium; Wiki Loves Monuments in Belgium & Luxembourg; Plantin-Moretus Museum; Edit-a-thon at faculty library in Ghent University; Image donation UGentMemorie; Upcoming activities
  • France report: Wiki Loves Monuments; mass upload; Musée de Bretagne
  • Germany report: Facts, fun and free content
  • Ireland report: Ada Lovelace day in Dublin
  • Italy report: National Library Conference; Wiki Loves Monuments; Archaeological Open Data; BEIC
  • Netherlands report: Video challenge; Wikidata workshop and hackathon; Wikipedia courses in libraries; WWII editathon
  • Norway report: Edit-a-thon far north at the Museum of Nordland (Nordlandsmuseet)
  • Spain report: Picasso, first Galipedia edit-a-thon, course in Biblioteca Reina Sofía and free portraits
  • South Africa report: Wiki Loves GLAMs, Cape Town
  • Sweden report: Use, reuse and contributions back and forth
  • UK report: Medals, maps and multilingual marvels
  • Special story: ORCID identifiers
  • Open Access report: Open proposal: Wikidata for Research; Open Access signalling
  • Tool testing report: Tools for references, images, video, file usage; Popular Pages
  • Calendar: December’s GLAM events

by Admin at December 13, 2014 10:34 PM

Alex Druk

Example of good analytical posts about WP traffic.

For rather a long time I have followed the Polish site “bigpicture.pl“. Its recently published articles are a good example of simple but very productive and scientifically significant “mini-research”:

by Alex Druk at December 13, 2014 09:10 PM

Wikipedia traffic patterns vs. Google Trends

What are the Wikipedia traffic patterns? How do they reflect general usage of the Internet? A simple comparison of Google Trends patterns with Wikipedia traffic can help answer these questions.

We randomly selected a sample of 10,000 Wikipedia queries. The only selection criterion was the absence of punctuation (mostly commas and parentheses), because Google Trends ignores any punctuation. About 2,000 queries did not have enough traffic to be shown on Google Trends. For the remaining 8,000 queries, correlations were calculated. The distribution of correlation coefficients is shown in the graph below; the average correlation coefficient is 0.45.

Correlation between Wikipedia traffic patterns and Google Trends

From the data in this graph we can draw an important conclusion: Wikipedia traffic patterns reflect the general search patterns seen in Google Trends.
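The per-query comparison boils down to computing a Pearson correlation coefficient between two aligned time series. A minimal sketch in Python (the function name and the toy numbers are illustrative, not the article's actual data):

```python
import numpy as np

def pearson(wiki_views, trends_interest):
    """Pearson correlation coefficient between two aligned time series."""
    wiki = np.asarray(wiki_views, dtype=float)
    trends = np.asarray(trends_interest, dtype=float)
    # np.corrcoef returns the 2x2 correlation matrix; the off-diagonal
    # entry is the coefficient between the two series.
    return float(np.corrcoef(wiki, trends)[0, 1])

# Toy example: daily Wikipedia page views vs. Google Trends interest
# for the same (hypothetical) query. The series rise and fall together,
# so the coefficient is close to 1.
wiki = [120, 150, 300, 280, 140, 130, 310]
trends = [40, 50, 95, 90, 45, 42, 100]
r = pearson(wiki, trends)
```

Repeating this over the 8,000 query pairs and histogramming the resulting coefficients yields a distribution like the graph above.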

Next, I compared the periodic calendar queries “January 1” through “December 31” and “January” through “December” that I studied in one of my previous posts. The results were impressive: the average correlation coefficient was 0.891, meaning that the compared curves were practically identical!

Later, I decided to cheer up the rather boring calendar pages of Wikipedia (like “January 1” through “December 31”) and proposed to include “Other pages read frequently on this day”. However, my proposal met opposition from the famous mathematician Arthur Rubin. His position was that my proposal was “Not suitable for Wikipedia, being self-referential.” He was referring to WP:SELFREF: “Mentioning that the article is being read on Wikipedia … should be avoided where possible.”

This is why I had to prove that the calendar is a global phenomenon: that there are certain topics people are most interested in on a certain day of every year. To do so I had to compare all ~3,000 Wikipedia articles used to build the calendar with Google Trends. It was not easy. First, about half of the titles do not have enough data to be shown on GT. Others show only monthly averages in GT, which is not enough for a statistically valid comparison. Nevertheless, I was able to successfully compare 577 articles. The average correlation coefficient is 0.63, which is much stronger than for random WP articles. Low coefficients were caused mostly by disambiguation. For example, “1447” means the year 1447 AD in Wikipedia, but just a number in Google; this is why the correlation coefficient for this query is -0.07. In general, however, the correlation between calendar traffic patterns on Google and Wikipedia is high: most of the compared patterns (55%) have a coefficient above 0.8. Here is the graph of the coefficient distribution.

Correlation between Wikipedia calendar traffic patterns and Google Trends
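Summary statistics like the ones quoted for the 577 articles (an average coefficient, and the share of queries above a threshold such as 0.8) take only a few lines to compute. A sketch with made-up coefficient values, not the study's real data:

```python
import numpy as np

def summarize(coeffs, threshold=0.8):
    """Mean correlation and the share of coefficients above a threshold."""
    c = np.asarray(coeffs, dtype=float)
    return {
        "mean": float(c.mean()),
        "share_above_threshold": float((c > threshold).mean()),
    }

# Made-up coefficients: three reasonably correlated calendar queries and
# one disambiguation victim (a title that means a year on Wikipedia but
# just a number on Google).
stats = summarize([0.9, 0.85, 0.7, -0.07])
```

Dropping obvious disambiguation victims before summarizing would raise the average further, at the cost of a smaller sample.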

If you find this interesting, please support my proposal to include “Other pages read frequently on this day” on Wikipedia talk:WikiProject Days of the year.

by Alex Druk at December 13, 2014 08:46 PM

December 12, 2014

Gerard Meijssen

#Wikidata - Hans Wallat, conductor

According to the English Wikipedia Mr Wallat was awarded the Musikpreis der Stadt Duisburg. It is a "red link". The German Wikipedia has an article about this award. It lists all the winners of this award.

Using the Linked Items tool, it is trivially easy to add statements for the winners of this award. All but three already have items; those three are red links on the German Wikipedia, and it is easy enough to add items for them.

Arguably they are notable because they complete the list of all the winners of this award. Adding dates is icing on the cake.

On the English Wikipedia it is nice to link to the Reasonator for the award. It links to a Reasonator page for the awardees. It is how we can share in the sum of all available knowledge.

by Gerard Meijssen (noreply@blogger.com) at December 12, 2014 09:27 PM

Wikimedia Foundation

Iberoconf 2014: towards a regional perspective

For the Spanish version of this blog post, click here

Iberoconf is the annual meeting for Wikimedia chapters and user groups that are members of the Ibero-American cooperative. The event was held between November 21 and 23 and was made possible through a Project and Events Grant. The purpose of this grant was to develop the Wikimedia movement in Iberoamerica with a specific approach: to strengthen the affiliates’ capacity, not only as individual organizations, but as vital parts connected through the cooperative’s network.

The program offered participants a series of training workshops in strategic planning, and served as an arena to collaboratively design the approach for programmatic work on GLAM and Education. During their time together, participants were able to review, visualize and evaluate each organization’s outcomes during 2014, discuss them, and design an effective evaluation plan. This international gathering was also the place where Iberocoop affiliates established the vision and mission of the entity.

During Iberoconf 2014, 40 participants were involved in the three-day congress, including volunteers, representatives of chapters and user groups, Wikimedia Foundation emissaries and external consultants. The qualitative leap of this Iberoamerican Wikimedia Conference, as compared to earlier editions, is closely related to the maturing of the different organizations that took part. This was crucial in establishing concrete methodological definitions.

The days were divided into training workshops, presentations on good practice, and cooperative spaces. The workshops included strategic planning, design and evaluation of projects, and metrics. The presentations on good practice allowed chapters and user groups to showcase their results from Education and GLAM programs. In the cooperative spaces, different workshops focused on the collaborative creation of a joint mission.

Working together on strategic planning

For the first time in an event of this kind, and led by external consultants, this gathering adopted a very strong focus on training. The specific goal was to enhance participants’ knowledge of strategic planning, SMART goal-setting and how to choose metrics in planning and reporting. These workshops aimed to improve regional coordination as far as joint efforts are concerned. Iberoconf 2014 also hosted a presentation by the Funds Dissemination Committee (FDC), which had one of its first direct encounters with groups and chapters, in line with its strategy on community liaison.

Ginevra, from Wikimedia Italy, presenting the results from the program Archeowiki 

“Iberoconf 2014 183 FDR0640″ by Fedaro, under CC BY-SA 4.0

The presentations on good practice enabled a space within Iberoconf to share good results and to learn from one another, through successful activities that have been taking place across the region. Italy, Mexico, Venezuela, Chile and Argentina were the main actors in this part of the conference program.

Four other workshops were given by education specialists and current partners of Wikimedia Argentina. These were focused on the chapter’s current work on Education. Even though these activities are still in test mode, they are delivering good results and could become innovative practices in the mid term.

Iberoconf 2014 came to a close with a work day that involved all participants. The discussion was dedicated to a collaborative strategic planning exercise that would help design the programmatic work on education and GLAM for the different chapters and user groups in Iberocoop. It was in this last moment of the conference that the sound success of the event became evident to everyone in the room: participants were able to apply, in a very efficient way, what they had learnt in the previous days about strategic planning. The discussion resulted in a much more fine-tuned compass to guide the Iberocoop Wikimedia organizations’ initiatives. The SMART goals are now available for consultation and further discussion on Iberocoop’s portal on Meta. You can also find more pictures and the presentations shared at the event in its category on Commons.

Wikimedia Argentina, as the organizing team, is very pleased with the results and wants to thank everyone who made Iberoconf 2014 possible!

Valentín Muro is Wikimedia Argentina Communication Coordinator

María Cruz is Learning and Evaluation’s Community Liaison at the Wikimedia Foundation

by wikimediablog at December 12, 2014 09:01 PM

Tapping into the knowledge of indigenous communities

Tatekulu Hijapendje pondering what to answer.
“Elder IK holder in Otjinene, Namibia” by DanielGCabrero, under CC-BY-SA-4.0

Wikipedia has made tremendous progress towards its mission to provide free access to the sum of human knowledge, but indigenous knowledge is largely excluded because the majority of it is not available in writing. Starting from some theoretical considerations, I designed a workshop for the 2014 Participatory Design conference in w:Windhoek to produce and document examples of relevant oral citations. See the detailed workshop description.

In October a small group of Wikipedians traveled to the Namibian village of w:Otjinene to interview elders. The aim of the interview was to directly convert narratives into Wikipedia content with oral citations. We visited the homestead of Tatekulu Festus Hijapendje and his wife Memekulu Olga Muhaindjumba Hijapendje and asked them about traditions, culture and development of the local Herero community. First results are available at Wikipedia:Oral citations experiment/Articles.

Thanks to a Project and Event Grant from the WMF I could invite editors from the region to collect the narratives. Editors wishing to participate had to answer the Call for participation, adopt an article from a list of offers, and improve the article with conventional (written) references. In the village we then planned to ask questions around the ‘blind spots’ in the adopted articles, missing information that no written source could help fill. Now we can present two scenarios for a small set of Wikipedia articles: One restricted to ordinary, written sources, and one that utilizes narratives originating from indigenous knowledge. We hope to be able to dismiss the suspicion by Wikipedia’s editor community that the online encyclopedia has nothing to gain from the inclusion of indigenous knowledge.


The number of applicants was much lower than I had hoped for. From a planned group of twelve participants I could admit only four, one of whom cancelled on short notice, and one editor, from Ghana, could not come because the Namibian authorities did not grant a visa, possibly due to the ongoing Ebola scare.

Down to a group of five (two participants, the driver and translator, a filmmaker, and me), we traveled 250 km into the w:Omaheke Region to collect knowledge from the oral repository of the w:Herero people. On arrival we found that a funeral of an important community member had all but emptied the village on this particular weekend. Only children, the elderly, and very few other inhabitants were around. On top of that, our original plan to attract interviewees by means of a free barbecue turned out to be ill-considered: few people had noticed our arrival, and the Hijapendje couple would not have been able to walk over to our accommodation.

Workshop participants in the elders’ homestead. From left: Muhaindjumba Hijapendje, Gereon Koch Kapuire, Festus Hijapendje. From right: Bobby Shabangu, Peter Gallert.
“Workshop on oral tradition being conveyed by local female and male elderly in Otjinene.” by DanielGCabrero, under CC-BY-SA-4.0

Convincing Wikipedia editors of the encyclopedic value of oral knowledge repositories has also been an uphill battle so far. This value needs to be discussed in two dimensions. First, the usefulness of narrated content for Wikipedia needs to be established; second, ‘good’ oral citations need to be distinguished from ‘bad’ ones – as with written sources, far from everything that could be cited should be cited.

Gathering oral citations is a learning process, and we are just at the beginning. Because interviewers, literally, do not know what information they want, questions can never be very specific. And even if they are, the answer might not be, for example:

Q: Please tell us when this settlement was founded.
A: (after long deliberation) Many, many years ago.

The resulting narrative might take its own direction, and every once in a while needs to be steered back on course, of course without interrupting the elder. And finally, without a bit of prior insider knowledge about the indigenous community certain answers are impossible to understand, and certain subtleties cannot be captured:

Q: In a Herero family, who is making the monetary decisions?
A: (by the woman) We reach an agreement but the man has the final word.
Q: (out of a suspicion because the man did not say anything) So the woman is in charge of the household but the man is in charge of the money?
A: (by the woman) Yes.
Q: Imagine you only have 500 dollars, from which the school fee could be paid, or from which the car could be repaired. What will be the likely outcome?
A: (by the woman) The man will not know we have 500 dollars. I will have paid the school fees already.


Local community members are much less likely to misinterpret oral information than outside researchers; eventually such interviewing should be done by members of the community. But for now, it requires a great deal of Wikipedia knowledge and experience to make an oral citation stick. Experienced editors need to get the ball rolling, both by collecting oral citations and by participating in the inevitable policy debate. We now understand a bit better how to collect oral citations, and how to select them for building an encyclopedia. WMF allowing, I will soon be in the village again to gather more.

Peter Gallert, Polytechnic of Namibia

by wikimediablog at December 12, 2014 06:57 PM

Wikimedia UK

English Heritage and the Archaeological Data Service: What does it mean to Wikipedia?

In October, English Heritage made 84 of their publications freely available online through the Archaeological Data Service. The ADS has been running since 1996 and it brings together a huge amount of information from archaeologists in the UK. Amongst the gems on the site you can find copies of unpublished fieldwork reports (known as grey literature) and copies of journals such as the Proceedings of the Society of Antiquaries of Scotland. These resources are freely available online. The release of the monographs by English Heritage adds to the rich tapestry of information already available.

Digitisation is not universal. Many archaeological societies would like to digitise their publications, particularly those which are out of copyright, but time and money can be difficult to come by. But progress is being made, and the ADS is a valuable resource to researchers.

The release was so popular the ADS server struggled to keep up with the demand.

But what does this mean for Wikipedia? These books aren’t just reliable sources; they are written by leading archaeologists, the likes of Philip Barker, Francis Pryor, and Timothy Darvill. In many cases, these are the definitive works on a particular subject. The 1990 survey and history of Carlisle Castle should be the starting place for anyone looking for detailed information on the site. The account of the excavations at Beeston Castle is the most detailed available.

The breadth and depth of these books is tremendous, covering prehistory right up to the 20th century. It’s not hard to imagine how they could be used in Wikipedia. The pages on Acton Court (224 words) and Camber Castle (265 words) are both very short, yet have entire books written about them. Battle Abbey (686 words), Wroxeter Roman city (698 words), and Bodmin Moor (1,037 words) could be a lot more detailed, and each was read more than 1,000 times during November. Even sites as well known as Hadrian’s Wall, which have lengthy articles, could benefit from the quality of information available.

Wikipedia has an important role to play, not just in helping people discover this information but in accommodating a general audience. These monographs are often technical, and Wikipedia can be an easily accessible bridge. By using these sources to improve Wikipedia, editors are also helping English Heritage and the ADS spread this information and make it more accessible.

Work has already begun: an IP has visited many of the relevant articles and added the publications available through the ADS and English Heritage as sources, but there’s plenty still to do. So browse through the list and see if something catches your eye. Maybe you can be the one to make a difference to the reader.

by Richard Nevell at December 12, 2014 01:44 PM

Wikimedia Foundation

The Grant Advisory Committee Wants You

"UnclesamwantyouGAC" by Alleycat80, under CC-BY-SA-4.0

We want you for the GAC!
“UnclesamwantyouGAC” by Alleycat80 (based on public domain image by J. M. Flagg), under CC BY-SA 4.0

How does $1 million in Wikimedia Foundation Project and Event grants get reviewed each year? Through a collaboration between the Grant Advisory Committee (GAC) and WMF staff. Project and Event Grants (PEG) fund organizations, groups, and individuals to undertake mission-aligned projects that benefit the Wikimedia movement. Grants typically fund offline organizing of events, community outreach, and partnerships in education and GLAM. From Wiki Loves Monuments to regional conferences to thematic editathons, the PEG program funded 55 projects across 25 countries in the last year. Some exciting projects we’ve funded in the last six months include:

"Praia market potatoes manioc" by Cayambe, under CC-BY-SA-3.0

Submission for Wiki Loves Africa
“Praia market potatoes manioc” by Cayambe, under CC-BY-SA-3.0

The GAC plays a key role in the grantmaking process. It is a group of volunteer advisors who support WMF staff in making grants more impactful by mentoring grantees to structure better proposals and ultimately achieve better projects and outcomes. The committee reviews proposals ranging in size from US$500 to $100,000, providing suggestions and constructive criticism directly to grantees and actively engaging in back-and-forth discussion. Finally, GAC members submit their recommendation on whether or not to fund the proposal before WMF staff make a final decision.

"Cropped group photo - Cairo 4th conference" by Samir I. Sharbaty, under CC-BY-SA-3.0

Cairo education conference
“Cropped group photo – Cairo 4th conference” by Samir I. Sharbaty, under CC BY-SA 3.0

The GAC encapsulates the Wikimedia movement’s spirit: it is volunteer-based, its members come from a multitude of countries and backgrounds with varied experience in wiki contributions, and it is thus able to provide “outside the box” thinking to grantees. Their accumulated experience from reviewing grants across the movement allows them to transfer knowledge and best practices about good project planning, goal setting, and developing metrics.

Personally, being a GAC member enriches my wiki volunteering with a unique bird’s-eye view of the many great things happening across the movement, and provides me with great ideas and innovations that I can bring back to my own chapter. Moreover, for every grant I support, I can also read the results in a report (it’s actually more interesting when you helped with the initial setup!), which helps me understand what worked and what didn’t, and hones my planning and strategic thinking. Those lessons and skills are invaluable.

"Wikimania 2014 Grant Committees Training (1)" by AWang (WMF), under CC BY-SA 4.0

Wikimania 2014 Grant Committees Training
“Wikimania 2014 Grant Committees Training (1)” by AWang (WMF), under CC BY-SA 4.0

Lastly, being a GAC member provides an understanding about the many challenges WMF faces as a charitable organization and a grant provider. It allows a deep appreciation of the dedicated staff that is tasked with helping us, the volunteers, do better in fulfilling the vision of making the sum of all human knowledge available to all.

The GAC is currently recruiting new members through December 30th. Visit the Grant Advisory Committee page on Meta to learn more and if you’re interested in joining, leave a message on the Candidates Page! Please join us!

Ido Ivri, Grant Advisory Committee Member and Board Member, Wikimedia Israel

by wikimediablog at December 12, 2014 12:54 AM

December 11, 2014

Wikimedia Foundation

Global Impact: The Wikipedia Library and Persian Wikipedia

Last month, the Wikipedia Library announced another round of digital resource access partnerships to the Wikimedia community. These partnerships allow experienced editors in the community and from all around the globe to access research materials behind a paywall in order to advance our goal of creating and sharing a summary of all human knowledge.

One of the longest lasting and most useful donation partnerships has been with journal archive JSTOR, which saw significant participation from non-English editors. We have seen even more participation from around the world as JSTOR expanded their donations, most prominently from languages like German, Spanish, French and Persian. We had expected uptake from the larger Wikimedia communities operating in European languages, but the Persian community pleasantly surprised us.

To find out more, we asked one of our most active Persian editors with a JSTOR account, User:4nn1l2, why he finds the Wikipedia Library important to his work:

Distribution of Persian speakers in the Middle East and central Asia.

“Persian Language Location Map1” by Mani1, public domain.

Already larger than the Arabic, Hebrew, and Turkish Wikipedias, Persian Wikipedia is now the largest Wikipedia in a Middle Eastern language, and it aspires to become one of the largest and highest-quality Wikipedias in the world. Like its other Middle Eastern counterparts, Persian Wikipedia has developed mostly around political, religious, and historical topics rather than scientific or medical ones. This may be because the Middle East is a fairly small region with a long and rich history. Just consider that Persia, now called Iran, has nearly 2,600 years of recorded history, and all three major Abrahamic religions have their origins in the Middle East. Consequently, the humanities play an important role in Persian Wikipedia.

Most of my contributions to Persian Wikipedia are about the literature and history of Iran and Islam. The journal Iranian Studies, published by Routledge, is one of the most reliable sources concerning that cultural heritage. I had access to this journal through my university library, which subscribed to Taylor & Francis Online. However, things changed when the international sanctions against Iran expanded to include banking transactions. Subscription fees could not be paid, and access to digital libraries was lost one by one. Although there are always some loopholes or backdoors to circumvent the sanctions, the growing difficulties reduced my motivation to work on Wikipedia for free. I found myself constantly asking fellow Wikipedians who live abroad to send me individual articles; only God knows how frustrating that was! However, thanks to the Wikipedia Library, I received JSTOR access, which incorporates Iranian Studies; this new access allowed me to continue my work on articles like Kelidar, the longest Persian novel, and the biography of Husayn Va’iz-i Kashifi, a prolific prose-stylist and influential preacher of the Islamic world. I have not yet nominated them for Good Article review, but am going to do so in the near future and try to promote them.

Alexander III of Macedon – “Great” or “Accursed”?

“BattleofIssus333BC-mosaic-detail1” from the Alexander Mosaic, public domain.

In my opinion, JSTOR access is a must for Persian Wikipedia editors, not just due to the lack of reliable sources in Iran or Afghanistan, but because of the systematic bias and censorship that is so prevalent among books published in these countries. Leaving controversial religious subjects untouched, let me point out my first-hand experience with a historical matter: Iranians (Persians) are so proud of their ancient history that it is nearly impossible to find an academic book about Alexander the Great, or as Iranians call him, “Alexander the Accursed,” just because he ousted the Achaemenid Empire from power about 2,300 years ago. This is not because of the government, but because people themselves would boycott any publisher that dared to publish such books. I was going to write a thorough article about Alexander the Great, but after facing such a hindrance, I had to content myself with just translating the English Wikipedia article. I’m sure there are lots of other similar, untold and unheard stories. It’s perfectly clear that JSTOR access and the Wikipedia Library can’t be waved like magic wands to solve all the problems, but they can give editors some small tools to begin remedying the situation. They can provide reliable sources for volunteer editors who devote their time to building a better world by sharing their knowledge.

I even consider the Wikipedia Library a helpful project to counter the systemic bias in English Wikipedia itself. While every river or hill in North America or Europe has its own article, many vital issues concerning developing countries have not been covered. When global editors like me get free access to rich digital libraries, we are even more encouraged to write decent articles about our culture and geography in your language.

4nn1l2, Persian Wikipedia editor

by wikimediablog at December 11, 2014 11:14 PM

Wiki Loves Earth 2014 winners announced

Wiki Loves Earth, a photo contest of natural monuments, became international for the first time in 2014 and was held in 16 countries. The contest is now over, and after careful evaluation the international jury is happy to announce the winners.

First prize: Carpathian National Park, Ivano-Frankivsk Oblast, Ukraine | by Dmytro Balkhovitin
“Карпатский 05” by Balkhovitin, under CC-BY-SA-3.0

Wiki Loves Earth is a photo contest of natural monuments, in which participants photograph protected areas and upload their photos to Wikimedia Commons. The goal of the project is, on the one hand, to capture as many natural monuments and protected areas as possible under a free license, and on the other, to contribute to environmental protection by raising public awareness.

After several years of successfully organising Wiki Loves Monuments, the idea arose of a similar contest for natural monuments. Wiki Loves Earth was born in 2012 and implemented for the first time in Ukraine, where the contest was held from 15 April to 15 May 2013.

In 2014, Wiki Loves Earth was joined by 15 other countries from four different continents – Europe, Asia, Africa and America. Most countries organised the contest from 1 May to 31 May 2014, while some extended the contest period until 30 June, and Serbia was the last to finish, on 15 July. During the contest, over 70,000 pictures were submitted by more than 3,000 participants.

Similarly to Wiki Loves Monuments, Wiki Loves Earth was organised through numerous national contests, coordinated by local volunteers. The national juries then submitted up to 10 pictures to the international stage of the contest. With 16 participating countries, the international jury had to consider a total of 156 candidate pictures.

The international jury was composed of seven photographers, most of them experienced in nature photography: Diego Delso (Spain/Germany), Muhammad Mahdi (Tanzania/India), Julián Monge-Nájera (Costa Rica), Susanne Plank (Austria), Esther Solé (Spain), Oleg Zharii (Ukraine) and Wikimedian Wikimk (Macedonia). Their profiles can be found in the jury report. After several weeks of evaluation, they selected the following images:

The first prize goes to the view of Carpathian National Park from Hoverla, Ukraine by Dmytro Balkhovitin. This photo gives an exciting view from the highest point of Ukraine — Hoverla — towards Carpathian National Park, one of the largest in Ukraine. The jury was particularly impressed by the composition of the photo and its lighting with great crepuscular rays.

Second prize: Serra dos Órgãos, State of Rio de Janeiro, Brazil | by Carlos Perez Couto
“Amanhecer no Hercules –” by Carlos Perez Couto, under CC-BY-SA-3.0

The second prize was awarded to the photo of God’s Finger Rock in the Serra dos Órgãos National Park, Brazil. This image by Carlos Perez Couto depicts the best-known rock formation in the park, also a symbol of Brazilian mountaineering and of the entire state of Rio de Janeiro. The breathtaking landscape is complemented by an excellent composition that reminded one juror of Chinese paintings.


Third prize: Mukri Nature Park, Rapla County, Estonia | by Janno Loide (Amadvr)
“Hommik Mukri rabas” by Amadvr, under CC-BY-SA-3.0

An image of Estonia’s Mukri Nature Park by Janno Loide (Amadvr) was awarded third place. This original, almost monochromatic image shows an autumn fog over the marshes of Estonia. In addition, the jury appreciated the rendering of the red colour of the sky together with the high level of detail.


The remaining prize-winning photos are the following:

"Шаан-Кая в облаках" by Iifar, under CC-BY-SA-3.0

Fourth prize: Mount Shaan-Kaya, Yalta Natural Reserve, Crimea, Ukraine | by Oleksandr Chernykh (A4ernyh)
“Шаан-Кая в облаках” by Александр Черных, under CC-BY-SA-3.0


Fifth prize: Mount Krchin, Mavrovo National Park, Republic of Macedonia | by Martin Dimitrievski
“Na Golem Krchin” by MartinDimitrievski, under CC-BY-SA-3.0

Sixth prize: Aekingerzand, Drents-Friese Wold National Park, Netherlands | by Uberprutser
“Schapen op het Aekingerzand 1” by Uberprutser, under CC-BY-SA-3.0

Seventh prize: Novyi Svit Sanctuary, Crimea, Ukraine | by Vitaliy Bashkatov (Vian)
“Півострів” by Vian, under CC-BY-SA-3.0

Eighth prize: Zuivskyi Regional Landscape Park, Donetsk Oblast, Ukraine | by Vitaliy Bashkatov (Vian)
“Ранкова палітра” by Vian, under CC-BY-SA-3.0

Ninth prize: Haizer, Djurdjura Massif, Bouira Province, Algeria | by Chettouh Nabil
“Haïzer à Bouira” by Chettouh Nabil, under CC-BY-SA-3.0

Tenth prize: Aïn Legradj Cascade, Bordj Bou Arréridj Province, Algeria | by Chettouh Nabil
“Cascade de Aïn Legradj à Bordj Bou Arreredj” by Chettouh Nabil, under CC-BY-SA-3.0

Eleventh prize: Mount Kukul, Carpathian National Park, Ivano-Frankivsk Oblast & Carpathian Biosphere Reserve, Zakarpattia Oblast, Ukraine | by Volodymyr Khirash (Хіраш Володимир)
“Зимовий Кукуль” by Хіраш Володимир, under CC-BY-SA-3.0

Twelfth prize: Nationalpark Kalkalpen, Upper Austria, Austria | by Isiwal
“NDOÖ 490 Rosenau aHP Rotbuche Zaglbaueralm Stamm” by Isiwal, under CC-BY-SA-3.0-AT

Thirteenth prize: Pool of Cortalet, Aiguamolls de l‘Empordà Natural Park, Catalonia, Spain | by Mikipons
“Aiguamolls de l’Empordà 2” by Mikipons, under CC-BY-SA-3.0

Fourteenth prize: Ezumakeeg, Lauwersmeer National Park, Netherlands | by Bayke de Vries (Baykedevries)
“Brandganzen Ezumakeeg” by Baykedevries, under CC-BY-SA-3.0-NL

Fifteenth prize: Mount Thaletat, Bouira Province, Algeria | by Chettouh Nabil
“Main du juive à Tikjda” by Chettouh Nabil, under CC-BY-SA-3.0

Winners were determined by the seven-person jury. Each participating country could nominate one member to the international jury, although only five countries used this right. Two further jury members were added by the international organising team to increase the diversity of the jury.

156 nominations were submitted to the international organising team by the national juries of the 16 participating countries. Each country was allowed to submit up to 10 images, but some countries decided to submit fewer. Andorra and Spain submitted their images as a joint nomination. The jury selected and ranked the photos in several stages by means of a dedicated web tool.

The full report of the international jury, explaining its selection process and presenting the results together with the jury’s comments, is available here.

Congratulations to the winners, and thank you to everyone who worked on organising the contest this year!

Mykola Kozlenko

Wikimedia Ukraine / WLE International team

by yoonahawikimedia at December 11, 2014 06:39 PM

Semantic MediaWiki

SMWCon Spring 2015 to be held in St. Louis, Missouri, USA

December 11, 2014. The location and dates of the next SMWCon, or Semantic MediaWiki Conference, have been announced: it will be held in St. Louis, Missouri, USA (a Midwestern city sometimes called the “Gateway to the West”) on May 6–8, 2015. It will be hosted at T-REX, a co-working space and tech incubator. The first day will feature tutorials for newcomers to SMW, followed by the two-day regular conference.

For more information, see the SMWCon Spring 2015 homepage.

by Kghbln at December 11, 2014 06:28 PM

Gerard Meijssen

#Wikipedia - #redirects are a one trick pony

Wikipedia and Wikipedians have grown up with the “benefits” of redirects. They are how an article can be found under a different name. In Wikidata they can be labels.

Another use is to link a name to a place within an article where it is mentioned. When such a redirect finds its way into Wikidata, it is assumed that proper information on the subject is available in that Wikipedia article.

Wrong. When you read a Wikipedia article, it is full of all kinds of references to other subjects. All of these references are also available in Reasonator, in the concept cloud. Many of the references are available in statements, and those in turn are available on the referred-to qualifiers as well.

What something like Reasonator could do is provide proper information for all the subjects that do not have an article, and link to articles where they do exist. It currently links to other Reasonator pages, but it would not be hard at all to configure it to link to Wikipedia articles in the “current” language. This would be a redirect on steroids.

by Gerard Meijssen (noreply@blogger.com) at December 11, 2014 07:58 AM

December 10, 2014

Wikimedia Tech Blog

An experiment in self-organization at the Wikimedia Foundation

Recently, the Wikimedia Foundation’s mobile web engineering team underwent a rapid change, more than doubling the number of engineers on the team within a period of four weeks. While this change was eagerly anticipated, the team size increase had unanticipated impacts on our team process and communication. It quickly became obvious that our new team size was making our usual way of working unsustainable.

In many organizations, the inclination might be to automatically split the team in two. However, the mobile web team operates with agility in mind, embracing the principle that “the best architectures, requirements, and designs emerge from self-organizing teams” [1]. Rather than telling the team that they would be split up, it was assumed that “If the team is to be trusted with solving the problem of how to build the product, it seems appropriate to trust it with the decision about how to structure itself to do so.” [2]

In this blog post, I will share the challenges that arose when the team grew, and the experiment that we designed to address these challenges.

Big Team Challenges

The team uses a framework called Scrum to coordinate their work and continuously improve. One metric used in scrum is known as the team’s “velocity”, a measure of how much work the team can do within a timeboxed development cycle. Measuring and tracking velocity can provide clues about what is happening on the team. A team may see their velocity decreasing over a couple of development cycles, and discover that a member has other obligations pulling them away from the work that was planned. In the case of the mobile web team, our velocity shot up. While this is “a good problem to have”, we now know that a sharp increase in velocity comes with its own set of challenges. Being observant about changes and trends in velocity can help a team catch problems early and course correct.
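The bookkeeping behind velocity tracking is simple enough to sketch. The snippet below is an illustrative Python example, not the team's actual tooling: it flags sprints whose completed story points deviate sharply from the trailing average, the kind of signal (a drop, or the sharp increase described here) that merits discussion in a retrospective.

```python
# Illustrative sketch (hypothetical numbers, not WMF tooling): flag sprints
# whose velocity differs from the trailing average by more than a threshold.

def flag_velocity_changes(velocities, threshold=0.25):
    """Return (sprint_index, relative_change) for sprints whose velocity
    deviates from the trailing average by more than `threshold` (25%)."""
    flags = []
    for i in range(1, len(velocities)):
        trailing_avg = sum(velocities[:i]) / i
        change = (velocities[i] - trailing_avg) / trailing_avg
        if abs(change) > threshold:
            flags.append((i, round(change, 2)))
    return flags

# A hypothetical history: steady sprints, then a jump after new hires arrive.
history = [20, 22, 21, 23, 38, 40]
print(flag_velocity_changes(history))  # → [(4, 0.77), (5, 0.61)]
```

Either direction of change can matter: a dip may mean a member is being pulled away, while a spike, as here, may mean the backlog can no longer keep pace with the team.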

As the team prepared onboarding tasks for our new members, we anticipated that it would take the better part of a quarter for new team members to get up to speed. Little did we know that our new team members were actually HUNGRY CODE MONSTERS that would chew through our backlog of work in their first couple of development cycles. While we were rejoicing over our awesome new team members, we also came face to face with a unique and puzzling problem: we ran out of work to do.

By “running out of work to do”, I don’t actually mean that there was nothing for the team to work on. There is always plenty of stuff to work on: code cleanup, low-hanging feature requests, bugs, UI standardization work, helping out other teams, and so on. But as an agile team, our mandate is not just to do whatever work is possible, but to deliver the highest value features to our users at any given point in our development cycle. Working in this way requires a vigilant product owner who is in touch with user needs and always thinking about what the highest priority is at any point in the product life cycle. Under the guidance of our product owner, every two weeks the team meets and carefully considers and plans the work that will deliver the most value for our users. But… there are only so many waking hours in a product owner’s day. With our newly expanded team (lovingly nicknamed “MegaTeam”), the burden of maintaining a well-scoped and prioritized backlog of work suddenly became a lot more challenging, as our pace of working quickly outpaced our capacity to specify the next features to be developed.

Further burden was placed on our tech lead, whose role on our team is to give input on technical considerations of our work so that the team is able to estimate it. More of our tech lead’s time was taken away from coding and diverted towards fleshing out new work. Team members from Design and Analytics also felt the impact. All of a sudden we needed mockups or data analysis at a faster rate than before. The impact of our higher velocity (see sidebar) on our product owner, tech lead, and other team members was one of the first indications that our usual way of working may not be sustainable.

More meetings? Or more freedom…?
A little bit of up-front meeting time every couple of weeks can save hours of waste and confusion. After a one-hour estimation meeting and, a few days later, a short kickoff meeting at the start of the development cycle, the mobile web team has a two-week uninterrupted stretch of time to focus on the work that they committed to. The one exception is a fifteen-minute standup meeting every other day for the team to coordinate their daily work. This short meeting is a quick and easy way to avoid duplication of effort, help unblock other team members by pointing them to helpful information, and otherwise avoid stepping on each other’s toes. At the end of the cycle, a retrospective meeting allows the team to pause and take stock of how things are going, catching pain points before they turn into dysfunctions, as well as celebrating what went well during the past two weeks. Within these lightweight boundaries, there is room to experiment, innovate, reflect, self-organize, and learn.
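To put rough numbers on how lightweight that cadence is: only the one-hour estimation meeting and the fifteen-minute alternate-day standups have stated durations, so the kickoff and retrospective lengths below are assumptions for illustration.

```python
# Back-of-the-envelope estimate of meeting overhead in one two-week cycle.
# Estimation and standup durations come from the text; kickoff and
# retrospective durations are ASSUMED for the sake of the arithmetic.

ESTIMATION_H = 1.0       # one-hour estimation meeting (stated)
KICKOFF_H = 0.5          # "short kickoff meeting" -- assumed 30 minutes
STANDUPS_H = 5 * 0.25    # 15-minute standup every other day, ~5 per cycle
RETRO_H = 1.0            # retrospective -- assumed one hour

WORKING_HOURS = 10 * 8   # ten 8-hour working days in a two-week cycle

overhead = ESTIMATION_H + KICKOFF_H + STANDUPS_H + RETRO_H
print(f"{overhead:.2f} meeting hours, {overhead / WORKING_HOURS:.1%} of the cycle")
# → 3.75 meeting hours, 4.7% of the cycle
```

Even with generous assumptions, scheduled meetings consume only a few percent of the cycle, which is the point of the sidebar: small, timeboxed coordination buys a long uninterrupted stretch of focused work.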

Another symptom of our growing pains manifested in our meetings. As a Scrum team, we have a preset schedule of goal-oriented meetings (see sidebar). These meetings are strictly timeboxed to keep the focus tight and the meetings efficient. Where we previously had established a regular and sustainable cadence for planning and working, with our new larger team we were now unable to complete our meeting objectives in the allotted times. When timeboxes run over and planning becomes challenging, it can be another indicator that the state of the team is not sustainable.

As the team became larger, communication overhead also increased. Even with a well-organized issue tracking system, people were still wary of stepping on each other’s toes and not always sure what the right thing to be doing was. Coupling that with the fact that we were working in new territory (Wikidata!) and learning as we went, things began to feel chaotic; not quite emergency status, but, as our Product Owner put it, maybe time for a Pan-pan.

The Experiment

When it started to become obvious where our pain points were, we decided to hold a special retrospective meeting to focus on the issues we were having as a megateam. A retrospective, a practice that the mobile web team follows regularly, is an opportunity for a team to reflect on what is working well, what is not working well, and what they want to improve.

For this particular retrospective, we reviewed a list of observations about what had been happening as our team had grown, and did a one-word check-in [3] to get a sense of people’s reactions to the events of the past few weeks. We then spent some time diving deeper into analyzing what had been occurring, and listing possible next steps to mitigate our problems.

One of these possible next steps was to split the team into two teams. In the course of discussion, the notion of a hard split vs a soft split emerged. The hard split would be two different teams with separate work backlogs, planning meetings, and issue tracking. The soft split would keep some parts of our current process intact, like team standups and planning meetings, but introduce a new additional backlog of work that team members could organize around and draw from.

We decided to try a soft split as an experiment, and see what the impact would be on our megateam problems. Our theory was that a separate backlog and backlog owner would relieve some of the pressure of having to scope out work at a faster than humanly possible rate, give our fast-working team an alternative stream of work to draw from, and perhaps lessen some of our coordination problems by having two distinct focus areas.

We identified a body of work that would comprise our new, second backlog, generally focused on making our codebase easier for new contributors to work on. We captured this work in a new backlog, and designated an owner for it. We set up some working guidelines around the practical aspects of planning and working (e.g. when and how the team would groom and work on the new backlog, who should be involved, and what to do in the case of conflicts between the two backlogs). We set goals and a timeline for trying out this new dual backlog approach, and started our experiment at the beginning of the next development cycle.
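Mechanically, the soft split boils down to a simple priority rule: draw from the primary feature backlog first, and fall back to the secondary backlog when it runs dry. The sketch below is a hypothetical illustration (invented names and tasks; the team's real backlogs live in their issue tracker), not the team's actual process tooling.

```python
# Illustrative model of a "soft split": two prioritized backlogs, with the
# feature backlog preferred and the secondary (code-health) backlog as a
# fallback so a fast-moving team never runs out of prioritized work.
from collections import deque

class SoftSplitBacklogs:
    def __init__(self, feature_tasks, code_health_tasks):
        # Each backlog is kept in priority order by its own backlog owner.
        self.feature = deque(feature_tasks)
        self.code_health = deque(code_health_tasks)

    def next_task(self):
        """Return the highest-priority available task, preferring features."""
        if self.feature:
            return self.feature.popleft()
        if self.code_health:
            return self.code_health.popleft()
        return None  # both backlogs empty: time to groom again

# Hypothetical tasks for illustration only.
backlogs = SoftSplitBacklogs(
    ["new reader-facing feature"], ["refactor module", "document codebase"]
)
print(backlogs.next_task())  # prints "new reader-facing feature"
print(backlogs.next_task())  # feature backlog empty: prints "refactor module"
```

The design choice this models is exactly the relief valve described above: when the product owner cannot groom feature work fast enough, the team pulls from the second, independently owned backlog instead of idling or inventing work ad hoc.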

What’s Next?

Working in short cycles
By working in short cycles and inspecting and adapting regularly, we create conditions where we can fail and learn quickly, and apply our learnings. Perhaps a feature developed during a two-week sprint has not proven popular with users, or an insurmountable technical problem was discovered while working on a task, revealing that it doesn’t make business sense to continue the work. If an “experiment” fails, it can be hard to feel like you’re throwing work away, but if the time invested is a couple of days during a two-week cycle rather than the hundreds or thousands of hours one might put in to a longer, non-iterative project, the overall risk and investment is much lower.

Our experiment is still in progress and we are learning along the way. We decided to run the experiment over two full development cycles (see sidebar) and revisit it at the beginning of the new year. As we approach Q3, mobile reader engagement features are coming into focus, and we’ve received the call to “double down” on mobile efforts. While we still don’t know if our experiment will be a success and how our work might change in the coming quarter, the mobile web team has already exhibited agility in the face of uncertainty and change, which will surely serve us well in the months to come.

In the meantime, we’ll keep our eye on our current experiment and keep learning from it. Find out what happens next in Part II of The Adventures of MegaTeam, coming in early 2015!

Kristen Lans, WMF Team Practices Group, Mobile Web and App Team ScrumMaster


  1. Beck, Kent; et al. (2001). “Manifesto for Agile Software Development”. Retrieved 14 June 2010.
  2. Cohn, Mike. 2010. Succeeding with Agile: software development using Scrum. Upper Saddle River, NJ: Addison-Wesley, p.188
  3. Derby, Esther, Diana Larsen, and Ken Schwaber. Agile retrospectives: Making good teams great. Raleigh, NC: Pragmatic Bookshelf, 2006, p.40

by Guillaume Paumier at December 10, 2014 11:40 PM

Content Translation: Announcing Version 3

Exciting new features are now available in the third version of the Content Translation tool. Development of the new version was recently completed and the newly added features can be used in Wikimedia’s beta environment. To use it, you first need to enable the Content Translation beta-feature in the wiki, then go to the Special Page to select the article to translate. This change in behavior was done in preparation for the activation of Content Translation as a beta-feature on a few selected Wikipedias in early 2015.

The Content Translation user dashboard


Two important features have been included in this phase of development work: a user dashboard, and the ability to save and continue unfinished translations.

Users can currently use these two features to monitor only their own work. The dashboard (see image) displays all the published and unpublished articles created by the user. Unpublished articles are translations that the user has not yet published to their user namespace on the wiki. These articles can be opened from the dashboard so that users can continue translating them. The dashboard is presently at a very early stage of development, and enhancements will be made to enrich its features.

Additionally, the selector for source and target languages and articles has been redesigned. Published articles with an excessive amount of unedited machine-translated content are now placed in a category so that they can be easily identified.

Languages currently available with Apertium’s machine translation support are Catalan, Portuguese and Spanish. Users of other languages can also explore the tool after they have enabled the beta-feature. Please remember that this wiki is hosted on Wikimedia’s beta servers and you will need to create a separate account.

Upcoming plans and participation

Development work is currently under way on the fourth version of this tool. During this phase, we will focus on making the translation interface stable and on preparing the tool for deployment as a beta-feature on several Wikipedias.

Since the first release in July 2014, we have been guided by the helpful feedback we have continuously received from early users. We look forward to wider participation and more feedback as the tool progresses with new features and is enabled for new languages. Please let us know your views about Content Translation on the Project talk page, or by signing up for user testing sessions. You can also participate in the language quality evaluation survey to help us identify new languages that can be served through the tool.

Runa Bhattacharjee, Wikimedia Foundation, Language Engineering team

by Guillaume Paumier at December 10, 2014 11:24 PM

Sue Gardner

What’s wrong with restricted grants

I know a lot of people who’re starting up new nonprofits, and most don’t have any prior experience with fundraising. That was me, back in 2007, when I took over the Wikimedia Foundation. And so, the purpose of this post is to share some of what I learned over the past eight years, both from my own experience and from talking with other EDs and with grantmakers. I’m focusing on restricted grants here because they’re the most obvious and common funding source for nonprofits, especially in their early stages of development.

Restricted grants can be great. Grantmaking institutions fund work that’s socially important, that’s coming out of organizations that may have no other access to funding, and that is often risky or experimental. They take chances on people and organizations with good ideas, who may not yet have a track record. That’s necessary and important.

But restricted grants also pose some specific problems for the organizations seeking them. This is well understood inside nonprofitland, but isn’t immediately obvious to people who’re new to it.

Here are the five main problems with restricted grants.

Restricted grants can be administratively burdensome. At the WMF, we actively sought out restricted grants for about two years, and afterwards accepted them only rarely. We had two rules of thumb: 1) We would only seek restricted grants from organizations we knew well and trusted to be good partners with us, and 2) We would only seek restricted grants from organizations that were roughly our size (by staff headcount) or smaller. Why? Because restricted grants can be a lot of work, particularly if the two organizations aren’t well aligned.

Big institutions have a big capacity to generate process: forms to fill out, procedures to follow, hoops to jump through. They have lots of staff time for meetings and calls and email exchanges. They operate at a slower pace than smaller orgs, and their processes are often inflexible. People who work at grantmaking institutions have a responsibility to be careful with their organization’s money, and want to feel like they’re adding value to the work the nonprofit is doing. Too often, this results in nonprofits feeling burdened by expensive process as they procure and report on grants: time that you want to spend achieving your mission, instead risks getting eaten up by grantmakers’ administrative requirements.

Restricted grants risk overwriting the nonprofit’s priorities with the grantmakers’ priorities. At the WMF, we didn’t accept grants for things we weren’t planning to do anyway. Every year we developed our plan, and then we would (sometimes, with funders we trusted) seek funding for specific components of it. With funders we trusted, we were happy to get their input on our priorities and our plans for executing them. But we weren’t interested in advancing grantmakers’ goals, except insofar as they overlapped with ours.

Too often, especially with young or small non-profits, I see the opposite.

If an organization is cash-strapped, all money looks good. But it’s not. Here’s a crude example. Let’s say the WMF knows it needs to focus its energy on mobile, and a funder is interested in creating physical spaces for Wikipedians to get together F2F for editing parties. In that context, agreeing with a funder to take money for the set-up of editing cafes would pose a distraction from the mobile work the WMF would need to be doing. An organization’s capacity and energies are always limited, and even grants that fully fund a new activity are necessarily drawing on executive and managerial attention, as well as the organization’s support functions (human resources, accounting, admin, legal, PR). If what a restricted grant funds isn’t a near-perfect fit with what the organization hopes to accomplish regardless of the funding, you risk your organization getting pulled off-track.

Restricted grants pull focus from core work. Most grantmakers want their money to accomplish something new. They’re inclined to see their grants as seed money, funding experiments and new activity. Most successful nonprofits, though, have important core work that needs to get done. At the WMF, for example, that core work was the maintenance and continued availability of Wikipedia, the website, which meant stuff like hosting costs, costs of the Ops team, site security work and performance optimization, and lawyers to defend against censorship.

Because restricted grants are often aimed at funding new activity, nonprofits that depend on them are incentivized to continually launch new activities, and to abandon or only weakly support the ones that already exist. They develop a bias towards fragmentation, churn and divergence, at the expense of focus and excellence. An organization that funds itself solely or mainly through restricted grants risks starving its core.

Restricted grants pull the attention of the executive director. I am constantly recommending this excellent article by the nonprofit strategy consultancy Bridgespan, published in the Stanford Social Innovation Review. Its point is that the most effective and fastest-growing nonprofits focus their fundraising efforts on a single type of funder (e.g., crowdfunding, or foundations, or major donors). That’s counter-intuitive because most people reflexively assume that diversification=good: stable, low-risk, prudent. Those people, though, are wrong. What works for e.g. retirement savings, is not the same as what works for nonprofit revenue strategy.

Why? Because organizations need to focus: they can’t be good at everything, and that’s as true when it comes to fundraising as it is with everything else. It’s also true for the executive director. An executive director whose organization is dependent on restricted grants will find him or herself focused on grantmaking institutions, which generally means attending conferences, serving on juries and publicly positioning him or herself as a thought leader in the space in which they work. That’s not necessarily the best use of the ED’s time.

Restricted grants are typically more waterfall than agile. Here’s how grants typically work. The nonprofit writes up a proposal that presumes it understands what it wants to do and how it will do it. It includes a goal statement, a scope statement, usually some kind of theory of change, a set of deliverables, a budget, timeline, and measures of success. There is some back-and-forth with the funder, which may take a few weeks or many months, and once the proposal is approved, funding is released. By the time the project starts, it may be as much as an entire year since it was first conceived. As the plan is executed the organization will learn new things, and it’s often not clear how what’s been learned can or should affect the plan, or who has the ability to make or approve changes to it.

This is how we used to do software development and in a worst-case scenario it led to death march projects building products that nobody ended up wanting. That’s why we shifted from waterfall to agile: because you get a better, more-wanted product, faster and cheaper. It probably makes sense for grantmaking institutions to adapt their processes similarly, but I’m not aware of any who have yet done that. I don’t think it would be easy, or obvious, how to do it.

Upshot: If you’re a new nonprofit considering funding yourself via restricted grants, here’s my advice. Pick your funders carefully. Focus on ones whose goals have a large overlap with your own, and whose processes seem lightweight and smart. Aim to work with people who are willing to trust you, and who are careful with your time. Don’t look to foundations to set your priorities: figure out what you want to do, and then try to find a grantmaker who wants to support it.

Filed under: Fundraising, Leadership, Nonprofits, Revenue

by Sue Gardner at December 10, 2014 06:26 PM

Wiki Education Foundation

Two Quarterly Reviews now available

Wiki Ed is dedicated to sharing resources and information about our work. To that end, staff members provide a Quarterly Review on a rotating basis. These reviews cover accomplishments from the past three months, and highlight future goals.

Slides and notes on two Quarterly Reviews, from Communications (slides) and from Digital Infrastructure (slides), are now available.

by Eryk Salvaggio at December 10, 2014 05:56 PM

Andre Klapper

Wikimedia in Google Code-in 2014: The first week

Wikimedia takes part in Google Code-in (GCI) 2014. The contest has been running for one week and students have already resolved 35 Wikimedia tasks. You can help make that number grow (see below).

Google Code-in 2014

Some of the achievements:

  • Citoid offers export in BibTeX format (and more contributions)
  • Analytics’ Dashiki has a mobile-friendlier view
  • Echo’s badge label text has better readability; Echo uses the standard gear icon for preferences
  • Wikidata’s Wikibase API modules use i18n for help/docs
  • Two MediaWiki extensions received patches to not use deprecated i18n functions anymore
  • MediaWiki displays an error when trying to create a self-redirect
  • The sidebar group separator in MediaWiki’s Installer looks like in Vector
  • The Wikimedia Phabricator docs have video screencasts and an updated Bug report life cycle diagram
  • Huggle’s on-wiki docs were updated; exceptions received cleanup
  • Pywikibot’s replicate_wiki supports global args; optparse was replaced by argparse
  • Reasons for MediaWiki sites listed as defunct on WikiApiary were researched
  • Wikimedia received logo proposals for the European Wikimedia Hackathon 2015
  • …and many more.

Sounds good? Want to help? Then please spend five minutes to go through the tasks on your to-do list and identify simple ones to help more young people contribute! Got an idea for a task? Become a mentor!

by aklapper at December 10, 2014 05:47 AM

Wikimedia Foundation

Using Licenses in an easy (and legal) way

Title page of “Open Content – A Practical Guide to Using Creative Commons Licences”
“Cappadocia Balloon Inflating Wikimedia Commons”, by Benh LIEU SONG / design by Markus Büsges, leomaria designbüro, Germany, CC BY-SA 3.0

According to Wikipedia, “all rights reserved” is a phrase that originated in copyright law as a formal requirement for copyright notice.[1] It means that permission for any reuse of copyrighted material must be granted by the originator themselves. Only after receiving some kind of permission can copyrighted material such as pictures, sounds or videos be used in the context of a blog or other information material.

Over the past ten years, different free license models have been developed to make this situation more user-friendly. They suggest that not “all” but rather “some” rights should be reserved to the originator, giving reusers the opportunity to use, share, combine and spread content without working out agreements with the authors beforehand. Free licenses, e.g. the ones by Creative Commons, grant or exclude different kinds of reuse in the form of standardized license agreements. Within the scope of these licenses, users can share copyrighted material as they please.

By using free licenses instead of traditional copyright, creators can reach a far wider audience with their works. At a time when people look for information online first, this is what makes it possible, for example, to legally run the biggest online encyclopedia, consisting of copyrighted texts and pictures.

Free licenses are an important tool for making knowledge available to all people, but not everyone knows exactly how they are to be used. At the conference “Shaping Access”, new guidelines were presented that approach this topic in a descriptive and comprehensible way.

Planetario de la Ciudad de Buenos Aires,” by  Emmanuel Iarussi / design by Markus Büsges, leomaria designbüro, Germany, CC BY-SA 3.0

Open Content – A Practical Guide to Using Creative Commons Licences” is a publication by the German Commission for UNESCO, the North Rhine-Westphalian Library Service Centre and Wikimedia Deutschland. Media attorney Dr. Till Kreutzer elaborates on the advantages of Creative Commons licenses and exemplifies different usage scenarios of the different licenses. This publication is also the first one to go into detail about the newest version of these licenses: version 4.0.

The guide extensively deals with all six license modules of the Creative Commons licenses, the ensuing opportunities as well as questions. The license “Creative Commons Attribution-ShareAlike,” e.g., allows adaptations of the respective content but it is not always clear what exactly an adaptation is. And how does the license deal with adaptations in the form of remixes or mashups? Kreutzer also talks about what it means to make a piece of work available “publicly” or “privately.” In addition, there is a section about trademarks and moral rights and their relationship to free licenses.

The “NonCommercial” license module also leaves a wide margin for interpretation. It doesn’t follow from the license agreement what exactly is to be understood as “non-commercial” usage.[2] Is it, for example, commercial or non-commercial when a publicly financed institution incorporates content on its website? This was the subject of a recent legal case in Germany, where the Higher Regional Court of North Rhine-Westphalia ruled that the publicly funded Deutschlandradio acted within its rights when it included a picture on its website that was only to be used in non-commercial contexts. The court declared that as an entity of the public sector, Deutschlandradio is inherently not profit-oriented.

Fleur de givre L” by Annick MONNIER / design by Markus Büsges, leomaria designbüro, Germany, CC BY-SA 3.0

Practical tips, e.g. on how to find freely licensed content online or how to attach a license notice, make up the last chapter of “Open Content – A Practical Guide to Using Creative Commons Licences.” Thus, the guide is not only a great starting point for creators but also for reusers of content. It is supposed to encourage everyone to make copyrighted material available to the public, to deliberately give up control and to combine freely licensed material in a creative way.

“Open Content – A Practical Guide to Using Creative Commons Licences” is available as full text on Meta-Wiki and as a PDF. The text itself is licensed under CC BY. We encourage you to adapt it, to translate it into your language and to share it with the whole Wikimedia movement and beyond. If you have any comments, you can reach me via katja.ullrich@wikimedia.de or leave a comment on the talk page on Meta. Please feel free to share any new versions of the guide you may create over time with me.

  1. https://en.wikipedia.org/w/index.php?title=All_rights_reserved&oldid=614754821
  2. The brochure “Free Knowledge thanks to Creative Commons licenses – Why a non-commercial clause often won’t serve your needs” published by iRights, Creative Commons and Wikimedia Deutschland actually deals with this topic in a very accessible manner.

Katja Ullrich, project manager at Wikimedia Deutschland

by wikimediablog at December 10, 2014 01:03 AM

December 09, 2014

Royal Society of Chemistry - Wikimedian in Residence (User:Pigsonthewing)

Tips for new Wikipedia editors

Having recently joined the Royal Society of Chemistry's staff, I know all too well that walking into a big room full of strangers and starting work under their gaze can be daunting. Thankfully, everyone was really welcoming and I soon felt at home. When that room is Wikipedia, and you have millions of new colleagues, none of whose body language you can see, it can be even more of a challenge. But it needn't be. Here are a few tips for new editors that I've picked up over the years.

1. Create an account. Yes, you can edit most articles without signing in, but creating an account gives you additional editing privileges. It allows you to easily find pages you've edited previously. It allows other people to leave you messages, and gives them somewhere (your "talk" page) to reply to yours. You can "watch" (or "favourite") articles you want to keep track of. And once you know your way around, you can configure the interface and add tools to suit your way of working.

2. Give an email address. When you sign up you have the option of supplying an email address or not. Do. It's the only way you can recover your account if you forget your password. You can, at any time, disable the ability for other editors, or Wikipedia itself, to send you messages.

3. Choose your user name wisely. Your user name is yours, and not your employer's or other organisation's. So not "RSC-editor" or "Acme chemist". You can use a real name, a plausible pseudonym like "JoeSmith43" or a nickname which is or isn't associated with your real identity - it's up to you (mine is "Pigsonthewing", and that's my name on Twitter and other social media sites, too). If you decide you made a mistake in your choice of user name, just abandon the account and start a new one (the only time that's frowned upon is if you're trying to do something nefarious, like using multiple accounts to support each other in a discussion). Similarly, you may operate two accounts, such as one for professional matters and another for writing about hobbies or your favourite band. But again, only if you don't abuse them as described above.

4. Introduce yourself. Write a little about yourself on your user page. Here is mine, but that's been around for some time and I've a lot to talk about: you don't have to give a full CV, and you can choose not to reveal anything about your identity, gender, employer (but see the next section), location or age. But you might like to say something like: "My name is Jane and I'm a chemistry journal editor from the United Kingdom. In my spare time I go rock climbing" - it makes you seem more approachable and helps to establish your areas of interest. Also, if you have an ORCID identifier, and are using your real identity, you can display it on your user page. See the guide to using ORCID in Wikipedia.

5. Declare any potential conflicts of interest. Wikipedia requires editors to declare any involvement which may lead to a conflict of interest in what they write about. So if you work for the Royal Society of Chemistry, and plan to edit articles about it (say, one of its journals or awards), you must mention that. You can do so in each relevant edit summary, on the article's talk page, or on your user page - the latter has the advantage that you only need to do it once. Even then, you must not use Wikipedia to promote your employer or to advocate their position (I'll return to this in a future post). For the most part, it's best not to edit where you have a conflict of interest, or at least not until you understand Wikipedia's culture and policies. You should never write a Wikipedia autobiography, nor an article about a close relation or colleague.

6. Start small. You might start by creating a short new article (especially if you come to one of the training sessions or public events I'll be running!), or by expanding a short ("stub") article created by someone else. Otherwise start with a few small edits; add a fact here, correct a spelling there. While the maxim "anybody can edit Wikipedia" is generally true, if you start by making major revisions to an established article, you may find that other editors simply undo your work. That can feel hostile (and sometimes is), but while you're learning the ropes, it's best not to get dragged into "flamewars" over such things.

7. Cite your sources. Whatever fact you add to a Wikipedia article needs to be cited to a reliable source (and our journals are very reliable!). It doesn't matter that it's a subject in which you gained your degree, or that you discovered it yourself. On Wikipedia we ask everyone to prove what they're saying - because we don't really know who any editor is, it's the only fair way to work.

8. Use talk pages. If your edits are undone ("reverted"), don't simply reapply them. Use the article's talk page to ask why, and suggest that other editors review them and reapply them or find a compromise. You can even do this first, if you want to make a big change to a stable article. Again, it's not required, but it can make your life easier, especially when you are starting out.

9. Visit the Teahouse. The Teahouse is a special part of Wikipedia for new editors. It's staffed by volunteers who specialise in supporting people as they start out on their Wikipedia adventure. So you can ask questions that might seem basic, or which have been asked before, and no-one will mind. Teahouse volunteers will demonstrate technical tricks, fix problems, and generally hold your hand for as long as you need.

10. If all else fails, do something else. No, I don't mean go away and watch television! But if you find that Wikipedia editing is not for you, you can help on one of the other projects, for example by adding definitions to Wiktionary, quotations to Wikiquote, or facts and figures to Wikidata. Perhaps you can take pictures and upload them to Wikimedia Commons, or help to catalogue and describe others that are already there. Or scan old, out-of-copyright books, magazines or journals and upload them to Wikisource. The great thing about the family of Wikimedia projects is that everyone finds a niche.

I hope you do enjoy Wikipedia editing, and look forward to hearing how you get on.

Posted by Andy Mabbett
Dec 9, 2014 2:52 pm

December 09, 2014 01:52 PM

Andy Mabbett (User:Pigsonthewing)

Finding ORCID identifiers used in Wikidata & Wikipedia

As you may know, I was appointed Wikipedian in Residence at ORCID in June this year.

I’ve previously written a guide to using ORCID identifiers in Wikipedia.

A new tool, ‘Resolver’, by my friend Magnus Manske (who has awesome coding skills, and is generous with them), allows you to find whether a particular ORCID identifier is used in Wikidata (and thus in one or more Wikipedia projects, in any language).

By entering the property “P496” (the Wikidata property for an ORCID ID) and the ORCID ID value (the short form, e.g. “0000-0003-4402-5296”, not the full identifier, “http://orcid.org/0000-0003-4402-5296”) into Resolver, the relevant Wikidata page, if any, is returned. At the foot of that page are links to Wikipedia articles (again, if any).

Resolver screenshot

An ORCID identifier query in Resolver

Alternatively, you may compile a URL in the format https://tools.wmflabs.org/wikidata-todo/resolver.php?prop=P496&value=0000-0003-4402-5296 – which will automagically redirect.
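If you want to generate such URLs in bulk, they are easy to build programmatically. Here is a minimal Python sketch; the `resolver_url` helper name is my own, but the `prop` and `value` query parameters are the ones shown in the URL format above:

```python
from urllib.parse import urlencode

def resolver_url(prop, value):
    """Build a Resolver URL for a given Wikidata property and
    identifier value (e.g. P496 for ORCID identifiers)."""
    base = "https://tools.wmflabs.org/wikidata-todo/resolver.php"
    return base + "?" + urlencode({"prop": prop, "value": value})

# For an ORCID identifier, use the short form of the ID:
print(resolver_url("P496", "0000-0003-4402-5296"))
# https://tools.wmflabs.org/wikidata-todo/resolver.php?prop=P496&value=0000-0003-4402-5296
```

Opening the resulting URL in a browser triggers the redirect to the matching Wikidata page.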

Note that this works for articles, but not identifiers used on Wikipedia editors’ user pages, which have no Wikidata equivalent.

Resolver works with other unique identifiers, too, such as VIAF, or BBC Your Paintings artist identifiers, and many more. If you want to know why that’s important, see Andrew Gray’s post, “Wikidata identifiers and the ODNB – where next?“. Resolver is not just for people, though. It will also resolve unique identifiers for other types of subjects, such as BBC programme IDs or ChemSpider IDs for chemical compounds.

by Andy Mabbett at December 09, 2014 12:30 PM

Wikimedia Foundation

Inaugural Monuments of Spain Challenge completed

Albacete monuments have been added to more than a dozen Wikipedias.
“Pasaje Lodares” by Chowdon, under Public Domain

In October 2014, Wikimedia España ran the Monuments of Spain Challenge. This contest builds on our Wiki Loves Monuments experience in order to spread knowledge of Spanish culture (mainly architectural, in this case) around the world.

Spain is a multicultural country, with eight different languages present in the Wikimedia projects: Aragonese, Asturian, Basque, Extremaduran, Catalan, Galician, Occitan and Castilian Spanish. It is also the cradle of a language with a global presence: Spanish. So we in Wikimedia España felt the need to reach out to a multilingual world, and tried to make our contest known in 145 languages.

Just saying thanks took some effort
“Nota agradecimiento MoSC” by A. Barra, under CC BY-SA 3.0

Participants in the Challenge were asked to edit (create, translate or expand) articles about monuments. This is an idea we borrowed from Wikimedia Sverige, with some influence from contests by the Amical Catalan language group and the Welsh Wikipedia. We are very grateful to them, as their experiences helped our team address different issues along the way.

The results of the Challenge have been encouraging. There have been 2,079 articles edited, corresponding to approximately 1,086 different Wikidata items in 37 languages, from Guarani and Korean to Welsh and Malay. We are particularly proud of the Awadhi and Maithili edits, which are still in the Incubator. Altogether, 46 people participated in the edits.

The winners were users Alphama in the general category, and Rauletemunoz in the languages of Spain special contest. Alphama completed edits almost entirely in Vietnamese, and Rauletemunoz did so in Catalan. Some editors used seven, nine or even eleven different languages.

We started this endeavor with the best intentions, but little practical knowledge. This has been a rewarding experience, as it improved our organization and helped us learn more about contests, monuments, people, and languages. We have had the help of many people we did not expect to work with, and our knowledge of communities within the Movement has grown too.

This has been just a step, and we are willing to take more. We have learned, as I said, so the next one will be different. Our goal is still to improve.

User:B25es, Wikimedia España

by wikimediablog at December 09, 2014 04:35 AM

December 08, 2014

Wiki Education Foundation

“Did You Know” is abuzz with wasps

"Wasp March 2008-3" by Alvesgaspar - Own work. Licensed under CC BY-SA 3.0 via Wikimedia Commons.

Wasp March 2008-3” by AlvesgasparOwn work. Licensed under CC BY-SA 3.0 via Wikimedia Commons.

Regular readers of the “Did You Know” section of Wikipedia’s main page may have noticed a lot of buzz about wasps lately.

Articles from nine students in Dr. Joan Strassmann’s Behavioral Ecology course have been given Did You Know nods this term, and 13 more are in the works. As part of the course assignment, students are asked to contribute two 1,000-word articles about the insects, each with at least seven references.

The students are tackling the wasps, as the syllabus explains, to understand “why organisms evolve to act the way they do. We focus on social behaviors and particularly on understanding conflict and cooperation.”

As students learn about these social behaviors in organisms, they’re learning some social behaviors on a human scale, too, as they work to contribute facts about their adopted animal species to Wikipedia. The success of this class has proven to be a great example of what can happen when classrooms, the Wiki Ed support structure, and the Wikipedia community come together.

In this case, it’s meant a surprising number of Did You Know spots — fascinating bits of trivia for biologists, such as:

Did You Know…
…that the wasp Agelaia multipicta removes ants from its nest with blasts of wing buzzing?
…that the Southeast Asian social wasp Parischnogaster jacobsoni has a gland that creates an ant repellant?
…that the queen wasp Belonogaster petiolata inspects her nest to ensure the eggs are hers, and eats any not laid by her?
…that the wasp Blastophaga psenes has a symbiotic relationship with figs?
…that Protonectarina sylveirae can increase the yield of coffee crops?

This is a testament to the quality of work these students are creating, but also to Dr. Strassmann’s experience with Wikipedia assignments. Each DYK submission goes through a review process in which other editors check the article for good, quality data, and ensure the fact is phrased to pique readers’ interest in the question format you see above, known as a “hook.”

“They know it is important,” Dr. Strassmann said. “They want the attention.”

Wiki Ed Wikipedia Content Expert Ian Ramjohn also identified the students’ contributions as worthy of DYK nominations.

“I realised that many of them might make good DYKs,” Ian said. “So I checked them, ruled out the ones that were going to time out in the next day, and came up with 30 articles that seemed like candidates.”

The community has been involved, too. Ian notes that a Wikipedia editor, User:Cwmhiraeth, an extremely proficient creator of species articles, has been instrumental in getting the DYKs passed.

This course on social behavior and cooperation in the animal kingdom has given us an excellent example of cooperation on Wikipedia. Bridging students, instructors, and the community is what the Wiki Education Foundation aims to do. This course has shown how beneficial that cooperation can be.

by Eryk Salvaggio at December 08, 2014 06:27 PM

Priyanka Nag

Portland coincidental work-week

I will leave my travel adventures out of this blog post because they are sufficiently interesting to deserve a separate, dedicated post. So I will jump directly to my experience of this coincidental work-week in Portland.

On the first day, when I walked into the Portland Art Museum in the morning, I was overwhelmed to see so many known faces, and to be able to match a few new faces to their IRC nicks (or Twitter handles), people whom I was meeting for the first time outside of the virtual world.

What's your slingshot?

During this one week, I heard a lot of amazing people speak, from David Slater to Chris Beard, from Mark Surman to Mitchell Baker... too much awesomeness on the stage! The guest speaker on the first day was Brian Muirhead from NASA, who made us realize that even though we were not NASA engineers, and our work was limited to Earth's atmosphere, sometimes the criticality of projects, or the way of handling them, didn't need to differ much. The second day's guest speaker, Michael Lopp (@rands), was a person I had been following on Twitter, but I never knew his real name or what he looked like until the morning of the 3rd of December. The talk about Old Guard vs New Guard was not only something I could relate to but also had a few very interesting points we could all learn from.

After the opening addresses on both days, I found a comfortable spot with the MDN folks. I knew that under all possible circumstances, these would be the people I would mostly hang around with for the rest of the week. Well, MDN is undoubtedly my favorite project among all the contribution pathways that I have tried (and still try) contributing to.

We do know how to mark our territory!

Just like most Mozilla work-weeks, this week had a lot of sticky notes all around, many etherpads, a master etherpad to link all the etherpads, and a lot of wiki pages! When you know that you are going to be haunted by sticky notes for at least the next week, you can be sure that you had a great workweek and a lot of planning. Planning around the different contribution metrics for MDN, contribution recognition, MDN events for 2015, growing the community, as well as a few technical changes and a few responsibilities which I have committed to and will be trying to complete before the calendar changes its reading to 2015... it was a crazy, crazy fun week. One important initiative that I am not only interested in seeing executed but also willing to jump into in any possible manner is the linking of Webmaker and MDN. To me, it's like my two superheroes planning to work together to save this world!

I didn't spend much time with the community building team this week, other than the last day, when I could finally join the group. First and foremost, Allen Gunner is undoubtedly one of the best facilitators I have seen in my life. For half of the session, my focus was on his skills and how I could learn a few of them. I am happy to have been able to join the community building team on their concluding day, as I got a summary of the week's discussions, could help with the concluding plans, and could make a few new commitments to some interesting things being planned for 2015.

Well, I am not sure if I have done a good job of thanking Dietrich personally for inviting me and hosting me at his place for the fun-filled get-together, but I sincerely confess that I had way more fun at his party than I had expected to. I met so many new people there, mostly so many amazing engineers who are building the new mobile operating system which I not only extensively use but also brag about to my friends, family and colleagues.

A few wow moments -

[1] Seeing @rage outside the Twitter world, live in front of me!

[2] Mitchell's talk on how Mozilla acknowledges the tensions created around the last few decisions that went out, and her explanation of why and how they were made and why they were important.

[3] Macklemore & Ryan Lewis' live performance at the mega party.

[4] My first ever experience of trying to 'dance' with other Mozillians. Yes, I had successfully avoided them during the Summit, MozFest and all other previous events in the last 2 years.

[5] The proudest moment for me was probably the meeting of the MDN and the Webmaker teams. While neither team knew every member of the other, I was probably the one person who knew everyone in that circle. Having worked very closely with both teams, it was my cloud nine (++) moment of the workweek to be sitting with all my rock-stars together!

A lot of people met, a lot of planning done, a lot of things learnt and, most importantly, a lot of commitments made which I am looking forward to executing in 2015.

by priyanka nag (noreply@blogger.com) at December 08, 2014 03:19 PM

Magnus Manske

Content Ours, or: the sum of the parts

Open source projects like Linux, and open content projects like Wikipedia and Wikidata, are fine things indeed by themselves. However, the power of individual projects is multiplied if they can be linked up. For free software, this can be taken literally; linking libraries to your code is what allows complex applications to exist. For open data, links can be direct (as in weblinks, or external catalog IDs on Wikidata), or via a third party.

Recently, and once again, Peter Murray-Rust (of Blue Obelisk, CML, and Wikimania 2014 fame) has put his code where his mouth is. ContentMine harvests open access scientific publications and automatically extracts “facts”, such as mentions of species names. These facts are accessible through an API. Due to resource limitations, the facts are only stored temporarily, and will be lost after some time (though they can be regenerated automatically from the publications). Likewise, the search function is rather rudimentary.

Why is this important? Surely these publications are Google-indexed, and you can find what you want by just typing keywords into a search engine; text/data mining would be a waste of time, right? Well, not quite. With over 50 million research papers published (as of 2009), your search terms will have to be very tightly phrased to get a useful answer. Of course, if you use overly specific search terms, you are likely to miss that one paper you were looking for.

At the time of writing this, ContentMine is only a few weeks old; it contains less than 2,000 facts (all of them species names), extracted from 18 publications. But, even this tiny amount of data allows for a demonstration of what the linking of open data projects can accomplish.

Since all facts from ContentMine are CC-BY, I wrote some code to archive the “fact stream” in a database on Labs. As a second step, I use WDQ to automatically match species names to Wikidata items, where possible. Then, I slapped a simple interface on top, which lets a user query the database. One can use either a (trivial) name search in facts, or use a WDQ query; the latter would return a list of papers that contain facts that match items returned from WDQ.

If that sounds too complicated, try the example query on the interface page. This will:

  1. get all species from the Wikidata species tree with root “human” (which is only the item for “human”); query string “tree[5][][171]”
  2. get all species from the Wikidata species tree with root “orangutan”; query string “tree[41050][][171]”
  3. show all papers that have at least one item from 1. and at least one item from 2. as facts

At the moment, this is only one paper, one that talks about both Homo sapiens (humans) and Pongo pygmaeus (Bornean orangutans). But here is the point: we did not search for Pongo pygmaeus! We only queried for “any species of orangutans”. ContentMine knows about species mentioned in papers, Wikidata knows about species, and WDQ can query Wikidata. By putting these parts together, even if only in such a simple fashion, we have multiplied the power of what we can achieve!
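The intersection in step 3 above can be sketched in a few lines. This is a minimal sketch with mock data standing in for the ContentMine fact archive; the paper IDs and the item numbers (apart from 5 and 41050, which appear in the query strings above) are purely illustrative:

```python
# Mock fact archive: paper id -> set of Wikidata item numbers that
# appear as species facts in that paper. Real data would come from
# the archived ContentMine fact stream matched against Wikidata.
facts = {
    "paper-1": {5, 170430},       # mentions humans and an orangutan species
    "paper-2": {5},               # mentions humans only
    "paper-3": {170430, 737838},  # mentions orangutan species only
}

def papers_matching(facts, set_a, set_b):
    """Return papers mentioning at least one item from each result set."""
    return sorted(
        paper for paper, items in facts.items()
        if items & set_a and items & set_b
    )

humans = {5}                    # pretend result of tree[5][][171]
orangutans = {170430, 737838}   # pretend result of tree[41050][][171]
print(papers_matching(facts, humans, orangutans))  # ['paper-1']
```

Only `paper-1` survives, because it is the only paper whose fact set intersects both WDQ result sets.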

While this example might not strike you as particularly impressive, it should suffice to bring the point across. Imagine many more publications (and yes, thanks to a recent legal decision, ContentMine can also harvest “closed” journals), and many more types of facts (places, chemicals, genetic data, etc.). Once we can query millions of papers for the effects of a group of chemicals on bacterial species in a certain genus, or with a specific property, the power of accessing structured knowledge will become blindingly obvious.

by Magnus at December 08, 2014 02:50 PM

Gerard Meijssen

#Wikipedia- The numbers are what II ?

Numbers and statistics have a purpose. Their typical use is to let manager types consider how things are moving. Even though they have infinite wisdom, there is not much that they can do when all the numbers do is show trends.

The funny thing is that only the numbers that are collected are reflected in these trends. Wikidata, for instance, attracts no readers, and consequently it may not be considered a source of attention. This is a fallacy, and it does not motivate people interested in Wikidata.

There are so many lists that could motivate people: articles that need writing, differences in data between Wikipedias. The wonderful thing is that they all bring a sense of purpose and are an inspiration to improve both quality and quantity.

My favourite list is the list of zombies for 2014. Currently there are 400+ zombies that need to be killed off. It motivates because I know what to do, and it is a convenient way to find categories of information that can easily be imported into Wikidata.

Statistics and numbers can motivate people to be more effective. That is how you influence the numbers these manager types look at.

by Gerard Meijssen (noreply@blogger.com) at December 08, 2014 07:03 AM

Tech News

Tech News issue #50, 2014 (December 8, 2014)

previous | 2014, week 50 (Monday 08 December 2014) | next
Other languages:
čeština • ‎English • ‎suomi • ‎עברית • ‎italiano • ‎日本語 • ‎русский • ‎українська • ‎中文

December 08, 2014 12:00 AM

December 07, 2014

Gerard Meijssen

#Wikipedia- The numbers are what ?

So the numbers are flat... You want initiatives that help provide us with relevance... OK, how about this scenario:

A reader queries Wikipedia for a subject and does not find it. Many more people query for this subject and it becomes the most wanted subject without an article. An editor writes the article and it proves popular. It becomes the most read new article in the next month.

In this way:
  • we give our search statistics a purpose
  • we indicate what subjects our readers want articles about
  • we celebrate the most read new articles and their editors
  • we advertise that we ask our editors to write articles people are looking for
  • we can do this for every Wikipedia in every language
Yes, we can provide search results from Wikidata as well. When people make use of this, it still counts as a “not found” instance. In the meantime, we did provide information that is available to us.

by Gerard Meijssen (noreply@blogger.com) at December 07, 2014 01:45 PM

#Wikidata - KCG College of Technology

Thank you Google
A friend of mine studied at one of the engineering colleges in India. He now works for the Wikimedia Foundation and he rocks.

There are articles for many of these colleges and they all could do with some loving attention from the people who study or studied there.

How about adorning the items in Wikidata with the name in an Indian script? How about adding an image? It will show in Reasonator for all the people who studied or taught there.

by Gerard Meijssen (noreply@blogger.com) at December 07, 2014 08:56 AM

#Wikipedia - reaching out for #ebola

When people are not well informed about ebola, they panic. It is therefore ever so important to get the message out and get the message right. The right information is particularly important for the people who live with ebola, who see the effects first hand.

They live in Africa, in countries like Guinea, Sierra Leone and Liberia. A study shows that the most used source for information in these countries about ebola is Wikipedia.

For these people many of the things we take for granted are a dream. When you know about Gapminder, you know things are improving and not as bleak as they seem. People are living longer, getting educated, and infrastructure is improving. It is why Wikipedia, thanks in part to Wikipedia Zero, is reaching parts of the world that others do not reach as effectively.

Wikipedia is effective thanks to the dedication of its volunteers, particularly those who ensure the quality of medical articles. Wikipedia comes cheap. It is the best investment in bringing knowledge to a world that is sometimes desperate for great basic and actionable information.

by Gerard Meijssen (noreply@blogger.com) at December 07, 2014 08:09 AM

December 06, 2014


Readership & Wikipedia

One of the most important slidedecks you will look at this year — at least if you care about the future of Wikipedia and her sister projects — is this one, about declining and flattening Wikipedia readership, from the WMF metrics meeting a couple days ago. I wasn’t able to attend the meeting in person (fortunately, they are recorded), but I’ve been reading up on this issue, and the data presented drives home a point I’ve been thinking about for a while: that this trend threatens our projects and our mission, and we need to treat it appropriately.

There’s no summing up our typical reader. It’s not possible to generalize about who they are, except that they all have some sort of internet access, because we’re talking about somewhere around 1/15th of the world’s population (that would be on the order of 400,000,000-500,000,000 readers a month — a bit less than half a billion) that access our projects via non-mobile devices.

But we can count our readers, and it’s the “bit less” that this post, and the data in the slides, is about. You see, for a long time — at least a couple of years — those of us who give talks about Wikipedia could confidently say there were half a billion unique readers a month. We were actually measuring a bit more. Now, there are fewer (on desktops). Mobile visitors are growing, but they seem to access fewer pages; pageviews are basically flat. This is deeply surprising, for a project whose readership and popularity has grown exponentially for a decade. And it’s counterintuitive that the world’s biggest source of ready reference information wouldn’t continue to gain readers and pageviews as more and more of the world comes online.

Here’s the summary of what we know to date about readership, from the slidedeck above:

High level summary of Readership
● Mobile is growing, desktop is shrinking
● Globally, pageviews are flat (-0.9% AGR)
● Global North traffic (72% of all traffic) is flat
● Global South traffic is increasing, driven by mobile
● In the US, pageviews are declining (-8.6% AGR)
○ In the US, decline on desktop is not fully offset by mobile web

Globally, pageviews are flat. (Actually, they’re decreasing). For a project whose mission is to reach every single person in their own language, and for a project that depends on readership both to provide a source of new contributors and for financial support, that is a crisis.
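The AGR figures in the summary above can be reproduced from two consecutive yearly totals. This is a minimal sketch, assuming AGR here means the simple year-over-year fractional change; the pageview numbers are made up:

```python
def annual_growth_rate(previous: float, current: float) -> float:
    """Fractional year-over-year change, e.g. -0.009 for a 0.9% decline."""
    return (current - previous) / previous

# e.g. a hypothetical drop from 1000 to 991 monthly pageview units
rate = annual_growth_rate(1000, 991)
print(f"{rate:.1%}")  # -0.9%
```

With these made-up inputs the result matches the global figure quoted in the slides (-0.9% AGR).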


Why is it happening? The growth of mobile views is self-evident and makes intuitive sense — mobile is where people new to the internet are mostly coming online, and even for those of us with desktop access, the convenience of mobile continues to grow and dominate our lives. So, as pointed out in the slides above, we must continue to make our mobile site a more delightful and usable (and editable) experience.

But what about the rest of the drop in desktop readership? Some of it — though we don’t know exactly how much — is attributable to new innovations in packaging information online. Things like the Google Knowledge Graph (which displays the answer you were looking for at the top or side of your search results) give you information that is often based on data in the Wikipedia article on the topic, but without providing clicks to our pages. This service arguably fulfills our mission of sharing information, but it threatens our mission of making it possible for everyone to edit those pages — without that edit tab at the top, or even an obvious link back to Wikipedia, how do we gain new contributors, fix errors, or even fulfill the spirit of free knowledge?

And, I worry about the reasons we don’t yet know for why readership might be dropping — I worry about the potential that Wikipedia is not as useful as it once was, or that there are other sources of information that people find more useful, but that aren’t free or don’t have the mission that we do.

I am a librarian. Libraries, when faced with fewer people coming into the library, have a classical set of hypotheses for the reasons why:

  • people have found a source of information that is more convenient than the library (the internet, Amazon.com, etc. etc.)
  • the library is hard to use or inconvenient, so people give up before they get to our resources
  • people do not know about the library and what it offers
  • the library does not contain the information or resources that people are looking for, so they must look elsewhere.

Almost always, when faced with declining traffic, some combination of all of these factors are true. In response, libraries put a great deal of effort into things like outreach and marketing, making their collections and catalogs more user-friendly, and trying to offer services that respond to their community’s needs (whether that’s a bilingual children’s storytime at the public library, or a document delivery service at an academic library). It is rarely easy to make these changes — all libraries are constrained by limited budgets and staffing, there are core services at the heart of our mission that cannot be dropped, many things about the library are inherently not very user friendly (like trying to make a collection of millions of books accessible), and it is often hard to know what to do.

But, the ways that libraries have found to surmount these challenges include listening to and surveying their community — asking, what are the changes big and small that would make the library easier to use and make library users more successful? (Having done a fair bit of this kind of survey work, I’ve found that answers are usually quite practical:  things like more power plugs, longer hours, or a better website login service are often on the top of the list). Libraries know it’s important to communicate to stakeholders (and funders) why their work is important. And libraries have a strong culture of listening to each other to gain best practices. Finally, libraries have learned that it’s important not to be dismissive of the quick wins even as you plan for bigger long-term goals — for instance, I don’t have the budget to remodel my building, but I do have the budget to buy some powerstrips and white boards, which will help the students studying in the library right away.

I argue that we at Wikimedia need to approach our solution to this problem in the same way. We need to listen to, and measure, our community — that means better analytics, reader and editor surveys, qualitative as well as quantitative analysis. We need to be sure the problem is clear, so we can define it for ourselves and also communicate it cleanly within our community, so we all have a common understanding of what we need to work on solutions for. (Think about the crispness of some of our other issues, which have also taken a long time to sift through the data to figure out: we need more contributors. There is a gender gap.)

And we need to listen to other projects, online and off. This is a bit harder; we’re unique. But we’re not the only website that tries to present long-form text in a readable way, or present lots of information in the confined space of a mobile device. We’re also not the only website that serves up vast numbers of pages out of a database. And we are not the only mission-driven community. We do not have to reinvent every wheel.

Finally, let’s plan for quick wins and long-term goals. Quick wins: uploading photos via the mobile app. Long-term goals: easy to use, a joy to contribute, the leading shared source of information for the world. We need both kinds of objectives, and we can’t ignore either.

This is not a cheerful post, but it’s also not lacking in optimism. I believe that we have a bright future ahead of us — that this first decade has only been the beginning of the Wikipedia story, not the end. But we need to pay attention to our place in the world, and make sure that we are serving it as best as we can.

Edited since publication because: I mistyped the # of UVs (500M, not 5M); to incorporate Erik M.’s comments about mobile UVs vs desktop UVs, in the comment section; and to elaborate on what libraries do. 

by phoebe at December 06, 2014 10:55 PM

Gerard Meijssen

#Wikidata - #Bangladesh #University of #Engineering and Technology

The Bangladesh University of Engineering and Technology currently knows 16 people who studied there. Two of them were in a category that had not yet been created. In theory, 16 articles are waiting to be added to this category. In addition, someone may want to categorise the category.

The good news in all this:
  • Wikidata can be used to populate Wikipedia categories
  • people are actively adding information in Wikidata first
  • by adding students and faculty in Wikidata, people are connected and consequently the universities and colleges get exposure
  • Maybe, my call to the college boys and girls is heard by some of them

by Gerard Meijssen (noreply@blogger.com) at December 06, 2014 09:16 AM

Wikimedia India

Wikimedia Community at AdaCamp Bangalore

Wikimedians with Adacamp Bangalore 2014 participants. Picture by Suki under CC-BY-SA 4.0

AdaCamp is an amazing initiative to increase the participation of women in open technology and culture.

The AdaCamp at Bangalore was held on 22nd and 23rd of November 2014.

The Camp was true to its claim and was designed to encourage women in every possible way. The atmosphere at the Camp was very cordial and of mutual respect. The programmes were of an unconference nature, which made them very inclusive. Special programmes like the Impostor syndrome workshop showed the commitment of the organisers to address the issues of women in open technology and culture.

Women from the Wikimedia community made significant contributions to the camp by volunteering and by conducting various sessions, including but not limited to Wikimedia projects. We conducted two sessions with regard to Wikipedia:

  1. Introduction to Wikipedia, Led by Chinmayi S K and Netha Hussain

  2. Wikipedia workshop, Led by Netha Hussain and Parul Thakur

Introduction to Wikipedia: This session was held on Day 1. It had a mixed group, ranging from first-time editors to those who had experience editing Wikipedia and had faced issues. We began the session talking about the various parts of a Wikipedia page, why they are significant, and their uses. There were long, detailed discussions on inserting content into a page: what kind of content is allowed, what would make it credible and where to host it, the use of references, and what kind of reference would strengthen an article. Here we also figured out why some edits or articles were disregarded, and there was also a mention of sponsored edits.

The other things we were able to cover during this discussion were: macros and templates, namespaces, Wiktionary, talk pages, portals and the GLAM initiative.

At the end of the session we also had a very insightful discussion on the diversity of Wikimedia editors and the plunging numbers, which are a cause of concern: how diversity affects the quality and type of articles being written, and how one could contribute to increasing the number of women editors (leading initiatives like Women’s History Month and others).

Netha Hussain and Parul Thakur. Picture by Gurpreet Kaur under CC-BY-SA 4.0

Wikipedia Workshop: This workshop was held on Day 2 and was a hands-on session. Three women joined this session. The session began with basics like creating an account, logging in, and leveraging the sandbox to test edits. The AdaCamp internal document listed a number of resources to help attendees make their first edits. For the purpose of this session, Netha and Parul shared the project pages of the previous edit-a-thons on women parliamentarians and scientists. Biographies are a good starting point, and hence we wanted to show these pages to the attendees. We spoke of adding categories and banners, making minor edits, and backing them up with references. Throughout the session, we individually helped attendees modify, create and edit content.

Both sessions had a positive impact and were well appreciated by those who attended. Here are two pictures of the groups after the sessions.

It was encouraging to see the attendees being interested in Wikipedia sessions and as facilitators, we really wish to address the Gender Gap. More and more women participants in such sessions encourage us to take Wikipedia to a level where we could see equality both in terms of women editors and women specific content.


Attendees from the Wikimedia Community at Ada Camp :

Pavithra Hanchagaiah, Netha Hussain, Rohini Lakshané, Parul Thakur, Chinmayi S K, Nappinnai Krishnamoorthi, Juvairia Nv.

See also:

* In Love with AdaCamp – A blog post by Parul Thakur.

by chinmayisk at December 06, 2014 09:06 AM

December 05, 2014

Wikimedia Foundation

IEG Selects Exciting New Batch of Experimental Projects to Improve Wikipedia, Wikisource, and Commons



Art and Feminism training in New York.
“ArtAndFeminismNYC-training1″ by Michael Mandiberg, under CC BY-SA 3.0

Today we’re announcing round two of Wikimedia’s 2014 Individual Engagement Grantees.

Individual Engagement Grants (IEG) provide funding to individuals and small teams to take on projects that will have online impact and advance Wikimedia’s strategic priorities. These projects can take on many forms, from building and improving online tools or social processes, to creating new types of partnerships with GLAM organizations or conducting actionable research about Wikimedia content and contributors.

The IEG committee scores two rounds of grant proposals a year according to specified selection criteria. Our volunteer committee is made up of 16 Wikimedians who come from various home wikis and collectively speak 15 languages. Outside of their committee work, members edit, review, and translate content, help govern local chapters, write software, organize off-wiki events and facilitate workshops, work as sysops and bureaucrats, verify copyright and licensing permissions, draft and discuss project policies, and recruit and welcome new users to Wikimedia projects.

In this latest round, a total of 26 eligible proposals were submitted for the committee’s review. We recommended seven projects be funded in total, with 13 grantees selected to receive $98,271 overall.

The projects selected for funding this round are:

  • Art+Feminism Editathon training materials and network building: This project will build on a series of successful 2014 edit-a-thons to develop scalable online infrastructure, including training materials and a network of facilitators, to support the expansion and sustainability of the Art+Feminism movement, aimed at improving Wikipedia’s coverage of notable women in history, art, and beyond. 
  • Automated Notability Detection: This project aims to develop a classification algorithm that can assess likeliness of notability (initially within English Wikipedia) and can be used to support editors’ review of newly created articles.
Viswanadh,B.K, the grantee for the Telugu library project.
“Viswanadh,B.K”by విశ్వనాధ్.బి.కె., under CC BY-SA 3.0

  • Digitization of Important Libraries Book Catalog in Andhra Pradesh and Telangana: Through a partnership between the Telugu Wikipedia community and brick-and-mortar libraries in India, this project will endeavour to digitize five library catalogues of Telugu books in order to support Telugu Wikipedians searching for verifiable sources for new article content. 
  • Fundación Joaquín Díaz: This project will see 23,000 sound recordings from the ethnographic archive of the Joaquín Díaz Foundation in Urueña, Spain uploaded to Wikimedia Commons under a free license, and could serve as a potential model for other institutional collaborations.
  • Revision scoring as a service: The grantees of this project will develop machine classification for assessing quality of contributions on multiple language Wikipedias as a publicly queryable API. This service will in turn support the development of new and powerful tools to support editors beyond the English language Wikipedia environment. 
How WikiBrainTools works.
“WikiBrainIEG”by Shilad, under CC BY-SA 4.0

  • WikiBrainTools: This project seeks to democratize access to Wikipedia-based algorithms across all Wikipedias, and allow Wikimedians to leverage the work of natural language processing researchers to build smarter tools for Wikipedia. In particular, WikiBrainTools will attempt to close the loop between algorithmic researchers who mine Wikipedia to improve computer-derived insight, Wikipedia developers who could be integrating algorithms into their bots and tools, and Wikipedia researchers who could stand to benefit from tools that improve pattern recognition. 
  • WikiProject X: This project will explore and test design solutions for encouraging optimal effectiveness and supporting sustainability and collaboration between groups of contributors within a WikiProject on English Wikipedia.
  • Additionally, one project funded in the last IEG round, Women Scientists Workshop Development, was also approved by WMF for another 6 months of renewed funding to experiment with scaling the model.

Some of the proposals declined by the committee were ultimately seen as being more appropriate candidates for a PEG (Project and Event Grant), which typically funds offline events and outreach; others outlined innovative approaches to solving a problem but appeared too early in their ideation to be realistically executed, were of unclear direct benefit to Wikimedia projects, or did not adequately engage with the target Wikimedia communities they aimed to serve. As some of these proposals continue to develop in response to feedback, they’ll be welcome to return to IEG in future rounds – we love to see ideas grow and change as a result of the community discussion process.

With so many new ideas put forward in this round, we’re seeing a few emerging trends. Just over half (13) of this round’s proposals fell under the “Tools” category, nine fell under “Offline Outreach & Partnerships”, two fell under “Online Community Organizing”, and one fell under “Research.” Many submissions touched upon the idea of micro-edits in one way or another. Using machine learning to assist human decision-making was another common theme, and we’re pleased to see an increasing number of academic researchers proposing new ways to integrate their research into actionable Wikimedia tools and processes. There appears to be an increasing trend towards developing tools through IEG: the number of tools proposals nearly doubled since IEG’s last funding round. Online tools offer the potential for a small team to help many editors and readers, and we’re curious to see what impact this will have over time. We’re grateful for the many volunteers and WMF staff members who offered the committee expert opinions on these tools proposals in particular, as well as all proposals under consideration. 

IEG is a participatory and iterative grantmaking process. Committee members, as well as the broader Wikimedia community, are encouraged to read over grant proposals and leave comments and questions on proposal talk pages in advance of the formal review process. The back-and-forth between applicants, staff and the committee often results in stronger submissions and helps applicants, many of whom have no prior grant-writing experience, compete effectively for funding. Many committee members also serve as advisors for funded projects between funding rounds.

The IEG committee notes that 3 of 7 grants in this round are targeting English Wikipedia, which makes some sense given the large number of readers and contributors to that project. With so many Wikimedia language projects and potential grantees around the world to support, though, we welcome discussion about how the program can increase the diversity of the proposals it receives. We also look forward to reviewing your future submissions – hope to see you in IdeaLab in 2015, as the next round of IEG develops. For now, though, we say congratulations to the successful grantees and encourage you to follow their progress as they begin work in the coming weeks.

Helen Halbert (User:Thepwnco), on behalf of the IEG Committee

by wikimediablog at December 05, 2014 07:04 PM

Wikimedia UK

Who writes Wikipedia’s health and medical pages and why?

By Nuša Farič, UCL, Centre for Health Informatics & Multiprofessional Education (CHIME)

Half of the editors working on Wikipedia’s 25,000 pages of medical content are qualified medics or other healthcare professionals, providing reassurance about the reliability of the website, according to our newly published research results. Those editors, who are contributing their time for free, are motivated by a belief in the value of Wikipedia, a sense of responsibility to help provide good quality health information, and because they find editing Wikipedia supports their own learning.

Wikipedia is known to be a go-to place for healthcare information for both professionals and the lay public. The first question everyone asks is: but how reliable is it? In a new study, just published in the Journal of Medical Internet Research, we took a different approach. We wanted to know more about the people behind the medical pages on Wikipedia: what background they come from, whether they have specific interests in health, and what drives them to contribute to Wikipedia. Because getting health-related content on Wikipedia right is about more than getting the facts correct. It’s about how the information is presented, how topics are covered and what perspectives are taken. You can read the paper here.

I’m at the beginning of my research career and I’m very proud that my first published paper is on Wikipedia and Wikipedians. I did this study over 8 months as part of my Master’s course in Health Psychology at UCL. The project was with Dr Henry Potts, a senior lecturer at UCL’s Institute of Health Informatics, who is also a long-time Wikipedian as User:Bondegezou.


In the study, we randomly selected a set of health-related articles on Wikipedia and invited people contributing to those pages to complete a questionnaire and a follow-up interview. We received 32 replies from 11 different countries, namely the UK, USA, Canada, the Netherlands, Sweden, China, South Africa, Australia, Malaysia and Colombia. In that snapshot of time (July-September 2012) the editors of health-related articles were predominantly men (31 out of 32), ranging in age from 12 to 59 years. 21 spoke more than one language.

Reassuringly, 15 were working in a health-related field, which included general medicine, cancer research, health psychology, health education, internal medicine, health advertising, regulatory affairs, pharmaceutical drug discovery, microbiology and medical publishing. The other half of the sample included individuals with particular health interests and students, including medical students.

72% of the sample were long-term contributors: 8 had contributed for 3-5 years, 10 for 5-8 years and 5 for over 8 years. 90% also contributed to non-medical Wikipedia pages spanning architecture, astronomy, mythology, languages, history and art.

People edited health-related content on Wikipedia because they wanted to help improve content; because editing Wikipedia is a good way to learn about the topics themselves; because they feel a sense of responsibility – often a professional responsibility – to ensure the accuracy and reliability of health information for the public; because they enjoy editing Wikipedia; and because they think highly of the value of Wikipedia. This process of inter-related value systems, which drives contributing behavior, is depicted graphically in our motivational model of contribution. This could be seen as Wikipedians internalising the principles of Wikipedia, the site’s Five Pillars, and that’s a key part of the social contract that makes the site work. Maybe there is a link between the idealism of many Wikipedians and the idealism of many in healthcare.

Even though we randomly selected health articles, we encountered the same editor accounts over and over. It became apparent that the core editor community number is small: it currently consists of around 300 people. Although this number is still clearly much larger than would normally be brought together to write a medical textbook!

We also observed the egalitarianism of Wikipedia: everyone has an equal right to edit content if their claims are verifiable. While the high proportion of healthcare professionals provides reassurance about the accuracy of content, Wikipedia is a place of verifiability, not authority. Contributions from those who are not healthcare professionals are important too. Wikipedia’s focus on what is said rather than who is saying it has parallels with the peer review process that journal papers go through, a system that is often anonymous. Likewise, the evidence-based medicine movement, which has become dominant in healthcare, has worked hard to put research evidence above expert opinion.

Current state and the future

Plenty of doctors and patients are still wary of Wikipedia’s use in healthcare, but other research has shown that Wikipedia is extensively used by patients, by medical students, by doctors and by health researchers. We would like to see more of those using Wikipedia becoming editors and there are several recent initiatives in that area. The more people are editing, the better Wikipedia gets… although we also have to help new contributors get used to Wikipedia’s rules. That balance, between increasing participation, improving reliability and maintaining the community, is a challenge for health-related editors as it is for Wikipedia in general.

Healthcare research has already seen a big shift to open access publications, journals that are free to read, so researchers and health practitioners are becoming open to the principles of Wikipedia. I believe strongly that everyone in the world deserves access to high quality healthcare information in the language of their choice. Wikipedia is the only viable method to achieve this goal.

nfaric{at}gmail.com (User:Hydra Rain)

by Stevie Benton at December 05, 2014 01:00 PM

Gerard Meijssen

#Wikipedia - #Russia and the #USA

Yesterday I wondered when Russia would become the champion users of Wikipedia. Today I noticed a presentation with the numbers. Traffic to Wikipedia from the USA is down by 8.6%, while annual Wikipedia traffic from Russia is up by 10.3% – a gap of almost 20 percentage points.

by Gerard Meijssen (noreply@blogger.com) at December 05, 2014 07:22 AM

Wikimedia Foundation

Joel Aldor wants to preserve historic Filipino architecture one photo at a time.

This profile is part of a series about history and geography on Wikipedia.

"Inmaculada Concepcion Parish Church, Guiuan, Eastern Samar (Before and After 2013 Typhoon Haiyan)" by Joelaldor, under CC BY-SA 4.0

Inmaculada Concepcion Parish Church, Guiuan, Eastern Samar (Before and After Typhoon Haiyan in 2013)
“Inmaculada Concepcion Parish Church, Guiuan, Eastern Samar (Before and After 2013 Typhoon Haiyan)”by Joel Aldor, under CC BY-SA 4.0

The Spanish colonial buildings in the Philippines have served as bastions of the country’s rich and colorful history and culture. But after the Bohol Earthquake and the deadly onslaught of Typhoon Haiyan, much of the country’s historic architecture is in danger of crumbling. That’s why a number of volunteers from Wikimedia Philippines have decided to take on a long-term project to photograph and document their country’s architecture on Wikipedia and Wikimedia Commons as a means of preservation.

“[The] Philippines is a very culturally rich country but at the same time vulnerable to a lot of threats that would damage and destroy our collective history as manifested in our built heritage sites,” says Joel Aldor, member of Wikimedia Philippines and head of the Philippine Cultural Heritage Mapping Project, who currently resides in Makati.

<a href="https://commons.wikimedia.org/wiki/File:Portrait_of_Joel_Aldor_in_front_of_San_Pablo_Church_Ruins.JPG">"Portrait of Joel Aldor in front of San Pablo Church Ruins"</a> by <a href="https://commons.wikimedia.org/wiki/User:Joelaldor">Joelaldor</a>, under <a href="https://creativecommons.org/licenses/by-sa/4.0/deed.en">CC-BY-SA-4.0</a>

Portrait of Joel Aldor in front of San Pablo Church ruins “Portrait of Joel Aldor in front of San Pablo church ruins” by Joel Aldor under CC BY-SA 4.0

Aldor points out that although it’s been a year since Typhoon Haiyan struck the Philippines, the effects of its onslaught are still strongly felt. For instance, Palo, a historic town in the province of Leyte well-known for its stately ancestral houses and exemplary Spanish colonial architecture, was hit hard by the typhoon. Ancestral houses are currently being demolished to pave the way for road widening projects by the national government in preparation for the Papal visit in January 2015.

“San Pedro Apostol Parish Church, Loboc, Bohol (Before and After 2013 Bohol Earthquake)” by Joel Aldor, under CC BY-SA 4.0

“A lot has been destroyed by the typhoon. Today a lot of them are barely recognizable,” says Aldor.

The case of Palo, along with a good number of other historic towns with buildings at risk, inspired a number of Filipino Wikipedians to take a stand in safeguarding their country’s built heritage.

Aldor is an IT project manager by profession, but architecture has been his life-long passion. He has been engaged in heritage documentation around the Philippines since 2008, driving across the Philippines and keeping a photo database of more than 30,000 photos of churches, houses and other architectural details. His recent work has been highlighted with Project Kisame, a comprehensive documentation project on ceiling paintings of colonial churches in Bohol, Cebu and Siquijor, which was funded through a government grant early this year.

“San Isidro Labrador Parish Church, Tubigon, Bohol (Before and After 2013 Bohol Earthquake)” by Joel Aldor, under CC BY-SA 4.0

He became involved with the Wikimedia movement when Josh Lim, an active chapter member of Wikimedia Philippines, noticed Aldor’s photos of historic church ceilings, published under a Creative Commons license and released on the very day of the 7.2-magnitude earthquake in Bohol. Aldor expressed his intention to donate all his photos to Wikimedia Commons, and has since become very involved with Wikimedia Philippines.

“I do believe that Philippine architecture is just as unique as every other Asian architecture. We want to showcase our beautiful masterpieces of art and colonial architecture, which exemplifies a fusion of East Asian and European architecture, and is something that we think every citizen should know.” says Aldor.

“It’s very daunting and so while we’re doing it [we realize] there’s so many towns that didn’t have a definite cultural map up till now,” says Aldor. “We get surprised ourselves when we found some interesting structures that were not documented for so many years.”

Finding undiscovered historical sites excites Aldor and the other project volunteers, and highlights the importance of the chapter’s work. Over the next six years of the project, the chapter will focus on building a comprehensive database using mapping standards from the premier universities in the Philippines.

Currently the project team includes 22 certified volunteers who will be mapping a series of Philippine towns that are largely underrepresented in academic textbooks. The team has already plotted out a roadmap for the project up to the year 2020.

"WikiExpedition Santa Ana 007" by  Smart Communications, Inc. under CC BY-SA 4.0

Members of Wikimedia Philippines
“WikiExpedition Santa Ana 007″ by Smart Communications, Inc. under CC BY-SA 4.0

“Next year we will start working to normalize our database and work with data analysts to come up [with] a more sound and robust database that could be reused and distributed across several platforms that could make use of our data in tourism and education,” says Aldor.

By 2016, he hopes to assist the government’s efforts to promote and preserve the country’s built heritage using Wikimedia platforms, and to help develop more data-driven policy at both the local and national level.

“We’re coming up with a list of accounts that we’re going to start mapping – focusing on unknown obscure towns that haven’t been properly documented yet,” says Aldor.

Aldor plans on going on a series of WikiExpeditions to map a number of towns as part of the volunteers’ continual training and immersions, such as the last WikiExpedition in Santa Ana, Manila back on September 13th, and another one scheduled on November 29th at the historic town of Sariaya, which has many art deco buildings and grand, stately houses built by wealthy families. According to Aldor, many of the grand houses have survived World War II and a series of fires, but needed protection from an impending road widening project that the Department of Public Works and Highways wants to push forward with.

"Assunta de la Nuestra Sra. Parish Church, Dauis, Bohol (Before and After 2013 Bohol Earthquake)" by Joelaldor, under CC BY-SA 4.0

Assunta de la Nuestra Sra. Parish Church, Dauis, Bohol (Before and After 2013 Bohol Earthquake)
“Assunta de la Nuestra Sra. Parish Church, Dauis, Bohol (Before and After 2013 Bohol Earthquake)” by Joel Aldor, under CC BY-SA 4.0

“So we’re going to map all built heritage sites that we can identify, submit all the cultural mapping data to the National Historical Commission of the Philippines, and petition them to determine and delineate a core buffer zone for the historic center,” says Aldor. “That way the whole district can be protected, and any infrastructure projects that could impact these structures would have to go through a consultation process, which is something that has almost never happened before.”

Although Aldor considers the Wikimedia Philippines chapter to be considerably young, he says its efforts are increasingly being noticed and appreciated by community members. He says he hopes to attend Wikimania in 2016.

“I only knew I could share my knowledge in the best way I know,” says Aldor. “I hope our product can also serve as inspiration for other movements, especially in the global South.”

Profile by Yoona Ha, Communications Intern

Interview by Victor Grigas, Wikimedia Foundation Storyteller

by wikimediablog at December 05, 2014 12:59 AM

December 04, 2014

Gerard Meijssen

#Wikipedia - #Russia #rules OK

The English Wikipedia is still the biggest, but the Russian Wikipedia is growing much faster. It has overtaken the German, Spanish and Japanese Wikipedias to become the second biggest. If it continues to grow like this, it will overtake the English Wikipedia.

There is one question in the back of my head... When you consider only the traffic from the USA for the English Wikipedia, and from Russia for the Russian Wikipedia, how do they compare? How long will it take for Russia to overtake the USA as the champion of Wikipedia?

by Gerard Meijssen (noreply@blogger.com) at December 04, 2014 11:35 AM

December 03, 2014

Priyanka Nag

My Portland To-do list

My travel to Portland, the City of Roses, starts in a few hours. I have always loved being a globetrotter... visiting different places, trying different fun things, meeting a lot of new people... all of it. I am not much of a planner though; I have usually gone to places and gone with the flow. This time, I wanted to do things a little differently, so I made a to-do list of all the things I would like to do, see, eat and drink in Portland. Every time I complete a task, I will tick it off. Let’s see how many of these items I can get done by the time I am back in India!

Action items:

Things to do:

Try out some Voodoo Donuts
Try out some S'more pudding

Check out some alcohol-free Portland nightlife (one of them surely is the Ground Kontrol Classic Arcade)
(Totally missed it)

Things to see:

Portland Art Museum
Powell's City of Books
Portland Audubon
Benson Bubblers

Forest Park
International Rose Test Garden
Multnomah Falls

I am sure this list is going to be modified on the go, as and when I get to explore the city a little more.

P.S. - Of course, all of these action items are to be executed only outside the work hours of the work-week.

by priyanka nag (noreply@blogger.com) at December 03, 2014 11:59 PM

Wiki Education Foundation

Wiki Ed offers a prescription for medical editing

We’re proud to announce the publication of our second subject-specific handout, Editing Wikipedia articles on Medicine. This handout joins Editing Wikipedia articles on Psychology in our series of brochures designed to illuminate the nuanced writing and editing guidelines in these subject areas.

The medicine brochure, in particular, covers suggestions for writing quality medical articles, including proper citations and sourcing methods.

The Medicine handout was produced with significant input from the WikiProject Medicine community, particularly Lane Rasberry (User:Bluerasberry). We’re very grateful for their contributions.

You can find the handout on our For Instructors page, and on Wikimedia Commons. If you are an instructor seeking print copies for distribution to your students, we can help: email us at contact@wikiedu.org.

by Eryk Salvaggio at December 03, 2014 05:30 PM

Gerard Meijssen

#Wikimedia and its content delivery

The #vision is to "share in the sum of all knowledge", and all our projects contain a wealth of information. The infrastructure, the software that delivers this information, is at this time very much centred on the two ends of delivery: it is in the data centres and it is in the last mile.

The effort at the last mile is the Wikipedia Zero project. This wonderful project brings information at no cost to the mobile phones of people who use the services of cooperating mobile operators. The content they use comes from the WMF data centres in the USA, and that is suboptimal.

It is suboptimal because it takes time to get that data from the first world data centres of the WMF. It is suboptimal because the pipes are often oversubscribed. The consequence is that the service is not as good as it easily could be.

With a "content delivery network", this information is kept locally and it is only the updates that have to come and go all the way to the central servers in the USA.  This is a lot less data for those pipes, it is a lot cheaper to operate for our cooperating mobile operators in Wikipedia Zero and the quality of service will improve a lot.

There are no technical reasons why the WMF cannot do this. All that I see is personal preferences and possibly some legal issues. The WMF has the experience because of its servers in Amsterdam. It should be relatively easy to mimic this at the sites of our cooperating mobile operators. Alternatively we could pay commercial rates and do it ourselves.

A lot of effort is invested in making Wikipedia and MediaWiki perform better. This is another obvious improvement that would make a big difference, not only for our Wikipedia Zero users but for everyone who uses our projects outside the USA and much of Europe.

by Gerard Meijssen (noreply@blogger.com) at December 03, 2014 07:36 AM