en.planet.wikimedia

April 24, 2017

Tech News

Tech News issue #17, 2017 (April 24, 2017)


April 24, 2017 12:00 AM

April 22, 2017

Semantic MediaWiki

Semantic MediaWiki 2.5.1 released

April 22, 2017

Semantic MediaWiki 2.5.1 (SMW 2.5.1) has been released today as a new version of Semantic MediaWiki.

This new version introduces a new feature called "deprecation notices", enhances existing functionality, provides bugfixes and further increases platform stability. Please refer to the help page on installing Semantic MediaWiki to get detailed instructions on how to install or upgrade.

by Kghbln at April 22, 2017 01:02 PM

April 21, 2017

Wikimedia Foundation

Community digest: Serbian Wikipedians look back at WikiLive 2017; news in brief

Photo by Ivana Madzarevic, CC BY-SA 4.0.

On April 8–9, 2017, members of the Wikimedia community in Serbia, Wikipedia editors, and project volunteers gathered for the third WikiLive conference. WikiLive is an annual Wikipedian conference in Serbia where we look back at our projects from the previous year to celebrate successes and learn from the challenges we have overcome.

Day one sessions were dedicated to studying the success factors of community projects like the Wikipedia Education Program, the Wikipedia ambassador role, and writing contests, along with brainstorming solutions to problems like vandalism on Wikipedia.

The conference was an opportunity to learn from other communities, and the success stories shared during the sessions included projects from Macedonia and Bulgaria.

Experienced Wikipedia editors led a workshop on how to maintain diversified content on Wikipedia and engage more people in the community. It was followed by an introductory workshop on the basics of editing.

Photo by Ivana Madzarevic, CC BY-SA 4.0.

The editing workshop was mainly attended by teachers who were interested in integrating Wikipedia into their courses. They were keen to return for the second day of the conference to learn more about the Wikimedia projects.

Introductory workshops and follow-up discussions formed the second day of the program, where the participants had many of their questions answered. What motivates a volunteer to keep contributing? What is GLAM? What is the role of libraries? How does Wikidata work? What are the Creative Commons licenses? These are some examples of the workshop takeaways.

During the event, Darko Gajić received the “Branislav Jovanovic” award for his contributions in free knowledge sharing. The award is given out by Wikimedia Serbia every year in memory of Jovanovic, the former Wikipedian and vice president of Wikimedia Serbia.

Participants who came from all parts of Serbia gave positive feedback at the end of the conference.

“Gathering community members isn’t always easy,” says Filip Maljkovic, president of Wikimedia Serbia, “but to gather active participants, who contributed to many topics, is absolutely amazing. We heard so many interesting ideas and saw enthusiasm about contributing even more to all Wikimedia projects. These kinds of events are perfect to feel the spirit of the community.”

Wikimedia Serbia hopes that the annual local WikiLive conference will soon become regional in character, connecting and reaching even more editors in this part of the world.

Ivana Guslarevic, Communications Manager
Wikimedia Serbia

In brief

Photo by Bachounda, CC BY-SA 4.0.

Wikipedia student editors in Algeria celebrate their success: Participants in the Wikipedia Education Program at Hassiba Ben Bouali University of Chlef in Algeria celebrated their second successful edition of the program. During the term, which lasted from September 2016 to April 2017, over 200 students joined the program. Student editors worked with their professors on expanding Wikipedia articles in their fields of study. Photos from the event are on Facebook and contribution statistics are on the outreach dashboard.

French Wikipedia blocks hundreds of bot-created accounts: Yesterday, hundreds of user accounts were created by an internet bot (software application that runs automated tasks) on the French Wikipedia. All of the new accounts were blocked by the community before making any changes on Wikipedia. More details on the French Wikipedia administrators noticeboard (in French).

Wikimedians in Ghana kick off a large archives project: The Wikimedia community in Ghana is starting a project to digitize a large collection of documents that will be released to the public. Ghanaian Wikipedians will collaborate with Open Foundation West Africa and the Public Records and Archives Administration Department (PRAAD) in Ghana.

Wikipedia Morocco day: WikiProject Morocco is organizing a day for editing Morocco-related articles. Participants will be editing only about Morocco for 24 hours on 1 May 2017. The project page on the Arabic Wikipedia includes several lists with articles for the participants to work on. The list includes many missing articles and low-quality ones about geography, history, religions, and many other topics. The day is open for participation on Wikipedia in different languages, however, the project page is only available in Arabic (as of publishing time). Wikipedians from around the world are highly encouraged to start the project in their language and coordinate with the organizers on the relevant talk page.

Metrics and activities meeting: The Wikimedia Foundation monthly metrics and activities meeting will be held on Thursday, 27 April 2017, at 6:00 PM UTC. The theme of the April meeting is “Wikimedia for the world” (part 2), aimed at understanding how the foundation can better serve and include people around the world in the Wikimedia movement. The organizers chose this theme for the April meeting as they had more relevant stories to share from around the globe. Information on how to participate can be found on Meta.

Compiled and edited by Samir Elsharbaty, Digital Content Intern
Wikimedia Foundation

by Ivana Guslarevic and Samir Elsharbaty at April 21, 2017 05:48 PM

Weekly OSM

weeklyOSM 352

11/04/2017-17/04/2017

Changeset Map

Changeset example on Sajjad Anwar's changeset map 1 | © OpenStreetMap contributors CC-BY-SA 2.0

Mapping

  • There is a discussion on the OpenStreetMap Forum about Bing rolling out new aerial imagery for Darmstadt, Germany.
  • A question by wolfbert about grouping buildings and terrain of castles leads (de) (automatic translation) to an amusing discussion. The results are on the wiki.
  • Levente Juhász writes in the Mapillary Blog on how to map hiking routes in OpenStreetMap and Mapillary.
  • Micah Cochran invites comments on his modified proposal concerning improvements to toilet tagging.
  • On the talk-fr mailing list, Florian Lainez suggests using the upcoming French presidential election as a good opportunity to map electoral billboards. A wiki page has been set up and questions are raised on polling stations tagging.
  • A question is raised on the tagging mailing list about restaurants that allow customers to bring in their own alcoholic drinks.
  • Some mappers wonder why thoroughly mapped amenities like schools, churches and other buildings do not show up as expected. This was discussed (de) (automatic translation) in detail on the German forum, and on the Tagging mailing list.
  • Manohar Erikipati (manoharuss) writes about OpenStreetMap Changeset Analyzer (OSMCHA) and how to use it to review recent changes in your area using filters.

Community

  • Steve Coast launched a Kickstarter campaign to fix and stabilize OSM Stats and already reached his goal of 1000 USD. However, it also received some criticism.
  • Mapbox Cities is a mentorship program for cities. For research projects on public-sector open datasets and OpenStreetMap, Mapbox wants to learn which local government datasets are most interesting for you to import, the pain points you face when interacting with government open-data stakeholders, and more about your experience with local government contacts. If you are interested in answering these questions, please take the survey for OpenStreetMap contributors; there is also one for local government officials.
  • Mikel Maron publishes his notes on the Local Chapter Congress that took place during SotM 2016.
  • Joost Schouppe publishes his analysis of the most frequently used OpenStreetMap editors.
  • Tyumen Oblast is a small region in Western Siberia. The evolution of local mappers in this region is analyzed by user mavl in his blog.

Imports

  • Ivan Garcia wants to import borders in Indonesia on behalf of HOT.
  • User ff5722 suggests in his user diary to import the treecover2010 dataset by USGS outside of Europe. Vincent de Phily, Christoph Hormann and SK53 explain in their comments what the issues of this dataset are and why it should not be imported.

OpenStreetMap Foundation

  • OSM UK has officially applied for Local Chapter status. Paul Norman asks for comments on the British request.

Events

  • The call for papers for the FrOSCon conference (Free and Open Source Conference) in Siegburg, Germany is open (and will close on 23 May). FOSSGIS e.V., the German de-facto local OSM organisation, cooperates (automatic translation) with the organizers on GIS- and OSM-related topics.

Humanitarian OSM

  • The HOT board was renewed. Meet the members. Melanie Eckle from HeiGIT / GIScience Research Group Heidelberg was elected along with Ahasanul Hoque and Pete Masters.
  • ‘The Conversation’ discusses the role of social media in the improvement of human life, and also addresses the ‘Missing Maps’ project.
  • A new Missing Maps Hosting Tool helps mapathon organizers register the event and contact other volunteers.
  • The HOT Mapathons in Belgium were not as successful as hoped, judging by the statistics on continued participation. However, Joost Schouppe writes that they were more successful in other aspects – they have united the community and recruited volunteers while organizing SotM (2016) last autumn.

Maps

  • OpenStreetMap Carto released version 3.2.0, and landuse=farm will no longer be displayed.
  • Michael Spreng presents the Swiss map of CHFreeWiFi, which is based on OSM. The weeklyOSM editors do not express any political opinion.

Open Data

  • ChristianSW points out that this year Wiki Loves Earth will take place in May. It is an annual photo competition for nature conservation and natural phenomena. Images can be uploaded to Wiki Commons.
  • The twitter account openOV claims that the timetable data of the integrated transport authority of Aachen, Germany, is public domain.

Software

  • [1] Geohacker writes about preparing accurate history and caching OpenStreetMap changesets.

Programming

  • User pathmapper asks on GitHub what will happen with Maputnik in the future because Lukas Martinelli joined Mapbox. Lukas responds that he is looking for a new maintainer.

Releases

Software Version Release date Comment
PyOsmium 2.12.1 11/04/2017 Four extensions, two changes and three bugfixes.
Mapbox GL JS v0.35.1 12/04/2017 Five bug fixes.
Maps.me iOS * 7.2.4 12/04/2017 No info.
Locus Map Free * 3.23.1 13/04/2017 Bugfix release.
Mapillary iOS * 4.6.13 13/04/2017 Two bugs fixed.
OpenJUMP 1.11 13/04/2017 New features and bug fixes.
osm2pgsql 0.92.1 13/04/2017 Two important bugs fixed.
OpenLayers 4.1.0 14/04/2017 Many changes, please read release info.
Traccar Server 3.11 14/04/2017 SMS support added.
Mapillary Android * 3.48 15/04/2017 Tuning GPX processing, some bugs fixed.
MapContrib 1.7.7 17/04/2017 Some smaller changes.
Maps.me Android * var 17/04/2017 100,000 objects added. Incl. restaurants, shops and POI.

Provided by the OSM Software Watchlist. Timestamp: 2017-04-17 18:40:55+02 UTC

(*) unfree software. See: freesoftware.

OSM in the media

  • The Euronice trendblog writes (de) (automatic translation) about the possibility of HERE and OpenStreetMap as viable alternatives to Google Maps.

Other “geo” things

  • Descartes Labs is developing a geo-search engine to search for similar objects in satellite images. Currently, coverage is limited to the US.
  • Marketwired reports that the Helsinki Metropolitan Transportation Authority’s new Journey Planner is an interactive map based on OpenStreetMap.

Upcoming Events

Where What When Country
Cochabamba Taller Hrrmtas. digitales de mapas para estudiantes de Psicología (UMSS) 20/04/2017-24/04/2017 bolivia
Kyoto 【西国街道#03】桜井駅跡と島本マッピングパーティ 22/04/2017 japan
Misiones Charla Mapas Libres en FLISoL, Posadas 22/04/2017 argentina
Bremen Bremer Mappertreffen 24/04/2017 germany
Graz Stammtisch Graz 24/04/2017 austria
Kinmen Shang Yi Airport Do mapping Kinmen by youself 24/04/2017-25/04/2017 taiwan
Zaragoza Mapatón Humanitario – Mapeado Colaborativo y Dpto. Geografía de la Universidad de Zaragoza 25/04/2017 spain
Viersen OSM Stammtisch Viersen 25/04/2017 germany
Dusseldorf Stammtisch Düsseldorf 26/04/2017 germany
Leuven First Leuven Monthly OSM Meetup/Missing Maps 26/04/2017 belgium
Antwerp Missing Maps at IPIS 26/04/2017 belgium
Lübeck Lübecker Mappertreffen 27/04/2017 germany
Urspring Stammtisch Ulmer Alb 27/04/2017 germany
Heidelberg Semester Start Missing Maps mapathon for World Malaria Day 2017 27/04/2017 germany
Vancouver Vancouver mappy hour 28/04/2017 canada
Ouro Preto Mapatona Estrada Real 01/05/2017 brazil
Rostock Rostocker Treffen 02/05/2017 germany
Stuttgart Stuttgarter Stammtisch 03/05/2017 germany
Helsinki Monthly Missing Maps mapathon at Finnish Red Cross HQ 04/05/2017 finland
Dresden Stammtisch 04/05/2017 germany
Passau Mappertreffen 08/05/2017 germany
Taipei OSM Taipei Meetup, MozSpace 08/05/2017 taiwan
Rome Walk4Art II 08/05/2017 italy
Avignon State of the Map France 2017 02/06/2017-04/06/2017 france
Kampala State of the Map Africa 2017 08/07/2017-10/07/2017 uganda
Champs-sur-Marne (Marne-la-Vallée) FOSS4G Europe 2017 at ENSG Cité Descartes 18/07/2017-22/07/2017 france
Curitiba FOSS4G+State of the Map Brasil 2017 27/07/2017-29/07/2017 brazil
Boston FOSS4G 2017 14/08/2017-19/08/2017 USA
Aizu-wakamatsu Shi State of the Map 2017 18/08/2017-20/08/2017 japan
Boulder State of the Map U.S. 2017 19/10/2017-22/10/2017 united states
Buenos Aires FOSS4G+State of the Map Argentina 2017 23/10/2017-28/10/2017 argentina
Lima State of the Map LatAm 2017 29/11/2017-02/12/2017 perú

Note: If you would like to see your event here, please add it to the calendar. Only events that are in the calendar will appear in weeklyOSM. Please check your event in our public calendar preview and correct it where appropriate.

This weeklyOSM was produced by Nakaner, Peda, Polyglot, Rogehm, Spec80, SrrReal, YoViajo, derFred, jcoupey, jinalfoflia, keithonearth, widedangel.

by weeklyteam at April 21, 2017 02:25 PM

April 20, 2017

Wikimedia Foundation

Sharing a live experience of the world: Luis Álvarez

Álvarez at Wikimedia Conference 2016. Photo by Jason Krüger, CC BY-SA 4.0.

I would like more people to upload sound files with the sounds of the streets, the sounds of animals, the music they hear in the places where they live, soundscapes such as Murray Schafer … but with sounds of markets and squares. Sounds are alive and part of humanity. When we share them, we give them life.

Since joining the Wikimedia movement three years ago, Luis Álvarez has contributed thousands of edits on the Spanish Wikipedia, in addition to photos that he took and uploaded to Wikimedia Commons, the free media repository.

Furthermore, he has invested much time in recording and uploading different types of audio files. His sound contributions include self-composed music that accompanies educational videos (like the one below), ambient sounds that can add life to relevant Wikipedia articles or be remixed for different purposes, sound effects that he made, sonic experiments, recording public domain music performances, and more.

Separate from his composing, Álvarez is a university teacher at the Autonomous University of Aguascalientes. He is presently working on his PhD in Sociocultural Studies.

Álvarez learned about Wikipedia and its sister projects while studying for his first university degree in communications. Later on, he became more interested in the culture of sharing and decided to devote more time to it.

“Upon starting my postgraduate studies, I immersed myself in studying the Remix phenomenon,” Álvarez recalls. “I started with music and sounds, but then I realized that there are other communities doing the same thing in other fields, like free software and remixing videos. I was searching for a project with a more stable community that was creating a valuable product. When I met the Wikimedia community in Mexico, I felt that it could be what I was looking for.” He continues:

I became part of that community, which has changed my life completely. I used to like the idea of sharing what I did, and learning from what others did, but now I can practice it every day by uploading files, editing and creating articles.

Video by Kameraprojekt Graz, CC BY-SA 4.0. Music and sound effects by Luis Álvarez.

 

Álvarez’ use of audio files has developed from “recording sounds and mixing them with music, poems, or any other sound,” to using sound as a documenting tool. A quote from the American composer John Cage has really resonated with him: “When I hear traffic, the sound of traffic—here on Sixth Avenue, for instance—I don’t have the feeling that anyone is talking. I have the feeling that sound is acting. And I love the activity of sound.”

An illustration of sound as a documenting tool can be found in Wikipedia’s article on the Church of San Marcos, where Álvarez added a recording of the church bell. The sound gave a dynamic tone to the article.

“I would like more people to upload sound files,” Álvarez explains, “with the sounds of the streets, the sounds of animals, the music they hear in the places where they live, soundscapes such as Murray Schafer … but with sounds of markets and squares. Sounds are alive and part of humanity; if we share them, we give them more life.”

Álvarez has also uploaded to Wikimedia Commons some of the music he made for projects outside the Wikimedia movement. “I try to upload samples or several tracks that compose one musical piece to make them easier to reuse,” he explains.

So far, Álvarez has uploaded over 5,000 files to Wikimedia Commons, of which nearly 150 are audio. Different media outlets have used some of his photos, but he is frustrated that they often don’t attribute this work to him. “I write to them to rectify this, not only because I want the recognition, but to help them understand that identifying the author is part of … the culture of sharing,” says Álvarez.

To Álvarez, free knowledge sharing is not only providing an easier option for knowledge seekers; it is a way to give everyone the opportunity to stand up for their unique views.

“We have been told that we must be spectators,” says Álvarez, “when we can also be ‘spect-actors’ as Augusto Boal, founder of the Oppressed Theater, said. Being part of history is what allows us to share; and though it seems trivial, uploading a photograph that we like or a sound that evokes a feeling helps this community grow.”

Samir Elsharbaty, Digital Content Intern
Wikimedia Foundation

Iván Martínez assisted with this profile.

by Samir Elsharbaty at April 20, 2017 08:35 PM

Wiki Education Foundation

Women Scientists in Blue During the Year of Science

Last year, one of the goals we set for the Year of Science campaign was to foster the development of biographies of women scientists on Wikipedia. Throughout the year we saw great work from students and Visiting Scholars, but in this post I’d like to highlight the results of a Year of Science “virtual editathon” co-organized by WikiProjects Women in Red and Women Scientists. The Celebrating Women Scientists Online Editathon ran from April to December 2016, leading to the development of hundreds of articles about women in science.

On Wikipedia, as with most sites that use wiki software, a link to another page on the same wiki appears in blue text. If the page does not exist, the link color is red. Because wikis are almost always collaborative projects, creating a link to a nonexistent article serves as a useful indicator to other users that there is an opportunity to contribute. When Wikipedia was just getting its start back in the early 2000s, most articles were littered with red links pointing to important topics that had yet to be written about.

Red links are less common these days, but there are still many, many notable subjects that have not yet been covered, and many topic areas which are, as a whole, inadequately represented. Subjects may be omitted for a number of reasons, often tied to one or more forms of Wikipedia’s systemic bias. One area in which Wikipedia has long struggled is its coverage of women, due in some part to the fact that women comprise only roughly 10-20% of the people who write Wikipedia.

WikiProject Women in Red is a project formed by Rosie Stephenson-Goodknight (recently announced as Wikipedia Visiting Scholar at Northeastern University) and Roger Bamkin in 2015 to address the underrepresentation of women. The “red” is a reference to “redlink”, and the project aims to change links to notable women’s names from red to blue. It has attracted hundreds of participants who have written thousands of articles and generated lists of thousands more red names, organized according to field, nationality, time period, etc. When it started, 15% of biographies on Wikipedia were about women. A couple of years later, that number is up to 16.9% — an impressive gain, but there’s clearly a lot more work to be done.

A few years before the formation of Women in Red, another WikiProject got its start: WikiProject Women Scientists, created to address the same gender gap on Wikipedia, but focused on scientists. Emily Temple-Wood started the project in 2012 while an undergraduate molecular biology student. Now at medical school, she continues to both write articles and advocate for the representation of women in science on Wikipedia.

Working together, the Wikipedia community embraced the Year of Science and either created or improved articles on hundreds of women scientists during the virtual editathon. Many now include new Featured Pictures and many others were highlighted in the Did You Know section of Wikipedia’s Main Page. Below are some of the women whose contributions are now better represented on Wikipedia thanks to these efforts. Thanks again to WikiProjects Women in Red and Women Scientists!

 

Margaret D. Foster (1895-1970), chemist recruited to work on the Manhattan Project; first female chemist to work for the U.S. Geological Survey.
Image: Margaret D. Foster, in Lab, 4 October 1919.jpg, by National Photo Company, restored by Adam Cuerden, public domain, via Wikimedia Commons.
Elizabeth Truswell (b. 1941), former Chief Scientist at the Australian Geological Survey Organisation who researched the floral history of Antarctica and developed methods to study sub-ice geology.
Image: Dr Elizabeth Truswell.jpg, by MichaelJHood, CC BY-SA 4.0, via Wikimedia Commons.
Salinee Tavaranan, mechanical engineer, winner of a 2014 Cartier Women’s Initiative Award for work on renewable energy in remote areas of Thailand.
Image: Salinee Tavaranan.jpg, by PopTech, CC BY-SA 2.0, via Wikimedia Commons.
Kitty Joyner (1916-1993), electrical engineer who worked with NASA throughout her career; first woman engineer at NASA, and the first woman to graduate from the University of Virginia’s engineering program.
Image: Kitty Joyner – Electrical Engineer – GPN-2000-001933.jpg, by NACA, public domain, via Wikimedia Commons.
Rosemary Askin (b. 1949), geologist specializing in Antarctic palynology; first woman from New Zealand to lead a research project in Antarctica.
Image: MG 6885Rosie1970.jpg, by Rosieaskin, CC BY-SA 4.0, via Wikimedia Commons.
Barbara McClintock (1902-1992), winner of the 1983 Nobel Prize in Physiology or Medicine for her discovery of transposition.
Image: Barbara McClintock (1902-1992) shown in her laboratory in 1947.jpg, by the Smithsonian Institution, restored by Adam Cuerden, no known copyright restrictions, via Wikimedia Commons.
Alice Catherine Evans (1881-1975), microbiologist who researched bacteriology at the U.S. Department of Agriculture.
Image: Alice C. Evans, National Photo Company portrait, circa 1915.jpg, by National Photo Company Collection, restored by Adam Cuerden, no known copyright restrictions, via Wikimedia Commons.
Diana Wall, environmental scientist and soil ecologist whose research concerns ecosystem processes, soil biodiversity, ecosystem services, and how they are affected by climate change.
Image: Diana Wall portrait.jpeg, by Byron Adams, CC BY-SA 3.0, via Wikimedia Commons.
Margarete Zuelzer (1877-1943), biologist and zoologist specializing in the study of protozoa.
Image: Margarete Zuelzer als Studentin in Heidelberg.jpg, via Wikimedia Commons.
Glenda Gray (b. 1962), recipient of the Order of Mapungubwe, the South African government’s highest honor, for “Her excellent life-saving research in mother-to-child transmission of HIV and AIDS that has changed the lives of people in South Africa and abroad. Her work has not only saved lives of many children, but also improved the quality of life for many others with HIV and AIDS.”
Image: Glenda Gray SA.jpg, by Simon Fraser University Communications, CC BY 2.0, via Wikimedia Commons.
In-Young Ahn, benthic ecologist, principal research scientist for the Korea Polar Research Institute, and the first South Korean woman to visit Antarctica.
Image: Dr. In-Young Ahn at the Korean Antarctic Station, King Sejong in October 2015.jpg, by Inyoungahn, CC BY-SA 4.0, via Wikimedia Commons.
Jan Strugnell, evolutionary molecular biologist at the Centre for Sustainable Tropical Fisheries and Aquaculture at James Cook University.
Image: Jan at Carlini.jpg, by Iracooke, CC BY-SA 4.0, via Wikimedia Commons.
Irma LeVasseur (1877-1964), physician who was the first French-Canadian woman to become a doctor; pioneer in pediatric medicine.
Image: Irma Levasseur.png, author unknown, public domain, via Wikimedia Commons.

 

 

by Ryan McGrady at April 20, 2017 06:01 PM

Gerard Meijssen

#Wikidata user stories - Suggesting Henry Putnam, a great #Librarian

As software suggests what articles to write, it is relevant to understand what logic the suggestions are based on. Phenomena like the "six degrees of separation", popularized through Kevin Bacon, have their scientific counterpart in the graph-theory measure of "betweenness centrality". This measure is used as a basis in research into which articles are important and which automated suggestions to make.
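
Betweenness centrality counts, for every pair of nodes, the fraction of shortest paths that pass through each other node. The following is a minimal brute-force sketch in pure Python on an invented toy graph, just to illustrate the idea; real tools compute this far more efficiently with the Brandes algorithm.

```python
from collections import deque
from itertools import combinations

def shortest_paths(graph, s, t):
    """Return all shortest paths from s to t (BFS, then backtracking)."""
    dist = {s: 0}
    preds = {v: [] for v in graph}  # predecessors on shortest paths
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                preds[v].append(u)
                queue.append(v)
            elif dist[v] == dist[u] + 1:
                preds[v].append(u)
    if t not in dist:
        return []
    def backtrack(v):
        if v == s:
            return [[s]]
        return [p + [v] for u in preds[v] for p in backtrack(u)]
    return backtrack(t)

def betweenness(graph):
    """Unnormalized betweenness: for each pair (s, t), every other node v
    earns the fraction of s-t shortest paths that pass through v."""
    scores = {v: 0.0 for v in graph}
    for s, t in combinations(graph, 2):
        paths = shortest_paths(graph, s, t)
        if not paths:
            continue
        for v in graph:
            if v not in (s, t):
                scores[v] += sum(1 for p in paths if v in p) / len(paths)
    return scores

# Toy graph: C is the hub linking A and B to the chain D-E.
graph = {
    'A': ['C'], 'B': ['C'],
    'C': ['A', 'B', 'D'],
    'D': ['C', 'E'], 'E': ['D'],
}
scores = betweenness(graph)  # C scores highest: it lies on most shortest paths
```

In the same way, a well-connected librarian sits on many shortest paths between other items in the knowledge graph, which raises the weight of suggestions involving him.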

Mr Putnam is one of the more relevant librarians. He developed an eponymous classification system, continued its development as the Librarian of Congress (it is still in use), was twice president of the American Library Association, and was a knight of the Order of the Polar Star. When weight is applied to references to a person, all of this is relevant in the right setting.

When an article is to be written or improved, it helps when it can be suggested what can be improved. By including statements in Wikidata, suggestions can be made in the local language. Facts like dates of birth and death are also easy and obvious.

So when people consider a particular subject to be of universal relevance, it helps when associated subjects are well developed in Wikidata: when, for all the presidents of the American Library Association, facts like where they studied, where they worked, and what awards they received are included. When this is done for all the people who share categories, the betweenness of many influential librarians increases. This will influence what is suggested for people to do.
Thanks,
       GerardM

by Gerard Meijssen (noreply@blogger.com) at April 20, 2017 09:42 AM

April 19, 2017

Wiki Education Foundation

Authorship Highlighting

We’ve just released a new Dashboard feature: Authorship Highlighting.

This update to the Dashboard’s Article Viewer shows the current version of a live article, and now highlights which student added which parts of the text.

To try it, just head to the Articles tab on your course page. In the list of Articles Edited, click any individual row to expand a drop-down menu revealing a row of buttons, then select the Current Version w/ Authorship Highlighting button. The Article Viewer will open, and at the bottom you’ll see a legend with student usernames. Once it finishes loading (which can take a while, depending on how old the article is) the text will be color-coded by student. Text with no highlighting was added by other editors. You can also move your mouse cursor over some text to pop up a tooltip with the name of the contributing student.

Clicking on a row will expand it to display these buttons. Select “Current Version w/ Authorship Highlighting”.
At the bottom of the Article Viewer, you’ll find a legend with the usernames of each student who edited the article. Once it finishes loading — this can take a while for older articles — the article text will be color-coded by student. In this example, the 4 students working on the article contributed most of the text. The unadorned text came from other editors.

The Authorship Highlighting tool is aimed at making it easier to visualize and evaluate student work; better tools for evaluation and grading are the most common category of feature request from instructors. This update should be especially useful for evaluating group work — showing how each student in a group has “touched” a given article.

We’ve built Authorship Highlighting on top of a set of amazing data analysis and data visualization tools created by Felix Stadthaus, Maribel Acosta, and Fabian Flöck: wikiwho and whoCOLOR. It works by calculating which words in an article were added by which user and within which edit.
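
As a rough sketch of that idea (not the actual wikiwho algorithm, which also handles deletions, reintroductions, and reverts), token-level authorship can be approximated by diffing each revision against the attributed tokens of the previous one and crediting newly inserted tokens to that revision's author. The revision history below is invented for illustration:

```python
from difflib import SequenceMatcher

def attribute_tokens(revisions):
    """revisions: chronological list of (author, text) pairs.
    Returns (token, author) pairs for the latest text, crediting each
    token to the author of the edit that introduced it."""
    attributed = []  # (token, author) pairs for the current text
    for author, text in revisions:
        tokens = text.split()
        old_tokens = [tok for tok, _ in attributed]
        matcher = SequenceMatcher(a=old_tokens, b=tokens, autojunk=False)
        updated = []
        for op, i1, i2, j1, j2 in matcher.get_opcodes():
            if op == 'equal':
                # Unchanged tokens keep their original author.
                updated.extend(attributed[i1:i2])
            elif op in ('replace', 'insert'):
                # New tokens are credited to this revision's author.
                updated.extend((tok, author) for tok in tokens[j1:j2])
            # 'delete': removed tokens are simply dropped.
        attributed = updated
    return attributed

# Hypothetical two-revision history:
history = [
    ("Student_A", "the quick fox jumps"),
    ("Student_B", "the quick brown fox jumps high"),
]
result = attribute_tokens(history)
# "brown" and "high" are credited to Student_B, the rest to Student_A.
```

The Dashboard then only needs to map each author to a highlight color to render the legend and the color-coded text.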

Please try it out and let us know what you think!

by Sage Ross at April 19, 2017 06:58 PM

Wikimedia Foundation

Could posting about women’s history grow our female audience for the future?

When I was a journalism student, Soledad O’Brien was one of my heroes, so getting a comment from her for this blog post was both exciting and sobering. Even this defiant newscaster, who refused advice to change her ethnically mixed name for the sake of television, butts heads with the gender gap on Wikipedia.

Personally, I see it every day.

As a member of the Wikimedia Foundation’s Communications team, I post biographies from Wikipedia on Facebook, Twitter, and the Wikimedia Blog. I post about inventors of cinema, a philosopher who laughed to death, and a musician who turned the world a little more purple with his music and very blue with his death.

You might notice that all of those examples were men. There’s no harm in posting about a man, in and of itself—but when you add it up, you find that as of 2016, only about one in six Wikipedia biographies were of women. High-quality biographies about women, especially those in fields outside of the entertainment industry, are relatively scarce.

Our Facebook page, followed by 5.5 million people, reflects this in its audience. Of those 5.5 million, 71 percent are men to 29 percent women (as of March). We gained more men than women even during Women’s History Month in 2015—an additional 54,615 of them, to be precise.  If we lose ground then, when can we possibly make a dent in the gender gap of our Facebook fan base?

Why does that matter? Facebook is where we reach people who like Wikipedia, but may not yet be aware of the Wikimedia movement, how it works, and ways to get involved. It’s a window into our movement disguised as a showcase of our content. Last year’s Women’s History Month, seemingly a perfect opportunity to post about and reach women, was a disappointment. Posting profiles about notable women to a heavily male audience drew catcalls and even death threats.

We have our work cut out for us when it comes to building an inclusive environment that welcomes everyone, regardless of gender. We can do better—and we did, with some help from Rosie Stephenson-Goodknight, the 2016 co-Wikipedian of the Year. When we featured Rosie in an experimental Facebook post promoted to women during December’s English-language fundraiser, we weren’t quite sure what would happen. At worst, we expected to get at least some likes for the post. But then something unusual happened: more than 1,400 women followed our page. Any page that would promote Rosie was apparently good enough for them, and it showed that we hadn’t been reaching women who wanted to like us, who wanted to join the Wikimedia movement in their own way.

This led us to a simple question: if a promoted post of Rosie alone could make a difference in our demographics, what would happen if we spent all of Women’s History Month promoting posts of biographies of women? Could a modest budget of less than $50 a day break through to women we weren’t reaching?

We asked the Facebook community for suggestions and featured notable women from more than 20 nations. One of those women was Mónica Mayer, an artist and activist who co-founded Mexico’s first feminist art collective, whom we featured on March 24. In 2015, Mayer made her first foray into Wikipedian culture by organizing an editathon to improve biographies of Mexican women feminists and artists: a gender-gap activist adding articles about women to the Spanish Wikipedia.

We posted about remarkable women all month—and not all were saintly. We posted about teenage Nobel Peace Prize winner and Pakistani activist Malala Yousafzai, but also deadly accurate Soviet sniper Lyudmila Pavlichenko. We posted about controversial writers Chimamanda Ngozi Adichie and Ayn Rand.

Of course, we received some of the same old derailing questions we used to, like  “when is Men’s History Month?” (Note: International Men’s Day is in November, as you can learn on Wikipedia. The article is more than 7,000 words longer than Women’s History Month, and nearly 6,000 words more than International Women’s Day.)

Read that a dozen times. It may make you question your faith in what you’re doing. Luckily we spoke to journalist Leslie Stahl, who urged young women especially to find work that resonates with them.

Further encouragement came from Susan Wojcicki, the CEO of YouTube who was once described by Time as the most powerful woman on the internet.

Here at the Wikimedia Foundation, we don’t have the power to legally modify dollar bills. (Although as a non-profit, we gladly accept them.) But we can support Wikimedians like Rosie when they organize edit-a-thons, and we can help increase visibility for inspirational projects like WikiProject Women in Red.

Still, while the motives may be there, did it work? Did all of this effort help close the gender gap by even a little bit in our Facebook audience? We weren’t sure what to expect. When you have 5.5 million followers, making a good-sized dent would require a lot of people. Our worst fear was that we would lose ground again, like in 2015.

That didn’t happen. In this year’s Women’s History Month, the gender gap on Wikipedia’s Facebook page shrank by 100,224 – we picked up that many more women fans than men during March. And the conversations about women’s history changed dramatically as women liked, shared, and commented on the page 30 percent more than men, a 70 percent change from the month before.

A social media campaign does not magically “fix” the gender gap on Wikipedia. As of publishing time, we’re still 68 percent men to 32 percent women. Still, we feel it makes clear improvements. Changing the conversation within our community—making Wikipedia feel less like a “boy’s club” and more like a free market of knowledge—invites more critique, more collaboration, and more participation.

Aubrie Johnson is a social media associate on the Wikimedia Foundation’s communications team. If you follow us on Twitter and Facebook, you have read her writing many times.

The images of O’Brien, Stahl, and Wojcicki are all courtesy of the respective subjects. The image of Mónica Mayer is by Iván Martinez/Wikimedia Mexico, CC BY-SA 4.0. The images in the gif are all in the public domain.

by Aubrie Johnson at April 19, 2017 06:33 PM

Gerard Meijssen

#Wikidata user stories - the sum of all #knowledge


Map showing all places English Wikipedia covers


Map showing all places GeoNames covers

They say "a picture paints a thousand words", and there is no arguing with these: English Wikipedia covers only so much. With such gaps in coverage it is impossible to understand what is missing and how relevant it is, particularly to people who do not read English.

LSJbot has created, in several Wikipedias, lots of articles for the places GeoNames knows about. As a consequence, much of the missing information enters Wikidata through the back door. There have been some rumblings among Wikidatans that the GeoNames data is not perfect. But hey, let's make "Be bold", a Wikipedia quality, a Wikidata quality as well.

For many Wikipedians, the notion of bot-generated articles is anathema. For others, the fact that there is so much we do not cover is just as problematic. The good news is that more information in Wikidata will enable us to predict what content is lacking. We only need to acknowledge that Wikipedia is not the sum of all knowledge... yet.
Thanks,
      GerardM

by Gerard Meijssen (noreply@blogger.com) at April 19, 2017 06:43 AM

#Wikidata user story - Suggestions to #Wikipedia editors

The #research done on "suggestions to Wikipedia editors" is exciting. There is a paper and a great presentation. The bottom line is that when you know what to suggest to people, when you make it personal, the result is what you would hope for. Consider: 3.2 times the number of articles created, and twice as many articles created as without personalised recommendations.

There is math involved, obviously, but the gist is that when suggestions are in line with previous activities, people will be triggered to do more. As the presentation explains, this first experiment asked people to translate from English. The assumption is that English covers more than most other languages.

The slides of the presentation include visualisations showing the coverage of several Wikipedias. When you consider them, it becomes clear where the Wikimedia projects are challenged.

Leila Zia, the presenter, makes it clear: all this would not be possible without Wikidata. One way Wikidata differs from the assumptions of the research is that an increasing number of its subjects have no links to Wikimedia or Wikipedia articles at all. Many of these are connected to existing content because they share common statements, statements like "profession: soccer player" or "award received: whatever award".

When totally new subjects are to be considered, there is already plenty that might be suggested in Wikidata itself.
Thanks,
      GerardM

by Gerard Meijssen (noreply@blogger.com) at April 19, 2017 05:20 AM

April 18, 2017

Wiki Education Foundation

Wiki Ed at the NWSA Chair and Director meeting in Chicago

In early March, I returned to the National Women’s Studies Association’s (NWSA) regional meeting of directors and department chairs in Chicago. We attended this meeting for the third year because it gives us an opportunity to learn about initiatives within women’s studies departments and to share updated strategies and resources with highly influential faculty.

How students can share knowledge with the world

After partnering with NWSA for the last two years, the meeting was a great opportunity to share the impact women’s studies students have made on Wikipedia and its gender content gap as part of our partnership. To date, more than 3,000 students in 140 women’s studies classrooms have added 1.88 million words to Wikipedia. Their contributions help to reduce the gender disparity in the encyclopedia, address bias on Wikipedia, and correct misleading information. Together, we’re helping the masses understand complex concepts in women’s, gender, and sexuality studies.

A powerful learning experience for students

We shared preliminary results from Research Fellow Zach McDowell’s research on student learning outcomes during a Wikipedia assignment, and attendees were excited to learn more as we publish results. One session at the meeting focused on curriculum transformation, and we discussed the challenges of addressing relations of power in the undergraduate classroom. Several Wiki Ed instructors have identified the way editing Wikipedia empowers students to produce knowledge as one reason they engage in this project, making Wikipedia-writing assignments a great fit for instructors looking to bring this learning experience into the classroom.

Support from Wiki Ed

For department chairs, the promise of making a big impact on the world isn’t always enough. Even when committed to developing new curricula within their own class or department, they’re also sorting out budget details and how to support instructors who want to try out new pedagogical tools. One great thing about Wiki Ed—as unique as we are to describe to instructors—is that we can provide some of that support for free from outside of the university.

Wiki Ed exists to bring higher education resources to Wikipedia and its readers. We’ve already proven students can make a powerful impact on Wikipedia’s content. When we ask instructors to join our initiative, we’re asking them to build a new project into their curriculum. Luckily, we already have Wikipedia expertise, trainings for students, and the Dashboard to help students learn how to participate in a meaningful way.

If you’re an instructor interested in joining Wiki Ed’s Classroom Program, email us at contact@wikiedu.org.

by Jami Mathewson at April 18, 2017 08:49 PM

Wikimedia Tech Blog

Editing will temporarily pause for a failover test

Photo by Hong Zhang, public domain/CC0.

During the next month, all Wikimedia wikis will be placed into read-only mode for a short period on two days. This action will allow the Wikimedia Foundation’s engineers to test services in the secondary data center in Texas (referred to as “codfw”) and to do planned maintenance.

The secondary data center is a replica of our primary cluster in Virginia.  The main purpose of this data center is to improve the reliability and failover capabilities of Wikipedia and all of our sites for users around the world.  Both data centers maintain full, up-to-date copies of the databases for Wikipedia and other projects, plus many other services.  In case of any type of disaster at the primary data center in Virginia, the Technical Operations team expects to be able to transfer all traffic to the secondary data center in Texas within minutes.

Upcoming test

We are planning a test to find out how quickly and reliably we can transfer all application server traffic and tightly coupled service dependencies to the secondary data center. Teams in Technology, and several outside of it, first performed this type of test in April 2016. Since then, the Technology department has improved its procedures and automated several steps, and we are now planning to run this test for a minimum of two weeks.  This two-week window should also permit us to do some planned maintenance at the primary server site. At the end of the test period, we will transfer all of the traffic and services back to the primary service center again.

The process of switching data centers is scheduled for Wednesday, 19 April at 14:00 UTC and Wednesday, 3 May at 14:00 UTC.  Any changes to this schedule will be noted on our Wikitech calendar.

Effect of this test on editors and other contributors to our sites

Ideally we’d make this switchover without affecting our users, but limitations in MediaWiki, the software that powers our wikis, prevent that at this time.  When we switch from one datacenter to the other, we will have to place all wikis in read-only mode for a short time. We expect this step to take approximately 20 to 30 minutes each time.

During those weeks, we will also be halting all non-essential code deployments. This means that the regular MediaWiki deployment process will be stopped, and no other non-critical deployments will be done during the two test weeks.

The process for this test is complex, but we learned a lot from doing this last year, and we are hoping to make this process even simpler, faster, and more secure in the future.  We hope to not only greatly reduce the disruption for our users and the time needed to make the switch, but also to reduce the amount of manual effort necessary.  We appreciate your patience while we improve this essential infrastructure that helps us to keep useful information from the projects available on the Internet, free of charge, in perpetuity.

Faidon Liambotis, Principal Operations Engineer, Technical Operations
Wikimedia Foundation

You can read about a previous similar and successful failover test in a blog post from April 2016.

by Faidon Liambotis at April 18, 2017 06:16 PM

Why I spend my Sundays photographing Kolkata

Photo by Rajashree Talukdar, CC BY-SA 4.0.

One thing I remember very clearly from my childhood was how my parents would search for books that I would find interesting and engaging. Avid readers themselves, the books they bought for me would only be added to all of the ones we already had. Still, they used to spend hours in College Street, the one stop place for book lovers of Calcutta (now Kolkata) to find the right one for me.

I now understand that my parents wanted me to start with books that had beautiful illustrations and lots of images. They knew that I would find them fascinating, and the books would help me learn how to read.

I have to confess that they were absolutely right.

When I look back and go through the books my parents gave me, they were mostly folktales with dramatic illustrations or children’s general knowledge books, full of bright and colourful images. (At this time, the internet had yet to arrive and encyclopedias were simply too expensive.) So these books were our gateway to the world of fascinating facts and virtual voyages: I came to know about Machu Picchu, the great pyramid of Giza, the leaning tower of Pisa, and many more such places. And without the stunning images accompanying the articles, it would have been difficult for me to visualize those beauties, if not impossible.

Affordable internet for domestic use arrived in our city when I was in college in the early 2000s. Along with small wonders like email and instant messaging, we learned about internet searching. Although the results were rarely as informative as we can all get today, they were enough to keep us hooked. We were learning new things every day, and that is how we came across a new site on one particularly fine afternoon: Wikipedia, the online free encyclopedia.

Before Wikipedia, encyclopedias were fat volumes of leather-bound books. Wikipedia would go on to change many conceptions, but at the time it was unable to fully impress me. On the one hand, I was excited to find an article about our hometown; on the other, I was disappointed to see a sheer lack of images.

Over time, Wikipedia became more and more of an everyday online activity, and I watched as the number of articles about places near me were created and expanded. Still, there were never enough images to satisfy me.

Photo by Sumit Surai, CC BY-SA 3.0.

Then came a day out of the blue that changed a lot of things in my life. I joined a photowalk called Wikipedia Takes Kolkata, the second to be held in the city, and uploaded several photos from it. Several months later, I found that one of them had been added to an article on the English Wikipedia. That feeling, that someone values your work enough to add it to a page read by thousands of people each year, is hard to explain. Having contributed to beautifying an article about a place in our city was huge for me.

I was already going out on weekends to capture snapshots of my city, particularly the lesser-known places like heritage buildings, for my blog. Transitioning this to Wikimedia Commons was not an incredible change of pace, and my subject areas slowly grew larger to include street scenes, foods, holiday destinations, festivals, and events. I donated any image that I thought could hold educational value to Commons.

But the question of what has motivated me to continue contributing photos is still interesting to me. I am not a professional photographer. It is not my primary source of income, although I do get paid occasionally. So why do I give them to Commons for free?

There are many reasons:

  • I don’t think keeping unused images squirreled away on my hard disk would do anyone any good.
  • If I can share them on social media, then why not on Commons, where they can be put to good use? By donating them in this way, my images help make the Wikipedia articles about things close to me beautiful.
  • I feel that the images I’ve uploaded to Commons are safe from hardware crashes. They are there at the highest uploaded resolution, where I can re-download them again whenever I want to.
  • They have a copyright license I agree with, and when they are used elsewhere, I get credit. I have allowed them to be used freely, but I will always remain the creator.

But I have to admit that I’ve felt the most rewarded when my images started getting “quality” and “featured” status. These markers adorn the best images Commons has to offer, and they are awarded only after undergoing a voting process where Wikimedians from all over the world offer their opinions. That they would select my photos has given me a lot of confidence—whatever we do in our lives, appreciation and recognition are things that make the road ahead much smoother, and these quality badges have motivated me both on and off Wikimedia.

Sumit Surai, Wikimedian

You can see Sumit’s best photos on Wikimedia Commons.

by Sumit Surai at April 18, 2017 04:35 AM

April 17, 2017

Wiki Education Foundation

Learning and sharing at Wikimedia Conference 2017

Wiki Ed runs programs connecting Wikipedia to higher education in the United States and Canada — but the world is larger than just the boundaries of our programs. There are many organizations and individual volunteers around the world who, like Wiki Ed, also run programs to form relationships between Wikipedia and educational institutions, associations, and organizations, or otherwise coordinate initiatives aimed at improving the availability and quality of information under a free license on Wikimedia projects. Participating in this global network enables us to learn from what others have done, share our learnings, and help move the strategy of the Wikimedia movement forward from our perspective.

All of these were on the agenda at the end of March when Executive Director Frank Schulenburg and I traveled to Berlin, Germany, to attend the Wikimedia Conference. This annual gathering brings together program leaders from organizations who work to improve Wikipedia content globally. This year’s conference was especially important, as it was the kick off for the Wikimedia strategy process. While Frank joined the Strategy track of the conference, I participated in the conference tracks aimed at building program capacity and learning from partnerships.

For me, the best part of the conference was collaborating with other program leaders working at the intersection of Wikipedia and education worldwide. It’s a really good opportunity to learn about new programs and new ideas, understand best practices for a variety of contexts, and likewise share our own learnings and the work we’ve been doing. I met with members of the Wikipedia Education Collaborative, a group of people leading programs in the education space, and had a number of opportunities to share our collected knowledge and experiences, including leading a discussion workshop on thinking about program impact and leading an impromptu session on how to use the Programs & Events Dashboard, a version of Wiki Ed’s Dashboard software available for program leaders and event organizers anywhere.

Frank participated in the Strategy track, which sought to begin to answer the question, “What do we want to build or achieve together over the next 15 years?” by coming up with a few key theme statements. Three statements related to education emerged out of the process. These themes, as well as others generated through organized groups and individual contributors, will be consolidated during this initial strategy cycle. Next month a second cycle will identify the top five thematic clusters.

Of course, the structured conference presentations are only some of what you get from events like this. Hallway conversations, informal meet-ups, and meals provided great opportunities to talk with others in the broader Wikimedia community. These conversations were invaluable to us as we embark on our own annual planning process for Wiki Ed. A huge thank you to everyone we interacted with who made the conference so meaningful for Frank and me, and a special thank you to Wikimedia Deutschland for their excellent conference organization skills. We look forward to collaborating more with the global community at Wikimania 2017.

Image: Wikimedia Conference 2017 by René Zieger – 238.jpg, by René Zieger for Wikimedia Deutschland e.V., CC BY-SA 4.0, via Wikimedia Commons.

by LiAnna Davis at April 17, 2017 07:02 PM

Gerard Meijssen

#Wikidata user story - #DBpedia, #death and #Federation

Federation between DBpedia and Wikidata became possible. As a consequence, the results of a query that runs on DBpedia can be linked to Wikidata.

Some time ago people at DBpedia created a wonderful query that shows differences between DBpedia and the Dutch and Greek Wikipedia. It received approval from the Dutch Wikipedia community.

With federation, something much more interesting became possible: a federated query comparing Wikidata with one DBpedia at a time. When the query runs, current data from both Wikidata and DBpedia is presented. When a Wikipedia associated with a DBpedia changes, DBpedia may import the differences from an RSS feed, and running the query again will then show the latest differences.

Updating information about one particular type of statement, like date of death or place of death, will always be based on the current differences. Experiencing the results in this way is truly motivating. Federation is an instrument that can help us improve the quality of either federated system.
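The actual query is not reproduced in the post, so the following is our own reconstruction of the kind of federated SPARQL it describes (P570 is Wikidata's standard "date of death" property; the endpoint URL and the DBpedia ontology terms are the usual public ones). Run against the DBpedia SPARQL endpoint, it reaches out to Wikidata via a SERVICE clause and keeps only the rows where the two dates disagree:

```python
# Reconstruction of a DBpedia-to-Wikidata federated query comparing
# dates of death; mismatching rows are the "current differences".
FEDERATED_QUERY = """
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>

SELECT ?person ?dbpediaDeath ?wikidataDeath WHERE {
  ?person dbo:deathDate ?dbpediaDeath ;
          owl:sameAs ?item .
  FILTER (STRSTARTS(STR(?item), "http://www.wikidata.org/entity/"))
  SERVICE <https://query.wikidata.org/sparql> {
    ?item wdt:P570 ?wikidataDeath .   # P570 = date of death
  }
  FILTER (?dbpediaDeath != ?wikidataDeath)
}
LIMIT 100
"""
print(FEDERATED_QUERY)
```

Because both sides are queried live, rerunning the query after either dataset changes immediately reflects the latest differences.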
Thanks,
      GerardM

by Gerard Meijssen (noreply@blogger.com) at April 17, 2017 12:49 PM

#Wikidata user story - #Wikipedia #diversity and diversity #research

Diversity, especially the "gender gap", is one of the best-researched subjects of Wikipedia. Many projects have the goal of diminishing the gap they object to.

Wikidata has the best and most up-to-date information about any Wikipedia. People are updating Wikidata all the time, and typically its information is based on a Wikipedia.

Take gender: many a Wikipedia has a category for this, so it is easy to update Wikidata based on what is in such categories. When a researcher is interested in the articles where Wikidata does not have such information, those articles are easy to find, and it is appreciated when researchers update Wikidata as part of their activities. As a rule, the percentage of "humans" with no known gender is dropping anyway.

When a Wikipedia editor has an interest in female scientists who do not have an article in English, it is easy enough to write a query for that. Not all female scientists, with or without a Wikipedia article, can be found this way, but it is just a matter of adding them to Wikidata. When another editor is interested in female scientists with no article in German or Kannada, it is just one change in the same query.
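A sketch of such a query, using the standard Wikidata identifiers (Q5 human, Q6581072 female, Q901 scientist; properties P31, P21, P106) — the exact query is not given in the post, so this is illustrative. Swapping the `en.wikipedia.org` site URL for another edition's is the "one change" that retargets it:

```python
# Female scientists with no English Wikipedia article.
MISSING_ARTICLES_QUERY = """
SELECT ?person ?personLabel WHERE {
  ?person wdt:P31 wd:Q5 ;        # instance of: human
          wdt:P21 wd:Q6581072 ;  # sex or gender: female
          wdt:P106 wd:Q901 .     # occupation: scientist
  FILTER NOT EXISTS {
    ?article schema:about ?person ;
             schema:isPartOf <https://en.wikipedia.org/> .
  }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
LIMIT 100
"""
print(MISSING_ARTICLES_QUERY)
```

Running this at the Wikidata Query Service lists items lacking an enwiki sitelink; changing the `schema:isPartOf` URL to `https://de.wikipedia.org/` or `https://kn.wikipedia.org/` asks the same question for German or Kannada.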
Thanks,
        GerardM

by Gerard Meijssen (noreply@blogger.com) at April 17, 2017 09:04 AM

Why #Wikidata? Because it is useful!

Wikidata was useful from the start. It provides a service to all Wikipedias, and since that initial phase it has provided the same service to Commons and Wikisource. It connects information about the same subject: these connections are the interwiki links.

The next phase was to connect these subjects. This is an internal Wikidata project, and it is not really used. The data could be useful, but it is not always up to date, and the requirements for the primary use cases are not realistic and almost impossible to fulfil. The challenge is to provide sourced information for every statement.

The challenge is: how do we make the Wikidata data useful? How do we get people to actually use Wikidata, take an interest in the data, and maintain what is in their interest?

Software developers create "user stories" to explain what their software is to achieve. Why not write user stories that show how Wikidata can already be used and expand the stories on how to be even more useful and usable?
Thanks,
      GerardM

by Gerard Meijssen (noreply@blogger.com) at April 17, 2017 08:21 AM

#Wikidata user story - the #library

The OCLC is an organisation combining most of the libraries in the world. It used to link to the English Wikipedia, but as Wikidata connects all Wikipedias, the OCLC does a better job by linking to Wikidata. Through Wikidata it can link to articles about authors in any language.

For many authors the connection between VIAF, the system used by the OCLC, and Wikidata is still missing. Many people are adding VIAF identifiers, and once a month the data is imported and all the new identifiers pop up.

Best practice on the English Wikipedia has it that an {{authority control}} template is added in the reference section of biographies. When a VIAF identifier is added in Wikidata, the template shows not only the VIAF identifier but also WorldCat information (the example is for William Keepers Maxwell Jr.). Doing this is possible for any Wikipedia.
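As a small sketch of the plumbing involved — not taken from the post — a template or gadget could fetch an item's VIAF identifier through Wikibase's real `wbgetclaims` API; VIAF is stored as property P214. The item ID below (Q42, Douglas Adams) is purely an example:

```python
from urllib.parse import urlencode

def viaf_claims_url(entity_id):
    """Build the Wikidata API URL that returns the P214 (VIAF identifier)
    claims for the given item."""
    params = {
        "action": "wbgetclaims",
        "entity": entity_id,
        "property": "P214",   # P214 = VIAF identifier
        "format": "json",
    }
    return "https://www.wikidata.org/w/api.php?" + urlencode(params)

print(viaf_claims_url("Q42"))  # Q42 used purely as an example item
```

Fetching that URL returns the VIAF identifier(s) as JSON, which is all a template needs to render VIAF and WorldCat links on any Wikipedia.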

Now to expand on this: when a reader opts in, we could show whether a book by an author is available in the local library. What do you think?
Thanks,
       GerardM

by Gerard Meijssen (noreply@blogger.com) at April 17, 2017 08:20 AM

Tech News

Tech News issue #16, 2017 (April 17, 2017)

TriangleArrow-Left.svgprevious 2017, week 16 (Monday 17 April 2017) nextTriangleArrow-Right.svg
Other languages:
العربية • ‎čeština • ‎Deutsch • ‎English • ‎español • ‎فارسی • ‎suomi • ‎français • ‎עברית • ‎italiano • ‎日本語 • ‎한국어 • ‎polski • ‎português do Brasil • ‎русский • ‎svenska • ‎українська • ‎Tiếng Việt • ‎中文

April 17, 2017 12:00 AM

April 16, 2017

Wikimedia Foundation

Community digest: The UNESCO Challenge aims to help preserve World Heritage Sites; news in brief

Photo by Diliff, CC BY-SA 2.5.

There are over 1,000 heritage sites in the world: the Egyptian pyramids, the Great Wall of China and the Colosseum of Rome are just a few examples. Many of the most visited monuments in the world are listed under UNESCO’s World Heritage Sites. We need your help to make sure that there is adequate information about them on Wikipedia.

The UNESCO Challenge is a writing competition on Wikipedia where the participants will create and improve articles about heritage sites in different languages. The competition starts on the International Day for Monuments and Sites on 18 April and lasts for one month.

The event is organized by Wikimedia Sweden (Sverige), the independent chapter that supports Wikimedia projects in Sweden, which is working with the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the Swedish National Heritage Board.

In addition to the participant efforts in writing, UNESCO will release images of the World Heritage Sites on Wikimedia Commons, the free media repository. Moreover, freely-licensed information about the heritage sites will be provided by UNESCO. The material contributed by UNESCO can be used by the UNESCO Challenge participants to help expand and illustrate their articles.

Anyone can join the UNESCO Challenge. To participate, a Wikipedia editor will need to register their name on the project page, pick a heritage site from the list to write about (preferably those most at risk), and start editing. Participants can edit in any language on Wikipedia.

Competitors will get points for the content they add, more points for newly created articles, and much more for high-quality content. The more points a participant earns, the more of an opportunity they have to win.

Wikimedia Sverige and UNESCO are collaborating on the Connected Open Heritage project. Many of the world heritage sites face critical dangers from war, climate change, lack of maintenance, and more. To make sure that the most thorough and accurate information about world heritage sites is preserved for future generations, we are helping collect as much data, imagery, and information as possible.

You can make a change by expanding an article, updating its information, or adding an image. You will increase the coverage of the world heritage sites, win fabulous prizes from the Wikimedia store, and gain recognition for your work.

Eric Luth, Project Manager
Wikimedia Sweden (Sverige)

As the images are being uploaded, you will be able to find them in Category:Images from UNESCO 2017-04 on Commons.

 In brief

Photo by Mardetanha, CC BY-SA 4.0.

Cannabis project celebrates 420: Every year on April 20 (“420”), users of cannabis—perhaps better known by its derivative marijuana—celebrate the plant. This year, the English Wikipedia’s WikiProject Cannabis is organizing an inaugural “420 collaboration” to inspire the creation and improvement of cannabis-related content on Wikipedia and other Wikimedia projects.

The campaign will be held from April 15 to April 30, with an emphasis on 420 itself. Organizers Jason Moore and User:The Hammer of Thor emphasize that they are looking for “neutral, appropriately sourced facts”: “We want Wikipedia to have accurate and reliable information about cannabis,” they said. You can learn more about this initiative on their campaign page.

Wikipedia Education Program pilot starts in Iran: The Wikipedia community in Iran held introductory workshops for the students and educators of Shahid Chamran University in Ahvaz. The workshops helped answer the audience’s questions about Wikipedia in addition to giving practical training in simple editing techniques.

Board elections open: Self-nominations are now being accepted for the 2017 Wikimedia Foundation Board of Trustees Elections. In these elections, the Wikimedia community will select three board members for the term of 2017-2020.

The Board of Trustees is the ultimate governing authority of the Wikimedia Foundation and the decision-making body responsible for the long-term sustainability of the Foundation.

Self-nominations are being accepted until 20 April 2017. Information about the elections and the timeline are available on meta.

Library of Congress releases photos of 19th-century African-American women: The Library of Congress has digitized photos of many little-known nineteenth-century African-American women activists. The portrait collection, from William Henry Richards, includes advocates for African-American rights, some of the earliest African-American educators, and more.

Wikipedians have already started uploading the photos to Wikimedia Commons and using them to illustrate the relevant Wikipedia articles.

Picture of the year competition final round is open: From now through 20 April, Wikipedians will vote for the 2016 photo of the year. The final round started on April 7; only Wikipedians with some editing history are entitled to choose up to three photos. More information and voting are available on Wikimedia Commons.

First non-English periodical newspaper in India is now on Wikimedia Commons: Published in the Armenian language between September 1794 and February 1796, Azdarar was the first non-English periodical newspaper in India. Wikipedian Bodhisattwa Mandal has uploaded a copy of all 18 volumes to Wikimedia Commons.

Looking back: In 2014, the Wikimedia community lost the dedicated Wikipedian, educator, and community leader Adrianne Wadewitz. Three years after her death, Wikipedians are still sharing their memories of her and leaving tribute messages on her talk page on Wikipedia.

Beginner editors workshop in Cairo: Last week, the Egypt Wikimedians user group held an introductory workshop to help those interested in editing Wikipedia learn basic editing skills.

New Wikimedia project: The Eastern Punjabi Wikisource has been approved as the newest Wikimedia project. The catalyst for the project came during the Wikipedia 15 celebrations held last year, and as of publishing time the site has 777 total pages. Satdeep Gill, co-founder of the Punjabi Wikimedians user group, told us that “this project starts a new journey for the digitization and free online distribution of published works in Punjabi language.” It will fill a needed niche, he says, as the Punjabi language has no centralized online library for freely licensed works, assuming that they are even online.

———

Compiled and edited by Samir Elsharbaty, Digital Content Intern
Wikimedia Foundation

by Samir Elsharbaty and Eric Luth at April 16, 2017 08:04 PM

Gerard Meijssen

#Wikipedia - The death of Lanier Meaders

Mr Meaders was a notable potter who died in February 1998, according to folkpottery.com. The English Wikipedia article, however, is in two minds about his death: yes, he is dead, but when did he die?

According to the category he was one of the living dead for ten years. In the text, the year of his demise is correctly stated as 1998. Googling for a source turned up yet another date.

As I am not an English Wikipedian, I do not know how to indicate sources in the English Wikipedia. The date of death in Wikidata does have a reference. The question is how differences like Mr Meaders' dates of death can be found, so that we improve the consistency of the information we provide across all of our projects.
Thanks,
      GerardM

NB the information in Wikidata on Mr Meaders is not complete.
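One way such differences can be surfaced is mechanically: cross-check the year in a page's "<year> deaths" category against the year of death stated in the article text or on Wikidata. A minimal sketch of that check, assuming plain category names as input (the function names and the sample category year here are invented for illustration, not taken from the actual article history):

```python
import re

def category_death_year(categories):
    """Extract the year from a 'YYYY deaths' category, if present."""
    for cat in categories:
        m = re.fullmatch(r"(\d{4}) deaths", cat)
        if m:
            return int(m.group(1))
    return None

def death_year_mismatch(categories, stated_year):
    """Return the conflicting category year when the 'YYYY deaths'
    category disagrees with the year stated in the text or on Wikidata."""
    cat_year = category_death_year(categories)
    if cat_year is not None and cat_year != stated_year:
        return cat_year
    return None

# Illustrative values only: the text and Wikidata say 1998, while a
# hypothetical category claims a year ten years later.
print(death_year_mismatch(["American potters", "2008 deaths"], 1998))  # 2008
```

A bot running this comparison across all biographies could produce a worklist of articles whose categories and stated dates disagree, which is exactly the kind of cross-project consistency check asked for above.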

by Gerard Meijssen (noreply@blogger.com) at April 16, 2017 08:15 AM

April 14, 2017

Weekly OSM

weeklyOSM 351

04/04/2017-10/04/2017


Completely marked street by mmd’s proof of concept 1 | © OpenStreetMap Contributors CC-BY-SA 2.0

Mapping

  • Walter Nordmann analysed the extent of the deprecated landuse=farm tagging, which needs to be replaced. His NoFarm-Map helps to discover such areas.
  • Version 0.9.9 of Vespucci is now available! Simon Poole shares the news along with some details.
  • User Rogehm brings up (de) the discussion about the already-mapped building=conservatory on the German forum. An OSM wiki page was created.
  • User dktue would like to map (de) rescue coordination centres. An internationally recognized tag is still under discussion, since these can be called “MRCC”, as opposed to the proposed “ECC”, or something else.

Community

  • In an interview with OpenCage Data, Vivien Deparday speaks about the state of OSM and its community in Sri Lanka – their current work and future plans.
  • Ruben Lopez made an app to report the flooded streets in different regions of Peru, based on the one for Chennai.
  • Taïs Grippa released a new statistics tool for mapathons and shows some examples from the last Belgian National Mapathon 2017.
  • On the Talk mailing list, the Engineering Working Group asked what would be the most important thing to change. Many suggestions were made.

Imports

  • Chetan Gowda shares updates about the San Francisco Building Height Import project. As they begin validation, he invites the community to join.

OpenStreetMap Foundation

  • Simon Poole, head of the License Working Group of the OSMF, commented on our (fake) news of April 1st.
  • Frederik Ramm reported on the status of FOSSGIS's preparations to become an OSMF local chapter. Christoph Hormann brought some important points into the conversation.

Events

  • You can give your input on which sessions you would like to see at State of The Map 2017 through the community survey.
  • The Costa Rican OSM community invites you to join them to map the bars in San José, as part of the Maperespeis activities. Beers not included 😉
  • Resistance GIS, an upcoming free mini-conference at Portland State University, aims to explore how GIS, open data and its visualizations can empower communities, social movements and civil resistance struggles.
  • Submit talk proposals and scholarship applications for the first ever State of the Map Africa, in Uganda in July 2017.

Humanitarian OSM

  • In order to be prepared for disasters, HOT in Indonesia teaches authorities and non-profit associations how to map with JOSM. A pilot project for risk assessment of threats, weaknesses and capacities is to be established in the regions of Barru and Wajo (South Sulawesi). (automatic translation)
  • HOT calls for help to fight malaria on the occasion of World Health Day, April 7th.

Software

  • [1] User mmd writes a post showcasing a small proof of concept to demonstrate @jotpe’s proposal on GitHub.

Programming

  • Mapzen presents a new, fast and accurate parser for address input.
  • User daniel-j-h writes about Open Source Routing Machine (OSRM) supporting bearing constraints and highlights their use-cases and effects on routing.

Releases

Software Version Release date Comment
Osmium Tool 1.6.1 2017-03-06 Two changes and three fixes.
GpsMaster 0.63.00 2017-04-06 OpenCycleMap enhanced, OpenTopoMap added and more changes.
Komoot Android * var 2017-04-07 Route planning and search reworked.
Komoot iOS * 9.0.1 2017-04-07 Improvement of performance and further small changes.
Mapbox GL JS v0.35.0 2017-04-07 Seven new features and 12 bugfixes.
Mapillary iOS * 4.6.12 2017-04-07 Changes for different cameras and a bug fixed.
QGIS 2.18.6 2017-04-07 No infos.
Mapillary Android * 3.45 2017-04-08 Don’t rotate the UX for left handed to fix the preview upside down problem.
Kurviger Free * 10.0.19 2017-04-09 Various improvements.
StreetComplete 0.7 2017-04-09 Fix of version 0.6 with the fixed changeset problem.
libosmium 2.12.1 2017-04-10 Many changes and bugfixes. Please read change log.
MapContrib 1.7.3 2017-04-10 Many changes since 1.6.1, please read release infos.
Maps.me Android * var 2017-04-10 Changes to the GUI, more hotels, beaches, parking, cameras and WLAN hotspots added.
Maps.me iOS * 7.2.3 2017-04-10 100,000 objects added. Incl. restaurants, shops and POI.
Locus Map Free * 3.23.0 2017-04-10 Many improvements, please read the release info.

Provided by the OSM Software Watchlist. Timestamp: 2017-04-10 15:08:55+02 UTC

(*) unfree software. See: freesoftware.

Did you know …

OSM in the media

  • In a YouTube video, The Crowd & The Cloud discuss the need of up-to-date maps for disaster relief, highlighting the great importance of OSM.

Other “geo” things

  • The Indian government is trying to organize the agricultural assets of the country by better geotagging and mapping and would like to use the open data application Bhuvan.
  • US president Donald Trump’s planned wall on the Mexican border would cut through ecosystems and cause irreversible damage to the already endangered flora and fauna in the region. The Vox report uses OSM-based maps.

Upcoming Events

Where What When Country
Manila MapAm❤re #PhotoMapping San Juan, San Juan 13/04/2017-16/04/2017 philippines
Tokyo 東京!街歩き!マッピングパーティ:第7回 小石川後楽園 15/04/2017 japan
Vicopisano Collaborative mapping lab, Festa Dèi Camminanti 2017 15/04/2017 italy
Manila FEU YouthMappers Mapillary Workshop, Manila 17/04/2017 philippines
Bonn Bonner Stammtisch 18/04/2017 germany
Scotland Edinburgh 18/04/2017 united kingdom
Lüneburg Mappertreffen Lüneburg 18/04/2017 germany
Nottingham Nottingham Pub Meetup 18/04/2017 uk
Moscow Schemotechnika 09 18/04/2017 russia
Karlsruhe Stammtisch 19/04/2017 germany
Portland Portland Mappy Hour 19/04/2017 united states
Osaka もくもくマッピング! #05 19/04/2017 japan
Colorado Springs Humanitarian Mapathon Colorado State University, Fort Collins 19/04/2017 us
Augsburg Augsburger Stammtisch 20/04/2017 germany
Leoben Stammtisch Obersteiermark 20/04/2017 austria
Zaragoza Mapeado Colaborativo 21/04/2017 spain
Kyoto 【西国街道#03】桜井駅跡と島本マッピングパーティ 22/04/2017 japan
Misiones Charla Mapas Libres en FLISoL, Posadas 22/04/2017 argentina
Bremen Bremer Mappertreffen 24/04/2017 germany
Graz Stammtisch Graz 24/04/2017 austria
Kinmen Shang Yi Airport Do mapping Kinmen by youself 24/04/2017-25/04/2017 taiwan
Zaragoza Mapatón Humanitario – Mapeado Colaborativo y Dpto. Geografía de la Universidad de Zaragoza 25/04/2017 spain
Dusseldorf Stammtisch Düsseldorf 26/04/2017 germany
Leuven First Leuven Monthly OSM Meetup/Missing Maps 26/04/2017 belgium
Antwerp Missing Maps at IPIS 26/04/2017 belgium
Lübeck Lübecker Mappertreffen 27/04/2017 germany
Urspring Stammtisch Ulmer Alb 27/04/2017 germany
Vancouver Vancouver mappy hour 28/04/2017 canada
Ouro Preto Mapatona Estrada Real 29/04/2017-01/05/2017 brazil
Avignon State of the Map France 2017 02/06/2017-04/06/2017 france
Kampala State of the Map Africa 2017 08/07/2017-10/07/2017 uganda
Champs-sur-Marne (Marne-la-Vallée) FOSS4G Europe 2017 at ENSG Cité Descartes 18/07/2017-22/07/2017 france
Curitiba FOSS4G+State of the Map Brasil 2017 27/07/2017-29/07/2017 brazil
Boston FOSS4G 2017 14/08/2017-19/08/2017 USA
Aizu-wakamatsu Shi State of the Map 2017 18/08/2017-20/08/2017 japan
Boulder State of the Map U.S. 2017 19/10/2017-22/10/2017 united states
Buenos Aires FOSS4G+State of the Map Argentina 2017 23/10/2017-28/10/2017 argentina
Lima State of the Map LatAm 2017 29/11/2017-02/12/2017 perú

Note: If you would like to see your event here, please put it into the calendar. Only data which is there will appear in weeklyOSM. Please check your event in our public calendar preview and correct it where appropriate.

This weeklyOSM was produced by Peda, Polyglot, Rogehm, Spec80, SrrReal, derFred, jinalfoflia, vsandre, wambacher.

by weeklyteam at April 14, 2017 11:40 PM

April 13, 2017

Wikimedia Foundation

You can go anywhere on the Wikimedia projects, but where is Wikimedia going?

The Wikimedia movement is building a bridge to our future. We hope you will join us. Photo by Thomas Wolf, CC BY-SA 3.0.

Wikipedia started as a simple idea: an online encyclopedia that was open for anyone to contribute, freely. And without any particular plan, we grew into a constellation of individuals, activities, and organizations. That simple idea—that everyone should be able to freely share in knowledge—proved to have a gravity of its own, pulling brilliant minds and institutions into its orbit. A remarkable movement built up around us.

Today, the Wikimedia projects are among the most beloved and popular websites in the world—and the largest collaborative knowledge resource in human history. Hundreds of millions of people visit the Wikimedia projects every month. Our global movement includes millions of volunteers who have edited over time, more than 100 affiliates, millions of donors, and thousands of partner institutions around the world.

We believe our mission is as important as it has ever been, because we believe free knowledge is more important than ever.

Today, the Wikimedia projects can take you almost anywhere—but where is the Wikimedia movement itself going? How will projects like Wikipedia change over the next 15 years? What do we want to achieve together? To answer those questions, the Wikimedia movement has launched Wikimedia 2030: a global discussion to define Wikimedia’s future role in the world. Our goal is to come together as a movement—contributors, affiliates, readers, donors, partners—around a direction that will guide our work over the next 15 years.

Everyone who values Wikipedia and the other Wikimedia projects is invited to participate. You can find more opportunities to join us at the end of this post.

Video by Victor Grigas, CC BY-SA 4.0. You can also view it on Vimeo or YouTube.

Wikimedia belongs to all of us. We all have a stake in the future of Wikimedia.

Wikimedia has experienced tremendous growth over the last 16 years. This growth has been possible because of an open model that allows anyone to participate. Because of this, Wikipedia belongs to everyone.

Much of the world has come to rely on free access to neutral, reliable information on Wikimedia projects. We as a movement have a responsibility to sustain and protect that access. We also have a responsibility to respond to the world as it changes, so people in every part of the world can benefit from free knowledge for generations to come.

Wikimedia projects are accessed by more than a billion devices every month, but we know we are serving a small portion of the world’s population. Our projects are available in hundreds of languages, but a majority of our content is concentrated in a small few. Millions of people have access to the Internet, but billions more have yet to come online. The web is better populated, but it is also more commercial. We have more sources of information, but fewer common truths.

These are challenges and opportunities, and our vision calls for us to engage them. We believe in a world in which every single human can share in the sum of all knowledge. Over the next 15 years, we want to get closer to that vision by coming together as a movement around a shared direction.

Charting the path of our movement

Movements work together, plan together, and align together around core values. So do we. Movements also effect significant social change. And for many people, that is what we do as well. We drive change towards greater openness, greater sharing, a richer commons, more knowledge available to more people. At their best, movements take advantage of their power and engage directly with their weaknesses.

#Wikimedia2030 is designed to be inclusive of many voices from every part of the globe, whether you are an editor, reader, affiliate, partner, or donor. The process will engage people across a variety of channels, including on-wiki discussions, in-person events, individual interviews, qualitative and quantitative research, and more. We hope that anyone who is interested can engage in their own way, and gain something from the process.

In this process we have five goals:

  1. Identify as a movement a cohesive direction that aligns and inspires us all on our path to 2030.
  2. Build trust, goodwill, and alignment within our movement. Participate in a legitimate, transparent, open process based on shared power, not hierarchy.
  3. Better understand the people and institutions that form our movement, those we are not yet reaching, and how their needs may change over the next 15 years.
  4. Build a shared understanding of what it means to be a movement, how others outside of us can take part, and what it will take to increase our movement’s impact. Unite around how to grow to achieve our vision.
  5. Build relationships to expand and enrich our movement and prospective partners.

Over this calendar year, we will be hosting conversations about our vision for the future. We will be conducting research on the current and potential future for free knowledge around the world. We will engage volunteer contributors, movement affiliates, readers, donors, institutions, and experts who have a stake in free knowledge. We will challenge our assumptions and learn from each other. Just like on Wikipedia, we will chart our path through open dialogue, fact-based information, and iteration.

By this year’s Wikimania, being held in Montreal in August 2017, our aim is to have consensus around a number of themes that will culminate in a strategic direction for our future. This will help frame a discussion on how we work together moving forward.

Engaging people in hundreds of languages and locations is a monumental undertaking.

Coming to consensus on a long-term strategic direction for a global movement that supports some of the world’s most beloved websites is no small feat. With that in mind, our movement has put time, resources, and energy into building a process that will work for our unique needs. We started to design the process behind Wikimedia 2030 in July, after the Wikimedia Foundation Board of Trustees tasked the organization’s leadership with developing a plan for facilitating a discussion on the future of Wikimedia.

We assembled a core strategy team to shepherd the overall process and keep all groups involved and engaged. This core team includes williamsworks, a strategy consulting firm with more than a decade of experience working with nonprofits, companies, and philanthropists around the world. It also includes Wikimedians, affiliate members, and Foundation staff, each with responsibility for different stakeholders. We conducted research on strategy processes from other movements, reviewed Wikimedia’s past strategy processes, and worked in consultation with members of the Wikimedia movement to design this process.

The process for shaping conversations was designed in collaboration with a Community Process Steering Committee composed of volunteers from 10 countries who have deep experience with Wikimedia. With the Steering Committee, we have designed a framework that includes voices from across the movement, over three phases of discussion: (1) discuss the future of the movement and generate themes, (2) identify the top 5 thematic clusters and understand their meaning, and (3) refine the top 3-5 thematic clusters into a cohesive direction and explore their implications.

To organize this movement-wide discussion, we are organizing our conversations across four “tracks” of information sharing and dialogue that meet the unique needs of those different audiences. The tracks include:

  1. Organized groups within our movement, including Wikimedia movement affiliates (chapters and user groups) and committees
  2. Individual contributors to the Wikimedia projects: writers, editors, photographers, developers, and more
  3. Current and future readers and institutional partners in higher-awareness regions, like Australia, Canada, France, Germany, Japan, Russia, and the United States (for example)
  4. Current and future readers and institutional partners in lower-awareness regions, including countries like Nigeria, India, Egypt, Indonesia, Mexico, and Brazil

Chart by Blanca Flores, CC BY-SA 4.0.

Where we go from here

Our work on Wikimedia 2030 has begun. We started with conversations with volunteer editors and affiliate organizations. The first discussions are underway on Meta-Wiki and taking place at in-person meetups around the world. On dozens of project wikis, and in many offline conversations happening in the coming month, we are asking “What do we want to build or achieve together over the next 15 years?”

Community members from across the movement are engaging in the process already. To date, nearly 20 community coordinators are liaising with local volunteers in multilingual discussions around the world, and 85 affiliate organizations are actively engaged in community discussions. Many of these individuals and groups came together in late March at the Wikimedia Conference in Berlin for facilitated strategy conversations.

In the coming weeks and months, we will hold similar conversations with readers, donors, and partner organizations through events, research, and interviews. Our goal is to understand the key trends that matter to the many stakeholders of our movement, from emerging technology platforms to changing media consumption habits, and welcome people into the process. We will engage experts who have an eye on the future of global knowledge, education, technology, and community building. We will learn from readers around the world about their relationship with Wikimedia projects and what they’d like to see in the future. The information we learn will be incorporated back into community discussions and the overall synthesis process. As always, and in true Wikimedia spirit, we will share everything we learn in public.

We will publish regular updates and share ways to get involved.

How you can get involved

In the coming months we will be sharing many ways to get involved, from social media to online discussions. You have a say in Wikimedia’s future, and we want to hear it! Here are some ways to get involved right now:

Are you an individual contributor, for example an editor, developer, or researcher?

Are you part of an organized group actively engaged in Wikimedia, like a chapter, user group, or committee?

Do you read Wikipedia or use any of the other Wikimedia projects, like Wikimedia Commons, Wiktionary, Wikisource, or Wikivoyage?

  • In the coming months, we will engage Wikimedia readers on our social media channels on Facebook and Twitter. We will ask questions about the future of Wikimedia and launch an essay contest to imagine what Wikimedia will be like in 2030.

Are you with an institution that is a Wikimedia partner or has a stake in the future of the Wikimedia movement?

  • We are speaking with partner institutions through interviews and events over the coming months.
  • If you are a partner institution and want to make sure we speak with you, email wikimediastrategy@wikimedia.org.

———

Thank you for helping us move towards a future in which every single human being can freely share in the sum of all knowledge.

I believe this is the start of many important conversations. I look forward to them and want to thank you in advance for taking part in them.

Katherine Maher, Executive Director
Wikimedia Foundation

by Katherine Maher at April 13, 2017 07:55 PM

Wiki Education Foundation

Wiki Ed Visits UCSF and UC Berkeley

Wiki Ed supports thousands of students from hundreds of classes each term, but we rarely get to meet these students face-to-face. We get to know them through the millions of words they add to Wikipedia and through feedback from their instructors. Thanks to our course Dashboard, we’re able to support all of these students without ever setting foot in a classroom, but it’s a privilege when we’re invited to make an in-person visit.

Helaine Blumenthal and Amin Azzam at UCSF.

At the end of February, Educational Partnerships Manager Jami Mathewson and I visited Dr. Amin Azzam’s class of fourth-year medical students at the University of California, San Francisco. At the end of their medical schooling, Amin’s students are in a unique position. As he says, they’re far enough along in their medical training that they have the confidence to share their expertise, but they’re not so far along in their careers that “they can no longer speak English.” Wiki Ed has been working with Amin and his students since Fall 2014, and in that time his students have improved articles ranging from hepatitis to bacteremia. In recent years, Wikipedia has become a leading source of medical information on the Internet. By improving medical content on Wikipedia, Amin’s students can potentially affect the lives of millions of people without ever setting foot in an exam room. We talked to Amin about the history of the program at UCSF, how his students use Wiki Ed’s resources to make contributions that meet Wikipedia’s strict requirements for editing medical content, and, importantly, how their contributions can empower others to make informed medical decisions.

Later that week, on March 3, Outreach Manager Samantha Weald and I visited a very different type of class at the University of California, Berkeley. This term, Naniette Coleman, a PhD student in the Sociology Department, has organized a student working group around the theme of privacy literacy, a project jointly funded by the Center for Technology, Society, and Policy and the Center for Long-Term Cybersecurity. The vast majority of students we support are at the undergraduate level, and the students Samantha and I talked to in Berkeley were mostly freshmen and sophomores. In an informal discussion, we explored the impact that these students, even so early in their educational careers, can have on Wikipedia. Hardly older than Wikipedia itself, Naniette’s students can’t remember an Internet landscape without it. They’ve been using it regularly since middle school, and they were eager to learn how to contribute themselves. Drawing on their diverse interests and backgrounds, they’ll be exploring several facets of privacy from the legal to the theoretical, both within the U.S. and internationally. Unlike most of the courses we support, which take place on a term-by-term basis, Naniette hopes to make this a year-round project.

Whether freshmen who aren’t yet sure of their majors, or graduate students specializing in a highly focused field, the students we support all have the capacity to make meaningful contributions to Wikipedia. Thank you again to Amin and Naniette for inviting us to meet with your students. In truth, we learn as much from them as they do from us.

by Helaine Blumenthal at April 13, 2017 07:41 PM

Wikimedia Tech Blog

Share your photography with the world with the newly improved Wikimedia Commons Android app

Photo by Matthias Süßen, CC BY-SA 3.0

In this day and age, many people use a phone as their primary camera. However, this can pose an issue for Wikimedians when they want to upload their images to Wikimedia Commons—do they transfer them to a regular computer, attempt to negotiate the mobile web pages, or hack together another solution?

The Wikimedia Commons Android app offers a solution: it allows contributors to easily submit photos directly from their phone to Commons without needing to use a computer or a web browser. Version 2.0 of the app has now been released, one and a half years after the app was revived by the community. The new version contains several new features alongside improvements that make the app smoother and more convenient to use and multiple bug fixes.

In the new version of the app, you can:

  • Categorize your images much more easily: The app will automatically offer category suggestions based on the location where the image was taken and the title that you chose for your image.
  • View nearby places that need images: Browse nearby locations that need images so that you can target your photo trips towards locations that are lacking in photos. This way, you can help Wikipedia have images for all articles, and discover beautiful places close to you.
  • Be notified if you have submitted this image before: Can’t remember if you have already uploaded a particular image? No problem: just select the image and you will be notified if a duplicate is found in the Commons database.
  • Get your friends started with contributing to Commons: It’s easier than ever to get started with contributing to Commons – your friends can sign up for a Commons account within the app. The new tutorial gives them a quick primer on what type of photos Commons does and doesn’t accept.
  • Select licenses directly from the upload screen: Licenses have been updated to include CC-BY 4.0 and CC-BY-SA 4.0, and you can now select your license directly when uploading.
  • Switch to a light theme: You can now choose between the old night mode or a new light theme which is more suited to daytime or outdoor conditions. This can be toggled in Settings.
  • Participate in beta testing: Sign up for beta testing to help us test the app and get new features before they are released to the public!
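The duplicate check in the list above is possible because Commons indexes every file by its SHA-1 digest, so a client only needs to hash the local file and ask the MediaWiki API whether that digest already exists. A hedged sketch of the idea, assuming the standard MediaWiki `list=allimages` query with its `aisha1` parameter (illustrative only, not the app's actual code):

```python
import hashlib
from urllib.parse import urlencode

def commons_duplicate_query(file_bytes):
    """Build a MediaWiki API URL that looks up Commons files sharing
    the SHA-1 digest of a local file (how duplicates can be detected)."""
    sha1 = hashlib.sha1(file_bytes).hexdigest()
    params = {
        "action": "query",
        "list": "allimages",
        "aisha1": sha1,   # filter the file list by SHA-1 digest
        "format": "json",
    }
    return "https://commons.wikimedia.org/w/api.php?" + urlencode(params)

# For any file, an empty result list from this query would mean
# no duplicate exists on Commons yet.
print(commons_duplicate_query(b"hello"))
```

Hashing locally keeps the check cheap: the full image never has to be uploaded just to find out it is already there.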

Anyone can download the app for free, with no ads or in-app purchases, from the Google Play Store or F-Droid. The app currently has over 2,000 active installs, and roughly 6,000 files were uploaded through it in the last quarter. More information can be found on its website; its source code is freely licensed on GitHub under the Apache License 2.0. As this is a community-maintained app, feedback and help from volunteers are always welcome.

Josephine Lim, Commons Android app maintainer and IEG recipient

by Josephine Lim at April 13, 2017 06:20 PM

Wikimedia UK

Reflections on a Wikipedia assignment – Reproductive Medicine

Reproductive Medicine undergraduates – September 2016 (CC-BY-SA)

This was originally posted on Ewan McAndrew’s blog where he writes about his role as the University of Edinburgh’s Wikimedian in Residence

Wikipedia as an important source of health information and not medical advice.

“The Internet, especially Wikipedia, had proven its importance in everyday life. Even the medical sector is influenced by Wikipedia’s omnipresence. It has gained considerable attention among both healthcare professionals and the lay public in providing medical information. Patients rely on the information they obtain from Wikipedia before deciding to seek professional help. As a result, physicians are confronted by a professional dilemma as patients weigh information provided by medical professionals against that on Wikipedia, the new provider of health information….

We state that Wikipedia should not be viewed as being inappropriate for its use in medical education. Given Wikipedia’s central role in medical education as reported in our survey, its integration could yield new opportunities in undergraduate education. High-quality medical education and sustainability necessitates the need to know how to search and retrieve unbiased, comprehensive, and reliable information. Students should therefore be advised in reflected information search and encouraged to contribute to the “perpetual beta” improving Wikipedia’s reliability. Therefore, we ask for inclusion in medical curricula, since guiding students’ use and evaluation of information resources is an important role of higher education. It is of utmost importance to establish information literacy, evidence-based practices, and life-long learning habits among future physicians early on, hereby contributing to medical education of the highest quality.
Accordingly, this is an appeal to see Wikipedia as what it is: an educational opportunity. This is an appeal to academic educators for supplementing Wikipedia entries with credible information from the scientific literature. They also should teach their protégés to obtain and critically evaluate information as well as to supplement or correct entries. Finally, this is an appeal to medical students to develop professional responsibility while working with this dynamic resource. Criticism should be maintained and caution exercised since every user relies on the accuracy, conscientiousness, and objectivity of the contributor.”(Herbert et al, BMC Medical Education, 2015)

Reproductive Medicine Wikipedia assignment at Edinburgh University – September 2016

Reproductive Medicine undergraduates – collaborating to create Wikipedia articles.

In September 2016, Reproductive Biology Honours students undertook a group research project: in groups of 4–5 with a tutor, they researched a term from reproductive biomedicine that was not yet represented on Wikipedia. All 38 students were trained to edit Wikipedia and worked collaboratively both to carry out the research and to produce the finished article. The assignment developed the students’ information literacy, digital literacy, collaborative working, academic writing and referencing, and ability to communicate to an audience. The end result was 8 new articles on reproductive medicine that enrich the global open knowledge community and will be added to and improved upon long after the students have left university, creating a rich legacy to look back upon.

One of the new articles, high-grade serous carcinoma, was researched and written by 4th year student, Áine Kavanagh.

Rather than an assignment written for an audience of one (the course tutor) and never read again, Áine’s article can be viewed, built on and expanded by an audience of millions. Since its creation in September 2016, the article has been viewed 6,993 times.

Since September 2016 the article has amassed nearly 7,000 views, and the count grows day by day.

Guest post:

Reflections on a Wikipedia assignment

BY ÁINE KAVANAGH.
Reproductive Medicine students – September 2016

The process of writing a Wikipedia article involved me trying to answer the questions I was asking myself about the topic. What was it? Why should I care about it? What does it mean to society? I also needed to make the answers to those questions clear to other people who can’t see inside my head.

It then moved onto questions I thought other people might ask about the topic. Writing for Wikipedia is really an exercise in empathy and perspective. Who else is going to want to know about this and what might they be interested in about it?

Is what I’m writing accessible and understandable? Am I presenting it in a useful way? It’s an incredibly public piece of writing which is only useful if it serves the public, so trying to put yourself in the frame of someone who’s not you reading what you’ve written is important (and possibly the most difficult part).

It’s also about co-operation from the get-go. You can’t post a Wikipedia article and allow no one else to edit it. You are offering something up to the world. You can always come back to it, but you can never make it completely your own again. The beauty of Wikipedia is in groupthink, in the crowd intelligence it facilitates, but this means shared ownership, which can be hard to get your head around at first.

It’s a unique way of writing, and one tip for other students starting out on a Wikipedia project is not to be intimidated. Wikipedia articles in theory can be indefinitely long and dense and will be around for an indefinitely long time, so writing a few hundred words can seem like adding a grain of sand to a desert. But if the information is not already there then you are contributing – and what is Wikipedia if not just a big bunch of contributions?

There’s also the fear that editors already on Wikipedia will swoop down and denounce your article as completely useless – but the beauty of storing information is that you can never really have too much of it. There’s no-one who can truly judge what is and isn’t worthy of knowing*.

*There’s no-one who can judge what’s worth knowing, but the sum of human knowledge needs to be organised, and so there are actually guidelines as to what a Wikipedia article is (objective account of a thing) and is not (platform for self-promotion).

by Ewan McAndrew at April 13, 2017 11:43 AM

Gerard Meijssen

#Wikidata - People die; implications for another #policy approach

People die, notable people die. It is natural and it happens all the time. Many a #Wikipedia has a category for the people who died in a specific year. Such categories are what make a wonderful tool by Pasleim tick: it shows those Wikidata items that have no date of death while a Wikipedia knows about the demise of the person involved.

This is a wonderful tool; it allows Wikidata to take care of those who died and update its data. It also points to one more tool we could add: one that checks whether a date of death exists in the Wikipedias that do not have such a category.
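The complementary check could be sketched as a query against Wikidata’s public SPARQL endpoint, listing humans with a birth date but no recorded death date. This is a hypothetical illustration of the idea, not Pasleim’s actual tool (which works from Wikipedia’s death-year categories); the birth-date cutoff and result limit are arbitrary.

```python
# Sketch: find Wikidata items for humans (P31 = Q5) that have a date of
# birth (P569) but no date of death (P570) -- candidates for maintenance.
WDQS_ENDPOINT = "https://query.wikidata.org/sparql"

def build_query(born_before="1900-01-01", limit=10):
    """Build a SPARQL query for likely-deceased people missing P570."""
    return f"""
    SELECT ?person ?personLabel WHERE {{
      ?person wdt:P31 wd:Q5 ;
              wdt:P569 ?born .
      FILTER NOT EXISTS {{ ?person wdt:P570 ?died }}
      FILTER(?born < "{born_before}T00:00:00Z"^^xsd:dateTime)
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en" }}
    }} LIMIT {limit}"""

# Actually running it requires network access, e.g.:
#   import urllib.parse
#   url = WDQS_ENDPOINT + "?format=json&query=" + urllib.parse.quote(build_query())
```

Each hit would be an item an editor could then check against the Wikipedias, adding a sourced date of death to Wikidata for re-use everywhere.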

Consider this: a date of death is relevant when you consider the "Biographies of Living People" policy. Having complete information about people is important. So why not flip our approach to BLP and provide tools to improve the existing information across all of our projects?

First things first: the objective is to signal the death of a person. As under current policy, it is up to each project to act on that as it sees fit. What should follow is looking for sources where available, preferably adding at least one to Wikidata for re-use.

What are the benefits? A positive approach to maintenance, and an invitation for people to do something that actually matters now. It is an invitation to read the article and see what more can be done to get it into shape.

Once a date of death exists in an article, that article is removed from the list of articles that need attention. There are plenty of valid approaches to this.

Improving user engagement is one of the Wikimedia Foundation's own objectives. I really want the WMF to include active engagement where it makes a difference and be as proactive as it can in this field. This is a positive approach, and that is what we badly need.
Thanks,
      GerardM

by Gerard Meijssen (noreply@blogger.com) at April 13, 2017 06:21 AM

April 12, 2017

Wikimedia Foundation

Sharing Indian culture with the world through Wikipedia: Ashish Bhatnagar

The Hindi Wikipedia’s featured article on the Ganges River was principally the work of Ashish Bhatnagar. Photo by Babasteve, CC BY SA 2.0.

Lucknow, the capital city of the state of Uttar Pradesh in India, has long been known for its opulent artistic heritage and for embracing different cultures.

Bara Imambara (a congregation hall for Shia ceremonies) is one of the city’s largest historic buildings. The Imambara, constructed between 1784 and 1791 by nearly 22,000 workers, is an outstanding example of Mughal architecture. Rumi Darwaza, another monument in Lucknow, is a historic gateway to the Old City. It was modeled after the Sublime Porte in Istanbul.

Lucknow boasts several other impressive styles of architecture, authentic music, diverse cuisine, and a history of storytelling that helped establish its place in Indian and regional history.

On the Hindi Wikipedia, Lucknow is the subject of a featured article, meaning that it is recognized by Wikipedia editors for its quality. The history of the city and its culture, climate, people, and traditions are all covered on one page. The article was written by many volunteers, but it was primarily developed into a featured article by Wikipedian Ashish Bhatnagar.

In fact, Bhatnagar’s self-described most fulfilling experiences on Wikipedia have come from the three featured articles he’s helped write: Lucknow, Ganges River, and Microbiology. But how did he get to the site in the first place? “I was searching for something on the internet when I came across the English Wikipedia,” Bhatnagar recalls. “After using the website for a while, I noticed a link to the Hindi Wikipedia on the right-hand column. Most of the Hindi articles were [very] short at that time… The article on the Yamuna River only stated, ‘It is a river in India.’ That was in 2005 or 2006.”

The substandard level of articles on the Hindi Wikipedia motivated Bhatnagar, who is an advocate for his language and Indian culture. “I thought about how much we learn from [information shared on the] internet, so why don’t we give back to it?” he asked.

To give back, Bhatnagar has edited Wikipedia over 53,000 times and created over 10,000 new articles on Wikipedia over the past decade. He is also now an administrator on the Hindi Wikipedia, where he helps with tasks behind the scenes, like template creation and technical issues. “Technical challenges were brain food for me,” he says.

While many Wikipedia editors hesitate to create very short articles (“stubs”) on Wikipedia, Bhatnagar believes they are a good base to build on. “Many people join the Hindi Wikipedia community. They contribute for a short time, create some new content that might be useless for some, but we can make something out of this ‘useless text.’ We can guide those contributors until they learn how to contribute longer quality content.”

Bhatnagar and his fellow editors on the Hindi Wikipedia had a successful experience improving these short articles. Their project, called Aaj Ka Aalekh, expanded each article to a minimum of 200 words with a picture and a few references, links and categories. It helped them improve many articles without having to shoulder the burden of working for a long time on a lengthy article.

Offline, Bhatnagar is an electronics engineer who currently works for the Airports Authority of India. When he is neither working nor editing, he can sometimes be found at Wikipedia events encouraging new people to edit.

Photo by आशीष भटनागर, CC BY-SA 4.0.

 

Interview by Syed Muzammiluddin, Wikimedia Community Volunteer
Profile by Samir Elsharbaty, Digital Content Intern, Wikimedia Foundation

by Samir Elsharbaty and Syed Muzammiluddin at April 12, 2017 07:22 PM

Wiki Education Foundation

Rosie Stephenson-Goodknight, Visiting Scholar at Northeastern University

We’re excited to welcome our newest Wikipedia Visiting Scholar, Rosie Stephenson-Goodknight! She is hosted by the Women Writers Project, part of Northeastern University’s Digital Scholarship Group.

Rosie is a prolific content contributor (editing as User:Rosiestep) and has received extensive recognition, including being named 2016 co-Wikipedian of the Year, for her advocacy of important Wikipedia-related issues and coordination of major community projects like the Teahouse and Women in Red.

Rosie Stephenson-Goodknight.
Image: Rosie Stephenson-Goodknight.jpg, by Victor Grigas, CC BY-SA 3.0, via Wikimedia Commons.

Wikipedia struggles with several issues related to systemic bias, stemming from a variety of internal and external factors. Two prominent and interrelated examples are the underrepresentation of women in Wikipedia’s content and among its editing community (according to conservative estimates, at least 80% of the people who write Wikipedia are men). Biographies of women are more likely to be omitted, and those that do exist are more likely than their male counterparts to focus on things like their relationships and personal lives.

One of the most successful projects working to address Wikipedia’s gender gap is Women in Red, which Rosie co-founded in 2015 along with its counterpart, WikiProject Women. The “in Red” part of the name comes from one of the defining features of wiki software — when a link to another page title is created, links to pages that exist are displayed in blue and links to pages that do not exist are displayed in red. The goal of Women in Red is to identify notable women whose wikilinked names appear in red, and create articles about them such that they are displayed in blue instead. Women in Red has received coverage by the BBC, The Guardian, ABC Online, and Time, and was a finalist for the UN’s 2016 GEM-TECH Awards.

Before WikiProjects Women and Women in Red, however, there was WikiProject Women Writers, which Rosie started in 2014, stemming from her own passion for the subject on which she has since focused much of her own editing. In telling The Union about her fondness for 19th century books, she explained that “In the back of my mind, my thought process is ‘If I don’t write about this, who will? If I don’t do it now, when will I do it?’ That kind of mantra plays in the back of my head. It’s kind of like, I’ve found a gem in history. I need to do something with this gem. If I don’t, it might be lost for all times.”

Like many other Wikipedians, however, Rosie’s editing is hindered when the sources she needs are inaccessible without institutional access or paying high fees. That’s why we like the Wikipedia Visiting Scholars program so much — it forms connections between Wikipedians and educational institutions, giving the editor access to otherwise inaccessible resources to use in the improvement of articles in a topic area of mutual interest. When we found out that Rosie and Northeastern’s Women Writers Project were both interested in participating in the program, it was a clear match.

The Women Writers Project is a long-term collaboration researching, collecting, encoding, sharing, and disseminating information about early women’s writing. The project began in 1988 at Brown University, with funding from the National Endowment for the Humanities, and in 2013 moved into Northeastern University’s Digital Scholarship Group. As Visiting Scholar, Rosie will receive remote access to Northeastern University Libraries resources as well as the resources and staff expertise of the Women Writers Project.

Coordinating Visiting Scholars at Northeastern is Amanda Rust, Digital Humanities Librarian and Assistant Director of the Digital Scholarship Group, who explains that they’re participating in the program “because we have a strong commitment to open access and public scholarship projects, and the work of people like Rosie helps bring information that may be hard to find (only in print, expensive scholarly journals, or odd pockets of the internet) to a broad, global audience. Rosie’s subject focus, women and writing before 1900, is part of our larger commitment to preserving the history of underrepresented groups, and we look forward to learning more from Rosie about how she accomplishes such an impressive extent of research and writing, and what kinds of research resources are most helpful to experienced encyclopedians like her. Rosie will be supported by our reference librarians as well as scholars with the Women Writers Project, which is a long-standing community of people who have worked to preserve and share scholarship on the history of women and writing.”

You can read the Northeastern University Libraries Digital Scholarship Group’s announcement here.

We’re looking for others to get involved. If you’re a Wikipedian or educational institution interested in learning more about participating in the Visiting Scholars program, see the Visiting Scholars page of our website or send an email to me at ryan@wikiedu.org.

Image: Northeastern University – 10.JPG, by User:Piotrus, CC BY-SA 3.0, via Wikimedia Commons.

by Ryan McGrady at April 12, 2017 05:37 PM

This month in GLAM

This Month in GLAM: March 2017

by Admin at April 12, 2017 05:29 PM

April 11, 2017

Wiki Education Foundation

Writing about Muslim women in sport

Rebecca Godard contributed to Wikipedia as a student editor in Diana Strassmann’s Poverty, Justice, and Human Capabilities course at Rice University in Fall 2016. In this post she reflects on her experience working on the article Muslim women in sport, which was promoted to a Good Article in November and appeared in the Did You Know section of Wikipedia’s Main Page in January.

Before entering this class, I was quite sceptical about the prospect of editing Wikipedia. In my previous educational environments, I was taught that Wikipedia was unreliable and unsuitable for rigorous academic research. I had never given much thought to the people who contribute to Wikipedia, and certainly never imagined myself becoming one. Through this class, I have gained insight into the workings of Wikipedia and into the need for contributors focused on social justice issues. I have thoroughly enjoyed both creating a new article and interacting with other Wikipedians, and the experience has helped me develop skills that will be extremely valuable in my future academic endeavours.

One of the most difficult aspects of writing my article was maintaining a neutral position. I care very deeply about sports, and am outraged at the ways that many Muslim women are excluded or discouraged from participating. Unlike a traditional research paper, writing for Wikipedia required me to set aside my own feelings and present the existing scholarship on the topic in the most balanced manner possible. This was particularly difficult for me when I was writing about hijab bans from international governing bodies like FIFA (association football) and FIBA (basketball). I see these policies as ludicrous and unabashedly discriminatory, an opinion that was difficult to keep out of my work. Through a conscious effort to remain neutral and some advice from my peer reviewers, however, I was able to accurately describe the situation in a way that did not reveal my own biases. By finding academic sources on the subject rather than simply using news articles, I could ensure that I was grounding my work in scholarship rather than in my opinions. This strategy was helpful in creating my entire article, but specifically in the sections that were focused on factors affecting sports participation, media portrayal, and empowerment through sports.

Another important component of my Wikipedia experience was the opportunity to interact with other Wikipedians. While I have not been in contact with very many Wikipedians, those I have communicated with have been incredibly supportive and helpful. Montedia and Eperoton both gave me excellent advice on how to best develop my page, including providing me with specific articles and other resources. Several other users helped me edit content that had been moved to my page from the Women in Islam page, as it falsely assumed that all athletes from predominantly Muslim countries (like Kosovare Asllani and Dinara Safina) were themselves Muslim. Wiki Ed employee Sage Ross was extremely helpful when I found myself in unexpected situations, such as discovering racist vandalism on another page or seeing that someone had created a Russian-language version of my article on the English Wikipedia site. Finally, my peer reviewers gave me very honest and helpful critiques of my article, and even helped me find information to supplement my content. In short, I would not have been able to develop my article to the extent that I have without the support of other Wikipedians. I have come to deeply appreciate the collaborative nature of the site, as it allowed me to produce far better work than I could have on my own.

The one thing that I wish could have been different about my Wikipedia experience is the lack of response that I received from WikiProject groups. My article is part of WikiProject Islam and WikiProject Women’s sports, but I have struggled to interact with users from either group. I never got any responses to the queries that I posted early in the semester (while I was still writing my proposal), which could have been quite useful in the editing process. Additionally, I requested that WikiProject Women’s sports reevaluate my page’s start-class rating, as I have added to it significantly since it was designated as such. To date, I have not heard back from any of the users who are part of the WikiProject, and the rating of my page has not changed. Even though the response from these project groups was lacking, I still felt that I had plenty of support and resources, both from within the class and from other users on Wikipedia.

In summary, I am grateful for the opportunity to contribute to Wikipedia. Using Wikipedia in a university setting is innovative and a refreshing break from the usual midterms and papers. Additionally, it was encouraging to see my classmates and I add to the number of women, people of colour, LGBT individuals, and social justice-oriented users. The current underrepresentation of such groups leads to a lack of content that is relevant to the Poverty, Justice, and Human Capabilities minor at Rice. By creating my article, I felt that I was taking a small step towards greater equality and representation on Wikipedia. I also developed skills that are important both to academic pursuits and to life in general, including neutral writing and collaboration. Finally, I enjoyed developing and researching my article, and I even made some edits on other, unrelated pages. I hope to continue being an active Wikipedia user and contributing to the knowledge and scholarship represented there.

Image: Iran national football team training, Azadi Stadium 29.08.2016 09.jpg, by Hamed Malekpour, CC-AT 4.0, via Wikimedia Commons.

by Guest Contributor at April 11, 2017 04:14 PM

Wikimedia UK

#WMUKED17 – Wikimedia UK Education Summit: a roundup

By Lucy Crompton-Reid, Wikimedia UK Chief Executive

I was delighted to be part of planning and delivering Wikimedia UK’s Education Summit in partnership with Middlesex University in February and wanted to share some notes, insights and presentations from that event with a broader audience than the 45 or so students, educators, academics and Wikimedians that were able to attend in person. This is something of a long read so please feel free to dip in and out, to look at the tweets from the day and to explore the excellent slides produced by a range of speakers.

[View the story “Wikimedia UK Education Summit #WMUKED17” on Storify: storify.com/josephinefraser/wikimedia-uk-education-summit]

Melissa Highton, Director of Learning, Teaching and Web Services and Assistant Principal for Online Learning at the University of Edinburgh, gave a rousing start to the summit, using her keynote speech to advocate for Wikimedians in Residence at universities. With digital capabilities now a key component in student employability, and driving innovation in the economy, Melissa’s argument was that higher education institutions can’t afford not to have a Wikimedian on their team! The work in Edinburgh has improved the quality and quantity of open knowledge, embedded information literacy skills in the curriculum and made it easier to develop authentic learning experiences for larger bodies of students. For Edinburgh undergraduates, the opportunity to edit Wikipedia means that they are part of a worldwide open source software project, which Melissa sees as a significant authentic learning opportunity. The work enables students to understand sources and copyright and also “leads into discussions about the privilege and geography of knowledge”, as well as questions about neutrality.

Melissa also spoke about gender inequality in science and technology, and the role that working with Wikimedia can play in tackling structural barriers for women working in academia, particularly in relation to Athena SWAN initiatives. She noted that the kind of work a Wikimedian in Residence will do can deliver successful, measurable outcomes on gender equality; and added that she feels “academics are missing a trick if they are not factoring Wikipedia into public engagement and understanding”.

To close, Melissa touched on some of the challenges inherent in working with the Wikimedia community, and the need for a resident to help negotiate and navigate the challenges of editing Wikipedia as a structured group activity. As she put it, “Wikimedians will save you from Wikimedians.”

Melissa’s high-level overview of the university wide impact of and strategic case for a Wikimedian in Residence was complemented brilliantly by Stefan Lutschinger’s more practical but no less compelling keynote speech focused on his own approach to Wikipedia in the curriculum. Stefan is Associate Lecturer in Digital Publishing at Middlesex University with whom Wikimedia UK worked closely in planning the event. He gave a detailed account of how the module he has developed and run for three years – with input from volunteers Ed Hands and Fabian Tompsett – is building digital literacy and confidence amongst his students and enhancing academic practice. He also touched upon Wikidata, as a resource that enables undergraduates to “understand the architecture, the anatomy, of data”, and ended his speech by sharing his ambition to make editing Wikipedia a mandatory part of the curriculum for first year students at Middlesex.

Richard Nevell leading a workshop at #WMUKED17 – image by John Lubbock

Following these excellent speeches the summit broke into three workshop spaces, with the volunteer Nav Evans and Wikimedian in Residence at Edinburgh University, Ewan McAndrew, running a practical workshop on Wikidata; Wikimedia UK’s Richard Nevell and Hephzibah Israel, Lecturer in Translation Studies at Edinburgh, giving a presentation on Wikipedia in the Classroom and the use of the Outreach Dashboard; and an unconference space facilitated by Andy Mabbett. I attended the latter and participated in a wide-ranging discussion with a group of established Wikimedians and one or two people from the university sector, which explored instructional design and materials for developing editing skills, the challenges of adapting resources for different learning styles and the need to be explicit about the benefit of editing in terms of research and analytical skills, plus next steps and potential partners for the UK Chapter.

After the morning workshops we moved into Lightning Talks, with Fabian Tompsett kicking us off by talking about his residency at the May Day Room. In particular he highlighted the potential offered by Wikisource. This is sometimes seen as a repository for older materials but we should be encouraging more academics to upload their materials and papers.

It was fantastic to have a number of presentations from Stefan’s students, including Behlul, Adrianna and Lauryna, who talked about their experiences of working on Wikimedia as part of their Media Studies course. Behlul shared the creation of a pirate pad to edit articles as a group, and noted that he now views Wikimedia as a platform for different learning opportunities rather than just somewhere to gain information. Adrianna focused on Fake news vs Wikipedia and was “fascinated by what a reliable source of information Wikipedia actually turns out to be…contributions can be traced and authors are accountable. Tens of thousands of Wikipedia editors act as watch dogs”. She also quoted the Wikimedia Foundation’s Executive Director, Katherine Maher, who describes the projects as a “public park for knowledge.”

Educator and Wikimedian Charles Matthews gave a presentation on a new online learning resource that he is currently developing, with input from Wikimedia UK trustee Doug Taylor, based on the idea of questions as data. He is interested in annotation, collaboration and translation of educational materials, with robust metadata that tells you more about the resource, such as to what extent it has been tested in the classroom and how it has been used successfully. Making this project work will require a big database of questions that Charles and Doug hope to crowdsource, with the aim of linking to relevant questions from the sidebar of any given Wikipedia article.

Clem Rutter also highlighted the potential to make better use of existing technologies to support the use of Wikimedia as a tool for teaching and learning. He gave a short introduction to his Portal for Learning, which draws on his substantial experience as a secondary school teacher and his existing links and relationships with the formal education community.

Ewan McAndrew gave an energetic and comprehensive account of his work at Edinburgh University, focusing on the successful introduction of Wikipedia in the Classroom assignments in a number of departments. He sees Wikipedia as a powerful tool for educators and not something that has to be additional to their practice and described the work of Translation Studies MA students contributing in one language and translating into a different language using the content translation tool, noting how allowing students to take ownership of this work was a critical motivating factor. He also shared outcomes from the World Christianity course, in which students edited Wikipedia to present a more holistic, broader worldview of Christianity, which otherwise tends to be written about with a western bias.

Ewan is very pleased that 65% of event attendees have been women, a key target audience for his events given the gender gap highlighted by Melissa earlier in the day. He feels that “we need to demystify Wikipedia and make it accessible, share good practice and not reinvent the wheel” when working across universities. With this in mind Ewan is in the process of creating and sharing resources, videos and lesson plans for educators.

Judith Scammell closed the lightning talks by giving her perspective as a librarian based at St George’s University in London. She is in the early stages of getting staff and students to use Wikipedia but feels that it is ideal for building the 21st century learning skills of enquiry, creativity, participation and digital literacy. Judith has been inspired by fellow librarian Natalie Lafferty, based at Dundee University, who is involving technology in learning and shares her insights through her blog e-LiME.

Wikimedians working at #WMUKED17 – image by John Lubbock

Following lunch and networking, the attendees of the summit again broke into three workshop sessions, with another unconference space, a presentation by Dr Martin Poulter and Liz McCarthy on working together on a Wikimedian in Residence programme at Bodleian Libraries and now across the University of Oxford, and Josie Taylor and Lorna Campbell leading a session on curating Wikimedia’s educational resources. I very much enjoyed hearing from Wikimedian in Residence Martin and Liz – Web and Digital Media Manager at Bodleian Libraries – about the success of the residency in terms of correcting bias. In his initial residency, Martin focused on sharing the 8000 files that he felt were of most interest and that represented hidden histories, with these now having had nearly 50 million page views. In the new phase of the residency he is working across the whole university, building relationships at an early stage with a dozen big research projects to build in openness from the start, and linking research outputs and educational materials. They are now thinking more about interdisciplinary practice and feel there is potential benefit for every department of the university, with Liz commenting that hosting a Wikimedian in Residence is “an obvious path on the way to public engagement”.

A talk in the main room at #WMUKED17 – image by John Lubbock

Finally, we gathered together at the end of the day in a plenary discussion to share key points from each session, and to start thinking about future developments. Martin Poulter encouraged everyone to take the next step in implementing any new ideas that have emerged from the summit, and Nav Evans encouraged people to create their own Histropedia timeline. I hope that everyone who attended was able to take away at least one thing that they can do at an individual level and that those in positions of influence are thinking about how they can create change at an institutional level. For Wikimedia UK, some key action points emerged, including the need to:

  • Develop and share our thinking in terms of education, particularly how we prioritise this work and what support we can offer teachers and learners.
  • Support existing Wikimedia education projects and nurture new ideas.
  • Build on the work that’s been started in terms of curating and creating resources and redeveloping the education pages on the Wikimedia UK site.
  • Continue to provide opportunities for people working within education and Wikimedia to come together virtually and in person to share practice.
  • Share models of good practice, case studies and learning.

If you’re interested in how Wikimedia can play a role in education and support learners to contribute to the Wikimedia projects, please email us at education@wikimedia.org.uk.

by Lucy Crompton-Reid at April 11, 2017 12:11 PM

April 10, 2017

Wikimedia Tech Blog

Search: all of the new bright and shiny objects

Photo by gnuckx, public domain/CC0.

Ever started searching for something on Wikipedia and wondered—really, is that all that there is? Does it feel like you’re somehow playing hide and seek with all the knowledge that’s out there?

Wouldn’t it be great to see articles or categories that are similar to your search query and maybe some related images or links to other languages in which to read that article?  Or, perhaps you just want to read and contribute to projects other than Wikipedia but need a jump start with a few short summaries from sister projects.

Even if you simply enjoy seeing interesting snippets and images based on your search query, you’ll really like what we have in store. We’re starting to test out some really cool features that will enable some fun and fascinating clicking—down the rabbit hole of Wikipedia. But first, let’s look at what we’ve been doing over the last couple of years.

Back end search

The Discovery Search team has been doing tons of work creating, updating, and finessing the search back end to enhance search queries. Many complex changes have landed: adding ASCII-folding and stemming, detecting when a visitor might be typing in a language different from that of the Wikipedia they are on, switching from tf-idf to BM25, dropping trailing question marks, and upgrading to Elasticsearch version 5. Whew!
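Since the switch from tf-idf to BM25 is one of the bigger ranking changes mentioned here, a small sketch of the Okapi BM25 scoring function may help show what changed: term frequency saturates (the k1 term) and scores are normalised by document length (the b term). Plain Python for illustration only, not the Elasticsearch implementation; k1=1.2 and b=0.75 are the conventional defaults:

```python
import math

def bm25_score(query_terms, doc_terms, corpus, k1=1.2, b=0.75):
    """Okapi BM25 score of one document (a list of terms) for a query."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N  # average document length
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)  # document frequency
        if df == 0:
            continue
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
        tf = doc_terms.count(term)
        # tf saturates via k1; b normalises by document length vs the average
        score += idf * (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * len(doc_terms) / avgdl))
    return score
```

Because of the saturation term, a document repeating a query word twenty times no longer scores twenty times higher than one mentioning it once, which tends to keep keyword-heavy pages from dominating the results.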

We have much more planned in the coming months—machine learning with ‘learning to rank’, investigating and deploying new language analyzers, and, after doing an exhaustive analysis, removing quotes within queries. We’ll also be interacting closely with the new Structured Data team in their upcoming work on Commons to make freely licensed images accessible and reusable across the web.

Front end search

After all that back end search awesomeness, we needed to spruce up the part that the majority of our readers and editors actually interface with: the search results page! We started brainstorming during the late summer of 2016 on what we could do to make search results better—to easily find interesting, relevant content and to create a more intuitive viewing experience. We designed and refined numerous ideas on how to improve the search results page and received lots of good feedback from the community.

Empowered by that feedback, we began testing, starting with a display of results from the Wikimedia sister projects next to the regular search results. The idea for this test was to enable discovery into other projects—projects that our visitors might not have known about—by displaying interesting results in small snippets. The sidebar display of the sister projects borrows from a similar feature that is already in use on the Italian, Catalan and French Wikipedias. We’ve run a couple of tests on the sister project search results and completed a detailed analysis; after some final touches to the code, we will release the new functionality into production on all Wikipedias near the end of April 2017.

The sister projects are an integral part of the Wikimedia family, and the associated links denoting each project are often found near the footer of the front page of each Wikipedia: Wiktionary, Wikibooks, Wikiquote, Wikisource, Wikinews, Wikiversity, Wikispecies, Wikivoyage, Wikimedia Commons, and Wikidata.

Our next test will be to add additional information and related results for each search query. This will take the form of an ‘explore similar’ link: when someone interacts with it, an expanded display will appear with related pages, categories and links to the article in other languages—all of which might lead to further knowledge discovery. We know that not every search query will return exactly what folks were looking for, but we feel that adding links to similar, related information would be helpful and, possibly, super interesting!

We also plan on doing a few more tests in the coming year:

  • Test a new display that will show the pronunciation of a word with its definition and part of speech—all from existing data in Wiktionary. Initially, this will be in English only.
  • Test placing a small image (from the article) next to each search result that is displayed on the page.
  • Test an additional feature that will use a new metadata display in the search box that is located on the top right of most pages in Wikipedia, similar to what happens on the Wikipedia.org portal page when a user starts typing into the search box.

For the more technically minded, there is a way to test out these new features in your own browser. For the sister project search results, it requires a bit of URL manipulation; for the explore similar and Wiktionary widgets, you’ll need a Wikipedia account and the ability to create (or edit) your common.js file. Detailed information is available on mediawiki.org.

Once the testing, analysis and feedback cycle is done for each new feature, we’d like to slowly implement them into production on all Wikipedias throughout the rest of the year. We’re really hoping that these enhancements will deepen the usefulness of search results and enable our readers and editors to be even more productive and inspired!

Deborah Tankersley, Product Manager, Discovery Product and Analysis
Wikimedia Foundation

by Deborah Tankersley at April 10, 2017 06:44 PM

Magnus Manske

Comprende!

tl;dr: I wrote a quiz interface on top of a MediaWiki/WikiBase installation. It ties together material from Wikidata, Commons, and Wikipedia, to form a new educational resource. I hope the code will eventually be taken up by a Wikimedia chapter, as part of an OER strategy.


The past

There have been many attempts in the WikiVerse to get a foot into the education domain. Wikipedia is used extensively in this domain, but it is more useful for introductions to a topic, and as a reference, rather than a learning tool. Wikiversity was an attempt to get into university-level education, but even I do not know anyone who actually uses it. Wikibooks has more and better contents, but many wikibooks are mere sub-stub equivalents, rather than usable, fully-fledged textbooks. There has been much talk about OER, offline content for internet-challenged areas, etc. But the fabled “killer app” has so far failed to emerge.

Enter Charles Matthews, who, like myself, is situated in Cambridge. Among other things, he organises the Cambridge Wikipedia meetup, and we do meet occasionally for coffee between those. In 2014, he started talking to me about quizzes. At the time, he was designing teaching material for Wikimedia UK, using Moodle, as a component in Wikipedia-related courses. He quickly became aware of the limitations of that software, which include (but are not limited to) general software bloat, significant hardware requirements, and hurdles in re-using questions and quizzes in other contexts. Despite all this, Moodle is rather widely used, and the MediaWiki Quiz extension does not exactly present itself as a viable replacement.

A quiz can be a powerful tool for education. It can be used by teachers and mentors to check on the progress of their students, and by the students themselves, to check their own progress and readiness for an upcoming test.

As the benefits were obvious, and the technical requirements appeared rather low, I wrote (at least) two versions of a proof-of-concept tool named wikisoba. The interface looked somewhat appealing, but storage was a sore point: the latest version uses JSON stored as a wiki page, which needs to be edited manually. Clearly, not an ideal way to attract users these days.

Eventually, a new thought emerged. A quiz is a collection of “pages” or “slides”, representing a question (of various types), or maybe a text to read beforehand. A question, in turn, consists of a title, a question text (usually), possible answers, etc. A question is therefore the main “unit”, and should be treated on its own, separate from other questions. Questions can then be bundled into quizzes; this allows for re-use of questions in multiple quizzes, maybe awarding different points (a question could yield high points in an entry-level quiz, but fewer in an advanced quiz). The separation of question and quiz makes for a modular, scalable, reusable architecture. Treating each question as a separate unit is therefore a cornerstone of any successful system for (self-)teaching and (self-)evaluation.
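The modular question/quiz split described here can be sketched as a tiny data model. This is a hypothetical illustration in Python, not Comprende!'s actual WikiBase item layout:

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    """A question is its own unit, reusable across quizzes."""
    title: str
    text: str
    answers: list   # possible answer texts
    correct: set    # indices into answers that are correct

@dataclass
class Quiz:
    title: str
    # each entry pairs a Question with the points it is worth *in this quiz*,
    # so the same question can score differently in different quizzes
    entries: list = field(default_factory=list)

    def add(self, question, points):
        self.entries.append((question, points))

    def max_score(self):
        return sum(points for _, points in self.entries)
```

The same Question object can then be added to an entry-level quiz for 5 points and to an advanced quiz for 2, without ever being copied.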

It would, of course, be possible to set up a database for this, but then it would require an interface, constraint checking, all the things that make a project complicated and prone to fail. Luckily, there already exists software that offers adequate storage, querying, an interface, etc.: WikiBase, the MediaWiki extension used to power Wikidata (and soon Commons as well). Each question could be an item, with the details encoded in statements. Likewise, a quiz would be an item, referencing question items. WikiBase offers a powerful API to manage, import, and export questions; it comes with built-in openness.
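To make the "powerful API" point concrete: every WikiBase installation exposes the same action API as Wikidata, so question items can be fetched by any client (and edited, with authentication, via wbeditentity). A minimal stdlib-only read sketch; the endpoint shown in the docstring is Wikidata's public one:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def wbgetentities_params(qid):
    """Parameters for the standard WikiBase wbgetentities module."""
    return {
        "action": "wbgetentities",
        "ids": qid,
        "props": "labels|claims",  # labels and statements
        "format": "json",
    }

def get_entity(api_url, qid):
    """Fetch one item, e.g. get_entity("https://www.wikidata.org/w/api.php", "Q42")."""
    url = api_url + "?" + urlencode(wbgetentities_params(qid))
    with urlopen(url, timeout=10) as resp:
        return json.load(resp)["entities"][qid]
```

Since the same modules work against any WikiBase install, the quiz items in a dedicated installation can be read and written with exactly the tooling people already use for Wikidata.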

The present

There is a small problem, however: the default WikiBase interface is not exactly appealing for non-geeks. Also, there is obviously no way to “play” a quiz in a reasonable manner. So I decided to use my recent experience with vue.js to write an alternative interface to MediaWiki/WikiBase, designed to generate questions and quizzes, and to play a quiz in a more pleasant way. The result has the working title Comprende!, and can be regarded as a fully functional, initial version of a WikiBase-driven question/quiz system. The underlying “vanilla” WikiBase installation is also accessible. To jump right in, you can test your biology knowledge!

There are currently three question types available:

  • Multiple-choice questions, the classic
  • “Label image” presents an image from Commons, letting you assign labels to marked points in the image
  • Info panels, presenting information to learn (to be interspersed with actual questions)

All aspects of the questions are stored in WikiBase; they can have a title, a short text, and an intro section; for the moment, the latter can be a specific section of a Wikipedia article (of a specific revision, by default), but other types (Commons images, for example) are possible. When used in “info panel” type questions (example), a lot of markup, including images, is preserved; for intro sections in other question types, it is simplified to mere text.

Live translating of interface text.

Wikidata is multilingual by design, and so is Comprende!. An answer or image label can be text stored as multilingual (or monolingual, in WikiBase nomenclature) strings, or as a Wikidata item reference, giving instant access to all the translations there. Also, all interface text is stored in an item, and translations can be done live within the interface.

Questions can be grouped and ordered into a quiz. Everyone can “play” and design a quiz (Chrome works best at the moment), but you need to be logged into the WikiBase setup to save the result. Answers can be added, dragged around to change the order, and each question can be assigned a number of points, which will be awarded based on the correct “sub-answers”. You can print the current quiz design (no need to save it), and most of the “chrome” will disappear, leaving only the questions; instant old-fashioned paper test!

While playing the quiz, one can see how many points they have, how many questions are left etc. Some mobile optimisations like reflow for portrait mode, and a fixed “next question” button at the bottom, are in place. At the end of the quiz, there is a final screen, presenting the user with their quiz result.

To demonstrate compatibility with existing question/quiz systems, I added a rudimentary Moodle XML import; an example quiz is available. Another obvious import format to add would be GIFT. Moodle XML export is also on the to-do list.
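For a sense of what such an import has to handle: Moodle XML wraps each question in a <question type="..."> element, with each <answer> carrying a fraction attribute (100 for a fully correct choice). A sketch of the parsing direction using Python's standard library; the sample document reflects the Moodle XML shape as I understand it, so treat the exact element names as assumptions:

```python
import xml.etree.ElementTree as ET

SAMPLE = """\
<quiz>
  <question type="multichoice">
    <name><text>Cell biology</text></name>
    <questiontext format="html"><text>Which organelle produces ATP?</text></questiontext>
    <answer fraction="100"><text>Mitochondrion</text></answer>
    <answer fraction="0"><text>Ribosome</text></answer>
  </question>
</quiz>
"""

def parse_moodle_quiz(xml_text):
    """Extract multiple-choice questions from a Moodle XML export."""
    questions = []
    for q in ET.fromstring(xml_text).findall("question"):
        if q.get("type") != "multichoice":
            continue  # this sketch only handles the multiple-choice type
        questions.append({
            "name": q.findtext("name/text"),
            "text": q.findtext("questiontext/text"),
            # (answer text, fraction of the points it awards)
            "answers": [(a.findtext("text"), float(a.get("fraction", "0")))
                        for a in q.findall("answer")],
        })
    return questions
```

Each parsed dictionary maps naturally onto one question item with statements, which is what makes the WikiBase storage model a comfortable import target.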

The future

All this is obviously just a start. A “killer feature” would be a SPARQL setup, federating Wikidata. Entry-level quizzes for molecular biology? Questions whose Wikidata-item answers are chemicals? I can see educators flocking to this, especially if material is available in, or easily translated into, their language. More question types could emphasise the strengths of this approach. Questions could even be mini-games etc.

Another aspect I have not worked on yet is logging results. This could be done per user, where the user can add their result in a quiz to a dedicated tracking item for their user name. Likewise, a quiz could record user results (automatically or voluntarily).

One possibility would be for the questions, quizzes etc. to live in a dedicated namespace on Wikidata (so as not to contaminate the default namespace). That would simplify the SPARQL setup, and get the existing community involved. The Wiktionary-related changes on Wikidata will cover all that is needed on the backend; the interface is all HTML/JS, and not even an extension is required, so there are next to no security or integration issues. Ah, one can dream, right?

by Magnus at April 10, 2017 09:09 AM

Tech News

Tech News issue #15, 2017 (April 10, 2017)

TriangleArrow-Left.svgprevious 2017, week 15 (Monday 10 April 2017) nextTriangleArrow-Right.svg
Other languages:
العربية • ‎বাংলা • ‎čeština • ‎Deutsch • ‎English • ‎español • ‎فارسی • ‎suomi • ‎français • ‎עברית • ‎italiano • ‎日本語 • ‎한국어 • ‎polski • ‎português do Brasil • ‎русский • ‎svenska • ‎українська • ‎Tiếng Việt • ‎中文

April 10, 2017 12:00 AM

April 09, 2017

User:Legoktm

Wikimania submission: apt install mediawiki

I've submitted a talk to Wikimania titled apt install mediawiki. It's about getting the MediaWiki package back into Debian, and efforts to improve the overall process. If you're interested, sign up on the submissions page :)

by legoktm at April 09, 2017 04:22 PM

April 08, 2017

Gerard Meijssen

#WhiteHouse Fellows - Mrs Margarita Colmenares

Mrs Margarita Colmenares is a White House Fellow. A message was posted on Twitter announcing that her article had been created, and to support that message it was easy enough to add her on Wikidata as well. The article mentioned that she was a White House Fellow, and adding one layer of additional information is one way of making a person more relevant.

Adding this fellowship, and the other people who were fellows, was easy enough. The Wikipedia article referred to the White House website for information, and when you visit that website you are thanked for having an interest in the subject.

At a time like this it is good to consider Archive.org. Its crawler worked well on some dates; for other dates the message you will see is: "Got an HTTP 301 response at crawl time".

Anyway... Together, the information at whitehouse.gov and at archive.org provides enough of a reference.
Thanks,
     GerardM

by Gerard Meijssen (noreply@blogger.com) at April 08, 2017 08:19 AM

April 07, 2017

Sumana Harihareswara

Inclusive-Or: Hospitality in Bug Tracking

Lindsey Kuper asked:

I’m interested in hearing about [open source software] projects that have successfully adopted an "only insiders use the issue tracker" approach. For instance, a project might have a mailing list where users discuss bugs in an unstructured way, and project insiders distill those discussions into bug reports to be entered into the issue tracker. Where does this approach succeed, and where does it fail? How can projects that operate this way effectively communicate their expectations to non-insider users, especially those users who might be more accustomed to using issue trackers directly?
More recently, Jillian C. York wrote:

...sick of "just file a bug with us through github!" You realize that's offputting to your average users, right?

If you want actual, average users to submit bugs, you know what you have to do: You have to use email. Sorry, but it's true.

Oh, and that goes especially for high-risk users. Give them easy ways to talk to you. You know who you are, devs.

Both Kuper and York get at the same question: how do we open source maintainers get the bug reports we need, in a way that works for us and for our users?

My short answer is that open source projects should have centralized bug trackers that are as easy as possible to work in as an expert user, and that they should find automated ways to accept bug reports from less structured and less expert sources. I'll discuss some examples and then some general principles.

Dreamwidth: Dreamwidth takes support questions via a customer support interface. The volunteers and paid staff answering those questions sometimes find that a support request reveals a bug, and then file it in GitHub on the customer's behalf, then tell her when it's fixed. (Each support request has a private section that only Support can see, which makes it easier to track the connection between Support requests and GitHub issues, and Support regulars tend to have enough ambient awareness of both Support and GitHub traffic to speak up when relevant issues crop up or get closed.) Dreamwidth users and developers who are comfortable using the GitHub issue tracker are welcome if they want to file bugs there directly instead.

Dreamwidth also has a non-GitHub interface for feature suggestions: the suggestions form is the preferred interface for people to suggest new features for Dreamwidth. Users post their suggestions into a queue and a maintainer chooses whether to turn that suggestion into a post for open discussion in the dw-suggestions community, or whether to bounce it straight into GitHub (e.g., for an uncontroversial request to whitelist a new site for media embedding or add a new site for easy cross-site user linking, or at the maintainer's prerogative). Once a maintainer has turned a suggestion into a post, other users use an interface familiar to them (Dreamwidth itself) to discuss whether they want the feature. Then, if they and the maintainer come to consensus and approve it, the maintainer adds a ticket for it to GitHub. That moderation step has been a bottleneck in the past, and the process of moving a suggestion into GitHub also hasn't yet been automated.

Since discussion about site changes needs to include users who aren't developers, Dreamwidth maintainers prefer that people use the suggestions form; experienced developers sometimes start conversations in GitHub, but the norm (at least the official norm) is to use dw-suggestions; I think the occasional GitHub comment suffices for redirecting these discussions.

Zulip: We use GitHub issues. The Zulip installations hosted by Kandra Labs (the for-profit company that stewards the open source project) also have a "Send feedback" button in one of the upper corners of the Zulip web user interface. Clicking this opens a private message conversation with feedback-at-zulip.com, which users used more heavily when the product was younger. (We also used to have a nice setup where we could actually send you replies in-Zulip, and may bring that back in the future.)

I often see Tim Abbott and other maintainers noticing problems that new users/customers are having and, while helping them (via the zulip-devel mailing list, via the Zuliping-about-Zulip chat at chat.zulip.org, or in person), opening GitHub issues about the issue, as the next step towards a long-term fix. But -- as with the Dreamwidth example -- it is also fine for people who are used to filing bug reports or feature requests directly to go ahead and file them in GitHub. And if Tim et alia know that the person they're helping has that skill and probably has the time to write up a quick issue, then the maintainers will likely say, "hey would you mind filing that in GitHub?"

We sometimes hold live office hours at chat.zulip.org. At yesterday's office hour, Tim set up a discussion topic named "warts" and said,

I think another good topic is to just have folks list the things that feel like they're some of our uglier/messier parts of the UI that should be getting attention. We can use this topic to collect them :).

Several people spoke up about little irritations, and we ended up filing and fixing multiple issues. One of Zulip's lead developers, Steve Howell, reflected: "As many bug reports as we get normally, asking for 'warts' seems to empower customers to report stuff that might not be considered bugs, or just empower them to speak up more." I'd also point out that some people feel more comfortable responding to an invitation in a synchronous conversation than initiating an asynchronous one -- plus, there's the power of personal invitation to consider.

As user uptake goes up, I hope we'll also have more of a presence on Twitter, IRC, and Stack Overflow in order to engage people who are asking questions there and help them there, and get proto-bug reports from those platforms to transform into GitHub issues. We already use our Twitter integration to help -- if someone mentions Zulip in a public Tweet, a bot tells us about it in our developers' livechat, so we can log into our Twitter account and reply to them.

MediaWiki and Wikimedia: Wikipedia editors and other contributors have a lot of places they communicate about the sites themselves, such as the technical-issues subforum of English Wikipedia's "Village Pump", and similar community-conversation pages within other Wikipedias, Wikivoyages, etc. Under my leadership, the team within Wikimedia Foundation's engineering department that liaised with the larger Wikimedia community grew more systematic about working with those Wikimedia spaces where users were saying things that were proto-bug reports. We got more systematic about listening for those complaints, filing them as bugs in the public bug tracker, and keeping in touch with those reporters as bugs progressed -- and building a kind of ambassador community to further that kind of information dissemination. (I don't know how well that worked out; I think we built a better social infrastructure for people who were already doing that kind of volunteer work ad hoc, but I don't know whether we succeeded in recruiting more people to do it, and I haven't kept a close eye on how that's gone in the years since I left.)

We also worked to make it easy for people to report bugs into the main bug tracker. The Bugzilla installation we had for most of the time that I was at Wikimedia had two bug reporting forms: a "simple" submission form that we pointed most people to, with far fewer fields, and an "advanced" form that Wikimedia-experienced developers used. They've moved to Phabricator now, and I don't know whether they've replicated that kind of two-lane approach.

A closed-source example: FogBugz. When I was at Fog Creek Software doing sales and customer support, we used FogBugz as our internal bug tracker (to manage TODOs for our products,* and as our customer relationship manager). Emails into the relevant email addresses landed in FogBugz, so it was easy for me to reply directly to help requests that I could fix myself, and easy for me to note "this customer support request demonstrates a bug we need to fix" and turn it into a bug report, or open a related issue for that bug report. If I recall correctly, I could even set the visibility of the issue so the customer could see it and its progress (unusual, since almost all our issue-tracking was private and visible only within the company).

An interface example: Debian. Debian lets you report bugs via email and via the command-line reportbug program. As the "how to use BTS" guide says,

some spam messages managed to send mails to -done addresses. Those are usually easily caught, and given that everything can get reverted easily it's not that troublesome. The package maintainers usually notice those and react to them, as do the BTS admins regularly.

The BTS admins also have the possibility to block some senders from working on the bug tracking system in case they deliberately do malicious things.

But being open and inviting everyone to work on bugs totally outweighs the troubles that sometimes pop up because of misuse of the control bot.

And that leads us to:

General guidelines: Dreamwidth, Zulip, MediaWiki, and Debian don't discourage people from filing bug reports in the official central bug tracker. Even someone quite new to a particular codebase/project can file a very helpful and clear bug report, after all, as long as they know the general skill of filing a good bug report. Rather, I think the philosophy is what you might find in hospitable activism in general: meet people where they are, and provide a means for them to conveniently start the conversation in a time, place, and manner that's more comfortable for them. For a lot of people, that means email, or the product itself.

Failure modes can include:

  • a disconnect among the different "places" such that the central bug tracker is a black hole and nothing gets reported back to the more accessible place or the original reporter
  • a feeling of elitism where only special important people are allowed to even comment in the main bug tracker
  • bottlenecks such that it seems like there's a non-bug-tracker way to report a question or suggestion but that process has creaked to a halt and is silently blocking momentum
  • bottlenecks in bug triage
  • brusque reaction at the stage where the bug report gets to the central bug tracker (e.g., "oh that's a duplicate; CLOSE" without explanation or thanks), which jars the user (who's expecting more explicit friendliness) and which the user perceives as hostile

Whether or not you choose to increase the number of interfaces you enable for bug reporting, it's worth improving the user experience for people reporting bugs into your main bug tracker. Tedious, lots-of-fields issue tracker templates and UIs decrease throughput, even for skilled bug reporters who simply aren't used to the particular codebase/project they're currently trying to file an issue about. So we should make that easier. You can provide an easy web form, as Wikimedia did via the simplified Bugzilla form, or an email or in-application route, as Debian does.
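The email route can feed the central tracker automatically. A sketch of an email-to-issue bridge: it parses an inbound report with Python's stdlib email module and builds a request against GitHub's POST /repos/{owner}/{repo}/issues endpoint (a real API; the needs-triage label and the surrounding mail plumbing are assumptions for illustration):

```python
import email
import json
from urllib.request import Request

def email_to_issue_request(raw_email, owner, repo, token):
    """Turn a plain-text bug-report email into a GitHub create-issue request."""
    msg = email.message_from_string(raw_email)
    payload = {
        "title": msg["Subject"] or "(no subject)",
        "body": f"Reported by {msg['From']} via email:\n\n{msg.get_payload()}",
        "labels": ["needs-triage"],  # hypothetical triage label
    }
    return Request(
        f"https://api.github.com/repos/{owner}/{repo}/issues",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"token {token}",
                 "Accept": "application/vnd.github+json"},
        method="POST",
    )
```

A mail filter or cron job would pass each new message through this and send the request; replying to the reporter with the issue link closes the loop so the tracker does not become a black hole.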

And FLOSS projects oughta do what the Accumulo folks did for Kuper, too, saying, "I can file that bug for you." We can be inclusive-or rather than exclusive-or about it, you know? That's how I figure it.


* Those products were CityDesk, Copilot, and FogBugz -- this was before Kiln, Stack Overflow, Trello, and Glitch.

Thanks to Lindsey Kuper and Jillian C. York for sparking this post, and thanks to azurelunatic for making sure I got Dreamwidth details right.

April 07, 2017 07:36 PM

Amir E. Aharoni

Amir Aharoni’s Quasi-Pro Tips for Translating the Software That Powers Wikipedia

As you probably already know, Wikipedia is a website. A website has content—the articles; and it has a user interface—the menus around the articles and the various screens that let editors edit the articles and communicate with each other.

Another thing that you probably already know is that Wikipedia is massively multilingual, so both the content and the user interface must be translated.

Translation of articles is a topic for another post. This post is about getting all of the user interface translated to your language, as quickly and efficiently as possible.

The most important piece of software that powers Wikipedia and its sister projects is called MediaWiki. As of today, there are 3,335 messages to translate in MediaWiki, and the number grows frequently. “Messages” in the MediaWiki jargon are strings that are shown in the user interface, and that can be translated. In addition to core MediaWiki, Wikipedia also has dozens of MediaWiki extensions installed, some of them very important—extensions for displaying citations and mathematical formulas, uploading files, receiving notifications, mobile browsing, different editing environments, etc. There are around 3,500 messages to translate in the main extensions, and over 10,000 messages to translate if you want to have all the extensions translated. There are also the Wikipedia mobile apps and additional tools for making automated edits (bots) and monitoring vandalism, with several hundred messages each.

Translating all of it probably sounds like an enormous job, and yes, it takes time, but it’s doable.

In February 2011 or so—sorry, I don’t remember the exact date—I completed the translation into Hebrew of all of the messages that are needed for Wikipedia and projects related to it. All. The total, complete, no-excuses, premium Wikipedia experience, in Hebrew. Every single part of the MediaWiki software, extensions and additional tools was translated to Hebrew, and if you were a Hebrew speaker, you didn’t need to know a single English word to use it.

I wasn’t the only one who did this of course. There were plenty of other people who did this before I joined the effort, and plenty of others who helped along the way: Rotem Dan, Ofra Hod, Yaron Shahrabani, Rotem Liss, Or Shapiro, Shani Evenshtein, Inkbug (whose real name I don’t know), and many others. But back then in 2011 it was I who made a conscious effort to get to 100%. It took me quite a few weeks, but I made it.

Of course, the software that powers Wikipedia changes every single day. So the day after the translations statistics got to 100%, they went down to 99%, because new messages to translate were added. But there were just a few of them, and it took me a few minutes to translate them and get back to 100%.

I’ve been doing this almost every day since then, keeping Hebrew at 100%. Sometimes it slips because I am traveling or I am ill. It slipped for quite a few months because in late 2014 I became a father, and a lot of new messages happened to be added at the same time, but Hebrew is back at 100% now. And I keep doing this.

With the sincere hope that this will be useful for translating the software behind Wikipedia to your language, let me tell you how.

Preparation

First, let’s do some work to set you up.

  • Get a translatewiki.net account if you haven’t already.
  • Make sure you know your language code.
  • Go to your preferences, to the Editing tab, and add languages that you know to Assistant languages. For example, if you speak one of the native languages of South America like Aymara (ay) or Quechua (qu), then you probably also know Spanish (es) or Portuguese (pt), and if you speak one of the languages of the former Soviet Union like Tatar (tt) or Azerbaijani (az), then you probably also know Russian (ru). When available, translations to these languages will be shown in addition to English.
  • Familiarize yourself with the Support page and with the general localization guidelines for MediaWiki.
  • Add yourself to the portal for your language. The page name is Portal:Xyz, where Xyz is your language code.

Priorities, part 1

The translatewiki.net website hosts many projects to translate beyond stuff related to Wikipedia. It hosts such respectable Free Software projects as OpenStreetMap, Etherpad, MathJax, Blockly, and others. Also, not all the MediaWiki extensions are used on Wikimedia projects; there are plenty of extensions, with thousands of translatable messages, that are not used by Wikimedia but only on other sites; they nevertheless use translatewiki.net as the platform for translating their user interface.

It would be nice to translate all of it, but because I don’t have time for that, I have to prioritize.

On my translatewiki.net user page I have a list of direct links to the translation interface of the projects that are the most important:

  • Core MediaWiki: the heart of it all
  • Extensions used by Wikimedia: the extensions on Wikipedia and related sites
  • MediaWiki Action API: the documentation of the API functions, mostly interesting to developers who build tools around Wikimedia projects
  • Wikipedia Android app
  • Wikipedia iOS app
  • Installer: MediaWiki’s installer, not used in Wikipedia because MediaWiki is already installed there, but useful for people who install their own instances of MediaWiki, in particular new developers
  • Intuition: a set of different tools, like edit counters, statistics collectors, etc.
  • Pywikibot: a library for writing bots—scripts that make useful automatic edits to MediaWiki sites.

I usually don’t work on translating other projects unless all of the above projects are 100% translated to Hebrew. I occasionally make an exception for OpenStreetMap or Etherpad, but only if there’s little to translate there and the untranslated MediaWiki-related projects are not very important.

Priorities, part 2

So how can you know what is important among more than 15,000 messages from the Wikimedia universe?

Start from the "MediaWiki most important messages" group. If your language is not at 100% on this list, it absolutely must be. The list is automatically recreated periodically by counting which 600 or so messages are actually shown most frequently to Wikipedia users. It includes messages from MediaWiki core and a bunch of extensions, so when you're done with it, you'll see that the statistics for several groups have improved by themselves.

Now, if the translation of MediaWiki core to your language is not yet at 18%, get it there. Why 18%? Because that’s the threshold for exporting your language to the source code. This is essential for making it possible to use your language in your Wikipedia (or Incubator). It will be quite easy to find short and simple messages to translate (of course, you still have to do it carefully and correctly).

Getting Things Done, One by One

Once you have the most important MediaWiki messages at 100% and at least 18% of MediaWiki core translated to your language, where do you go next?

I have surprising advice.

You need to get everything to 100% eventually. There are several ways to get there. Your mileage may vary, but I’m going to suggest the way that worked for me: Complete the easiest piece that will get your language closer to 100%! For me this is an easy way to strike an item off my list and feel that I accomplished something.

But still, there are so many items you could start with! So here's my selection of components that are more user-visible and less technical, sorted not by importance, but by the number of messages to translate:

  • Cite: the extension that displays footnotes on Wikipedia
  • Babel: the extension that displays boxes on userpages with information about the languages that the user knows
  • Math: the extension that displays math formulas in articles
  • Thanks: the extension for sending “thank you” messages to other editors
  • Universal Language Selector: the extension that lets people select the language they need from a long list of languages (disclaimer: I am one of its developers)
    • jquery.uls: an internal component of Universal Language Selector that has to be translated separately for technical reasons
  • Wikibase Client: the part of Wikidata that appears on Wikipedia, mostly for handling interlanguage links
  • VisualEditor: the extension that allows Wikipedia articles to be edited in a WYSIWYG style
  • ProofreadPage: the extension that makes it easy to digitize PDF and DjVu files on Wikisource
  • Wikibase Lib: additional messages for Wikidata
  • Echo: the extension that shows notifications about messages and events (the red numbers at the top of Wikipedia)
  • MobileFrontend: the extension that adapts MediaWiki to mobile phones
  • WikiEditor: the toolbar for the classic wiki syntax editor
  • ContentTranslation: the extension that helps translate articles between languages (disclaimer: I am one of its developers)
  • Wikipedia Android mobile app
  • Wikipedia iOS mobile app
  • UploadWizard: the extension that helps people upload files to Wikimedia Commons comfortably
  • Flow: the extension that is starting to make talk pages more comfortable to use
  • Wikibase Repo: the extension that powers the Wikidata website
  • Translate: the extension that powers translatewiki.net itself (disclaimer: I am one of its developers)
  • MediaWiki core: the base MediaWiki software itself!

I put MediaWiki core last intentionally. It’s a very large message group, with over 3000 messages. It’s hard to get it completed quickly, and to be honest, some of its features are not seen very frequently by users who aren’t site administrators or very advanced editors. By all means, do complete it, try to do it as early as possible, and get your friends to help you, but it’s also OK if it takes some time.

Getting All Things Done

OK, so if you translate all the items above, you’ll make Wikipedia in your language mostly usable for most readers and editors.

But let’s go further.

Let’s go further not just for the sake of seeing pure 100% in the statistics everywhere. There’s more.

As I wrote above, the software changes every single day. So do the translatable messages. You need to get your language to 100% not just once; you need to keep doing it continuously.

Once you make the effort of getting to 100%, it will be much easier to keep it there. This means translating some things that are used rarely (but used nevertheless; otherwise they’d be removed). This means investing a few more days or weeks into translating-translating-translating.

You’ll be able to congratulate yourself not only upon the big accomplishment of getting everything to 100%, but also upon the accomplishments along the way.

One strategy to accomplish this is translating extension by extension. This means going to your translatewiki.net language statistics page (here's an example with Albanian, but choose your own language). Click "expand" on MediaWiki, then "expand" again on "MediaWiki Extensions", then on "Extensions used by Wikimedia" and finally on "Extensions used by Wikimedia – Main". Similarly to what I described above, find the smaller extensions first and translate them. Once you're done with all the Main extensions, do all the extensions used by Wikimedia. (Going on to all extensions, beyond Extensions used by Wikimedia, helps users of those extensions, but doesn't help Wikipedia very much.) This strategy can work well if you have several people translating to your language, because it's easy to divide the work by topic.

Another strategy is quiet and friendly competition with other languages. Open the statistics for Extensions Used by Wikimedia – Main and sort the table by the “Completion” column. Find your language. Now translate as many messages as needed to pass the language above you in the list. Then translate as many messages as needed to pass the next language above you in the list. Repeat until you get to 100%.

For example, here’s an excerpt from the statistics for today:

MediaWiki translation stats example

Let's say that you are translating to Malay. You only need to translate eight messages to go up a notch (901 – 894 + 1). Then six messages more to go up another notch (894 – 888). And so on.

Once you’re done, you will have translated over 3,400 messages, but it’s much easier to do it in small steps.
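The notch-by-notch arithmetic can be sketched in a few lines. This is just an illustration of the counting trick; the function name is mine, and the numbers come from the Malay example above:

```python
def steps_to_pass(my_untranslated, counts_above):
    """How many messages to translate to pass each language above yours.

    counts_above lists the untranslated-message counts of the languages
    above you, nearest first. "Passing" a language means ending up with
    fewer untranslated messages than it has.
    """
    steps = []
    remaining = my_untranslated
    for target in counts_above:
        step = remaining - target + 1  # get one message below the target
        steps.append(step)
        remaining = target - 1
    return steps

# Malay example from the excerpt: 901 untranslated, next languages at 894 and 888.
print(steps_to_pass(901, [894, 888]))  # [8, 6]
```

Each step is small, which is exactly what makes the competition strategy sustainable.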

Once you get to 100% in the main extensions, do the same with all the Extensions Used by Wikimedia. It’s over 10,000 messages, but the same strategies work.

Good Stuff to Do Along the Way

Never assume that the English message is perfect. Never. Do what you can to improve the English messages.

Developers are people just like you. They may know their code very well, but they may not be the most brilliant writers. Though some messages are written by professional user experience designers, many are written by the developers themselves, and developers are not necessarily good writers or designers, so the messages they write in English may not be perfect. Keep in mind that many, many MediaWiki developers are not native English speakers; a lot of them are from Russia, the Netherlands, India, Spain, Germany, Norway, China, France and many other countries. English is foreign to them, and they may make mistakes.

So report problems with the English messages to the translatewiki Support page. (Use the opportunity to help other translators who are asking questions there, if you can.)

Another good thing is to do your best to try running the software that you are translating. If there are thousands of messages that are not translated to your language, then chances are that it’s already deployed in Wikipedia and you can try it. Actually trying to use it will help you translate it better.

Whenever relevant, fix the documentation displayed near the translation area. Strange as it may sound, it is possible that you understand the message better than the developer who wrote it!

Before translating a component, review the messages that were already translated. To do this, click the “All” tab at the top of the translation area. It’s useful for learning the current terminology, and you can also improve them and make them more consistent.

After you gain some experience, create a localization guide in your language. There are very few of them at the moment, and there should be more. Here’s the localization guide for French, for example. Create your own with the title “Localisation guidelines/xyz” where “xyz” is your language code.

As in Wikipedia, Be Bold.

OK, So I Got to 100%, What Now?

Well done and congratulations.

Now check the statistics for your language every day. I can't emphasize enough how important it is to do this every day.

The way I do this is to keep a list of links on my translatewiki.net user page. I click them every day, and if there's anything new to translate, I immediately translate it. Usually there is just a small number of new messages to translate; I didn't measure precisely, but usually it's fewer than 20. Quite often you won't have to translate from scratch, but rather update the translation of a message that changed in English, which is usually even faster.

But what if you suddenly see 200 new messages to translate? It happens occasionally. Maybe several times a year, when a major new feature is added or an existing feature is changed.

Basically, handle it the same way you got to 100% before: step by step, part by part, day by day, week by week, notch by notch, and get back to 100%.

But you can also try to anticipate it. Follow the discussions about new features, check out new extensions that appear before they are added to the Extensions Used by Wikimedia group, and consider translating them when you have a few spare minutes. In the worst case, they will never be used by Wikimedia, but they may be used by somebody else who speaks your language, and your translations will definitely feed the translation memory database that helps you and other people translate more efficiently and easily.

Consider also translating other useful projects: OpenStreetMap, Etherpad, Blockly, Encyclopedia of Life, etc. Up to you. The same techniques apply everywhere.

What Do I Get for Doing All This Work?

The knowledge that, thanks to you, people who read your language can use Wikipedia without having to learn English. Awesome, isn't it? Some people call it "good karma".

Oh, and enormous experience with software localization, which is a rather useful job skill these days.

Is There Any Other Way in Which I Can Help?

Yes!

If you find this post useful, please translate it to other languages and publish it in your blog. No copyright restrictions, public domain (but it would be nice if you credit me and send me a link to your translation). Make any adaptations you need for your language. It took me years of experience to learn all of this, and it took me about four hours to write it. Translating it will take you much less than four hours, and it will help people be more efficient translators.


Filed under: Free Software, localization, Wikipedia

by aharoni at April 07, 2017 07:34 PM

Weekly OSM

weeklyOSM 350

28/03/2017-03/04/2017


Geopedia now with YouTube. 1 | © Geopedia © OpenStreetMap contributors © Wikipedia CC-BY-SA

The OSM April fools of 2017

  • The map editor is now also available in a variant for companies with optimized look and feel.
  • weeklyOSM was first to report how OSMF managed to sell our OSM-Data to Google 😉
  • OSM uses WGS84 as its geodetic reference system, which is not coupled to the motion of the Earth's plates. In order to compensate for the differences from the most recent reference systems, such as the ETRS89 introduced in Europe, the coordinates in the OSM database are now being corrected. New versions are not created. This was reported on the official OSMF blog.

Mapping

  • In an answer to a beginner’s question on the German forum, user Galbinus shows (de) very succinctly how to create a new roundabout using iD. User Polyglot shows how to get the job done using JOSM.
  • A blog post by Marc Gemis on how to add data about immovable heritage structures in Belgium to crowdsourced projects such as OSM (HistOSM and Historic Places) and Wikidata.
  • Multiple German mappers criticize (de) (automatic translation) the app StreetComplete for creating too many changesets. The app aims to add missing tags, but creates one changeset for each changed tag (2 new tags on one object = 2 changesets). Its author is already in touch with the community. See also the bug report on GitHub.
  • John Whelan asks how to deal with landuse=residential areas nested within other landuse=residential areas. They are probably an artefact of mapping regions that are often very rural but nevertheless sparsely populated, for example in Africa, and are reinforced by certain HOT tasks.
  • Grant Slater continues his experiments with highly accurate GPS measurements based on the open-source tool RTKLIB, and he asks whether there are other mappers who experiment with Real-Time Kinematic (RTK) positioning, or who offer or need an RTCM stream (GPS correction signals).

Community

Imports

  • Approximately 9.8 million building footprints have been made available by Microsoft under the ODbL licence. On the Talk-US mailing list, there was a discussion on whether this data could be imported into OSM.
  • Christoph Hormann discovered a very questionable MapRoulette challenge for the location correction of islands in polar regions, which degrades data quality. In his user blog (numerous comments) he asks whether such challenges should be treated as mechanical edits.

Events

  • The State of the Map Asia 2017 is looking for a logo.
  • The SotM Working Group starts early to search for venues for the State of the Map 2018.

Humanitarian OSM

  • A letter from Mocoa, Colombia: “Thank you for the collaboration; 48 hours later we have the first results on the map for the work of humanitarian teams on the ground. Now we need your help, through donations, to sustain our humanitarian mapping unit (UMH), which collects post-disaster data on the ground.”
  • Benjamin Herfort from the University of Heidelberg has published an article in the GIScience News Blog titled “10 Million Contributions: It’s time for MapSwipe Analytics!”. He refers, among other things, to several activities, and to a scientific article, “Towards evaluating the mobile crowdsourcing of geographic information about human settlements” (AGILE 2017 International Conference on Geographic Information Science), written by Herfort, B., Reinmuth, M., Porto de Albuquerque, M. J. and Zipf, A.

  • The OpenAerialMap project now has a much simpler way to upload new images.
  • Russell Deffner alerts us of the second round of the mapping challenge to eradicate malaria.

Maps

  • [1] Michael Schön showed (de) an updated version of Geopedia in a lightning talk at the FOSSGIS conference; it now also integrates YouTube and contains some interesting new features.
  • Alexander Matheisen explains on the OpenRailwayMap mailing list, why he will remove the experimental support of vector tiles from openrailwaymap.org.

switch2OSM

  • Mapbox writes about a newly launched Twitter feature that enables businesses to privately share and request locations from their customers.

Software

  • Osmconvert can now also perform simple tagging changes on OSM extracts. See the changed documentation for more information.
  • Robin Boldt and Emux worked together to create a mobile app for Kurviger route planner. Users can test a public beta release for Android systems.

Programming

  • David Valdmann writes in the Mapzen blog about curved lettering, which was introduced with Tangram.js 0.12.

Releases

Software Version Release date Comment
Magic Earth * 7.1.17.12 2017-03-28 Enhanced audio control, better search, performance improvements and bug fixes.
MapContrib 1.6.1 2017-03-28 Raised the payload limit to 50 MB.
Gnome Maps 3.24.0 2017-03-29 Many changes and improvements, please read change logs.
Komoot Android * var 2017-03-29 Minor enhancements.
Naviki Android * 3.57 2017-03-29 State notification with all connected Smart Bike systems, user interface revised.
Naviki iOS * 3.57 2017-03-29 User interface revised, bugfixes.
OSM Contributor 3.0.4 2017-03-29 No infos.
OSRM Backend 5.6.5 2017-03-29 Some bugfixes.
Jungle Bus 1.1 2017-03-30 Bus Contributor is now named Jungle Bus.
SQLite 3.18.0 2017-03-30 12 enhancements and five bugfixes.
Cruiser for Android * 1.4.18 2017-04-02 Graphic updated and various improvements.
Cruiser for Desktop * 1.2.18 2017-04-02 No infos.
JOSM 11826 2017-04-02 See release infos.
Mapillary Android * 3.41 2017-04-02 Record GPX track for every sequence, map option in sequence grid view, camera stability improvements.

Provided by the OSM Software Watchlist. Timestamp: 2017-04-03 16:21:53+02 UTC

(*) unfree software. See: freesoftware.

Did you know …

OSM in the media

  • A Slate article tries to show why burgers have become so expensive in Paris, using many different maps and visualizations based on OSM and the list of French companies, recently published as open data.

Other “geo” things

  • Google Map Maker is history since the end of March. TURN ON also reported about it and thinks about an alternative if one does not want to provide their information to Google.
  • According to Heise online, the camera vehicles are on the road again in Europe.
  • The flight simulator X-Plane-11 is now available. The terrain is based on a current OpenStreetMap.

Upcoming Events

Where What When Country
Fribourg SOSM Annual General Meeting and mapping party 08/04/2017 switzerland
Popayán #MappingPartyTulcan (Scout Mappers) 08/04/2017 colombia
Rennes Atelier de découverte 09/04/2017 france
Rennes Mapathon Missing Maps à Bréteil, Montfort 09/04/2017 france
Rennes Réunion mensuelle 10/04/2017 france
Rome Incontro Mappatori 10/04/2017 italy
Lyon Rencontre mensuelle libre 11/04/2017 france
Nantes Rencontres mensuelles 11/04/2017 france
Munich Münchner Stammtisch 11/04/2017 germany
Taipei OSM Taipei Meetup, MozSpace 11/04/2017 taiwan
Essen Stammtisch 13/04/2017 germany
Paris Paris Mapathon Missing Maps 13/04/2017 france
Manila MapAm❤re #PhotoMapping San Juan, San Juan 13/04/2017-16/04/2017 philippines
Berlin 106. Berlin-Brandenburg Stammtisch 14/04/2017 germany
Tokyo 東京!街歩き!マッピングパーティ:第7回 小石川後楽園 15/04/2017 japan
Manila FEU YouthMappers Mapillary Workshop, Manila 17/04/2017 philippines
Bonn Bonner Stammtisch 18/04/2017 germany
Scotland Edinburgh 18/04/2017 united kingdom
Lüneburg Mappertreffen Lüneburg 18/04/2017 germany
Nottingham Nottingham Pub Meetup 18/04/2017 uk
Moscow Schemotechnika 09 18/04/2017 russia
Karlsruhe Stammtisch 19/04/2017 germany
Portland Portland Mappy Hour 19/04/2017 united states
Osaka もくもくマッピング! #05 19/04/2017 japan
Leoben Stammtisch Obersteiermark 20/04/2017 austria
Zaragoza Mapeado Colaborativo 21/04/2017 spain
Kyoto 【西国街道#03】桜井駅跡と島本マッピングパーティ 22/04/2017 japan
Misiones Charla Mapas Libres en FLISoL, Posadas 22/04/2017 argentina
Bremen Bremer Mappertreffen 24/04/2017 germany
Graz Stammtisch Graz 24/04/2017 austria
Kinmen Shang Yi Airport Do mapping Kinmen by youself 24/04/2017-25/04/2017 taiwan
Avignon State of the Map France 2017 02/06/2017-04/06/2017 france
Kampala State of the Map Africa 2017 08/07/2017-10/07/2017 uganda
Champs-sur-Marne (Marne-la-Vallée) FOSS4G Europe 2017 at ENSG Cité Descartes 18/07/2017-22/07/2017 france
Curitiba FOSS4G+State of the Map Brasil 2017 27/07/2017-29/07/2017 brazil
Boston FOSS4G 2017 14/08/2017-19/08/2017 USA
Aizu-wakamatsu Shi State of the Map 2017 18/08/2017-20/08/2017 japan
Boulder State of the Map U.S. 2017 19/10/2017-22/10/2017 united states
Buenos Aires FOSS4G+State of the Map Argentina 2017 23/10/2017-28/10/2017 argentina
Lima State of the Map LatAm 2017 29/11/2017-02/12/2017 perú

Note: If you would like to see your event here, please add it to the calendar. Only data which is in the calendar will appear in weeklyOSM. Please check your event in our public calendar preview and correct it where appropriate.

This weeklyOSM was produced by Nakaner, Peda, Polyglot, Rogehm, Spec80, SrrReal, YoViajo, derFred, jinalfoflia, keithonearth, wambacher.

by weeklyteam at April 07, 2017 07:33 PM

Wikimedia Foundation

MisinfoCon: The internet’s biggest properties converge to fight fake news

Photo by Aubrie Johnson, public domain/CC0.

There is no end to the examples of fake news cited by Wikipedia articles. The list of premature obituaries, for example, has grown considerably since the dawn of internet hoaxes, thanks to how easily misinformation can spread.

Upworthy, KQED, Snopes, the Trust Project, and even the U.S. Department of Homeland Security all recently convened in Boston to answer one monumental question that’s been quietly looming over our heads: in a world bursting with free-flowing content, how do we stop the spread of misinformation?

In February, the Wikimedia Foundation joined a handful of media organizations at the MIT Media Lab to lend its expertise at MisinfoCon, a summit and hack-a-thon dedicated to addressing the ever-growing problem of fake news online.

Propaganda, though often considered a bygone marketing tool, is nothing new. Native advertising presents ads disguised as legitimate news articles. Clickbait spreads as consumers trade false claims that appear at first glance to be true and verified. Seemingly innocuous memes go viral on social media, some containing falsehoods that churn furiously through news feeds. With more people getting their news from the internet than from anywhere else, these are the problems most MisinfoCon attendees came to solve.

Media literacy organization First Draft demonstrated their new Google Chrome extension, NewsCheck, which lets viewers investigate an image or video’s authenticity together by completing a survey checklist and assessing the results. The Berkeley Institute for Data Science (BIDS) designed software to help anyone on the internet collaborate and fact check with others. During the summit, they referenced Wikipedia’s community of volunteer editors to better inform their workflow.

First Draft and BIDS already have credibility indicators in place, as do Wikipedia’s editors. While more educators, librarians, scientists, and engineers chipped away at their projects, a small cohort broke away to look at what makes content credible for online news as we know it.

All digital content contains some type of metadata—timestamps, file sizes, meta tags, etc. If we could attach metadata to any online content that would indicate its credibility, what would that include? We asked this and many other questions during the breakaway session, and came to several rough conclusions vaguely similar to Wikipedia’s guidelines for verifiability:

  • Origin and motivation: Who provided the claim, and when?
  • Byline: Who is taking credit for the claim’s research and writing?
  • Sourcing: Is it possible to track down the writer’s sources? Are they clearly attributed?
  • Cost of verification: Who does this article benefit financially?
  • Tone and typology: Does the content intend to inform, or convince? Is it descriptive or prescriptive?

Prototyping the “metadata of news” is still in the works. Wikipedians have been refining indicators for credibility for sixteen years, laying solid groundwork for the rest of the web at large. Organizations like the News Literacy Project are training middle and high school students to utilize media literacy skills, giving them the tools to investigate dubious claims, and encouraging them to teach older generations.

Although nearly every project revolved around social media, representatives from the two biggest platforms in the game were noticeably absent: Facebook and Twitter. Despite this, many tools unveiled at MisinfoCon can be used across platforms and across nations. Some projects proposed ideas to cut off revenue and incentives to sites promoting fake news, instead rewarding organizations that prioritize newsroom diversity. Another, Hypothes.is, essentially adds a "Talk page" layer to every accessible page, from academia to memes, allowing critical analysis of all the web has to offer. Even the fake bits.

Aubrie Johnson, Social Media Associate (Contractor)
Wikimedia Foundation

by Aubrie Johnson at April 07, 2017 05:11 PM

Gerard Meijssen

#Wikidata - #Perfection or #progress

When you consider the intention of the "BLP", or "Biographies of Living People", policy, you will find that it is defensive. It is the result of court cases brought against the Wikimedia Foundation or Wikipedians by living people. The result was a restrictive policy that intends to enforce the use of "sources" for all statements about living people.

The upside was fewer court cases; the downside, administrators who blindly applied this policy, particularly on the big Wikipedias. Many people left; they no longer edit Wikipedia.

At Wikidata there are proponents of enforcing a BLP explicitly so that they have the "mandate" to block people when they consider them too often in violation of such a policy.

For a reality check: there are many known BLP issues in Wikidata that are not taken care of. There are tools, like the one by Pasleim, that make it easy to do so. There have been no external complaints about Wikidata so far, but internal complaints, complaints about the quality of descriptions for instance, are easily waved away.

The implementation of a "DLP", or "Data of Living People", policy where "sources" are mandatory would kill much of the work done at Wikidata and would have no effect on the existing backlog. Removing the backlog would take away much of Wikidata's usability and would prove to be even worse.

In order to responsibly consider new policies, first reflect on the current state of the project. What issues need to be addressed? What can be done to focus attention on the areas where it is most needed? How can we leverage what we know in other projects and in external sources? When it is really urgent, make a cost analysis and improve the usability of our software to support the needed progress. And yes, stop insisting on perfection; it is what you aim for. None of us is in a position to throw the first stone.
Thanks,
      GerardM


by Gerard Meijssen (noreply@blogger.com) at April 07, 2017 06:06 AM

Brion Vibber

ogv.js 1.4.0 released

ogv.js 1.4.0 is now released, with a .zip build or via npm. I'll try to push it to Wikimedia next week or so.

Live demo available as always.

New A/V sync

The main improvement is much smoother performance on slower machines, mainly from changing the A/V sync method to prioritize audio smoothness. This follows recommendations I’d received from video engineers at conferences: choppy audio is noticed by users much more strongly than choppy or out-of-sync video.

Previously, when ogv.js playback detected that video was getting behind audio, it would halt audio until the video caught up. This played all audio, and showed all frames, but could be very choppy if performance wasn’t good (such as in Internet Explorer 11 on an old PC!)

The new sync method instead keeps audio rock-solid, and allows video to get behind a little… if the video catches back up within a few frames, chances are the user won’t even notice. If it stays behind, we look ahead for the next keyframe… when the audio reaches that point, any remaining late frames are dropped. Suddenly we find ourselves back in sync, usually with not a single discontinuity in the audio track.
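The drop decision described above can be sketched roughly as follows. This is an illustrative simplification of the strategy, not the actual ogv.js code; the function and parameter names are made up:

```python
def should_drop(frame_time, audio_time, next_keyframe_time):
    """Audio-priority sync: audio is never paused. A late video frame is
    tolerated while it might still catch up; only once the audio clock
    has reached the next keyframe are the remaining late frames dropped,
    snapping playback back into sync without any audio discontinuity."""
    is_late = frame_time < audio_time
    return is_late and audio_time >= next_keyframe_time
```

For example, a frame timestamped at 0.5 s is kept while the audio clock is at 1.0 s and the next keyframe is at 2.0 s, but dropped once the audio clock passes 2.0 s.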

fastSeek()

The HTMLMediaElement API supports a fastSeek() method which is supposed to seek to the nearest keyframe before the request time, thus getting back to playback faster than a precise seek via setting the currentTime property.

Previously this was stubbed out with a slow precise seek; now it is actually fast. This enables a much better “scrubbing” experience given a suitable control widget, as can be seen in the demo by grabbing the progress thumb and moving it around the bar.
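In terms of behavior, fastSeek() resolves the requested time to the nearest keyframe at or before it. A minimal sketch of that lookup (the keyframe times are illustrative, and this is my restatement of the semantics, not ogv.js internals):

```python
import bisect

def fast_seek_target(keyframe_times, requested_time):
    """Nearest keyframe at or before requested_time, per fastSeek() semantics.

    keyframe_times must be sorted ascending; a request before the first
    keyframe clamps to it.
    """
    i = bisect.bisect_right(keyframe_times, requested_time) - 1
    return keyframe_times[max(i, 0)]

print(fast_seek_target([0.0, 2.5, 5.0, 7.5], 6.0))  # 5.0
```

Because decoding can start at the keyframe directly, no frames before the target need to be decoded and thrown away, which is what makes scrubbing responsive.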

VP9 playback

WebM videos using the newer, more advanced VP9 codec can use a lot less bandwidth than VP8 or Theora videos, making it attractive for streaming uses. A VP9 decoder is now included for WebM, initially supporting profile 0 only (other profiles may or may not explode) — that means 8-bit, 4:2:0 subsampling.

Other subsampling formats will be supported in the future; we can probably eventually figure out something to do with 10-bit, but don’t expect those to perform well. 🙂

The VP9 decoder is moderately slower than the VP8 decoder for equivalent files.

Note that WebM is still slightly experimental; the next version of ogv.js will make further improvements and enable it by default.

WebAssembly

Firefox and Chrome have recently shipped support for code modules in the WebAssembly format, which provides a more efficient binary encoding for cross-compiled code than JavaScript. Experimental wasm versions are now included, but not yet used by default.

Multithreaded video decoding

Safari 10.1 has shipped support for the SharedArrayBuffer and Atomics APIs which allows for fully multithreaded code to be produced from the emscripten cross-compiler.

Experimental multithreaded versions of the VP8 and VP9 decoders are included, which can use up to 4 CPU cores to significantly increase speed on suitably encoded files (using the -slices option in ffmpeg for VP8, or -tile_columns for VP9). This works reasonably well in Safari and Chrome on Mac or Windows desktops; there are performance problems in Firefox due to deoptimization of the multithreaded code.

This actually works in iOS 10.3 as well — however Safari on iOS seems to aggressively limit how much code can be compiled in a single web page, and the multithreading means there’s more code and it’s copied across multiple threads, leading to often much worse behavior as the code can end up running without optimization.

Future versions of WebAssembly should bring multithreading there as well, and likely with better performance characteristics regarding code compilation.

Note that existing WebM transcodes on Wikimedia Commons do not include the suitable options for multithreading, but this will be enabled on future builds.

Misc fixes

Various bits. Check out the readme and stuff. 🙂

What’s next for ogv.js?

Plans for future include:

  • replace the emscripten’d nestegg demuxer with Brian Parra’s jswebm
  • fix the scaling of non-exact display dimensions on Windows w/ WebGL
  • enable WebM by default
  • use wasm by default when available
  • clean up internal interfaces to…
  • …create official plugin API for demuxers & decoders
  • split the demo harness & control bar to separate packages
  • split the decoder modules out to separate packages
  • Media Source Extensions-alike API for DASH support…

Those’ll take some time to get all done and I’ve got plenty else on my plate, so it’ll probably come in several smaller versions over the next months. 🙂

I really want to get a plugin interface so people who want/need them and worry less about the licensing than me can make plugins for other codecs! And to make it easier to test Brian Parra’s jsvpx hand-ported VP8 decoder.

An MSE API will be the final ‘holy grail’ piece of the puzzle toward moving Wikimedia Commons’ video playback to adaptive streaming using WebM VP8 and/or VP9, with full native support in most browsers but still working with ogv.js in Safari, IE, and Edge.

by brion at April 07, 2017 12:13 AM

April 06, 2017

Wikimedia Tech Blog

Wikimedia REST API hits 1.0

Take the drop of knowledge that you want, how you want it, when you want it. Photo by José Manuel Suárez, CC BY 2.0.

The Wikimedia REST API (try it on the English Wikipedia) offers access to Wikimedia’s content and metadata in machine-readable formats. Focused on high-volume use cases, it tightly integrates with Wikimedia’s globally distributed caching infrastructure. As a result, API users benefit from reduced latencies and support for high request volumes. For readers, this means that content in apps and on the web loads more quickly. Editors have a more fluid and intuitive VisualEditor experience. Researchers and bot authors can work with Wikimedia content at volume, using formats that are widely supported.

The release of version 1 officially sees the REST API ready for stable production use. After two years of beta production, serving approximately 15 billion requests per month, we are now publicly committing to the stability guarantees set out in our versioning policy. Each entry point has a stability level ranging from experimental to stable. Experimental end points are subject to change without notice, while changes to unstable end points will be announced well in advance. Stable entry points are guaranteed to keep working for the lifetime of the v1 API as a whole. To allow for minor changes in the returned content formats without breaking clients, content types are versioned, and content negotiation using the HTTP Accept header is supported.
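To illustrate the content negotiation described above, a client can pin a content-type version via the HTTP Accept header. A minimal sketch, assuming an illustrative profile URI (the exact version string in production may differ):

```python
from urllib.request import Request

# Sketch: request a specific (assumed) version of the HTML content type
# via the Accept header; the API can then serve a matching representation
# under its versioning policy. The profile URI below is illustrative.
PROFILE = "https://www.mediawiki.org/wiki/Specs/HTML/1.2.1"

req = Request(
    "https://en.wikipedia.org/api/rest_v1/page/html/Earth",
    headers={"Accept": f'text/html; charset=utf-8; profile="{PROFILE}"'},
)
print(req.get_header("Accept"))
```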

———

The API documentation and sandbox are auto-generated from a specification, making it easy to discover and try out API end points.

Case study: Structured article HTML

The REST API simplifies working with content using structured and standardized formats. For article content, the Parsing team developed an HTML and RDFa specification exposing a host of structured information inside a regular HTML page. This information makes it possible to easily and correctly process complex content using regular HTML tools.

The VisualEditor WYSIWYG editor (see below) takes advantage of this information to power editing of complex content like template transclusions, media, and extension tags such as citations. The edited HTML is then saved via Parsoid, using its unique ability to cleanly convert edited HTML back to Wikitext syntax. Easy access to the full content information combined with the ability to edit is a huge simplification for anyone interested in working with Wikipedia and other Wikimedia projects’ article contents.

The VisualEditor edit environment uses the REST API to fetch structured HTML, switch to wikitext edit mode, and finally save changed HTML without introducing spurious wikitext diffs that would make reviewing changes difficult.

The REST API provides dedicated end points for each of these operations.

Case study: Page summaries

The upcoming page preview feature shows a brief summary of linked articles on hover. It fetches the data powering this preview from the REST API page summary end point.

One frequent need is compact summary information about an article in a structured format. To this end, the REST API offers a page summary end point. This endpoint is used to show quick previews for related articles in the Wikipedia Android App. Using the same API, the Reading web team is currently rolling out a similar page preview feature to the desktop web experience.
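For example, the summary end point is addressed by article title. A minimal sketch of building the request URL (the title handling here is an assumption for illustration; titles use underscores and must be percent-encoded):

```python
from urllib.parse import quote

# Sketch: build the REST API page-summary URL for a given article title.
def summary_url(title, project="en.wikipedia.org"):
    encoded = quote(title.replace(" ", "_"), safe="")
    return f"https://{project}/api/rest_v1/page/summary/{encoded}"

print(summary_url("Albert Einstein"))
# The JSON response includes fields such as "title" and "extract".
```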

Other functionality

The Wikipedia Android app has more than eight million users across the globe, and is almost entirely powered by the REST API. The main screen shows a feed of the most interesting and noteworthy articles powered by a set of feed endpoints. Mobile-optimized content is loaded through the mobile-sections endpoints. In an article, the user can get definitions for words using the definition endpoint, which offers structured Wiktionary data.

Since 2011, mobile hardware has improved faster than networks.

Some cross-project information is available at the special wikimedia.org domain. This includes mathematical formulae rendered by Mathoid to SVG, MathML or PNG (also available in each project’s API), as well as historical page view statistics for all projects in the metrics hierarchy.
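As an illustration of the metrics hierarchy, a per-article page view query can be built like this (a sketch; the parameter values shown are assumptions for the example):

```python
from urllib.parse import quote

# Sketch: build a per-article pageviews URL under the wikimedia.org
# metrics hierarchy. Dates are YYYYMMDD; all values are illustrative.
def pageviews_url(article, start, end, project="en.wikipedia",
                  access="all-access", agent="all-agents",
                  granularity="daily"):
    return ("https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
            f"{project}/{access}/{agent}/{quote(article, safe='')}/"
            f"{granularity}/{start}/{end}")

print(pageviews_url("Earth", "20170101", "20170131"))
```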

Technical background

Over the last few years, mobile client hardware and platform capabilities have improved at a faster pace than network bandwidth and latency. To better serve our users, we have reduced network use and improved the user experience by gradually shifting more frontend logic to clients. Starting with our Android and iOS apps, content and data is retrieved directly from APIs, and formatted on the client. As we gradually apply the same approach to the web by taking advantage of new web platform features like ServiceWorkers, our APIs are set to serve most of our overall traffic.

Wikimedia’s globally distributed caching network, with radio towers indicating the locations of datacenters. Colors of small circles indicate which datacenter clients are mapped to via GeoDNS. The Asia presence is planned, but not yet operational.

Large volume at low latency is the speciality of our globally distributed caching network. Over 96% of the 120k–200k requests per second are served straight from caches, typically from a caching data center geographically close to the client. However, achieving such hit rates requires a clean and predictable URL structure. Our classic action API uses query strings and offers a lot of flexibility to users, but this flexibility also limits the effectiveness of caching. In contrast, the REST API was designed to integrate with the caching layers from the start. Today, over 95.5% of REST API requests are served directly from cache. This directly improves the user experience, shaving dozens to hundreds of milliseconds off of the response time by fully processing most requests in the geographically closest caching data center.

Caching works extremely well to speed up the delivery of popular resources, but does not help with less popular ones. Expensive resources can take dozens of seconds to re-generate from scratch, which ties up server-side resources and is very noticeable to users. Furthermore, some use cases like visual editing also need guaranteed storage of matching metadata to complete an edit. We soon realized that caching alone was not enough: we needed storage with explicit control over resource lifetimes. This storage would ideally be available in both primary data centers at the same time (active-active), scale well to accommodate relatively large content types like HTML, and have low operational overheads. After some research and discussion we chose Cassandra as the storage backend, and implemented a fairly flexible REST table storage abstraction with an alternate backend using SQLite.

HyperSwitch: OpenAPI (Swagger) spec driven implementation

The OpenAPI specification (formerly Swagger) is widely used to clearly document APIs in a machine-readable manner. It is consumed by many tools, including the REST API documentation sandbox, our own API monitoring tool, and many of our API unit tests. Typically, such specs are maintained in parallel with the actual implementation, which risks inconsistencies and creates some duplicated effort. We wanted to avoid those issues, so we decided to drive the API implementation entirely with OpenAPI specs using the hyperswitch framework. This move has worked very well for us, and has allowed us to easily customize APIs for 743 projects driven by a single configuration file. A variety of modules and filters implement distributed rate limiting, systematic metric collection and logging, storage backends, content-type versioning, and access restrictions.
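To illustrate the general idea of a spec-driven service (this is a generic OpenAPI/Swagger 2.0 fragment, not hyperswitch's actual configuration format), routing and documentation can both be derived from a document like this:

```yaml
# Minimal, generic OpenAPI (Swagger 2.0) fragment. A spec-driven framework
# such as hyperswitch attaches its own vendor extensions to documents like
# this to wire routes to backends, filters, and rate limits.
swagger: '2.0'
info:
  title: Example REST API
  version: 1.0.0
paths:
  /page/summary/{title}:
    get:
      summary: Get a brief summary of a page
      parameters:
        - name: title
          in: path
          required: true
          type: string
      responses:
        '200':
          description: Page summary
```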

Next steps

The v1 release is just the beginning for the REST API. Over the next year, we expect traffic to grow significantly as high-volume features are rolled out, and public adoption grows. Functionality will expand to support high-volume use cases, and experimental endpoints will graduate first to unstable and eventually to stable status as we gain confidence in each endpoint’s usability.

One focus area over the next year will be preparing a more scalable storage backend for efficient archiving of HTML, metadata and wiki markup. Eventually, we would like to reliably offer the full edit history of Wikimedia projects as structured data via stable URIs, ensuring that our history will remain available for all to use, enabling use cases such as article citations.

We look forward to learning about the many expected and unexpected uses of this API, and invite you to provide input into the next API iteration on this wiki talk page.

Gabriel Wicke, Principal Software Engineer, Wikimedia Services
Marko Obrovac, Senior Software Engineer (Contractor), Wikimedia Services
Eric Evans, Senior Software Engineer, Wikimedia Services
Petr Pchelko, Software Engineer, Wikimedia Services

Wikimedia Foundation

by Gabriel Wicke, Marko Obrovac, Eric Evans and Petr Pchelko at April 06, 2017 07:42 PM

How we know what we know: The Initiative for Open Citations (I4OC) helps unlock millions of connections between scholarly research

Like a submarine far below the surface sends intelligence to stations on land, a web of scholarly citations underlies and connects our world of knowledge. Photo by Lt. Ed Early/US Navy, public domain/CC0.

Citations are the backbone of scholarly knowledge. They help researchers verify information, build on the existing knowledge we already know, and generate opportunity for new discoveries.

Citations are not only relevant to academia. They are the foundation for how we know what we know.

Until recently, the idea of creating a freely accessible repository of open citation data—i.e. data representing how scholarly works cite each other—has been hampered by restrictive and inconsistent licenses and by the lack of machine-readable reference data.

Today, we are proud to announce a key milestone toward unlocking the potential for open citation data.

———

The Wikimedia Foundation, in collaboration with 29 publishers and a network of organizations, including the Public Library of Science (PLOS), the Internet Archive, Mozilla, the Bill & Melinda Gates Foundation, the Wellcome Trust, and many others, announced the Initiative for Open Citations (I4OC), which aims to make citation data freely available for anyone to access.

Scholarly publishers deposit the bibliographic record and raw metadata for their publications to Crossref. Thanks to a growing list of publishers participating in I4OC, reference metadata for nearly 15 million scholarly papers in Crossref’s database will become available to the public without copyright restriction.[1] This data includes bibliographic information (like the title of a paper, its author(s), and publication date), machine-readable identifiers like DOIs (Digital Object Identifier, a common way to identify scholarly works), as well as data on how papers reference one another. It will help draw connections within scientific research, find and surface relevant information, and enrich knowledge in places like Wikipedia and Wikidata.

Citation data are not subject to copyright in the way that scholarly articles themselves may be; they typically rest in the public domain, free for anyone to access. Until recently, however, much of the citation data in the scientific research world has been difficult to find, surface, and access. “It is a scandal,” wrote David Shotton in Nature in 2013, “that reference lists from journal articles—core elements of scholarly communication that permit the attribution of credit and integrate our independent research endeavours—are not readily and freely available.”

Before I4OC started, publishers releasing references in the open accounted for just 1% of the publications registered with Crossref. As of the launch of the initiative, more than 40% of this data has become freely available.

As of March 2017, the fraction of publications with open references has grown from 1% to more than 40% of the nearly 35 million articles with references deposited with Crossref (to date). Image by Dario Taraborelli, public domain/CC0.

Like sources cited within a Wikipedia article, references cited within a scholarly article can help build powerful discovery tools and a stronger foundation for open knowledge.

Volunteer contributors and software developers in the Wikimedia movement have been curating and incorporating scholarly citations into the Wikimedia projects for quite some time. The GeneWiki project has been linking reference sources to information about genes, proteins, and diseases in Wikipedia and Wikidata. Initiatives like WikiCite aim to create a bibliographic database in Wikidata to serve all Wikimedia projects. The LibraryBase project is building tools to better understand how information in Wikipedia is referenced and guide how editors identify and use references on Wikipedia. The WikiFactMine project is helping connect Wikidata statements in the field of biomedical sciences to scholarly literature. Programmatic initiatives such as 1lib1ref are engaging librarians to add missing citations to Wikipedia, and services like Citoid are simplifying the discoverability and creation of citations for free knowledge.

These projects depend on the availability of open bibliographic and citation data. We expect I4OC will substantially contribute to all these initiatives.

Example of a partial citation graph for Laemmli (1970), one of the most cited scholarly journal articles of all time. Graph generated from open citation data in Wikidata via a SPARQL query. Image by Dario Taraborelli, public domain/CC0.

Over the coming months, the organizations involved in I4OC will be working with different stakeholders to raise awareness of the availability of open citation data and evaluate how it can be reused, analyzed, and built upon. We will provide regular updates on the growth of the public citations corpus, how the data is being used, additional stakeholders and participating publishers, and new services that are being developed.

Any publisher can freely license and share their reference data by enabling reference distribution via Crossref. For more information and details on how to get involved, please visit the I4OC website: https://i4oc.org or follow @i4oc_org on Twitter.
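Once a publisher has enabled reference distribution, a work's open reference list can be retrieved from the Crossref REST API. A minimal sketch (the DOI and sample response below are illustrative; the `reference` field appears only where a publisher has opened its references):

```python
# Sketch: build the Crossref REST API URL for a single work, identified
# by DOI, and pull the reference list out of a response. The DOI and the
# sample response are illustrative.
CROSSREF_WORKS = "https://api.crossref.org/works/"

def work_url(doi):
    return CROSSREF_WORKS + doi

def open_references(response_json):
    # Crossref wraps the record in a "message" object; "reference" is
    # only present where the publisher distributes references openly.
    return response_json.get("message", {}).get("reference", [])

print(work_url("10.1000/example"))
sample = {"message": {"DOI": "10.1000/example",
                      "reference": [{"DOI": "10.1000/cited"}]}}
print(len(open_references(sample)))
```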

A joint press release about the announcement is available on the I4OC website.

Dario Taraborelli, Director, Head of Research, Wikimedia Foundation
Jonathan Dugan, WikiCite organizing committee

[1] As of March 2017, nearly 35 million articles with references have been registered with Crossref. Citation data from the Crossref REST API will be made available shortly after the announcement.

Founders

  • OpenCitations
  • Wikimedia Foundation
  • PLOS
  • eLife
  • DataCite
  • Centre for Culture and Technology, Curtin University

Participating publishers

  • American Geophysical Union
  • Association for Computing Machinery
  • BMJ
  • Co-Action Publishing
  • Cambridge University Press
  • Cold Spring Harbor Laboratory Press
  • Copernicus GmbH
  • eLife
  • EMBO Press
  • Faculty of 1000, Ltd.
  • Frontiers Media SA
  • Geological Society of London
  • Hamad bin Khalifa University Press (HBKU Press)
  • Hindawi
  • International Union of Crystallography
  • Leibniz Institute for Psychology Information
  • MIT Press
  • PeerJ
  • Pensoft Publishers
  • Portland Press
  • Public Library of Science
  • Royal Society of Chemistry
  • SAGE Publishing
  • Springer Nature
  • Taylor & Francis Group
  • The Rockefeller University Press
  • The Royal Society
  • Ubiquity Press, Ltd.
  • Wiley

Stakeholders

  • Alfred P. Sloan Foundation
  • Altmetric
  • Association of Research Libraries
  • Authorea
  • Bill & Melinda Gates Foundation
  • California Digital Library
  • Center for Open Science
  • Coko Foundation
  • Confederation of Open Access Repositories
  • ContentMine
  • Data Carpentry
  • Dataverse
  • dblp: computer science bibliography
  • Department of Computer Science and Engineering, University of Bologna
  • Dryad
  • Figshare
  • Hypothes.is
  • ImpactStory
  • Internet Archive
  • Knowledge Lab
  • Max Planck Digital Library
  • Mozilla
  • Open Knowledge International
  • OpenAIRE
  • Overleaf
  • Project Jupyter
  • rOpenSci
  • Science Sandbox
  • Wellcome Trust
  • Wiki Education Foundation
  • Wikimedia Deutschland
  • Wikimedia UK
  • Zotero

by Dario Taraborelli and Jonathan Dugan at April 06, 2017 04:33 PM

April 05, 2017

Wiki Education Foundation

The great “Women in geology” Wikipedia project

Glenn Dolphin is Tamaratt Teaching Professor in the Department of Geoscience at the University of Calgary. In this post he talks about assigning students to contribute to Wikipedia in his fall 2016 Introductory Geology course.

My name is Glenn. I was hired by the University of Calgary, in the Department of Geoscience, almost four years ago. My position is Tamaratt Teaching Professor in Geoscience. I have a Bachelor’s and Master’s degree in geology, but my PhD is in Science Education. I research learning in geology classrooms, and especially how to utilize the history of science to teach about science content and the process of science. The classes I teach are large enrollment (300-400 students) introductory classes. In response to claims in the literature that lecture-only courses are not very effective at facilitating student learning, I decided to reconfigure my introductory geology course for non-science majors into a more active learning environment. I incorporated a number of strategies to achieve this: breaking the entire class into small groups, using class time for small-group writing exercises and discussions, and assigning both a short and a long-term Wikipedia project.

I completely restructured the traditional “rocks for jocks” course to highlight three storylines: The earth is a historical entity, that history is very, very long, and the earth is a dynamic system. In general, I presented content in a historically contextualized manner. In doing research for the course, it became quite obvious that though there were plenty of contributions by women to geology, the record of those contributions was sorely lacking.

During the course generation phase, I read a post concerning the Wikipedia “Year of Science”. I contacted the Wiki Education group and asked how I might incorporate Wikipedia into my class. I have found that when students are producing something for the “real world” as opposed to just the instructor for a grade, they work much harder to ensure quality. I spoke with Samantha about how to have students produce something that would be bigger than the course. Her confidence and energy convinced me that though the entire course was new, adding this particular project would not be onerous. It wasn’t.

I mapped out two different projects: one mandatory for all small groups in the course, to contribute in a small way to the Wikipedia page of a “woman in geology”; the second a long-term project with the same focus, but incorporating a more substantial contribution. I was hesitant, at first, to incorporate these new projects, as I had no familiarity editing in Wikipedia, and I really did not want a lot of extra work and worry. This was also the biggest class Wiki Ed had ever worked with (355 students). They were actually eager to see if they could support such an effort. I (virtually) met with Helaine and Ian who assured me that they would be my resource people in case I ran into difficulties. They did not disappoint.

They helped me build my Wikipedia course structure, which trainings to post, how to manage the timing of the projects and how to evaluate them. When it came time for the projects to run, I just directed the students to the Wikipedia course page and the rest was taken care of. During the running of the projects, Wiki Ed also instituted a mechanism making it possible to view each student’s contributions to the various Wiki pages. This was incredibly useful for evaluating the students’ work for each of the projects.

I received a lot of positive feedback on the assignments, because despite the few constraints, they were left pretty open. The course was mainly for non-science majors, so if they wanted to, students could focus on the science of the woman geologist, or some other aspect of the woman’s biography, related to the science (e.g. social or political forces, gender bias, etc.). One woman in the class, a communications major working in the media, took it upon herself to find one of the women in geology who was still living. She called her and interviewed her for the project. The student said it was a great experience to integrate her media training with a science course (of all things), and to create this new piece of knowledge for a much broader audience than one would normally expect from an introductory science course.

By the end of both projects we had edited over 80 different pages for women in geology and created almost 40 pages that didn’t exist before. Students were very excited about two aspects: first, that they were doing something that anyone in the world could see, and second, that they could actually create something that never existed before and make it available for the whole world to see. As of the writing of this blog post, those 83 articles have had close to 300 thousand views. When our science faculty got wind of the project, and its success, they ran a story about it in the University news.

Image: The Wikipedia project.jpg, by Susan Cannon, CC BY-SA 4.0, via Wikimedia Commons.

by Guest Contributor at April 05, 2017 08:14 PM

Gerard Meijssen

#Wikimedia and our #quality

In Berlin, the Wikimedia Foundation deliberated about the future. A lot of noble intentions were expressed, and people went home glowing in anticipation of all the good things they want. It is good to talk the talk, and to follow up and walk the walk.

A top priority for Wikidata is that it is used and useful. As it becomes more useful, quality becomes more of a priority for the people who use it. They will actively curate the data and remedy issues because they have a stake in the outcome.

So far Wikidata is largely filled with information from all the Wikipedias, and this process can be improved substantially. For this to happen there is a need for more complete and up-to-date data. So what use can we give this data so that it gains use, and thereby gains value?

What if... What if Wikidata could be used as an instrument to find the 4% of wiki links in Wikipedia that point to the wrong articles? With some minor changes to the MediaWiki software this can be done; this approach is described here, for instance. The beauty of this proposal is that not all Wikipedians have to get involved: it is for those who care, and for the rest it is mostly business as usual.

There are other benefits as well. When it is "required" to add a source to a statement like "spouse of", it should be, or already is, a requirement on the Wikipedias as well. When the source is associated with the wiki link, or red link for that matter, it should be possible for Wikidata to pick it up manually or with software.

When content of Wikidata more closely mirrors information of a Wikipedia in this way, it becomes easy and obvious to compare this information with other Wikipedias. Overall quality improves, but as relevant, the assurance we can give about our quality improves.

When we consider Wikimedia for the next 15 years, I expect that we will focus on quality and prevent bias not only by combining all our resources but also by reaching out to other trusted sources. By working together we will expose a lot more fake facts.
Thanks,
       GerardM

by Gerard Meijssen (noreply@blogger.com) at April 05, 2017 11:27 AM

April 03, 2017

Tech News

Tech News issue #14, 2017 (April 3, 2017)


April 03, 2017 12:00 AM

April 02, 2017

Wikimedia Foundation

The big bear of a mission to chronicle the 1948 Cleveland Indians

Image by Bowman Gum, public domain/CC0.

At the end of the long and tiring 1948 baseball season, the Cleveland Indians found themselves in a tie with the Boston Red Sox for first place. Both teams had the same win-loss record, and in 22 games against each other, each had won 11 games.

The stage was set for the first one-game playoff in the history of the American League (AL), one of baseball’s two top leagues. It would be a simple tiebreaker. One team would win the AL pennant, advance to the World Series, and play for the overall championship. The other would go home.

The Indians were a charter member of the AL in 1901, but had seen relatively little success in many of the years since. In 1948, they were fighting to win their first pennant and championship in nearly three decades.

To face the Red Sox, the team selected rookie pitcher Gene Bearden. Bearden had won 19 games that season, including two against the Sox, but one might assume that his arm would be tired after pitching a game just two days earlier. But he didn’t play like it. Bearden pitched the entire game, allowing 3 runs on 5 hits, while his teammates knocked home 8 on 13 hits.

The Indians went on from the tiebreaker to win the World Series. They have not won another one in the nearly seventy years since then.

———

Wikipedia editor, Cleveland native, and baseball fan Wizardman, who admits to being “spoiled” by the Indians’ success when he was growing up in the 1990s, is hoping that the Indians will win another title this season, which starts on April 3. They had a great chance as recently as last year, when they “heartbreakingly” fell exactly one win short to the Chicago Cubs.

In the meantime, Wizardman is focusing on the Indians’ 1948 season and its 45 related articles on Wikipedia, many with surprisingly fascinating storylines.

“Now that I’ve delved fairly deep into it, it has shocked me just how many stories make up each individual player,” Wizardman said. “Significant events certainly weren’t limited to just Bob Feller‘s wartime service or Larry Doby breaking the American League color barrier. You’ve got a guy who was later banned from the minor leagues, a guy who suffered a cerebral hemorrhage on the field, and one of the rare ballplayers who hit four homers in a game.”

And that’s not even getting into nicknames. Feller, the ace on Cleveland’s pitching staff, was known as “The Heater from Van Meter”, “Bullet Bob”, and “Rapid Robert.” Al Gettel went by “Two Gun.” And Mike Garcia was affectionately called “Big Bear,” giving a name to Wizardman’s ongoing mission to improve all of the articles related to the team in that year: Operation Big Bear.

The operation’s mascot is a literal bear (above) that Wizardman has named “Dwight,” a reference to The Office character and a conversation he once had. Photo by Simm, public domain/CC0.

Of all the articles he has or plans to write about the team, Wizardman says that Don Black is probably his favorite. It was the first article he worked on that related to the 1948 Indians, and the surprising details he found—a sobriety-fueled career turnaround, the cerebral hemorrhage—helped him decide to create Operation Big Bear. “Had I picked a boring ballplayer first,” he said, “who knows if I would have continued.”

Remaining interested and engaged with the content you’re writing about is important, because it can be difficult to write Wikipedia articles about this time period. The site’s content is built on the concept of verifiability, meaning that information added needs to be cited to reliable sources. “Readers must be able to check that any of the information within Wikipedia articles is not just made up,” the page says.

But many of the reliable sources from the 1940s are still copyrighted in the United States, so they are rarely available through free resources like Google Books, HathiTrust, or the Internet Archive. That’s a major reason why, Wizardman notes, Wikipedia has a “recentist bend” that includes baseball, like the several Indians players from the 1990s with “good” articles written about them. For those in more recent times, there are more reliable sources available about their lives and careers, many of them free. For the opposite reason, “the same caliber of pre-Internet player is probably a stub [article],” he says.

Wizardman has found tricks to help him get around these difficulties. One major help has been access to the archives of Cleveland’s primary newspaper (the Plain Dealer) and Newspapers.com, the latter through the efforts of the Wikipedia Library. Another has been the relative ease of writing what he calls “decent” baseball articles, as “the game is naturally very stat-based,” he says.

But on the other hand, “making the leap from relying on stats and telling the ballplayer’s story is much more difficult.” For example, Wizardman may have access to the Plain Dealer, but it is rare for a player to have stayed in Cleveland for his entire career. “Tackling other aspects,” he said, “such as being traded to another team where sources are harder to come by, … can be challenging.” With limitations like that, Wizardman has been forced to concede that at least 19 of his 45 planned articles will never be finished to the standards of Wikipedia’s highest “featured” quality level.

Wizardman has had help in Operation Big Bear from Zepppep, an editor who according to Wizardman “didn’t edit on Wikipedia long, but did do a few of the articles on the 1948 Indians, and helped to re-kickstart [Operation Big Bear] after I had moved on to other things.” Of special importance to Wizardman was Zepppep’s work on Bob Feller’s article, “the one player more than any I wanted to get to [featured status], both due to his significance and the shape of the article when I first read it 10 years or so ago (it was bad).”

You too can help Wizardman on his mission, lest it take until 2030 to finish. Head over to Operation Big Bear and start working on one of the red, orange, or yellow articles.

Ed Erhart, Editorial Associate
Wikimedia Foundation

by Ed Erhart at April 02, 2017 02:31 PM

Gerard Meijssen

#Wikidata - #Quality is a #perspective.

Forget absolutes: absolute quality does not exist for Wikidata. At best, quality has attributes, attributes that can be manipulated and that interact. With 25,430,779 items, any approach to quality will have a potentially negative effect when quality is approached from a different perspective.

Yet we seek quality for our data and aim for quality to measurably improve. There are many possible perspectives, and each has value, a value that is strengthened when it is combined with other perspectives.

At the Wikimedia Foundation, the "Biographies of Living Persons" policy, or BLP, has a huge impact. When you consider this policy, it is about biographies, a Wikipedia thing, and that is not what Wikidata does. It is important to appreciate this, as it is a key argument when a DLP, "Data of Living Persons", is considered. Importantly, the BLP focuses on articles about living people, and its aim is to prevent lawsuits arising from articles that have a negative impact on living people.

Data is different: it is used differently and it has an impact in different ways. Take notability, for instance; a person may be notable and relevant because of having held an office or received an award. In order to complete the information on the succession of an office or an award, it is therefore essential to include all the persons involved in Wikidata. At the same time, incomplete information can affect a person as well: "you did not get that award because Wikidata does not say so".

Wikidata is incomplete and immature. Given the different perspectives on a DLP, most of them are not achievable in short order. The people who insist on a "source" for every statement would wipe out most of Wikidata's statements and force it to a standstill. The people who insist on completeness have an impossible full-time job for many years to come.

So what to do? Nothing is not an option but seeking ways to improve both quality and quantity is. A key value of Wikidata is its utility. The "Black Lunch Table" is one example of giving utility to Wikidata. They use Wikidata to manage the Wikipedia articles they want to write and expand on the notability of artists by including information on Wikidata. All the information helps people to write Wikipedia articles. Quality is important. Being included on the Black Lunch Table means something; artists are considered to be notable and worthy of a Wikipedia article.

Another example is using the links to authors so that people can read a book.

Given the size of Wikidata, it is impossible to get everything right in short order. When we can get people to adopt subsets of our data, these will grow. Our data will be linked. When we get to the stage where people actually object to data in Wikidata, we have improved both our quantity and quality substantially. As it is, looking at all the data, typically there is little to object to and that is in itself objectionable.
Thanks,
     GerardM

by Gerard Meijssen (noreply@blogger.com) at April 02, 2017 09:31 AM

#Wikimedia - First a #strategy, then #Action

The people at Open Library have books they love to share. They are in the process of opening what they have even more.

In a previous post it was mentioned that there is a JSON document to getting information on authors like Cicero. There are many works by Cicero and today they have a JSON document in production for the books as well.

So what possible scenario is there for the readers of any Wikipedia; they check in Open Library what books there are for Cicero (or any other authors). They download a book and read it.

Where we are:
  • there is an API informing about authors and their books at Open Library based on the Open Library identifier.
  • an app can now be built that shows this information
    • this app could use identifiers of other "Sources" like Wikidata, VIAF or whatever on the assumption that Wikidata links these "Sources".
    • this app could show information based on Wikidata statements in any language using Wikidata labels.
    • this app may download the book (maybe not yet but certainly in the future)
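The author lookup in the first bullet can be sketched in a few lines of Python. The `https://openlibrary.org/authors/<OLID>.json` endpoint pattern is Open Library's JSON API; the sample payload below is a trimmed, hypothetical record used only for illustration, not a real API response.

```python
import json
from urllib.request import urlopen

def author_json_url(ol_id: str) -> str:
    """Build the Open Library JSON endpoint for an author identifier."""
    return f"https://openlibrary.org/authors/{ol_id}.json"

def author_name(payload: dict) -> str:
    """Extract the display name from an author JSON document."""
    return payload.get("name", "")

def fetch_author(ol_id: str) -> dict:
    """Fetch and decode the live JSON document (requires network access)."""
    with urlopen(author_json_url(ol_id)) as resp:
        return json.load(resp)

# Trimmed, hypothetical payload shaped like an Open Library author record:
sample = {"key": "/authors/OL23919A", "name": "Marcus Tullius Cicero"}
print(author_json_url("OL23919A"))  # https://openlibrary.org/authors/OL23919A.json
print(author_name(sample))          # Marcus Tullius Cicero
```

An app would call `fetch_author` with an identifier obtained via Wikidata, then render the fields it needs using Wikidata labels for the reader's language.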

What next:
  • investigate the JSON and see what we already can do with it
    • publish the results and iterate
  • Add more identifiers of authors known to Open Library to Wikidata
    • there are many OL identifiers in the Freebase information; they need to be extracted, and a combined list of Wikidata identifiers and OL identifiers will allow OL to curate it for redirects, after which we can publish.
  • Raffaele Messuti pointed to existing functionality that retrieves an author ID for Wikidata and VIAF using an ISBN number.
    • Open Library knows about ISBN numbers for its books. When it runs the functionality for all the authors where it does not have a VIAF identifier it can enrich its database and share the information with Wikidata.
    • Alternatively, someone does this based on the exposed information at Open Library. :)
  • We add a link to Open Library in the {{authority control}} in Wikipedia
  • We could add information for nearby libraries like they do in Worldcat [1].
  • We can measure how popular it is; how many people we refer to Open Library or to their library.
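The combined-list step above can be sketched as a simple join: pair each Wikidata item with the Open Library identifier extracted from Freebase, and flag disagreements for Open Library to curate (for instance, redirects). The identifiers below are hypothetical sample data, and the function name `combine_identifiers` is mine, not an existing tool.

```python
def combine_identifiers(wikidata_to_ol: dict, freebase_to_ol: dict):
    """Pair Wikidata items with OL identifiers found in both sources,
    flagging disagreements for manual curation (e.g. OL redirects)."""
    combined, conflicts = [], []
    for qid, ol_id in wikidata_to_ol.items():
        fb_ol = freebase_to_ol.get(qid)
        if fb_ol is None or fb_ol == ol_id:
            combined.append((qid, ol_id))
        else:
            conflicts.append((qid, ol_id, fb_ol))
    return combined, conflicts

# Hypothetical sample data (Wikidata item -> OL author identifier):
wd = {"Q1541": "OL23919A", "Q9068": "OL26320A"}
fb = {"Q1541": "OL23919A", "Q9068": "OL9999999A"}  # second entry disagrees
combined, conflicts = combine_identifiers(wd, fb)
print(combined)   # [('Q1541', 'OL23919A')]
print(conflicts)  # [('Q9068', 'OL26320A', 'OL9999999A')]
```

The agreed pairs can be published directly, while the conflict list is exactly what Open Library would want to inspect for redirects.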
At the Wikimedia Foundation we aim to share in the sum of all knowledge. We aim to enable people to acquire information. Making this happen for people at Wikipedia, Open Library and their library is part of this mission; we just have to be bold and make it so.
Thanks,
      GerardM

by Gerard Meijssen (noreply@blogger.com) at April 02, 2017 08:07 AM

#Wikimedia - Sharing all #knowledge

It is strategy time at the Wikimedia Foundation. For me the overarching theme is: "Share in the sum of all knowledge". Ensuring that knowledge and information are available is not only an objective for us; it is an objective we share with organisations like the Internet Archive and the OCLC.

One of the activities of the Internet Archive is the "Open Library". It provides access over the Internet to books that are free to read. At Wikidata we include links for authors that are known to the Open Library, so all it takes is for a Wikipedia to have {{authority control}} on its authors and a link to Open Library is provided.

When you work together, a lot can be achieved. A file with identifiers for authors has been sent to the OCLC and Open Library. The reaction is that Open Library now includes, in the JSON for these authors, a link to both VIAF (a system by the OCLC) and Wikidata. This is the JSON for Mr Richard W. Townshend.

The next step is to optimise the process of including identifiers for both VIAF and Open Library. What we bring is our community. We have done a lot of work using Mix'n Match, we add identifiers when it seems opportune, and we already function as a stepping stone between Open Library and the OCLC. When we can target attention in Mix'n Match per language, making a match becomes a lot easier. It may also be possible for the OCLC and Open Library to match authors through publications; in that way technology is a deciding factor.

In the end there is only one point to all this: share in the sum of all knowledge. We all have a part to play.
Thanks,
       GerardM

by Gerard Meijssen (noreply@blogger.com) at April 02, 2017 05:42 AM

April 01, 2017

Wikimedia Foundation

What if people paid for Wikipedia, and only got a few articles? Now you can

Hamble, the Humble Bundle mascot, enjoys a vintage reading experience through a monocle with a printout of Wikipedia. Photo by Whitney Stutes

There are about 5.4 million articles on the English-language Wikipedia. That can be impressive – and a little overwhelming. It’s empowering to know that at any time you can—for free!—peruse lists of songs, and delve into the list of songs about vehicle crashes, and then read about the song “30,000 Pounds of Bananas,” and the real incident it is based on, and wow that’s actually kind of tragic, and did you know bananas are actually berries, and you can make beer out of them? And… wait, it’s 3 a.m.?

Rabbit hole. Gets ya every time. That’s where inspiration struck Humble Bundle, the San Francisco-based company that lets gamers buy a bundle of games while helping their favorite charities. Humble Bundle has long been a supporter of the Wikimedia Foundation, the nonprofit that supports Wikipedia and its sister projects.

A few weeks ago, the folks at Humble Bundle wondered, what if – instead of endless rabbit holes – people could get a little divot of Wikipedia, or maybe a pothole of Wikipedia, or a small ditch of their very own they could fall into? What if people could pay for a download of Wikipedia in staggeringly massive DRM-free text files?

“On a laserdisc!” suggested our forward-thinking Executive Director Katherine Maher. “Or a VHS tape!” Or a papyrus scroll! Or a thing where the whole encyclopedia is spelled out with Stonehenge-like tablets!

It turns out some of that wasn’t feasible.

Print shop. Photo by Daniel Chodowiecki, public domain.

But you can—you really, truly can—buy yourself a chunk of Wikipedia via Humble Bundle. You can download it for use whenever you want to look something up. You can even buy a slender printed volume! (Printing is a way of reproducing text and images using a master form or template.)

Dog in top hat. Photo by Bonque & Kindermann photography, public domain.

Humble Bundle is producing handsome bound volumes on topics including “Encyclopedia of English Language Metaphors,” “Encyclopedia of Commonly Misspelled Words,” (see what they did there?), and “Encyclopedia of the Metal Umlaut.”

Think of the vintage experience of perusing a volume of Wikipedia through your monocle before a crackling fire. In the article on the use of the umlaut in the name of heavy metal bands you read the explanation from Spın̈al Tap’s rocker David St. Hubbins: “It’s like a pair of eyes. You’re looking at the umlaut, and it’s looking at you.” Satisfied, you lay the book upon a doily on your oaken desk and pour a glass of sherry.

From now through Monday, you can order the Wikipedia bundle currently being featured from Humble Bundle.

The Wikimedia Foundation and open-license applications

Happy April Fool’s Day! If you get volume A from Humble Bundle, you can read all about the origins of this day on your own Wikipedia. Otherwise, you can look it up with everybody else online.

This offer, however, is very real, as is our sincere appreciation for Humble Bundle. This is their great idea, and the kind of creative generosity that makes the company a real gift to gaming and the internet.

Downloading and printing Wikipedia are possible because of free and open licenses that allow for other fantastical applications. Want to see a 4.9′ wide and 43′ tall representation of the complete text and images of the Wikipedia article for Magna Carta? Check out Magna Carta (An Embroidery), a 2015 work by English installation artist Cornelia Parker. Our licenses enable “Histography,” an online visualization of world history on a sliding scale. Print Wikipedia is an art project by Michael Mandiberg. It was displayed at the Denny Gallery in New York City, where 106 of the 7,473 volumes of English Wikipedia were printed, providing a small snapshot of what Wikipedia looked like on April 7, 2015. In addition to enabling these creative ventures, we believe free and open licenses—like the Creative Commons license utilized for Wikipedia’s content—are an important tool in making knowledge available to every human being.

About Humble Bundle

Humble Bundle sells digital content through its pay-what-you-want bundle promotions and the Humble Store. When purchasing a bundle, customers choose how much they want to pay and decide where their money goes – between the content creators, charity, and Humble Bundle. Since the company’s launch in 2010, Humble Bundle has raised more than $95 million through the support of its community for a wide range of charities, providing aid for people across the world. For more information, please visit www.humblebundle.com.

Jeff Elder, Digital Communications Manager
Wikimedia Foundation

by Jeff Elder at April 01, 2017 07:00 AM

March 31, 2017

Gerard Meijssen

#Wikidata - concentrating on #Fulbright ?

A friend told me to concentrate on substantial awards; the Fulbright scholarship, for instance. To me, concentrating on 325,000+ alumni is crazy. There are too many and, obviously, some of them will have turned out not to be so notable after all. I do not think Wikidata is a stamp or Pokémon collection either.

When you search for Fulbright in Reasonator, there is still plenty to do. There is a "Fulbright scholarship" and a "Fulbright Program"; they are about the same thing, so their content should be merged. And then there is the "Fulbright Prize"; it seems to have an article only on the Hebrew Wikipedia. There are also several items with no statements.

There is no reason for me to concentrate on all the Fulbright scholars. Given that the scholarship applies to so many people, slowly but surely more people will be tagged as such: not only the people who can be found in categories or lists, but also those for whom it is only mentioned in an article.

A scholarship implies studying at a university, so when you add a scholarship and there is no information about education, that is another aspect that needs taking care of. At some point it becomes obvious that it is better to concentrate on something else.
Thanks,
     GerardM

by Gerard Meijssen (noreply@blogger.com) at March 31, 2017 09:43 AM

Wikimedia UK

Guest post: Teaching competition law differently

Wikimedia UK’s educators’ workshop in summer 2016.

This post was written by Dr Pedro Telles, Senior Lecturer in Law at Swansea University and originally published on his website.

For the last couple of years with my colleague Richard Leonard-Davies I have been teaching competition law here at Swansea University and doing so in a very traditional and straightforward way: lectures focused on plenty of case law and seminars where we drilled down the details. As competition law is one of those topics that can be eminently practical, there was plenty of scope for improvement. As we run two separate Competition Law modules in different semesters (Agreements in the first, Dominance in the second) it is possible to make changes in only a part of the year.

About a year ago I found this blogpost by Chris Blattman on getting students to draft a Wikipedia article as part of their assessment. Blattman called this the creation of a public good while my preferred description is getting them to pay forward for the next lot. Immediately I thought, “hmmm let’s see the entries for competition law” and they were very underwhelming.

Fast forward a year, a few hoops and plenty of support from Wikimedia UK and we’re now in the position of starting the module with a new assessment structure that includes the (re)-drafting of a Wikipedia entry. Here’s the nitty gritty:

Assessment 1 (2,000 words)

For the first coursework you will have to choose from the topics covered this semester and check if it has a Wikipedia entry or not. Once you have selected a topic you will need to submit it for approval to either member of the teaching team. If an entry already exists you will critically analyse the entry by providing a report which encompasses the following:

–       Why you have chosen this topic

–       What is covered in the Wikipedia entry

–       What the entry does well

–       How the entry could be improved in your view (ie, caselaw, different perspectives, more recent doctrinal developments, context)

–       What aspects of the topic were not covered but should have been included

–       What sources (academic/case-law) you would use to reference the entry

We expect the piece to be factual on its description of the area of the law you decided to analyse but at the same time critical and reflective, basing yourself in good quality academic sources for the arguments you are presenting.

 

Assessment 2 (1,000 words)

For Coursework 2 you will be expected to put in action the comments and analysis from Coursework 1, ie you will be drafting an actual Wikipedia entry that improves on the strong points identified and addresses the weaknesses as well. This entry will be drafted on your Wikipedia dashboard (to be discussed in the Coursework Workshop in March) and will have to be submitted both on Turnitin and also uploaded to Wikipedia itself before the deadline.

Regular plagiarism rules apply, so if you pick an entry that already exists you are advised to re-write it extensively, which you should be doing nonetheless to implement the changes from Coursework 1. It is fundamental that you make the Turnitin submission prior to the Wikipedia one.

The drafting style for this entry will be very different from the first one (or any academic coursework for that matter) as you are no longer critiquing a pre-existing text, but creating an alternative one. As such, it is expected to be descriptive and thorough, providing a lay reader with an understanding of the topic at hand. For an idea, please check Wikipedia’s Manual of Style: https://en.wikipedia.org/wiki/Wikipedia:Writing_better_articles

What we are hoping for with this experiment is to get students out of their comfort zone and used to thinking and writing differently from the usual academic work. Instead of padding and adding superfluous material, they will be expected (and marked) to a different standard.

But that is not the only thing we’re changing as the seminars will also be quite different from the past. This year we will use WhatsApp as a competition law case study.

 

Why WhatsApp?

Well, when considering what company/product to use as a case study, there had been no investigations into WhatsApp, which made it a clear frontrunner as a potential case study. It’s a digital product/service which may or may not be tripping over EU competition law rules, with enough of a grey area to get people to think. So we will apply the law to WhatsApp and try to figure out if:

– It has a dominant position (and if so, in what market)

– It has abused its putative dominant position

– Its merger with Facebook is above board

– Its IP policy/third-party app access policy is compliant with competition law requirements

To this end, students will have to find information by themselves (incredible the amount of statistics freely available these days online…) and be prepared to work together in the seminar to prepare the skeleton arguments in favour/against any of those possibilities. The second half of the seminar will be spent with the teams arguing their position.

We’ll see how it goes and will comment on the whole experiment in four months or so. In the meanwhile, if you want to know more drop me a line in the comments.

by Pedro Telles at March 31, 2017 09:30 AM

Weekly OSM

weeklyOSM 349

21/03/2017-27/03/2017

Text

Scouts in Popayán, Colombia learn how to use tools in emergency cases and to share data with rescue workers. 1 | © Photo: Carlos F. Castillo

Mapping

  • At the FOSSGIS conference, data privacy on OSM was discussed. Frederik Ramm summarizes (de) (automatic translation) some relevant aspects in the German forum. There was also a brief discussion on Talk-de mailing list. Personally identifiable data, such as mapping activities, is still visible to anyone.
  • Voting window for the new tag amenity=courier is open until April 7th.
  • Martijn van Exel is working on a monthly newsletter for MapRoulette. Take a look at the March edition.
  • Following Harry Wood’s idea to upload OSM notes in MAPS.ME, Noémie Lehuby generated a file to spot bus stops with missing names around Paris.
  • Telenav’s mapping team members disappoint the Canadian community by armchair mapping of turn restrictions.
  • Areas tagged landuse=farm won’t be rendered (de) (automatic translation) any longer in our standard map style. When re-tagging these legacies, the NoFarm map might be a great help.
  • Martin Koppenhoefer explained where the center of Berlin is defined, after having recently looked at the history of the city center of Rome.

Community

  • Harald Hartmann asks in the German forum, whether there should be more “gamification” at OSM, for example in the form of virtual awards and rewards. (de) (automatic translation)
  • [1] Carlos F. Castillo aka kaxtillo runs a scout group called ScoutMappers in Popayán, Colombia. The scouts learn to use helpful tools in emergency cases and to share data with rescue workers. The group from Popayán will publish their experiences from the past six years on REME to share their knowledge with scouts around the world, so that every scout can do a good deed every day.
  • According to Spanish OSM, there are many mazes mapped in OpenStreetMap.

OpenStreetMap Foundation

  • In Belgium, there is currently an OSM local chapter in formation. The charter is still being elaborated.

Events

  • The organizers of the 2018 FOSS4G Conference in Dar es Salaam, Tanzania, are looking for a logo and have launched a competition.
  • This year’s international State of the Map conference will take place in Aizu-Wakamatsu, Japan. This is a gentle reminder to send in your proposals for talks and/or workshops if you have not already done so. The deadline to submit your session proposal is Sunday, 2nd April.
  • Vincent de Château-Thierry asks (automatic translation) for submissions to SotM France on the talk-fr mailing list. It takes place in June from 2nd to 4th at Avignon.

Humanitarian OSM

switch2OSM

Software

  • The OpenStreetMap location monitoring app OsMo is now using MapZen vector tiles.
  • OsmAnd+ offers a 50% discount for the app.

Programming

Releases

Software Version Release date Comment
QGIS 2.18.5 2017-02-24 No info.
Komoot iOS * 9.0 2017-03-21 Route planning and search reworked.
Mapillary Android * 3.35 2017-03-21 Allow higher resolution and wider aspect ratio of images.
Osmose Backend v1.0-2017-03-23 2017-03-21 No info.
OSRM Backend 5.6.4 2017-03-21 Some bugfixes.
GeoWebCache 1.11.0 2017-03-22 No info.
Mapillary iOS * 4.6.10 2017-03-22 Two bugs fixed.
GeoServer 2.11.0 2017-03-23 Ten bugfixes and some undocumented enhancements.
Maps.me iOS * 7.2.2 2017-03-23 Bugfix release.
Komoot Android * var 2017-03-25 Minor enhancements.
Potlatch 2 2.5 2017-03-25 Please read release info.
QMapShack Lin/Mac/Win 1.8.0 2017-03-26 No info.

Provided by the OSM Software Watchlist. Timestamp: 2017-03-27 18:15:39+02 UTC

(*) unfree software. See: freesoftware.

Other “geo” things

  • Web developer and artist Hans Hack created a map of London and Berlin with collapsed buildings to compare it to the situation in Aleppo.
  • At the National Library of Scotland in Edinburgh, researchers have pieced together a 17th-century Dutch map that has spent part of its life up a chimney, and part under the floorboards at a Scottish castle.
  • London scientists found (de) (automatic translation) that using navigation aids has a deep effect on the brain’s activity.

Upcoming Events

Where What When Country
Mazzano Romano Workshop 2 31/03/2017 italy
Kyoto 【西国街道#02】山崎蒸溜所と桜マッピングパーティ 01/04/2017 japan
Rome Walk4Art 01/04/2017 italy
Rostock Rostocker Treffen 04/04/2017 germany
Stuttgart Stuttgarter Stammtisch 05/04/2017 germany
Helsinki Monthly Missing Maps mapathon at Finnish Red Cross HQ 06/04/2017 finland
Dresden Stammtisch 06/04/2017 germany
Zaragoza Mapeado Colaborativo 07/04/2017 spain
Mazzano Romano Workshop 3 07/04/2017 italy
Fribourg SOSM Annual General Meeting and mapping party 08/04/2017 switzerland
Popayán #MappingPartyTulcan (Scout Mappers) 08/04/2017 colombia
Rennes Atelier de découverte 09/04/2017 france
Rennes Réunion mensuelle 10/04/2017 france
Lyon Rencontre mensuelle libre 11/04/2017 france
Nantes Rencontres mensuelles 11/04/2017 france
Munich Münchner Stammtisch 11/04/2017 germany
Essen Stammtisch 13/04/2017 germany
Manila MapAm❤re #PhotoMapping San Juan, San Juan 13/04/2017-16/04/2017 philippines
Berlin 106. Berlin-Brandenburg Stammtisch 14/04/2017 germany
Tokyo 東京!街歩き!マッピングパーティ:第7回 小石川後楽園 15/04/2017 japan
Avignon State of the Map France 2017 02/06/2017-04/06/2017 france
Kampala State of the Map Africa 2017 08/07/2017-10/07/2017 uganda
Curitiba FOSS4G+SOTM Brasil 2017 27/07/2017-29/07/2017 brazil
Aizu-wakamatsu Shi State of the Map 2017 18/08/2017-20/08/2017 japan
Boulder State Of The Map U.S. 2017 19/10/2017-22/10/2017 united states
Buenos Aires FOSS4G+SOTM Argentina 2017 23/10/2017-28/10/2017 argentina
Lima State of the Map – LatAm 2017 29/11/2017-02/12/2017 perú

Note: If you would like to see your event here, please put it into the calendar. Only data which is there will appear in weeklyOSM. Please check your event in our public calendar preview and correct it where appropriate.

This weeklyOSM was produced by Hakuch, Nakaner, Peda, Polyglot, Rogehm, Spec80, SrrReal, YoViajo, derFred, jcoupey, jinalfoflia, kreuzschnabel, vsandre.

by weeklyteam at March 31, 2017 01:30 AM

March 30, 2017

Wiki Education Foundation

Welcome, Shalor Toncray!

shalor-toncray cropped
Shalor Toncray

Our Wikipedia Content Experts are experienced Wikipedians who play a vital role in the Classroom Program, providing support to students as they contribute to Wikipedia for the first time. I’m pleased to announce that Shalor Toncray has joined Wiki Ed on a short-term contract to provide extra support to students as Content Expert in the spring 2017 term.

Shalor is a long-time Wikipedian who has edited as User:Tokyogirl79 since 2006, and became an administrator in 2013. Her experience with Wikipedia led her to an interest in digital archiving, working with the Library of Virginia, and also to earn a Master of Library and Information Science from Drexel University. A recent profile on the Wikimedia Blog highlighted the work she’s done to improve Wikipedia’s coverage of important but overlooked historical figures.

Before diving into archiving and library science, Shalor attended Virginia Commonwealth University, where she received a bachelor’s degree in religious studies. When she’s not working, Shalor likes to read, watch movies, play video games (especially tactical RPGs), and, of course, edit Wikipedia.


by Ryan McGrady at March 30, 2017 10:42 PM

Wikimedia Foundation

Community digest: As Odia Wikisource turns two, a project to digitize rare books kicks off; news in brief

Photo by Subhashish Panigrahi, CC BY-SA 4.0.

Odia Wikisource turned two in October 2016. Started in 2014, the project has over 500 volumes of text including more than 200 books from different genres and publication eras. There are about 5-10 active contributors to this project who are geographically dispersed. To celebrate the anniversary, the Odia Wikisource community is starting a batch of activities to help grow it.

On January 29, a day-long event was organized in the Indian city of Bhubaneswar, the capital of the state of Odisha, where a majority of Odia-language speakers live. The event aimed to provide training for those who wanted to learn more about Wikisource, assess the work done so far, and develop strategies for the future.

Forty Wikimedians from different backgrounds participated in this event, including six active contributors on the project: Pankajmala Sarangi, Subas Chandra Rout, Radha Dwibedi, Sangram Keshari Senapati, Prateek Pattanaik, Chinmayee Mishra, and Aliva Sahoo along with Mrutyunjaya Kar, the project administrator.

The event marked the beginning of several new community-led projects: Pothi, a project to digitize old and rare public domain Odia books; an initiative at the Utkal University library to digitize public domain books; another project that aims at digitizing palm leaf manuscripts that are hundreds of years old at a temple in Bargarh; in addition to an open-source project to record pronunciation of words for Wiktionary.

Several small workshops were organized to cover topics like low-cost setup for large-scale digitization of books, communications management for small or large events, uploading scanned works on Commons, dealing with OTRS-related issues, OCRing scanned pages, using images on Wikisource, general guidelines for proofreading, and tips for promoting digitized works on social media and other platforms.

“Books like Odishara Itihasa, Ama Debadebi, Manabasa Laxmipurana, and Sabitri Osa haven’t been available on the internet,” says Wikisourcer Sangram Keshari Senapati, “even though many search for them. That makes me proud to contribute to their digitization.”

Pothi: a project to collect, archive and digitize old rare Odia books

“Odia Wikisource is run by Wikimedia volunteers,” explains Mrutyunjaya Kar, an administrator of the project. “This project is a storehouse of out-of-copyright books. In addition to old books, we try to reach out to well-known authors and publishers with the aim of including some of their books in this free library. This way, the new generation won’t become oblivious to the invaluable pieces of Odia literature available in this digital age.” Prateek Pattanaik, a 12th-grade student in Delhi Public School Damanjodi, has started a project called “Pothi” to collect out-of-copyright and rare books and make them available in digital form on Wikisource. Many scholars and researchers joined the program.

Palm leaf manuscripts

Shree Dadhibaman Temple, a 400-year-old temple in Bargarh, a city in the Indian state of Odisha, has an archive of over 250 ancient Odia manuscripts that date back to the sixteenth century. These palm leaf manuscripts include Mahabharata, Ramayana, Skanda Purana, and the history of the temple, all in Odia.

Many of the manuscripts from the collection are at risk of erosion. The temple administration and trust have preserved the manuscripts with available preservation techniques. The preservation started a couple of years back when the student volunteers of different colleges of Bhatli, a nearby town, helped the temple administration identify manuscripts in dire need of preservation.

The Odia Wikimedia community is planning to collaborate with the temple administration to organize a three-day-long digitization camp for the students of two colleges in Bhatli. Participating students will be informed about Wikisource and the digitization process and some of the temple manuscripts will be digitized during this camp. After scanning the manuscripts, the Odia Wikimedia community will help the students upload, digitize and proofread the manuscripts on Odia Wikisource.

Digitizing books in the Utkal University library

Utkal University is one of the oldest universities in Odisha and the 17th oldest university in India. The central library of Utkal University, named after its first Vice Chancellor, Professor Prana Krushna Parija, hosts many old rare books and manuscripts. The library was set up in 1946 in Cuttack, and was then transferred to the Utkal University campus in Bhubaneswar in 1962. Odia Wikimedians are working closely with the university to set up a structure where the Wikimedians in Bhubaneswar (WikiTungi participants) will be involved in the scanning process. This collaboration with the university will enable the Wikimedians to use the public domain books for Wikisource where the university will host the e-books on their website.

Kathabhidhana

Kathabhidhana is a community-led project to record the pronunciation of words and upload the recordings under open licenses to be used on projects like Wiktionary. The project is led by Odia Wikimedian Subhashish Panigrahi, drawing its inspiration largely from open-source software created by Wikimedian Shrinivasan T. The software used for Kathabhidhana is written in Python, and over 1,200 audio recordings have been made with it so far.

Odia Wikimedian Prateek Pattanaik is developing a workflow using a proprietary iOS-based app that can record about 5 words per minute on an iPad or iPhone, to facilitate contribution to the project. More than 1,000 recordings have been added to Odia Wiktionary so far. Odia Wiktionary, which usually sees few notable contributions, has recently seen great activity from Shitikantha Dash, a sysop of the project. The project hosts over 100,000 words, mostly from Bhashakosha, a public domain lexicon digitized by the nonprofit Srujanika.

Subhashish Panigrahi
Bikash Ojha
Prateek Pattanaik
Sailesh Patnaik
Chinmayee Mishra
Odia Wikimedians

In brief

Deadline for Wikimania submissions extended: The submission deadline for Wikimania, the annual conference of the Wikimedia movement, which will be held this year in Montreal, Canada, has been extended. Proposals for presentations (lectures, workshops, roundtables and tutorials) will be accepted until April 10, 2017, while proposals for lightning talks, posters and birds of a feather will be accepted until May 15, 2017. More information and application submission are available on the Wikimania 2017 wiki.

Wikimedia Tunisia holds their first annual meetup: Wikimedia Tunisia user group has held a meetup for the user group members. Participants were informed about event planning, grantmaking, Wikimedia affiliations, and the next strategy for the user group. More information about the meetup can be found on Meta, and photos from the event can be found on Commons.

Fourth edition of WikiMed courses wraps up at Tel Aviv University: The Wikipedia Education Program course that started in 2013 at Tel Aviv University celebrated its fourth edition last January. The course has contributed over 200 new articles to the Hebrew Wikipedia so far. Shani Evenstein, the Wikipedia Education Program leader in Israel, wrote a post in the This Month In Education newsletter about the organizers’ motivations and the program objectives.

Wikimedia Germany publishes 2016 impact report: Wikimedia Germany (Wikimedia Deutschland), the independent chapter supporting the Wikimedia movement in Germany, has published its impact report for 2016. The report sheds light on the learnings and experiences of the last year, with special coverage of the projects supported by the chapter. More information on Wikimedia-l.

New filters for edit review beta release: This week, the “New filters for edit review” beta option will be released on the Portuguese and Polish Wikipedias, and also on MediaWiki. The beta option adds an improved filtering interface as well as powerful filtering and other tools to the “recent changes” and “recent changes linked” pages. More information on Wikimedia-l.

Amical Wikimedia shares their partner survey results: Amical Wikimedia, the independent thematic organization that supports Wikimedia in the Catalan-speaking region, has shared the results of a survey conducted with over 100 of their partners to see what they think of their work with the Wikimedia movement. The survey report is available on the Amical Wikimedia website.

Compiled and edited by Samir Elsharbaty, Digital Content Intern
Wikimedia Foundation

This post has been edited to correct the reach of WikiProject Pothi.

by Bikash Ojha, Chinmayee Mishra, Prateek Pattanaik, Subhashish Panigrahi, Sailesh Patnaik and Samir Elsharbaty at March 30, 2017 06:18 PM