weeklyOSM 461

11:27, Saturday, 25 May 2019 UTC



Can you escape from the traffic in a city? 1 | © Hans Hack map data © OpenStreetMap



  • For several weeks now the desirable contributions in the OSM user blogs have sunk under a deluge of spam. There are people calling for countermeasures, such as forced moderation for all new user accounts. alexkemp, who has been dealing with forum spam for some time, suspects that the current wave of spam is just the beginning.
  • OSM Foundation Japan had a meetup (ja) between board members and local mappers to build a better community. Many proposals for improvement were made, such as holding mapping parties and promoting OSM.
  • Ilya Zverev in his blog “SHTOSM” ponders (automatic translation) why anyone in the 21st century would need paper maps.


  • As Laura Mugeha has tweeted, SotM Africa, which will take place on 22-24 November 2019 in Grand-Bassam, Ivory Coast, needs more workshop proposals.
  • So far only one of the talks for the State of the Map US has been submitted by a woman. The organisers are determined to change that, though the target of more than 50 percent of all talks being submitted by women appears challenging.


  • Heidelberg University’s GIScience Research Group has used the example of health-related amenities to explore the development of OSM data over time. The results provide deep insight into mapping patterns such as the saturation of an area with a given tag, spin-offs in the form of tag diversification such as amenity=hospital -> amenity=clinic, and the erratic increases in usage of a tag typically seen when a country comes into the scope of the humanitarian sector.


  • Can you escape from traffic in a city? Hans Hack has created (de) a map for Berlin showing the places which are farthest away from a road.
  • The public Swiss geoportal, which has recently introduced an OSM-based vector tile test map with customisable map style, has updated its map with the Mapbox rendering engine. Jeff Konnen from Luxembourg announced that geoportail.lu will “copy” this. Gonza from Argentina reports in a tweet that mapa.ign.gob.ar has also integrated OSM into its base map.
  • Greenpeace has created a map of air quality in Russian cities. It uses OpenStreetMap as a basemap.

Open Data

  • Almost a year ago Tolyatti city administration (Russia) made its official GIS portal “EMGIS” open to everybody. The portal is now under the Creative Commons Attribution 4.0 International licence. Moreover, the OSM community received permission (automatic translation) to use the portal.


  • OSGeoLive 13.0 has reached Alpha status and the developers are looking for volunteer testers. OSGeoLive is a Linux Live distribution, which is equipped with current Open Source GIS programs.
  • Peermaps wants to become a distributed, offline-friendly alternative to commercial map providers such as Google Maps. It plans to use peer-to-peer network techniques to share the hosting of files. The project has received funding to build an initial prototype which will display a regularly updated OSM-based map.
  • Russian OSMers Alexander Pankratov and Alexander Istomin have developed a JOSM map style “Building_Levels_Labels” for highlighting objects without specified “floors” attributes. The map style has been translated into English.

Did you know …

  • … Pascal’s blog post #100? Pascal’s tools are an integral part of the OSM ecosystem.
  • … the Russian company NextGIS has developed an interactive map showing the dynamics of the political borders of Russia and its predecessors? They also share insights into how the map was created.
  • … how to tag shops offering vegan, halal, kosher or gluten-free products? The keys diet:vegan, diet:halal, diet:kosher, diet:gluten_free and several more can be added to mark them.
  • Long-serving OSMers may still remember how the Maxheight map started in 2012. In its current form, the map not only shows worldwide missing maxheight= tags, but also provides overlays for other truck-related tags. The documentation on the Wiki describes the various ways you can help improve the map.
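As a hedged illustration of how the diet:* keys mentioned above combine with a shop's primary tag (this particular shop is hypothetical; the values yes, no and only are the ones documented on the OSM wiki):

```
shop=bakery
diet:gluten_free=only
diet:vegan=yes
diet:kosher=no
```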

Other “geo” things

  • In the Strike Tracker project, Amnesty International used satellite imagery and crowdsourcing to analyse air strikes on the Iraqi city of Raqqa. More than 3000 volunteers collected the data; an analysis website including an OSM map summarises the results.
  • Mapillary’s #CompleteTheMap is back. The challenge kicks off in June—read more in the Mapillary blog.
  • Microsoft follows Niantic, publisher of the augmented reality games Ingress and Pokémon GO, and has announced an AR version of Minecraft, called Minecraft Earth. The announcement received extensive press coverage, for instance in Wired and CNN. The game is expected to use Microsoft’s Azure Spatial Anchors, Azure’s cloud system combined with OpenStreetMap data, rather than GPS for positioning. It remains to be seen what impact the game will have on OSM.
  • Garmin launched a device with an OSM-based topo map pre-installed. The Garmin Overlander comes with a proprietary road navigation and topographic OSM map and costs €699. OSM as a map source is only mentioned in the German press release (de) (automatic translation).

Upcoming Events

Where What When Country
Lübeck Lübecker Mappertreffen 2019-05-23 germany
Montrouge Rencontre mensuelle de Montrouge et alentours 2019-05-23 france
Vienna 62. Wiener Stammtisch 2019-05-23 austria
Greater Vancouver area Metrotown mappy Hour 2019-05-24 canada
Strasbourg Rencontre périodique de Strasbourg 2019-05-25 france
Bremen Bremer Mappertreffen 2019-05-27 germany
Rome Incontro mensile 2019-05-27 italy
Salt Lake City SLC Map Night 2019-05-28 united states
Mannheim Mannheimer Mapathons 2019-05-28 germany
Zurich Missing Maps Mapathon Zurich 2019-05-29 switzerland
Saarbrücken Mapathon OpenSaar/Ärzte ohne Grenzen/EuYoutH_OSM/Libre_Graphics_Meeting_2019 2019-05-29 germany
Montpellier Réunion mensuelle 2019-05-29 france
Düsseldorf Stammtisch 2019-05-29 germany
Bratislava Missing Maps mapathon Bratislava #6 at Faculty of Civil Engineering Slovak University of Technology in Bratislava in Bratislava 2019-05-30 slovakia
Joué-lès-Tours Stand OSM sur la fête du vélo 2019-06-01 france
Taipei OSM x Wikidata #5 2019-06-03 taiwan
Toronto Toronto Mappy Hour 2019-06-03 canada
London Missing Maps Mapathon 2019-06-04 united kingdom
Essen Mappertreffen 2019-06-05 germany
Toulouse Rencontre mensuelle 2019-06-05 france
Stuttgart Stuttgarter Stammtisch 2019-06-05 germany
Bochum Mappertreffen 2019-06-06 germany
Mannheim Mannheimer Mapathons 2019-06-06 germany
Nantes Réunion mensuelle 2019-06-06 france
Dresden Stammtisch Dresden 2019-06-06 germany
Reutti Stammtisch Ulmer Alb 2019-06-06 germany
Dortmund Mappertreffen 2019-06-07 germany
Biella Incontro mensile 2019-06-08 italy
Rennes Réunion mensuelle 2019-06-10 france
Bordeaux Réunion mensuelle 2019-06-10 france
Lyon Rencontre mensuelle pour tous 2019-06-11 france
Salt Lake City SLC Mappy Hour 2019-06-11 united states
Zurich OSM Stammtisch Zurich 2019-06-11 switzerland
Bordeaux Réunion mensuelle 2019-06-11 france
Hamburg Hamburger Mappertreffen 2019-06-11 germany
Leoben Stammtisch Obersteiermark 2019-06-13 austria
Munich Münchner Stammtisch 2019-06-13 germany
Montpellier State of the Map France 2019 2019-06-14-2019-06-16 france
Angra do Heroísmo Erasmus+ EuYoutH_OSM Meeting 2019-06-24-2019-06-29 portugal
Minneapolis State of the Map US 2019 2019-09-06-2019-09-08 united states
Edinburgh FOSS4GUK 2019 2019-09-18-2019-09-21 united kingdom
Heidelberg Erasmus+ EuYoutH_OSM Meeting 2019-09-18-2019-09-23 germany
Heidelberg HOT Summit 2019 2019-09-19-2019-09-20 germany
Heidelberg State of the Map 2019 (international conference) 2019-09-21-2019-09-23 germany
Grand-Bassam State of the Map Africa 2019 2019-11-22-2019-11-24 ivory coast

Note: If you would like to see your event here, please put it into the calendar. Only data which is there will appear in weeklyOSM. Please check your event in our public calendar preview and correct it where appropriate.

This weeklyOSM was produced by Nakaner, Polyglot, Rogehm, SK53, Silka123, SunCobalt, TheSwavu, derFred, jinalfoflia, keithonearth.

Happy Africa Day!

To celebrate this, I am happy to make a little announcement: It is now possible to write in all the Wikipedias of all the languages of Africa, with all the special letters that are difficult to find on common keyboards. You can do it on any computer, without buying any new equipment, installing any software, or changing operating system preferences. Please see the full list of languages and instructions.

This release completes a pet project that I began a year ago: to make it easy to write in all the languages of Africa in which there is a Wikipedia or an active Wikipedia Incubator.

Most of these languages are written in the Latin alphabet, but with the addition of many special letters such as Ŋ, Ɛ, Ɣ, and Ɔ, or letters with accents such as Ũ or Ẹ̀. These letters are hard to type on common keyboards, and in my meetings with African people who want to write in Wikipedia in their language, this is very often brought up as a barrier to writing confidently.

Some of these languages have keyboard layouts that are built into modern operating systems, but my experience showed me that to enable them one has to dig deep in the operating system preferences, which is difficult for many people, and even after enabling the right thing in the preferences, some keyboards are still wrong and hard to use. I hope that this will be built into future operating system releases in a more convenient way, just as it is for languages such as French or Russian, but in the meantime I provide this shortcut.

The new software released this week to all Wikimedia sites and to translatewiki.net makes it possible to type these special characters without installing any software or pressing any modifier keys such as Ctrl or Alt. In most cases you simply need to press the tilde character (~) followed by the letter that is similar to the one you want to type. For example:

  • Ɓ is written using ~B
  • Ɛ is written using ~E
  • Ɔ is written using ~O
    … and so on.
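The real mappings live in jquery.ime's JavaScript rule files; the following Python sketch, with a made-up three-entry table, only illustrates the idea of replacing ~-prefixed sequences as you type:

```python
# Minimal sketch of a ~-prefixed input-method lookup, in the spirit of
# jquery.ime sequence rules. The three-entry table is illustrative;
# real layouts define many more mappings.
RULES = {
    "~B": "\u0181",  # Ɓ
    "~E": "\u0190",  # Ɛ
    "~O": "\u0186",  # Ɔ
}

def transliterate(text: str) -> str:
    """Replace each two-character ~X sequence with its mapped letter."""
    out = []
    i = 0
    while i < len(text):
        pair = text[i:i + 2]
        if pair in RULES:
            out.append(RULES[pair])
            i += 2
        else:
            out.append(text[i])
            i += 1
    return "".join(out)

print(transliterate("~B~E~O"))  # ƁƐƆ
```

Text without any ~-sequences passes through unchanged, which is why the shortcut does not get in the way of ordinary typing.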

Some of these languages are written in their own unique writing systems. I made the N’Ko and Vai keyboards myself, mostly based on ideas from freely licensed keyboard layouts by Keyman. (The Amharic language, also written in its own script, has had a keyboard made by User:Elfalem for a while. I mention it here for completeness.)

This release addresses only laptop and desktop computers. On mobile phones and tablets most of these languages can be typed using apps such as Gboard (also on iPhone), SwiftKey (also on iPhone), or African Keyboard. If you aren’t doing this already, try these apps on your phone, and start exchanging messages with your friends and family in your language, and writing in Wikipedia in your language on your phone! If you are having difficulties doing this, please contact me and I’ll do my best to help.

The technology used to make this is the MediaWiki ULS extension and the jquery.ime package.

I would like to thank all the people who helped:

  • Mahuton Possoupe (Benin), with whom I made the first of these keyboards, for the Fon language, at the Barcelona Hackathon.
  • Kartik Mistry, Santhosh Thottingal (India), Niklas Laxström (Finland), and Petar Petkovich (Serbia), who reviewed the numerous code patches that I made for this project.

This is quite a big release of code. While I made quite a lot of effort to test everything, code may always have bugs: missing languages, wrong or missing letters, mistakes in documentation, and so on. I’ll be happy to hear any feedback and to fix the bugs.

And now it’s all up to you! I hope that these keyboard layouts make it easier for all of you, African Wikimedians, to write in your languages, to write and translate articles, and share more knowledge!

Again, happy Africa day!

The full list of languages for which there is now a keyboard in ULS and jquery.ime:

  • Afrikaans
  • Akan
  • Amharic
  • Bambara
  • Berber
  • Dagbani
  • Dinka
  • Ewe
  • Fula
  • Fon
  • Ga
  • Hausa
  • Igbo
  • Kabiye
  • Kabyle
  • Kikuyu
  • Luganda
  • Lingala
  • Malagasy
  • N’Ko
  • Sango
  • Sotho
  • Northern Sotho
  • Koyraboro Senni Songhay
  • Tigrinya
  • Vai
  • Venda
  • Wolof
  • Yoruba

Introducing the codehealth pipeline beta

15:09, Friday, 24 May 2019 UTC

After many months of discussion, work and consultation across teams and departments[0], and with much gratitude and appreciation to the hard work and patience of @thcipriani and @hashar, the Code-Health-Metrics group is pleased to announce the introduction of the code health pipeline. The pipeline is currently in beta and enabled for GrowthExperiments, soon to be followed by Notifications, PageCuration, and StructuredDiscussions. (If you'd like to enable the pipeline for an extension you maintain or contribute to, please reach out to us via the comments on this post.)

What are we trying to do?

The Code-Health-Metrics group has been working to define a set of common code health metrics. Our current understanding of code health covers four factors: simplicity, readability, testability, and buildability. Beyond analyzing a given patch set for these factors, we also want to have a historical view of code as it evolves over time. We want to be able to see which areas of code lack test coverage, where refactoring a class due to excessive complexity might be called for, and where possible bugs exist.

After talking through some options, we settled on a proof-of-concept to integrate Wikimedia's gerrit patch sets with SonarQube as the hub for analyzing and displaying metrics on our code[1]. SonarQube is a Java project that analyzes code according to a set of rules. SonarQube has a concept of a "Quality Gate", which can be defined organization-wide or overridden on a per-project basis. The default Quality Gate says that of the code added in a patch set, over 80% must be covered by tests, less than 3% may contain duplicated lines of code, and the maintainability, reliability and security ratings should each be graded as an A. If code meets these criteria then we say it has passed the quality gate; otherwise it has failed.
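The gate amounts to a handful of threshold checks on new-code metrics. A minimal sketch (thresholds taken from the default gate described in this post; the class and function names are mine, not SonarQube's API):

```python
# Sketch of the default quality-gate criteria described above.
# Names are illustrative, not SonarQube's actual API.
from dataclasses import dataclass

@dataclass
class NewCodeMetrics:
    coverage_pct: float     # test coverage on new code
    duplication_pct: float  # duplicated lines on new code
    maintainability: str    # letter ratings: "A" is best
    reliability: str
    security: str

def passes_quality_gate(m: NewCodeMetrics) -> bool:
    """Apply the default gate: >80% coverage, <3% duplication, all ratings A."""
    return (
        m.coverage_pct > 80.0
        and m.duplication_pct < 3.0
        and m.maintainability == "A"
        and m.reliability == "A"
        and m.security == "A"
    )

# A patch whose only issue is an unused local variable (a code smell that
# drops maintainability to "C") fails the gate despite good coverage:
print(passes_quality_gate(NewCodeMetrics(92.0, 0.0, "C", "A", "A")))  # False
```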

Here's an example of a patch that failed the quality gate:

screenshot of sonarqube quality gate

If you click through to the report, you can see that it failed because the patch introduced an unused local variable (code smell), so the maintainability score for that patch was graded as a C.

How does it integrate with gerrit?

For projects that have been opted in to the code health pipeline, submitting a new patch or commenting with "check codehealth" will result in the following actions:

  1. The mwext-codehealth-patch job checks out the patchset and installs MediaWiki
  2. PHPUnit is run and a code coverage report is generated
  3. npm test:unit is run, which may generate a code coverage report if the package.json file is configured to do so
  4. The sonar-scanner binary runs, which sends 1) the code, 2) the PHP code coverage, and 3) the JavaScript code coverage to Sonar
  5. After Sonar has analyzed the code and coverage reports, the pipeline reports whether the quality gate passed or failed. A failure does not prevent the patch from being merged.
pipeline screenshot
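The numbered steps above can be sketched as a dry run; the command strings here are rough approximations of what the job invokes, not the exact Jenkins job definitions:

```python
# Dry-run sketch of the codehealth pipeline stages described above.
# Command strings are illustrative approximations, not the real job config.
PIPELINE = [
    "checkout patchset and install MediaWiki",      # 1. mwext-codehealth-patch
    "run PHPUnit with a coverage report",           # 2. PHP coverage
    "run `npm test:unit` (coverage if configured)", # 3. JS coverage
    "run sonar-scanner to upload code + coverage",  # 4. send to Sonar
    "report quality gate result (non-blocking)",    # 5. pass/fail comment
]

def describe(steps):
    """Number the stages for display."""
    return [f"step {i}: {s}" for i, s in enumerate(steps, start=1)]

for line in describe(PIPELINE):
    print(line)
```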

If you click the link, you'll be able to view the analysis in SonarQube. From there you can also view the code of a project and see which lines are covered by tests, which lines have issues, etc.

Also, when a patch merges, the mwext-codehealth-master-non-voting job executes, which updates the default view of the project in SonarQube with the latest code coverage and code metrics.[3]

What's next?

We would like to enable the code health pipeline for more projects, and eventually we would like to use it for core. One challenge with core is that it currently takes ~2 hours to generate the PHPUnit coverage report. We also want to gather feedback from the developer community on false positives and unhelpful rules. We have tried to start with a minimal set of rules that we think everyone could agree with but are happy to adjust based on developer feedback[2]. Our current list of rules can be seen in this quality profile.

If you'll be at the Hackathon, we will be presenting on the code health pipeline and SonarQube at the Code health and quality metrics in Wikimedia continuous integration session on Friday at 3 PM. We look forward to your feedback!

Kosta, for the Code-Health-Metrics group

[0] More about the Code Health Metrics group: https://www.mediawiki.org/wiki/Code_Health_Group/projects/Code_Health_Metrics, currently comprised of Guillaume Lederrey (R), Jean-Rene Branaa (A), Kosta Harlan (R), Kunal Mehta (C), Piotr Miazga (C), Željko Filipin (R). Thank you also to @daniel for feedback and review of rules in SonarQube.
[1] While SonarQube is an open source project, we currently use the hosted version at sonarcloud.io. We plan to eventually migrate to our own self-hosted SonarQube instance, so we have full ownership of tools and data.
[2] You can add a topic here https://www.mediawiki.org/wiki/Talk:Code_Health_Group/projects/Code_Health_Metrics
[3] You might have also noticed a post-merge job over the last few months, wmf-sonar-scanner-change. This job did not incorporate code coverage, but it did analyze most of our extensions and MediaWiki core, and as a result there is a set of project data and issues that might be of interest to you. The Issues view in SonarQube might be interesting, for example, as a starting point for new developers who want to contribute to a project and want to make some small fixes.

Conway Hall library – image by Jwslubbock CC BY-SA 4.0

Last weekend Conway Hall in central London hosted a Wikipedia editathon to improve pages on Wikipedia about 19th century pamphleteers and the subjects they wrote about. Hundreds of Victorian-era pamphlets have been digitised by Conway Hall library and released under CC0, and these are now being uploaded to Commons at Category:Conway Hall digital collections.

These pamphlets are still being added, and if you want to help us improve Wikipedia by embedding them in relevant Wikipedia pages, you should also check out the GLAM/Conway Hall page for links to articles and subject areas on Wikipedia that need improving. Many of the pamphleteers who authored the publications have Wikipedia pages, and the pamphlets themselves often show the late 19th century thinking around subjects like religion, secularism, politics and society.

Conway Hall hosts lots of interesting events and talks on politics, music and history, all with a progressive, forward-thinking attitude to improving society. We are very grateful that the library has decided to publish its collection of digitised pamphlets on CC licenses so that they can be used on Wikipedia, and this will hopefully lead to a much wider audience for their collection.

There are lots of interesting pamphlets in the Conway Hall collection, exploring 19th century attitudes to railway nationalisation, Siam (Thailand), secularism, socialism and many other topics. There is still a list of articles in the GLAM page mentioned above that could be created on particular pamphleteers, and there are existing articles on pamphleteers like Gustav Zerffi, Charles Voysey and Annie Besant, which could have their newly uploaded pamphlets from Commons inserted.

We hope to do further editathons with Conway Hall library in future, and Alicia Chilcott from the library says that “we are planning to produce a special issue of our Ethical Record journal, focusing on the project and the various workshops that we have run as a part of it”.

We hope that the pamphlets will continue to be embedded in relevant articles on Wikipedia so they can help readers understand the progress of late 19th century thought on social and religious issues. If you improve an article with a document from Conway Hall’s collection, why not get in touch and tell us about it?

Shocking tales from ornithology

07:19, Friday, 24 May 2019 UTC
Manipulative people have always made use of the dynamics of ingroups and outgroups to create diversions from bigger issues. The situation is made worse when misguided philosophies are peddled by governments that put economics ahead of ecology. The pursuit of easily gamed targets such as GDP is preferable to ecological amelioration since money is a man-made and controllable entity. Nationalism, pride, other forms of chauvinism, the creation of enemies and the magnification of war threats are all effective tools in the arsenal of Machiavelli for use in misdirecting the masses when things go wrong. One might imagine that the educated, especially scientists, would be smart enough not to fall into these traps, but cases from history dampen hopes for such optimism.

There is a very interesting book in German by Eugeniusz Nowak called "Wissenschaftler in turbulenten Zeiten" (or "Scientists in turbulent times") that deals with the lives of ornithologists, conservationists and other naturalists during the Second World War. Preceded by a series of recollections published in various journals, the book was published in 2010 but I became aware of it only recently while translating some biographies into the English Wikipedia. I have not yet actually seen the book (it has about five pages on Salim Ali as well) and have had to go by secondary quotations in other content. Nowak was a student of Erwin Stresemann (with whom the first chapter deals) and he writes about several European (but mostly German, Polish and Russian) ornithologists and their lives during the turbulent 1930s and 40s. Although Europe is pretty far from India, there were ripples that reached afar. Incidentally, Nowak's ornithological research includes studies on the expansion in range of the collared dove (Streptopelia decaocto), which the Germans called the Türkentaube, literally the "Turkish dove", a name with a baggage of cultural prejudices.

Nowak's first paper of "recollections" notes that [he] presents the facts not as accusations or indictments, but rather as a stimulus to the younger generation of scientists to consider the issues, in particular to think “What would I have done if I had lived there or at that time?” - a thought to keep as you read on.

A shocker from this period is a paper by Dr Günther Niethammer on the birds of Auschwitz (Birkenau). This paper (read it online here) was published when Niethammer was posted to the security at the main gate of the concentration camp. You might be forgiven if you thought he was just a victim of the war. Niethammer was a proud nationalist and volunteered to join the Nazi forces in 1937 leaving his position as a curator at the Museum Koenig at Bonn.
The contrast provided by Niethammer, who looked at the birds on one side while ignoring the inhumanity on the other, provided novelist Arno Surminski with the title of his 2008 novel Die Vogelwelt von Auschwitz - i.e. the birdlife of Auschwitz.

G. Niethammer
Niethammer studied birds around Auschwitz and also shot ducks in numbers for himself and to supply the commandant of the camp Rudolf Höss (if the name does not mean anything please do go to the linked article / or search for the name online). Upon the death of Niethammer, an obituary (open access PDF here) was published in the Ibis of 1975 - a tribute with little mention of the war years or the fact that he rose to the rank of Obersturmführer. The Bonn museum journal had a special tribute issue noting the works and influence of Niethammer. Among the many tributes is one by Hans Kumerloeve (starts here online). A subspecies of the common jay was named as Garrulus glandarius hansguentheri by Hungarian ornithologist Andreas Keve in 1967 after the first names of Kumerloeve and Niethammer. Fortunately for the poor jay, this name is a junior synonym of G. g. anatoliae described by Seebohm in 1883.

Meanwhile inside Auschwitz, the Polish artist Wladyslaw Siwek was making sketches of everyday life  in the camp. After the war he became a zoological artist of repute. Unfortunately there is very little that is readily accessible to English readers on the internet (beyond the Wikipedia entry).
Siwek, artist who documented life at Auschwitz before working as a wildlife artist.
Hans Kumerloeve
Now for Niethammer's friend Dr Kumerloeve, who also worked in the Museum Koenig at Bonn. His name was originally spelt Kummerlöwe and he was, like Niethammer, a doctoral student of Johannes Meisenheimer. Kummerlöwe and Niethammer made journeys on a small motorcycle to study the birds of Turkey. Kummerlöwe's political activities started earlier than Niethammer's: he joined the NSDAP (German: Nationalsozialistische Deutsche Arbeiterpartei = The National Socialist German Workers' Party) in 1925 and started the first student union of the party in 1933. Kummerlöwe soon became a member of the Ahnenerbe, a think tank meant to provide "scientific" support to the party's ideas on race and history. In 1939 he wrote an anthropological study on "Polish prisoners of war". At the museum in Dresden that he headed, he thought up ideas to promote politics and he published them in 1939 and 1940. After the war, it is thought that he went to all the European libraries that held copies of this journal (anyone interested in hunting it down should look for copies of Abhandlungen und Berichte aus den Staatlichen Museen für Tierkunde und Völkerkunde in Dresden 20:1-15.) and purged them of his article. According to Nowak, he even managed to get his hands (and scissors) on copies held in Moscow and Leningrad!

The Dresden museum was also home to the German ornithologist Adolf Bernhard Meyer (1840–1911). In 1858, he translated the works of Charles Darwin and Alfred Russel Wallace into German and introduced evolutionary theory to a whole generation of German scientists. Among Meyer's amazing works is a series of avian osteological works which uses photography and depicts birds in nearly-life-like positions (wonder how it was done!) - a less artistic precursor to Katrina van Grouw's 2012 book The Unfeathered Bird. Meyer's skeleton images can be found here. In 1904 Meyer was eased out of the Dresden museum because of rising anti-semitism. Meyer does not find a place in Nowak's book.

Nowak's book includes entries on the following scientists: (I keep this here partly for my reference as I intend to improve Wikipedia entries on several of them as and when time and resources permit. Would be amazing if others could pitch in!).
In the first of his "recollection papers" (his 1998 article) Nowak writes about the reason for writing them - noticing that the obituary for Prof. Ernst Schäfer was a whitewash that carefully avoided any mention of his wartime activities. And this brings us to India. In a recent article in Indian Birds, Sylke Frahnert and coauthors have written about the bird collections from Sikkim in the Berlin natural history museum. In their article there is a brief statement that "The collection in Berlin has remained almost unknown due to the political circumstances of the expedition". This might be a bit cryptic for many but the best read on the topic is Himmler's Crusade: The true story of the 1939 Nazi expedition into Tibet (2009) by Christopher Hale. Hale writes about Himmler:
He revered the ancient cultures of India and the East, or at least his own weird vision of them.
These were not private enthusiasms, and they were certainly not harmless. Cranky pseudoscience nourished Himmler’s own murderous convictions about race and inspired ways of convincing others...
Himmler regarded himself not as the fantasist he was but as a patron of science. He believed that most conventional wisdom was bogus and that his power gave him a unique opportunity to promulgate new thinking. He founded the Ahnenerbe specifically to advance the study of the Aryan (or Nordic or Indo-German) race and its origins
From there Hale goes on to examine the motivations of Schäfer and his team. He looks at how much of the science was politically driven. Swastika signs dominate some of the photos from the expedition - as if it provided for a natural tie with Buddhism in Tibet. It seems that Himmler gave Schäfer the opportunity to rise within the political hierarchy. The team that went to Sikkim included Bruno Beger. Beger was a physical anthropologist but with less than innocent motivations although that would be much harder to ascribe to the team's other pursuits like botany and ornithology. One of the results from the expedition was a film made by the entomologist of the group, Ernst Krause - Geheimnis Tibet - or secret Tibet - a copy of this 1 hour and 40 minute film is on YouTube. At around 26 minutes, you can see Bruno Beger creating face casts - first as a negative in Plaster of Paris from which a positive copy was made using resin. Hale talks about how one of the Tibetans put into a cast with just straws to breathe from went into an epileptic seizure from the claustrophobia and fear induced. The real horror however is revealed when Hale quotes a May 1943 letter from an SS officer to Beger - ‘What exactly is happening with the Jewish heads? They are lying around and taking up valuable space . . . In my opinion, the most reasonable course of action is to send them to Strasbourg . . .’ Apparently Beger had to select some prisoners from Auschwitz who appeared to have Asiatic features. Hale shows that Beger knew the fate of his selection - they were gassed for research conducted by Beger and August Hirt.
SS-Sturmbannführer Schäfer at the head of the table in Lhasa

In all, Hale makes a clear case that the Schäfer mission had quite a bit of political activity underneath. We find that Sven Hedin (Schäfer was a big fan of him in his youth. Hedin was a Nazi sympathizer who funded and supported the mission) was in contact with fellow Nazi supporter Erica Schneider-Filchner and her father Wilhelm Filchner in India, both of whom were interned later at Satara, while Bruno Beger made contact with Subhash Chandra Bose more than once. [Two of the pictures from the Bundesarchiv show a certain Bhattacharya - who appears to be a chemist working on snake venom at the Calcutta snake park - one wonders if he is Abhinash Bhattacharya.]

My review of Nowak's book must be uniquely flawed, as I have never managed to access it beyond some online snippets and English reviews. The war had impacts on the entire region and Nowak's coverage is limited; there were many other interesting characters, including the Russian ornithologist Malchevsky, who survived German bullets thanks to a fat bird observation notebook in his pocket! In the 1950s Trofim Lysenko, the crank scientist who controlled science in the USSR, sought Malchevsky's help in proving his own pet theories - one of which was the idea that cuckoos were the result of feeding hairy caterpillars to young warblers!

Issues arising from race and perceptions are of course not restricted to this period or region. One of the less glorious stories of the Smithsonian Institution concerns the honorary curator Robert Wilson Shufeldt (1850–1934), who in the infamous Audubon affair turned his personal troubles with his second wife, a grand-daughter of Audubon, into a matter of race. He also wrote such books as America's Greatest Problem: The Negro (1915), in which we learn of the ideas of other scientists of the period like Edward Drinker Cope! Like many other obituaries, Shufeldt's is a classic whitewash.

Even as recently as 2015, the University of Salzburg withdrew an honorary doctorate that it had given to the Nobel prize winning Konrad Lorenz for his support of the political setup and its racial beliefs. It should not be that hard for scientists to figure out whether they are on the wrong side of history, even if they are funded by the state. Perhaps salaried scientists in India would do well to look more carefully at the legal contracts they sign with their employers, especially the state. The current rules make government employees less free than ordinary citizens, but will the educated speak out, or do they prefer shackling themselves?

  • Mixing natural history with war sometimes led to tragedy for the participants as well. Dr Manfred Oberdörffer, who used his cover as an expert on leprosy to visit the borders of Afghanistan with the entomologist Fred Hermann Brandt (1908–1994), was killed in an exchange of gunfire with British forces, although Brandt lived on to tell the tale.
  • Apparently Himmler's entanglement with ornithology also led him to dream up "Storchbein Propaganda" - a plan to send pamphlets to the Boers in South Africa via migrating storks! The German ornithologist Ernst Schüz quietly (and safely) pointed out the inefficiency of it purely on the statistics of recoveries!

Science communication in action at Northeastern

17:03, Thursday, 23 May 2019 UTC

This spring, students in Dr. Amy Carleton’s Advanced Writing in the Sciences course at Northeastern University created lots of new Wikipedia articles as an assignment. The new articles include topics like tissue engineering of heart valves, extremophiles in biotechnology, Bilophila wadsworthia, the Boston University CTE Center and Brain Bank, and food safety in the United States.

Tissue engineered heart valves are prosthetic heart valves that, unlike mechanical or biological options, are living heart valves created from a person’s own cells. These living heart valves are “capable of growing, adapting, and interacting within the human body’s biological system.” The brand new Wikipedia article that Dr. Carleton’s student created describes the procedure of regenerating heart tissue inside of the body, as well as the body’s response to the procedure. The article also describes the benefits that tissue engineered heart valves offer over biological and mechanical options; the risks of the procedure; and the history of such practices in medical research. The student even found images on Wikimedia Commons (Wikipedia’s sister site for media) to illustrate key concepts in the article.

Another student created an article about the Boston University CTE Center and Brain Bank, a research facility that studies the effects of brain trauma and degenerative diseases, specifically the effects of chronic traumatic encephalopathy (CTE). CTE can only be conclusively diagnosed after death, but the research facility is working on developing ways to diagnose and potentially treat CTE in living subjects.

Given that Wikipedia readers might be visiting the site to make health decisions, articles about medical or psychological topics must adhere to stricter standards of quality than many other articles. Dr. Carleton’s students have done a great job citing peer-reviewed sources that are considered reliable on Wikipedia and linking their new articles to related topics.

Students are uniquely positioned to communicate science topics to the general public, who may not have a background in the topic. Students remember what it was like not to have the context or scientific background needed to understand nuances, so they can explain what they’ve learned in ways that are clear for someone learning about the topic for the first time.

Want to have your students create or expand Wikipedia articles as an assignment? There are lots of reasons to do so! Access our free assignment templates, management tools, and student trainings at teach.wikiedu.org.

Image: File:Suture micrograph.jpg,  Nephron, CC BY-SA 3.0, via Wikimedia Commons.

At the Wikimedia Foundation, we believe that free access to knowledge and freedom of expression are fundamental human rights. We believe that when people have good information, they can make better decisions. Free access to information creates economic opportunity and empowers people to build sustainable livelihoods. Knowledge makes our societies more informed, more connected, and more equitable.

Over the past two years, we have seen governments censor Wikipedia, including in Turkey and most recently in China, denying these rights to millions of people around the world.

Today, we proceed to the European Court of Human Rights, an international court which hears cases of human rights violations within the Council of Europe, to ask the Court to lift the more than two-year block of Wikipedia in Turkey. We are taking this action as part of our continued commitment to knowledge and freedom of expression as fundamental rights for every person.

This is not a step we have taken lightly; we are doing so only after continued and exhaustive attempts to lift the block through legal action in the Turkish courts, good faith conversations with the Turkish authorities, and campaigns to raise awareness of the block and its impact on Turkey and the rest of the world.

Despite these efforts, Wikipedia continues to be blocked in Turkey after more than two years.

This news was announced in a press call with the Wikimedia Foundation’s Executive Director Katherine Maher, Wikipedia’s founder Jimmy Wales, and the Foundation’s Legal Director Stephen LaPorte.

“We believe that information—knowledge—makes the world better. That when we ask questions, get the facts, and are able to understand all perspectives on an issue, it allows us to build the foundation for a more just and tolerant society,” said Katherine Maher. “Wikipedia is a global resource that everyone can be actively part of shaping. It is through this collective process of writing and rewriting, and debate that Wikipedia becomes more useful, more comprehensive, and more representative. It is also through this process that we, a global society, establish a more comprehensive consensus on how we see the world.”

In our application to the Strasbourg Court, we argue that the blanket ban of Wikipedia violates fundamental freedoms, including the right to freedom of expression as guaranteed by Article 10 of the European Convention. Moreover, these freedoms have been denied to the more than 80 million people of Turkey who have been impacted most directly by the block, and to the rest of the world, which has lost the nation’s rich perspectives in contributing, debating, and adding to Wikipedia’s more than 50 million articles.

Over the past two years, the Wikimedia Foundation has done all that it possibly can to lift the block of Wikipedia in Turkey. The order blocking Wikipedia referred to only two articles, which have continued to be open for improvement by anyone and edited by volunteers around the world despite the block. It is unclear what, if any, concerns remain. The block continues despite numerous good faith discussions with Turkish authorities to understand their views, including through an open letter to the Turkish Minister of Transport, Maritime, and Communication, to discuss Wikipedia’s open editing model, values, and strong opposition to impermissible censorship of any kind.

Immediately following the block, we filed our case in the domestic courts, requesting that Wikipedia be unblocked on the grounds that such a block violated the rights to freedom of expression and freedom of the press. The lower courts have upheld the block, and there has been no response from Turkey’s highest court in the two years since we appealed the lower court’s decision. Consequently, we believe that this step is necessary.

The European Court of Human Rights (ECHR) is the international court created by the European Convention on Human Rights to ensure the enforcement and implementation of the human rights provisions set out in the Convention. Turkey is a long-standing party to the Convention, and the fundamental rights provided by the Convention are guaranteed in the Turkish Constitution, which makes the interference with human rights in this case all the more devastating. Moreover, internet blocks and censorship are a growing concern for Council of Europe states, making this case all the more pressing for consideration by the court.

Today, Wikipedia is one of the most widely-accessed sources of knowledge in the world. It is read 6,000 times every second, and our articles are edited, improved, and debated daily by a community of more than 250,000 volunteers from across the globe. More than 85 percent of those articles are in languages other than English, which includes the Turkish Wikipedia’s more than 300,000 articles, written by Turkish-speaking volunteers for Turkish-speaking people. These volunteers make good-faith efforts to cover all sides of a given topic, even controversial ones, to ensure people can understand topics fully and transparently.

Wikipedia is better, richer, and more reflective of the world when more people can engage with, improve, and edit its content. When one nation is denied access to the global conversation on Wikipedia, the entire world is poorer.

The Wikimedia Foundation is committed to upholding knowledge as a fundamental human right, to be enjoyed and protected for everyone, for our millions of users around the world. We announce our decision to file our application in the European Court of Human Rights today as a reflection of that commitment.

The Wikimedia Foundation is represented by Can Yeginsu, who leads a team of barristers practicing from 4 New Square Chambers in London, and Gonenc Gurkaynak at ELIG Gurkaynak Attorneys-at-Law in Istanbul.


Four years ago, Ismael Andani Abdulai was in graduate school at the University of California, Berkeley, working towards a master’s in law.

One of the required courses for that degree was on cyber law, focusing on legal issues related to computers, the internet, and information technology. Appropriately, the professor in charge of the class assigned their students to edit Wikipedia articles.

“As part of our assessment for the class, each student was required to write a Wikipedia article on any topic related to the course,” he told us. “We were also required to peer review the articles of at least two of our colleagues.”

Like many students, Ismael was no fan of assignments handed down by his teachers—but this one was different. To his surprise, he was excited and enthused about getting graded for editing and reviewing Wikipedia.

“I was … keen on this assignment … because I thought it would be a fun way to contribute something useful to the world,” he says.

Beyond enjoyment and novelty, Ismael had another motivation for editing. “There is a sense of pride and achievement each time I see my article pop up in a search,” he admitted.

Today, Ismael is a part-time lecturer at the Ghana Institute of Management and Public Administration (GIMPA) in Accra, and he’s putting his Wikipedia experience to work.

In 2017, he called on the Wikimedia Ghana User Group to help him set up a Wikipedia Education Program at the institute (now known as Wikimedia Education). “I thought I would give my students an opportunity to share in the same excitement I did when it was introduced to me,” he says, adding that “it was also an opportunity to share more material on Ghanaian law.”

Since Ismael was teaching a class on intellectual property law, focusing on topics like industrial property rights and copyright, he assigned his students to create articles around those topics.

The program, which he named ‘Wikipedia Education Program Gimpa’ or simply ‘WEP Gimpa’, had 60 participants—the entirety of Ismael’s intellectual property law class.

The Wikimedia Ghana User Group hosted two workshops for the class as part of the program before the assignment got underway.

The first workshop introduced the students to Wikimedia, Wikipedia, and Wikimedia in education around the globe. The second gave them the opportunity to create Wikipedia accounts and also learn how to create and edit articles.

The next activity was the assignment to write articles. “It was a mandatory assignment, so it was not too much of a problem getting them involved,” Ismael amusedly recounted.

The user group took note of the versatility of Wikipedia as a teaching tool in the classroom.

In one activity, students could learn through writing articles and contributing knowledge, for which they would get rewarded. The tutor could also rely on it as an innovative way to give exercises.

However, these benefits didn’t come without hurdles.

“Many of [the students] didn’t quite understand Wikipedia editing standards, and that was quite a challenge,” Ismael said. “Also, because of numbers, they had to work in groups so managing them was a bit easier, although I’d have preferred individual assignments.”

Alongside his students, Ismael has learned from the assignment as well. In the future, he’d like to “devise a way to improve supervision, both in terms of content, and in terms of actual editing.”

Still, Ismael ended his interview hopeful. “Yes, I am [happy],” he said. “I’m hoping at least one of [the students will] take it a step further” and become a regular editor of the site.

Sandister Tei and Justice Okai-Allotey
Wikimedia Ghana User Group

This blog post has been edited to clarify that the Wikipedia Education Program is now known as Wikimedia Education.

In short interviews with our employees we illuminate aspects and questions of our daily work with BlueSpice and our customers. In this article, we will focus on the topic of customizing. An interview with Sabine Gürtler, Team Lead Service & Support at Hallo Welt! GmbH.

Sabine, tell me: What does customizing mean in the context of our product?

In our case, customizations are adaptations of our BlueSpice wiki software that go beyond the product standard.

So the customer has the possibility to “rebuild” BlueSpice according to his wishes?

Exactly. But it’s not the customer himself who does this, it’s us. Specific customer wishes are always the starting point for a customizing project. It doesn’t matter whether our customer uses a MediaWiki or BlueSpice – the enterprise version of MediaWiki. Our developers implement adaptations for both systems on customer request.

What exactly can be adapted within the scope of the customizing project?

Quite a lot. For example, it is possible to intervene in the standard workflow of BlueSpice. This concerns, for example, the modification of rules regarding the release or commenting options of an article. Beyond that, some customers request an individual interface design. While minor adjustments to the customer's corporate design guidelines are usually covered by our “branding package”, sometimes we have to go a step further. A good example is the implementation of two different user interfaces for wiki admins and wiki users.

Another customer runs a public medical wiki and wants to deliver his wiki in the “look and feel” of a website. No problem for us. Yet another one wants an individual wiki homepage with the “portal look” of an intranet to provide employees with special information and encourage them to use the company wiki.

Sounds exciting. Is there more to it?

Absolutely. Some customers, for example, wish to adapt or extend existing wiki functions. In this context we are currently implementing special upload notifications, a PDF export of various wiki page types and a push-and-merge function where content is transferred from one wiki to another – including an automated check for redundant content.

When it comes to customizing, one should not forget the development of technical interfaces to existing IT systems, like the connection of the wiki search engine to the customers’ intranet.

Furthermore, data migration or synchronization routines can be adapted. A great example is a customer who requested that Microsoft Word files stored on his internal drive be automatically imported into his company wiki. This is realized with technical solutions like cron jobs. You see, almost everything is feasible.
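As a sketch of how such a scheduled import might be wired up (the script name and paths are placeholders, not BlueSpice's actual tooling):

```
# Hypothetical crontab entry: every night at 02:00, import new Word files
# from a shared drive into the wiki (wiki-word-import.sh is a placeholder).
0 2 * * * /usr/local/bin/wiki-word-import.sh /mnt/shared/word-docs
```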

Almost everything? Are there any exclusion criteria?

Yes, there are. Since we avoid so-called “core hacks”, the prerequisite for customizing is a corresponding interface (hook / API) in MediaWiki, the core software of BlueSpice. We do not interfere with the MediaWiki code, because otherwise problems would arise during update and maintenance of the system. This would have negative effects on the stability, system security and sustainability of the system. However, we always try to find a solution via workarounds for critical adjustment requests. It is important that the cost-benefit ratio fits. Of course, we advise our customers beforehand whether an adaptation makes sense from our point of view or not.

And how do customizations affect the standard product?

Normally, customizations are realized as customer-specific extensions. However, customizations that are useful for all BlueSpice customers are transferred to the standard product. Recently this has involved, for example, usability optimizations, improvements to statistical evaluations and a simplification of the image insertion routine in the visual editor. For adaptations that involve a deep intervention in the programming code, we also check whether standardization makes sense. Either way, the decision whether an adaptation becomes standard lies with us, Hallo Welt! GmbH.

O.K. But what’s the benefit for the customer if his adaptation becomes a  product standard?

That’s a good question. The customer of course benefits. On the one hand there is the maintenance effort, which doesn’t apply as soon as the customization is transferred to our standard software. Thus the customers’ investment in “his” customization pays for itself in the long term. If an adaptation similar to that desired by the customer is already on our agenda, we also contribute to the implementation costs. This significantly reduces the customers’ financial input. Plus, by incorporating the adaptation into the standard product, we ensure that the function is continuously optimized and further developed. Good for the customer, good for us.

Last but not least: How does a customization process work?

In a few words: after the customer has articulated a wish, our technical team assesses the effort. Then we write an offer or submit an alternative suggestion. For larger adaptations we work out a specification document, which is the foundation for the technical implementation.

At the end of the day, one thing is particularly important to us: a customer who gets exactly the BlueSpice wiki he needs for his day-to-day work in his company.

Let’s Wiki together!


More information about our migration services can be found here:

Test BlueSpice pro now for 30 days free of charge and without obligation:

Visit our webinar and get to know BlueSpice:

Contact us:
Angelika Müller and Florian Müller
Telephone: +49 (0) 941 660 800
E-Mail: sales@bluespice.com

Author: David Schweiger

The post Customizing: How we adapt BlueSpice and MediaWiki to our customers’ wishes. appeared first on BlueSpice Blog.

Tool creation added to toolsadmin.wikimedia.org

23:11, Tuesday, 21 May 2019 UTC

Toolsadmin.wikimedia.org is a management interface for Toolforge users. On 2017-08-24, a new major update to the application was deployed which added support for creating new tool accounts and managing metadata associated with all tool accounts.

Under the older Wikitech-based tool creation process, a tool maintainer sees this interface:

wikitech screenshot

As @yuvipanda noted in T128158, this interface is rather confusing. What is a "service group"? I thought I just clicked a link that said "Create a new Tool." What are the constraints on this name, and where will it be used?

With the new process on toolsadmin, the initial form includes more explanation and collects additional data:

toolsadmin screenshot

The form labels are more consistent. Some explanation is given for how the tool's name will be used and a link is provided to additional documentation on wikitech. More information is also collected that will be used to help others understand the purpose of the tool. This information is displayed on the tool's public description page in toolsadmin:

toolinfo example

After a tool has been created, additional information can also be supplied. This information is a superset of the data needed for the toolinfo.json standard used by Hay's Directory. All tools documented using toolsadmin are automatically published to Hay's Directory. Some of this information can also be edited collaboratively by others. A tool can also have multiple toolinfo.json entries to support tools where a suite of functionality is published under a single tool account.
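As a rough illustration, a single toolinfo.json entry of the kind published to Hay's Directory might look as follows; the field values (tool name, URLs, author) are invented for this example, and the exact field set should be checked against the toolinfo.json standard:

```json
{
  "name": "example-tool",
  "title": "Example Tool",
  "description": "Reports articles with broken external links.",
  "url": "https://tools.wmflabs.org/example-tool/",
  "keywords": "links, maintenance, reports",
  "author": "Jane Doe",
  "repository": "https://phabricator.wikimedia.org/source/tool-example-tool.git"
}
```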

The Striker project tracks bugs and feature ideas for toolsadmin. The application is written in Python3 using the Django framework. Like all Wikimedia software projects, Striker is FLOSS software and community contributions are welcome. See the project's page on wikitech for more information about contributing to the project.

Gerrit now automatically adds reviewers

21:57, Tuesday, 21 May 2019 UTC

Finding reviewers for a change is often a challenge, especially for a newcomer or folks proposing changes to projects they are not familiar with. Since January 16th, 2019, Gerrit automatically adds reviewers on your behalf based on who last changed the code you are affecting.

Antoine "@hashar" Musso explains what led us to enable that feature and how to configure it to fit your project. He also offers tips on how to find more reviewers, based on years of experience.

Automatically adding reviewers when a new patch is uploaded is the subject of task T91190, opened almost four years ago (March 2015). I had declined the task since we already have the Reviewer bot (see the section below), but @Tgr found a plugin for Gerrit which analyzes the code history with git blame and uses it to determine potential reviewers for a change. It took us a while to add that particular Gerrit plugin, and the first version we installed was not compatible with our Gerrit version. The plugin was upgraded yesterday (Jan 16th) and is working fine (T101131).

Let's have a look at the functionality the plugin provides, and how it can be configured per repository. I will then offer a refresher on how one can search for reviewers based on git history.

Reviewers by blame plugin

The Gerrit plugin looks at the affected code using git blame and extracts the top three past authors, who are then added as reviewers to the change on your behalf. The added reviewers will thus receive a notification showing that you have asked them for a code review.

The configuration is done on a per-project basis and inherits from the parent project. Without any tweaks, your project inherits the configuration from All-Projects. If you are a project owner, you can adjust the configuration. As an example, the configuration for operations/mediawiki-config shows inherited values and an exception to skip a file named InitialiseSettings.php:

The three settings are described in the documentation for the plugin:

  • The maximum number of reviewers that should be added to a change by this plugin. Default: 3.
  • Ignore files whose filename matches the given regular expression when computing reviewers; if empty or not set, no files are ignored. Not set by default.
  • Ignore commits whose commit message subject matches the given regular expression; if empty or not set, no commits are ignored. Not set by default.
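If I recall the plugin's documentation correctly, these per-project settings live in the project.config file on the refs/meta/config branch; a sketch with illustrative values (the file exception mirrors the operations/mediawiki-config case of skipping InitialiseSettings.php):

```
[plugin "reviewers-by-blame"]
    maxReviewers = 3
    ignoreFileRegEx = InitialiseSettings\.php
    ignoreSubjectRegEx = ^Localisation updates
```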

By making past authors aware of a change to code they previously altered, I believe you will get more reviews and hopefully get your changes approved faster.

Previously we had other methods of adding reviewers, one opt-in based and the others cumbersome manual steps. They should be used to complement the Gerrit reviewers-by-blame plugin, and I give an overview of each of them in the following sections.

Gerrit watchlist

The original system from Gerrit lets you watch projects, similar to a user watchlist on MediaWiki. In the Gerrit preferences, one can get notified of new changes, patchsets, comments, and so on. Simply indicate a repository and, optionally, a search query, and you will receive email notifications for matching events.

The attached image shows my watched-projects configuration: I receive notifications for any changes made to integration/config, as well as for changes in mediawiki/core which affect either composer.json or one of the Wikimedia deployment branches of that repo.

One drawback is that we cannot watch a whole hierarchy of projects, such as mediawiki and all its descendants, which would be helpful for watching our deployment branches. It is still useful when you are the primary maintainer of a repository, since you can keep track of all activity for that repository.

Reviewer bot

The reviewer bot, written by Merlijn van Deen (@valhallasw), is similar to the Gerrit watched-projects feature, with some major benefits:

  • the watcher is added as a reviewer, so the author knows you were notified
  • it supports watching a hierarchy of projects (eg: mediawiki/*)
  • the file/branch filtering might be easier to grasp than Gerrit search queries
  • the watchers are stored in a central place which is public to anyone, making it easy to add others as reviewers.

One registers reviewers on a single wiki page: https://www.mediawiki.org/wiki/Git/Reviewers.

Each repository filter is a wikitext section (eg: === mediawiki/core ===) followed by a wikitext template and a file filter using Python fnmatch. Some examples:

Listen to any changes that touch i18n:

== Listen to repository groups ==
=== * ===
* {{Gerrit-reviewer|JohnDoe|file_regexp=<nowiki>i18n</nowiki>}}

Listen to MediaWiki core search related code:

=== mediawiki/core ===
* {{Gerrit-reviewer|JaneDoe|file_regexp=<nowiki>^includes/search/</nowiki>}}

The system works great, provided maintainers remember to register on the page and the files are not moved around. The bot is not that well known though, and most repositories do not have any reviewers listed.
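The fnmatch matching used for hierarchy patterns can be tried out directly in Python; the repository names and patterns below are illustrative, not taken from the actual Git/Reviewers page:

```python
from fnmatch import fnmatch

def watches(repo, pattern):
    """Return True if a repository name matches a reviewer's fnmatch filter."""
    return fnmatch(repo, pattern)

# "*" in fnmatch matches any characters, including "/", so a single
# "mediawiki/*" pattern covers the whole hierarchy of projects.
print(watches("mediawiki/core", "mediawiki/*"))             # True
print(watches("mediawiki/extensions/Echo", "mediawiki/*"))  # True
print(watches("operations/puppet", "mediawiki/*"))          # False
```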

Inspecting git history

Another source of reviewers is the git history: one can easily retrieve a list of past authors, who should be good candidates to review the code. I typically use git shortlog --summary --no-merges for that (--no-merges filters out the merge commits crafted by Gerrit when a change is submitted). An example for the MediaWiki job queue system:

$ git shortlog --no-merges --summary --since "one year ago" includes/jobqueue/|sort -n|tail -n4
     3 Petr Pchelko
     4 Brad Jorsch
     4 Umherirrender
    16 Aaron Schulz

That gives me four candidates who have acted on that directory over the past year.

Past reviewers from git notes

When a patch is merged, Gerrit records the votes and the canonical URL of the change in git notes, under refs/notes/review. Once the notes are fetched, they can be shown by git show or git log by passing --show-notes=review. For each commit, the notes are displayed after the commit message, showing the votes among other metadata:

$ git fetch origin refs/notes/review:refs/notes/review
$ git log --no-merges --show-notes=review -n1
commit e1d2c92ac69b6537866c742d8e9006f98d0e82e8
Author: Gergő Tisza <tgr.huwiki@gmail.com>
Date:   Wed Jan 16 18:14:52 2019 -0800

    Fix error reporting in MovePage
    Bug: T210739
    Change-Id: I8f6c9647ee949b33fd4daeae6aed6b94bb1988aa

Notes (review):
    Code-Review+2: Jforrester <jforrester@wikimedia.org>
    Verified+2: jenkins-bot
    Submitted-by: jenkins-bot
    Submitted-at: Thu, 17 Jan 2019 05:02:23 +0000
    Reviewed-on: https://gerrit.wikimedia.org/r/484825
    Project: mediawiki/core
    Branch: refs/heads/master

I can then get the list of reviewers who previously voted Code-Review+2 on changes touching a given path. Using the previous example of includes/jobqueue/ over a year, the list is slightly different:

$ git log --show-notes=review --since "1 year ago" includes/jobqueue/|grep 'Code-Review+2:'|sort|uniq -c|sort -n|tail -n5
      2     Code-Review+2: Umherirrender <umherirrender_de.wp@web.de>
      3     Code-Review+2: Jforrester <jforrester@wikimedia.org>
      3     Code-Review+2: Mobrovac <mobrovac@wikimedia.org>
      9     Code-Review+2: Aaron Schulz <aschulz@wikimedia.org>
     18     Code-Review+2: Krinkle <krinklemail@gmail.com>

User Krinkle has approved a lot of patches, even though he does not show up in the list of authors obtained by the previous method (inspecting the git history).
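Both signals can be combined in a small helper. The sketch below is my own (not part of any existing Wikimedia tooling), assumes a GNU userland, and expects the review notes to have been fetched as shown earlier:

```shell
# suggest_reviewers PATH [SINCE]
# Tally past commit authors (git shortlog) together with past
# Code-Review+2 voters (git notes) for a path, most frequent first.
suggest_reviewers() {
    path="$1"
    since="${2:-1 year ago}"
    {
        # Past authors; HEAD is explicit so shortlog never reads stdin.
        git shortlog --no-merges --summary --since "$since" HEAD -- "$path" |
            awk '{ $1 = ""; sub(/^ +/, ""); print }'
        # Past approvers, extracted from the Code-Review+2 note lines.
        git log --show-notes=review --since "$since" -- "$path" |
            sed -n 's/^ *Code-Review+2: \([^<]*\) <.*/\1/p' |
            sed 's/ *$//'
    } | sort | uniq -c | sort -rn
}
```

For includes/jobqueue/ this merges the two lists above, so both Aaron Schulz (a frequent author) and Krinkle (a frequent approver) would surface near the top.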


The Gerrit reviewers-by-blame plugin acts automatically, which offers a good chance that your newly uploaded patch will get reviewers added out of the box. For finer tweaking, one should register as a reviewer on https://www.mediawiki.org/wiki/Git/Reviewers, which benefits everyone. The last course of action, inspecting the git log history, is meant to complement the others.

For any remarks, support, or concerns, reach out on the freenode IRC channel #wikimedia-releng or file a task in Phabricator.

Thank you @thcipriani for the proofreading and English fixes.

Opening Online Learning with OER

10:50, Tuesday, 21 2019 May UTC

This is the transcript of a talk I gave last week at the College of Medicine and Veterinary Medicine’s Post Graduate Tutors Away Day at the University of Edinburgh.  Slides are available here: Opening Online Learning with OER

Before I go on to talk about open education and OER, I want you to think about Ra’ana Hussein’s inspiring video where she articulates so clearly why participating in the MSc in Paediatric Emergency Medicine has been so empowering for her. 

Ra’ana said that the course helps her to be better at her work, and that she gains knowledge and learning that she can implement practically. It’s enabled her to meet people from diverse backgrounds, and connect with a global community of peers that she can share her practice with.  She finds online learning convenient, and tailored to her needs and she benefits from having immediate access to support, which helps her to balance her work and study commitments.

I’d like you to try and hold Ra’ana’s words in your mind while we go on and take a look at open education, OER and what it’s got to do with why we’re here today.

What is open education?

Open education is many things to many people.

  • A practice?
  • A philosophy?
  • A movement?
  • A human right?
  • A licensing issue?
  • A buzz word?
  • A way to save money?

Cape Town Declaration

The principles of open education were outlined in the 2008 Cape Town Declaration, one of the first initiatives to lay the foundations of the “emerging open education movement”. The Declaration advocates that everyone should have the freedom to use, customize, and redistribute educational resources without constraint, in order to nourish the kind of participatory culture of learning, sharing and cooperation that rapidly changing knowledge societies need. The Cape Town Declaration is still an influential document; it was updated last year, on its 10th anniversary, as Cape Town +10, and I can highly recommend having a look at it if you want a broad overview of the principles of open education.

Aspects of Open Education

Although there’s no one hard and fast definition of open education, one description of the open education movement that I particularly like is from the not-for-profit organization OER Commons…

“The worldwide OER movement is rooted in the human right to access high-quality education. The Open Education Movement is not just about cost savings and easy access to openly licensed content; it’s about participation and co-creation.”

Open education is highly contextual and encompasses many different things. These are just some of the aspects of open education:

  • Open online courses
  • Open pedagogy
  • Open practice
  • Open assessment practices
  • Open textbooks
  • Open licensing
  • Open data
  • MOOCs
  • Open Access scholarly works
  • Open educational resources (OER)


Though Open Education can encompass many different things, open educational resources, or OER, are central to any understanding of this domain.

UNESCO define open educational resources as

“teaching, learning and research materials in any medium, digital or otherwise, that reside in the public domain or have been released under an open license that permits no-cost access, use, adaptation and redistribution by others with no or limited restrictions.”

UNESCO Policy Instruments

And the reason I’ve chosen this definition is that UNESCO is one of a number of international agencies that actively supports the global adoption of open educational resources.  In 2012 UNESCO released the Paris OER Declaration, which encourages governments and authorities to openly license educational materials produced with public funds, in order to realize substantial benefits for their citizens and maximize the impact of investment.   And in 2017 UNESCO brought together 111 member states for the 2nd OER World Congress in Slovenia, the main output of which was the UNESCO Ljubljana OER Action Plan.  Central to the OER Action Plan is the acknowledgement of the role that OER can play in achieving United Nations Sustainable Development Goal 4 and supporting quality education that is equitable, inclusive, open and participatory.

In his summing up at the end of the congress UNESCO Assistant Director for Education Qian Tang said

“to meet the education challenges, we can’t use the traditional way. In remote and developing areas, particularly for girls and women, OER are a crucial, crucial means to reach SDGs. OER are the key.”

The Action Plan acknowledges that open education and OER provide a strategic opportunity to improve knowledge sharing, capacity building and universal access to quality learning and teaching resources. And, when coupled with collaborative learning, and supported by sound pedagogical practice, OER has the transformative potential to increase access to education, opening up opportunities to create and share an array of educational resources to accommodate greater diversity of educator and learner needs.

Open Education at the University of Edinburgh

Now all this may sound very aspirational and possibly a touch idealistic, but here at the University of Edinburgh we believe that open education and OER are strongly in line with our institutional mission to deliver impact for society, discover, develop and share knowledge, and make a significant, sustainable and socially responsible contribution to Scotland, the UK and the world.

Support for Sustainable Development Goals

It’s also worth noting that the University already has a commitment to the Sustainable Development Goals through the Department for Social Responsibility and Sustainability and the university and college sectors’ Sustainable Development Accord.  And the new Principal has recently re-stated the University’s commitment to meeting these goals.

OER Vision

The University has a vision for OER which has three strands, building on our excellent education and research collections, traditions of the Scottish Enlightenment and the university’s civic mission.  These are:

  • For the common good – encompassing everyday teaching and learning materials.
  • Edinburgh at its best – high quality resources produced by a range of projects and initiatives.
  • Edinburgh’s Treasures – content from our world class cultural heritage collections.

OER Policy

This vision is backed up by an OER Policy, approved by our Learning and Teaching Committee, which encourages staff and students to use, create and publish OERs to enhance the quality of the student experience.  This OER Policy is itself CC licensed and is adapted from an OER Policy that has already been adopted by a number of other institutions in the UK.  The fact that this policy was approved by the Learning and Teaching Committee, rather than by the Knowledge Strategy Committee, is significant because it places open education and OER squarely in the domain of teaching and learning, which of course is the domain we’re focusing on here today.  The University’s vision for OER is very much the brainchild of Melissa Highton, Assistant Principal Online Learning and Director of Learning and Teaching Web Services.  However, it’s also notable that EUSA, the students’ union, were instrumental in encouraging the University to adopt an OER policy, and we continue to see student engagement and co-creation as fundamental aspects of open education.

OER Service

But of course policy is nothing without support, so we also have an OER Service that provides staff and students with advice and guidance on creating and using OER and engaging with open education.  We run a wide range of digital skills workshops for staff and students focused on copyright, open licensing, OER and playful engagement.  And we provide a one-stop shop where you can access open educational resources produced by staff and students across the university, including some from this college.   As well as working closely with our students, the OER Service also hosts Open Content Creation student interns every summer.  And if you’d like to talk to me about the advice and guidance the OER Service provides…

Near Future Teaching

Openness is also at the heart of the Near Future Teaching project undertaken over the last two years by a team from the Centre for Research in Digital Education, led by Sian Bayne (Assistant Principal Digital Education).  This project co-created a values based vision for the future of digital education at the University with input from more than 400 staff and students. The project report, published last month, sets out a vision and aims for a near future teaching that is community focused, post digital, data fluent, assessment oriented, playful and experimental, and boundary challenging.  And one of the ways these goals can be achieved is  through increasing openness.  So for example the report calls for boundary challenging digital education that is lifelong, open and transdisciplinary, and the actions required to achieve these objectives are all centered on committing to openness.

So that’s the big picture vision, but what I want to do now is just take a few minutes to look at what’s actually happening in practice, and to highlight some of the innovative open education initiatives that are already going on across the university.

Building Community

Open education is a great way to build community and if you cast your mind back to Ra’ana you’ll remember that she appreciated being part of a connected global community of peers. 

One great way to build community is through academic blogging, and just last year the University set up a new centrally supported Academic Blogging Service. The service provides staff and students with a range of different blogging platforms to support professional development and learning, teaching and research activities.  The service includes existing platforms such as Learn, Moodle, and Pebblepad, and a new centrally supported WordPress service, blogs.ed.ac.uk.  To complement the service, we provide digital skills resources and workshops, including one on Blogging to Build Your Professional Profile; we’ve recently launched a seminar series featuring talks from academic blog users around the University; and we’ve been running a mini-series on the Teaching Matters blog.

I’d like to draw your attention to the most recent blog post in that series, from Bethany Easton of the School of Health in Social Science, about The Nursing Blog, which was set up in 2014 as a community blog where staff and students from across the Nursing Studies subject area can share their achievements, research, and work.   Another great example of community blogging is Stories from Vet School, which features blog posts written by current undergraduate veterinary medicine students.

And if you look carefully you’ll see that one thing both these blogs have in common is that they carry a Creative Commons open licence, which means that the posts themselves are open educational resources that can be reused by other teachers and learners. It’s easy to see how this format could be adopted for use with online postgraduate students as a great way to connect them with their peers and build that all-important sense of community so critical for distance learners.

Diversifying the Curriculum

OER can also make a significant contribution to diversifying and decolonizing the curriculum. 

LGBT+ Healthcare 101 was a collaborative project in which EDE and the Usher Institute worked with undergraduate students to develop a suite of resources covering lesbian, gay, bisexual and transsexual health. Although knowledge of LGBT health and of the sensitivities needed to treat LGBT patients is valuable for qualifying doctors, these issues are not well covered in the medical curriculum.  Using materials from the commons, this project sought to address the lack of teaching on LGBT health through OER.  The project remixed and repurposed resources originally created by Case Western Reserve University School of Medicine in Ohio, and then contributed these resources back to the commons as Creative Commons licensed OER.  New open resources, including digital stories recorded from patient interviews and resources for secondary school children of all ages, were also created and released as OER.

More recently, the OER Service has released a series of resources on Openness, Equality and Inclusion, which includes materials from a workshop we ran with EUSA VP of Education, Diva Mukherji, on Decolonising and Diversifying the Curriculum with Open Educational Resources.  And again, it’s not difficult to see how important diversifying the curriculum is when you’re creating educational resources and learning experiences for global students from a wide range of different cultural contexts.

Access to Resources

Creating and using open educational resources is also an important way to ensure longevity of access to course materials, and this can benefit staff, students, and the university itself.    It’s very common to think of OER as primarily being of benefit to those outwith the institution; however, open licences also help to ensure that we can continue to use and reuse the resources that we ourselves have created.  I’m sure you’ll all have come across projects that created great content, only for those resources to become inaccessible once the project ended, or great teaching and learning materials belonging to a colleague who has subsequently retired or moved on, where nobody quite knows if they can still be used or not. Unless teaching and learning resources carry a clear and unambiguous open licence, it is difficult to know whether and in what context they can be reused.  This is a phenomenon that my colleague Melissa Highton has referred to as copyright debt.

If you don’t get the licensing right first time round it will cost you to fix it further down the line, and the cost and reputational risk to the university could be significant if copyright is breached.   This is one of the best strategic reasons for investing in open educational resources at the institutional level: we need to ensure that we have the right to use, adapt, and reuse the educational resources we have invested in.  We already have some really innovative open educational resources from the College highlighted on the OER Service website, and if you want to learn more about how to use and create re-useable open content without fear of breaching copyright, the OER Service runs a number of digital skills workshops covering this and we have lots of materials available online too.

In the context of online distance learning, using open licensed resources means that students can continue to access and use these resources after they have graduated.  And this is an issue that is becoming increasingly pressing as there have been a number of critical press reports recently about postgraduate students who have lost access to resources after the taught component of their courses has finished but before they have submitted all their course work.

MOOCs and the Open Media Bank

Continued access to educational resources can be particularly problematic when it comes to MOOCs.  Educational content often gets locked into commercial MOOC platforms, regardless of whether or not it is openly licensed, and some platforms are now time limiting access to content. Clearly this is not helpful for students and, given how costly high quality online teaching and learning resources are to produce, it also represents a poor return on investment for the University.  So one of the ways that we’re addressing this here at the University is by ensuring that all the content we have produced for our MOOCs is also freely available to download under open licence from the Open Media Bank channel on Media Hopper Create.  We now have over 500 MOOC videos which are available to re-use under Creative Commons licence, including “Mental Health: A Global Priority” from the School of Molecular, Genetic and Population Health Sciences, and “Clinical Psychology of Children and Young People” from the School of Health in Social Science.

Wikipedia in the Classroom

Another way we can create open knowledge and embed open education in the curriculum is by engaging with the world’s biggest open educational resource, Wikipedia.  Here at the University we have our very own Wikipedian in Residence, Ewan McAndrew, based in Learning, Teaching and Web Services. Ewan works to embed open knowledge in the curriculum, through skills training sessions, editathons, Wikipedia in the classroom initiatives and Wikidata projects, in order to increase the quantity and quality of open knowledge and enhance digital and information literacy skills for both staff and students.   And one of the ways that Ewan does this is by working with academic colleagues to develop Wikipedia in the Classroom assignments. Creating Wikipedia entries enables students to demonstrate the relevance of their field of study and share their scholarship in a real-world context and at the same time, contribute to the global pool of open knowledge.

To date, 11 course programmes across the University have developed Wikipedia assignments, some of which are now in their second or third iteration. And I know that Ewan is working with colleagues to explore the creation of new Wikipedia assignments for the MScs in Global and Public Health. 

Reproductive Biomedicine have been successfully running Wikipedia assignments as part of their Reproductive Biology Honours course since 2015.  As part of her assignment in 2016, honours student Áine Kavanagh created a new Wikipedia article on high-grade serous carcinoma, one of the most common forms of ovarian cancer.   This article, including over sixty references and openly licensed diagrams created by Áine herself, has now been viewed over 64,000 times since it was published in September 2016; it’s hard to imagine many other student assignments having this kind of impact.  Not only has Áine contributed valuable health information to the global open knowledge community, she has also created a resource that other students and global health experts can add to and improve over time.  Creating resources that will live on on the open web, and that make a real contribution to global open knowledge, has proved to be a powerful motivator for the students taking part in these assignments.

OER Creation Assignments

In addition to the Wikipedia in the Classroom assignments, there are also other examples of open assessment practices from around the University, including assessed blogging assignments and OER creation assignments. So for example, these resources on Inflammatory Bowel Disease in Pets were created by Silke Salavati for an assignment as part of the Digital Education module for the Postgraduate Certificate (PgCert) in Academic Practice.  And OER creation assignments also form an integral part of the Digital Futures for Learning course which is part of the MSc in Digital Education.  Commenting on this OER creation assignment in a recent blog post, Jen Ross who runs this course said

“Experiencing first-hand what it means to engage in open educational practice gives students an appetite to learn and think more.  The creation of OERs provides a platform for students to share their learning. In this way, these assignments can have ongoing, tangible value for students and for the people who encounter their work.”


These are just some of the ways that open education and OER are already being embedded and supported across the University, and I hope this will give you some ideas as to how open approaches can benefit your online courses and modules here in the College.  And if you think back to Ra’ana and all the reasons she appreciated being a student on the MSc in Paediatric Emergency Medicine: ease of access to resources and support, the practical application of knowledge, the ability to share her practice with her peers, and being part of a diverse and connected global community. These are all aspects that can be enhanced further by engaging with OER and open education.

I want to finish with a quote from one of our Open Content Curation student interns, and I make no apology for using this quote almost every time I talk about open education and OER.  This is former undergraduate Physics student Martin Tasker talking about the value of open education:

“Open education has played such an integral part of my life so far, and has given me access to knowledge that would otherwise have been totally inaccessible to me. It has genuinely changed my life, and likely the lives of many others. This freedom of knowledge can allow us to tear down the barriers that hold people back from getting a world class education – be those barriers class, gender or race. Open education is the future, and I am both proud of my university for embracing it, and glad that I can contribute even in a small way. Because every resource we release could be a life changed. And that makes it all worth it.”

The Wikimedia Foundation has supported free access to the sum of all knowledge for nearly sixteen years. This longstanding vision would not be possible without the dedication of community members who contribute content to the Wikimedia projects. As a global platform for free knowledge, we are sometimes approached by governments and private parties with requests to delete or change project content, or to release nonpublic user information. The Foundation consistently evaluates such requests with an eye towards protecting privacy and freedom of expression. We are committed to sharing data about our responses to these requests with the diverse communities of Wikimedians who contribute to the projects we support.

Twice a year, we publish a transparency report outlining the number of requests we received, their types, countries of origin, and other information. The report also features an FAQ and stories about interesting and unusual cases.

A few highlights:

Content alteration and takedown requests. From July to December of 2018, we received 492 requests to alter or remove project content. We did not make any changes to project content as a result, but often encouraged the requesters to work with the user communities to address their concerns. 195 of these requests were Right to Erasure-based requests related to user accounts. When we receive such a request, we provide the user information on the community-driven vanishing process.

The volunteer contributors who build, grow, and improve the Wikimedia projects follow community-created policies that ensure project content is appropriate and well-sourced. We support the communities’ prerogative to determine what educational content belongs on the projects.

Copyright takedown requests. The Wikimedia communities work diligently to ensure that copyrighted material is not uploaded to the projects without an appropriate free license or exception, such as fair use. Most Wikimedia project content is therefore freely licensed or in the public domain. When we occasionally receive Digital Millennium Copyright Act (DMCA) notices asking us to remove allegedly infringing material, we conduct thorough investigations to make sure the claims are valid. From July to December of 2018, we received only five DMCA requests, and granted none of them. This low number is due to the hard work of community volunteers who ensure that content on the projects is properly licensed.

Requests for user data. The Wikimedia Foundation only grants requests for user data that comply with our requests for user information procedures and guidelines (which include a provision for emergency conditions). The Foundation also collects little nonpublic user information in the first place, as part of our commitment to user privacy, and retains that information for a short amount of time. Of the 24 user data requests we received, only four resulted in disclosure of nonpublic user information.

In addition to updating the online report, we have also released an updated version of our print edition, providing detailed figures for the last six months of requests. These print versions will be available at Wikimedia Foundation events.

The Wikimedia Foundation’s biannual transparency report reaffirms our commitment to transparency, privacy, and freedom of expression. It also reflects the diligent work of the Wikimedia community members who shape the projects. We invite you to learn more about requests we received in the past six months in our comprehensive transparency report. For information about past reports, please see our previous blog posts.

Jim Buatti, Legal Counsel
Leighanna Mixter, Legal Counsel
Aeryn Palmer, Senior Legal Counsel
Wikimedia Foundation

The transparency report would not be possible without the contributions of Jacob Rogers, Katie Francis, Rachel Stallman, Stella Chang, Benson Chao, Linnea Doan, Joe Sutherland, Patrick Johnson, Prateek Saxena, Jan Gerlach, and the Wikimedia Foundation’s Digital Media team. The print edition of the report is produced by Oscar Printing Company.

Tech News issue #21, 2019 (May 20, 2019)

00:00, Monday, 20 2019 May UTC

Today I sent a letter to U.S. Rep. Lois Frankel on behalf of OpenMeetings.org.  Two pages, professionally typeset, enclosed in custom-made envelopes bearing the original 19th century Caligraph ad as a security pattern, then sent via certified mail.  The letter itself is an in-district meeting request to discuss concerns over previously-undisclosed blanket government surveillance and how such widespread surveillance adversely affects open meetings; Rep. Frankel recently voted against the Amash amendment, which would have de-funded the domestic side of it.  (It’s important to note that in the U.S. all funding bills must originate in the House.)  The amendment failed 205 to 217, with only seven additional votes needed for passage.

A copy of the letter is available.  I’ll update this post with the response sent on behalf of her office; in the meantime, I’d like to open a discussion thread for thoughts related to this topic or items that should be discussed at the constituent meeting.  Please feel free to contact me directly at GChriss atAt openmeetings dotDot org or leave a comment below.

Known Issues: A Hacking Roadmap

20:00, Sunday, 19 2019 May UTC

Soo… welcome to the Announcements blog! I’m excited to introduce this blog as a venue that will provide project updates, context on OMwiki’s internal structure, and a discussion forum for other issues surrounding OpenMeetings.org.

To start, the following email summarizes many of the known issues with scalability.  If these issues can be properly addressed, I’m hoping that the amount of video indexed and delivered by OpenMeetings.org can be increased by a few orders of magnitude.

If you can help tackle these issues, or know somebody who can, please leave a comment either here or on my talk page.

Thanks!  -George

From:       “George Chriss” <GChriss -at- openmeetings.org>
Date:       Wed, July 21, 2010 6:12 pm
To:       “SFC Board” <board -at- freeculture.org>
Cc:       metavid-l -at- lists.wikimedia.org

Video publication is currently a very manual process; the production workflow definitely needs to be hacked.  The following is a list of things that would help, and that I need help with:

A) Recording
I’ve taken prosumer-grade cameras about as far as possible; currently, I’m using a Canon FS22 with 2×32GB flash.  More expensive cameras don’t really help: they become cost-prohibitive at scale, are less discreet, a pain to travel with, and don’t offer much advantage in visual quality at web resolutions.  The largest shortcomings of the FS22 are that it requires ‘modcopy‘ to fix 16:9 aspect ratios during file import, long recordings are split across multiple files, there is a cumulative as-recorded time drift versus real-world time, and the mic-in preamp often picks up line noise with XLR sources (impedance issues?).  Other than that, it’s a pretty good camera.

I submitted a CC Catalyst application to fund hacking of Elphel open-source, open-hardware cameras: bit.ly/bTLmQx
This will be a really fun project if funded!

The Elphel cameras could be set to request, at the start of the event, the event title, speaker names/affiliations, CC license, “who’s speaking right now?”, etc., as this information takes time to dig up after the fact.

There’s also a lot of work to be done developing reference documentation for properly-equipped meeting spaces.  I’ve started sketching out arrangements of in-room equipment (OMwiki:Gear), and more equipment documentation is on its way.

B) Editing
Cinelerra is a mess, but it’s the only viable way to edit video
professionally using all-FLOSS software.  The majority of editing time on non-XLR recordings is spent on sound cleanup, as was the case with FCX.  The remainder of the time is spent drafting graphic title slides (GIMP), scanning for sections that should be removed, and, if necessary, manually re-syncing audio from a secondary audio source (can Audacity do this?).

I’m looking forward to trying out VideoLAN Movie Creator or,
eventually, Lumiera, but I haven’t attempted either yet.  Blender might be an option if it supported piped YUV output, as is the case with Cinelerra — I don’t trust built-in encoders.

After a Theora video is rendered via the YUV4MPEG pipe, I merge-in audio (oggz-merge), create a Skeleton (oggz-chop), validate the file (oggz-validate), create a .torrent file (BT Mainline + WINE — could valid files be produced from the command line?), create an animated GIF (see below), then upload to the Internet Archive.  A script to automate this process shouldn’t be too hard to draft…
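
A first draft of such a script might simply build the command sequence described above.  This is only a sketch: the tool names (oggz-merge, oggz-validate) are real, but the exact flags shown are assumptions, and the Skeleton, torrent, and upload stages are left out — check each tool’s --help before running anything.

```python
from typing import List

def publication_steps(video: str, audio: str, out: str) -> List[List[str]]:
    """Return the merge/validate shell commands for a rendered Theora video,
    in the order they would run.  Flag usage is an assumption, not gospel."""
    return [
        ["oggz-merge", "-o", out, video, audio],  # merge the secondary audio track in
        ["oggz-validate", out],                   # sanity-check the resulting Ogg file
    ]
```

Each returned list can be handed to subprocess.run(cmd, check=True); keeping the commands as data first makes the pipeline easy to log and to dry-run.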

C) Internet Archive (IA)
In-page playback is busted, as is automated animated + static
thumbnail creation for files that are submitted as Ogg Theora. Both issues will need to be resolved by IA staff, but recommendations on the following items might be helpful:
-Edits of the .js file responsible for rendering the video element, especially in the absence of an H.264 derived file.
-ffmpeg recently changed the ‘-padtop’-style syntax for thumbnail generation; I haven’t figured out how to create thumbnails with the most-recent versions.
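
For what it’s worth, in current ffmpeg the old standalone padding options were folded into the pad video filter.  A hedged sketch of a one-frame thumbnail command (the filter expression is my assumption about what replaces ‘-padtop’, so verify against your ffmpeg version):

```python
def thumbnail_cmd(src: str, out: str, at: str = "00:00:05",
                  width: int = 160, height: int = 120) -> list:
    """Build an ffmpeg command that grabs one frame and letterboxes it to
    width x height; the pad filter stands in for the removed -padtop/-padbottom."""
    vf = "scale={w}:-1,pad={w}:{h}:0:(oh-ih)/2".format(w=width, h=height)
    return ["ffmpeg", "-ss", at, "-i", src, "-frames:v", "1", "-vf", vf, out]
```

The same pad expression should work for the animated-GIF case by swapping -frames:v for a -t duration.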

Additionally, there are a number of IA metadata fields that are
manually calculated and entered, such as a Unix timestamp for the date of the event, wgs84 geo-coordinates, user-generated md5sum hash (to check against incomplete uploads, file corruption, tampering), and a few other fields that could be integrated with the upload script and/or Elphel metadata.
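
Two of those fields are mechanical enough to fold into an upload script right away; a minimal sketch (function names are mine, not an existing IA tool):

```python
import calendar
import hashlib

def md5sum(path: str) -> str:
    """Hex MD5 of a file, read in chunks -- the same digest the md5sum CLI prints."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def event_timestamp(year: int, month: int, day: int,
                    hour: int = 0, minute: int = 0) -> int:
    """Unix timestamp for an event date/time given in UTC."""
    return calendar.timegm((year, month, day, hour, minute, 0, 0, 0, 0))
```

The geo-coordinates and the remaining fields would still need to come from Elphel metadata or manual entry.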

D) OpenMeetings.org
After publication is complete, I download a copy of the meeting from the IA to OpenMeetings.org, create a new page in the ‘Stream:’ namespace, then supply the URL of the just-downloaded video.  Then, I upload the animated thumbnail (MediaWiki), overwrite the MediaWiki-generated static thumbnail with the original animated thumbnail (SSH), and rotate-in the meeting as a Featured Meeting and add it to the Visual Finding Aid. I then enter the Unix timestamp into the appropriate MySQL field (phpMyAdmin) such that videos are searchable according to date, which is busted at the moment (see below).

I am embarrassed to say that I generate fully-specified Media RSS feeds by hand, and that I accidentally deleted some many-item feeds.

The MetaVidWiki extension needs to be rewritten for ‘Stream:’ asset management, compatibility with other platforms (e.g., Universal Subtitles), and integration with the latest Kaltura embedded player.  Right now, this means that some of the MetaVidWiki controls are busted, such as advanced search, automatic caption scrolling, and “jump to” hyperlinks.  On the plus side, in-browser video remixing is starting to come online.

Fixing Broken Functionality

20:00, Sunday, 19 May 2019 UTC

From approximately 5-October until 23-November, in-browser video playback in OMwiki was completely broken.  Visitors were greeted with a “Video playback is broken at the moment. :-/” top-posted message, which led to poor first impressions and undermined our overall credibility.  So, what happened and how can it be avoided in the future?


What happened?
All video currently hosted on OpenMeetings.org is encoded in the Ogg Theora format and is stored in the /archives subdirectory.  Currently, there is no content delivery network (CDN) in place, so all video originates from a single server.  This server utilizes oggz-chop, which is part of the oggz toolset — it’s one of several programs that manipulate Ogg files in helpful ways.  oggz-chop is a binary program that returns the very beginning of an Ogg Theora file followed immediately by the section of interest indicated by a query string appended to the URL request (/archives/video.ogv?query_string).  The query string contains the requested start/end time, and makes possible user-friendly jump-to-this-time URLs.  Thus, we achieve intelligent seeking.
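
Building such a jump-to-this-time URL is trivial; the sketch below assumes a ‘t=start/end’ query form modelled on temporal-URI conventions, since the server’s actual parameter name isn’t shown above:

```python
def chop_url(base, start, end=None):
    """Jump-to-this-time URL for an oggz-chop-backed video path.
    The 't=start/end' form is an assumption; check the CGI's real parameter."""
    query = "t={}".format(start) + ("/{}".format(end) if end is not None else "")
    return "{}?{}".format(base, query)
```

So chop_url("/archives/video.ogv", 60, 90) yields a link straight into the minute of interest.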

Because oggz-chop is a binary program, it is sensitive to changes in the host operating system.  Sometime in early October, our hosting provider migrated all shared hosting plans to a new compute cluster and operating system.  The migration was unannounced, as there is a general expectation that shared hosting plans will not run executable binaries; most customers would not have noticed a difference.  Compiling natively on both the old and new clusters is disabled for security.

How was it fixed?
The new operating system lacked the libogg.so shared library, a mandatory dependency.  Both libogg and oggz-chop were compiled on a binary-compatible machine and the resulting binaries were copied over.  This is a mostly-blind trial-and-error process.

I unsuccessfully attempted to create a single binary that contained both libogg and oggz-chop via static linking.  I found an alternative route by including a runtime search path (export CFLAGS='-Wl,-rpath=/home/openmeet') such that oggz-chop could locate libogg.so in a non-system directory.  Tweak some variable names in cgi.c, and voilà!

#define DOCUMENT_ROOT "/home/openmeet/public_html"
. . .
getenv ("PATH_INFO"); --change-to--> getenv ("REDIRECT_URL");

[Post truncated for brevity 11-June-2012]

We’re Joining the SOPA/PIPA Blackout

20:00, Sunday, 19 May 2019 UTC

OpenMeetings.org will join other organizations including the Internet Archive, Mozilla Foundation, and the Wikimedia Foundation in suspending core services for all EST hours on January 18th, 2012 to protest the Stop Online Piracy Act (“SOPA”) and the PROTECT IP Act (“PIPA”), two bills under active consideration in the U.S. Congress.  During this time our main page will redirect to the Electronic Frontier Foundation’s home page, which provides relevant background information on the issue and helpful call-to-action instructions.

However, we will have a real-life presence at an Emergency NY Tech Meetup outside the Manhattan offices of U.S. Senators Schumer & Gillibrand.  Watch for User:GChriss, probably with a video camera in hand.

Our participation in the blackout is for the same reasons outlined by Kat Walsh, a Trustee of the Wikimedia Foundation:

“We depend on a legal infrastructure that makes it possible for us to operate.  And we depend on a legal infrastructure that also allows other sites to host user-contributed material, both information and expression.  For the most part, Wikimedia projects are organizing and summarizing and collecting the world’s knowledge.  We’re putting it in context, and showing people how to make sense of it.

But that knowledge has to be published somewhere for anyone to find and use it.  Where it can be censored without due process, it hurts the speaker, the public, and Wikimedia.  Where you can only speak if you have sufficient resources to fight legal challenges, or if your views are pre-approved by someone who does, the same narrow set of ideas already popular will continue to be all anyone has meaningful access to.”

Many of our indexed meetings contain elements of “fair use” – a short audio segment, an illustrative graphic, or video clips embedded within presentations and discussions.  Inclusion of these elements as commentary/criticism on underlying ideas is a classic example of the “fair use” provisions in the copyright act.  If SOPA and PIPA became law – a very real possibility – we would become vulnerable to politically-motivated takedowns – or even attacks on our existence – over material deemed politically inconvenient.

Finally, we’re a small project that very much welcomes donations to keep us running.

#Scholia: on the "requirement" of completeness

07:41, Sunday, 19 May 2019 UTC
Scholia, the presentation of scholarly information on authors, papers, universities, awards et al., is at this time not included in the "Authority control" part of a Wikipedia article. The reason, I understand, is that Wikipedians "that matter" insist that its information be complete.

That is imho utter balderdash.

The first argument is the Wiki principle itself. Things do not need to be complete; in the Wiki world it is all about the work that is underway. The second is in the information that it provides: its information is arguably superior to what a Wikipedia article provides on the corpus of papers written by an author. The third is that, with the prospect of all references of all Wikipedias ending up in Wikidata, value is added when a paper can be seen in relation to its authors and citations. It matters when it is known what citations a paper is said to support. It matters that we know which papers are retracted. The fourth argument is in the maths of it all: typically scientific papers have multiple authors, and it takes only one author with an ORCiD identifier for a paper to be included. The other authors have not been open about their work; it is their own doing that they are not known in the most-read corpus on the planet. They still exist, but as "author strings". When a kind soul wants to remove them from obscurity, they can.

As to the "Katie Bouman"s among them? There are many fine people who are equally deserving but have not yet been recognised for their relevance. Fine people who have a public ORCiD record. For them it is feasible to have their Scholia ready when they are recognised. For the others, well, it is not a Pokémon game, it is a Wiki.

weeklyOSM 460

12:44, Saturday, 18 May 2019 UTC



openSenseMap a platform for open sensor data 1 | © senseBox Institute for Geoinformatics, Münster, Germany; Map data © OpenStreetMap contributors, ODbL


  • Osmose now uses the MapCSS validation rules of the OpenRailwayMap project to validate railway infrastructure.
  • Dr Erin Ryan found a number of gaps and errors in OpenStreetMap data around Charlottesville, Virginia when trying to geocode arrest data. Various types of errors were noted: apparent multiple locations for a prison and the complete absence of the suffix “Extended” on some roads (e.g. Avon Street).
  • Christopher Beddow, from Mapillary, has been travelling and recently he was lucky to have good weather whilst capturing some splendid scenery in Scotland.
  • A golf simulator using OSM data leads to very detailed mapping of golf courses.
  • Nick Bolten wants to change the way we tag pedestrian crossings as he thinks that crossing=* has problems in terms of orthogonality, understandability and semantic correctness. With his proposal for crossing=marked he is disturbing a wasps’ nest: the tag has already been implemented in iD but the usage is disputed.
  • Quincy Morgan, one of the maintainers of the iD editor, introduced a new tag nonsquare=yes. This is meant for buildings with some, or all, corners not being a right angle. The idea is to avoid such buildings showing up as validation issues. He ignored requests to discuss such a change with the OSM community. This is criticised by multiple users on the Talk mailing list.
  • TBKMrt wants to extend the range of values for the key toll= beyond yes/no, which currently account for 98.98% of the values. He suggests toll=<COUNTRY CODE>:<TYPE> with <TYPE> being one of fee, vignette, meter or no. The key toll:type= could be used with distance or time.


  • dktue, a German OSM mapper, asked (automatic translation) on Talk-AT why postal codes in Austria are not recorded nationwide, as they are in Germany. In the course of the discussion it turned out that he means postal code multipolygons, which cannot be recorded easily due to the lack of data in Austria.
  • Well-known Russian mapper Ilya Zverev started a podcast about GIS-technology “Mapokalypse” (in Russian) with Sergei Golubev and Maxim Dubinin (CEO of NextGIS). Together they discuss one specific topic connected to GIS in each episode. The last podcast “OpenStreetMap kills” (automatic translation) was about the negative impact of OSM.
  • alexkemp analysed the spam flood that has reached OSM diaries since the end of April with up to 30,000 spam-posts daily. In his blog post he details his suggested “no-edit, no-diary” rule to stop the spam.

OpenStreetMap Foundation

  • The German local chapter FOSSGIS received funding requests for an OpenLayers code sprint (de) (automatic translation) and a replacement for the Overpass development server (de) (automatic translation). The OpenLayers developers want to spend five days to make the open source Javascript library faster, easier to use and more error-free. The requested Overpass server will replace the existing, five year old development machine.
  • The minutes of the Licence Working Group meeting on 11 April were published.
  • The OSMF Board will meet physically in Brussels later in May and asks for topics and issues that the community thinks should be considered. If you have a topic, you are asked to fill out the survey.
  • The Open Source Initiative approached the OSMF last year and asked whether OSMF wants to become an affiliate and follow organisations like the Linux Foundation, the Python Software Foundation, Wikimedia Foundation and many more. The OSMF board decided in favour and the application has been accepted.


  • The website of FOSS4G Hokkaido 2019 has been opened (automatic translation). The meeting will be held in Sapporo on July 12 and 13. Applications for presentation and workshop subjects will be accepted until May 24.

Humanitarian OSM

  • Mapillary and HOT are collaborating to speed up map data collection in undermapped regions with the launch of the #map2020 campaign. #map2020 asks local mappers to submit street level imagery using projects that collect data for humanitarian purposes. The two winning projects will be invited to the HOT Summit in Heidelberg, Germany in September this year.


  • Bexhill-OSM, a detailed map of Bexhill-on-Sea, England, added a feature to view 1300 photos of notable buildings, including 360 panoramas, in full screen. A short guide is included in this tweet.
  • The public Swiss geoportal made an OSM-based vector tile map available. As Boris Mericskay points out (fr) (automatic translation) in his tweet, the test map style can be customised to a certain extent.
  • The Russian OSM website openstreetmap.ru has recently updated POI and address information.
  • Dmitriy Konoshonkin created a guide around Krasnoyarsk (Russia). As a base map he used OpenStreetMap. (ru)


  • OpenStreetMap US points to an Associated Press article on how data from U.S. Census Bureau, Cal Fire and OpenStreetMap were combined to evaluate fire-related evacuation routes.

Open Data

  • CycleStreets wrote a blog post about the proposed release of information on 240,000 cycling infrastructure assets in Greater London by Transport for London as open data. The blog article points to a newly created demo map for selected areas to allow OSM mappers to evaluate the data. Please note that we are not yet allowed to use the data for OSM purposes.


  • CleanTechnica, a cleantech-focused website, promotes A Better Route Planner (ABRP), a router dedicated to electric vehicles, and OpenStreetMap, on which ABRP is based. The website explains why OSM matters and how users can start editing to improve electric-based mobility.
  • geohacker from Development Seed introduces the offline-first field mapping tool called “Observe”. The comprehensive blog post covers the motivation, a short user guide, future plans and asks for feedback.


  • Leaflet version 1.5.1 has just been announced. If you had trouble with the module export regression, whose fix is the only changelog entry, then you should update.
  • Westnordost has released version 12 of StreetComplete. The new version added the new quest “What is the name of this place?” and comes with some minor improvements.
  • Many new tests have recently been added to the Jungle Bus validators for JOSM (which improve the quality of public transport data in OSM):
    • on the new tags interval, opening_hours and interval:conditional
    • on line colours
    • on the tag differences between the route_master relationship and its route relationships
    • on the walking bus lines
    • on the geometry of bus stops and bus stations
    • etc.

    These validators can be enabled in Preferences > Data Validator > Attribute Checker Rules tab.
    Your help is welcome to translate these new tests.

  • The OsmAnd team promised to bring the iOS version of its navigation app up to the same level as on Android. Version 2.70 has recently been released. Their blog post details the new features. However, there is still some way to go as the Android version was upgraded to 3.30 in March 2019.

Did you know …

  • … the lifecycle prefixes, with which you can describe the current status of an object? The prefixes are mainly used to distinguish objects and facilities that are planned, under construction, closed, expired, removed (or relocated) or destroyed from currently existing objects, so that they can still be found in the OSM database.
  • … how to tag a door that opens automatically? The key for this is automatic_door. This can be important information for handicapped people who may need to know if they can use a door without help.
  • … this video clip (pt) from the Portuguese channel TVI? It was pointed out to us by a reader and documents the usage of OSM at the public Civil Protection authority.
  • … openSenseMap, the map with open sensor data? The senseBox offers students and interested citizens an OSM platform to collect and publish measurement data.

OSM in the media

  • In Russia an interview (ru) (automatic translation) with Russian cartographer Nikita came out on the radio station “Echo of Moscow” on the programme “Inside”. In his interview Nikita often mentions OpenStreetMap. For example, he thinks that the basics of OSM should be taught in schools in geography lessons.
  • OpenStreetMap was featured (ja) in the latest issue of Japanese GIS magazine 地図中心 (Map Central). The contents were about crisis mapping, mapping parties, ODbL, and so on. Of course community members contributed those articles. (automatic translation)

Other “geo” things

  • Timofey Samsonov et al. describe in a paper (PDF) the automated addition of intermediate contour lines. Software for the generation of main and index contour lines has been available for about 50 years. However, intermediate contour lines are only drawn where there is space and it makes sense for the terrain to be shown. An ArcGIS version can be found on GitHub.
  • The Linux Foundation, backed by a number of large companies, has established the Urban Computing Foundation, which is dedicated to open source software with a focus on mobility.
  • ZDNet reports on a project started by the UK’s national mapping agency, the Ordnance Survey, that aims to create a highly precise real-time map for 5G planning and autonomous driving. It is intended to collect the data by processing imagery taken by the camera-equipped cars of utility companies.

Upcoming Events

Where What When Country
Prague Wikimedia Hackathon 2019 2019-05-17-2019-05-19 Czech Republic
Santa Cruz Santa Cruz Ca. Mapping Party 2019-05-18 California
Reading Reading Missing Maps Mapathon 2019-05-21 England
Cologne Bonn Airport Bonner Stammtisch 2019-05-21 Germany
Derby Derby pub meetup 2019-05-21 England
Lüneburg Lüneburger Mappertreffen 2019-05-21 Germany
Viersen OSM Stammtisch Viersen 2019-05-21 Germany
Cambridge Missing Maps Mapathon 2019-05-22 United Kingdom
Lübeck Lübecker Mappertreffen 2019-05-23 Germany
Montrouge Rencontre mensuelle de Montrouge et alentours 2019-05-23 France
Vienna 62. Wiener Stammtisch 2019-05-23 Austria
Greater Vancouver area Metrotown mappy Hour 2019-05-24 Canada
Strasbourg Rencontre périodique de Strasbourg 2019-05-25 France
Bremen Bremer Mappertreffen 2019-05-27 Germany
Rome Incontro mensile 2019-05-27 Italy
Salt Lake City SLC Map Night 2019-05-28 United States
Mannheim Mannheimer Mapathons 2019-05-28 Germany
Zurich Missing Maps Mapathon Zurich 2019-05-29 Switzerland
Saarbrücken Mapathon OpenSaar/Ärzte ohne Grenzen/EuYoutH_OSM/Libre_Graphics_Meeting_2019 2019-05-29 Germany
Montpellier Réunion mensuelle 2019-05-29 France
Düsseldorf Stammtisch 2019-05-29 Germany
Bratislava Missing Maps mapathon Bratislava #6 at Faculty of Civil Engineering, Slovak University of Technology in Bratislava 2019-05-30 Slovakia
Joué-lès-Tours Stand OSM sur la fête du vélo 2019-06-01 France
Taipei OSM x Wikidata #5 2019-06-03 Taiwan
London Missing Maps Mapathon 2019-06-04 United Kingdom
Essen Mappertreffen 2019-06-05 Germany
Toulouse Rencontre mensuelle 2019-06-05 France
Stuttgart Stuttgarter Stammtisch 2019-06-05 Germany
Bochum Mappertreffen 2019-06-06 Germany
Mannheim Mannheimer Mapathons 2019-06-06 Germany
Nantes Réunion mensuelle 2019-06-06 France
Dresden Stammtisch Dresden 2019-06-06 Germany
Reutti Stammtisch Ulmer Alb 2019-06-06 Germany
Dortmund Mappertreffen 2019-06-07 Germany
Montpellier State of the Map France 2019 2019-06-14-2019-06-16 France
Angra do Heroísmo Erasmus+ EuYoutH_OSM Meeting 2019-06-24-2019-06-29 Portugal
Minneapolis State of the Map US 2019 2019-09-06-2019-09-08 United States
Edinburgh FOSS4GUK 2019 2019-09-18-2019-09-21 United Kingdom
Heidelberg Erasmus+ EuYoutH_OSM Meeting 2019-09-18-2019-09-23 Germany
Heidelberg HOT Summit 2019 2019-09-19-2019-09-20 Germany
Heidelberg State of the Map 2019 (international conference) 2019-09-21-2019-09-23 Germany
Grand-Bassam State of the Map Africa 2019 2019-11-22-2019-11-24 Ivory Coast

Note: If you would like to see your event here, please put it into the calendar. Only data which is there will appear in weeklyOSM. Please check your event in our public calendar preview and correct it where appropriate.

This weeklyOSM was produced by Nakaner, Rogehm, SK53, Silka123, SunCobalt, TheSwavu, derFred, geologist.

The Wikimedia Foundation has determined that Wikipedia is no longer accessible in the People’s Republic of China—impacting more than 1.3 billion readers, students, professionals, researchers, and more who can no longer access this resource or share their knowledge and achievements with the world. We have not received notice or any indication as to why this current block is occurring and why now.

Based on our internal traffic reports, Wikipedia is currently blocked across all language versions. Most other Wikimedia projects, such as Wikimedia Commons, a site that holds 53 million freely licensed media files, are still available. Wikipedia has been blocked intermittently in China dating back to 2004. Most recently, Chinese language Wikipedia has been blocked in China since June 2015, while most other language Wikipedias remained available until this action.

Open access to knowledge is a fundamental human right. Website blocks such as the one in China and the one imposed on Wikipedia for over two years in Turkey are violations of that right, and hinder the world’s ability to collect and share knowledge. When one country, region, or culture cannot join the global conversation on Wikipedia, the entire world is poorer.

We are committed to allowing everyone, everywhere to freely access, share, and participate in knowledge on Wikipedia. We regret the Chinese authorities’ decision to further limit internet freedom and urge them to restore access to Wikipedia.

Small class, big footprint

18:04, Thursday, 16 May 2019 UTC

Dr. Michael Rushing is an Associate Professor of Piano in the Department of Music at Mississippi College and taught a Wikipedia writing assignment for the first time last fall. Here, he describes how his students responded to putting their hard work out on a world stage.

Dr. Michael Rushing.
(Image via Wikimedia Commons, CC BY-SA 4.0)

In the Fall of 2018, two students in a graduate Group Piano Pedagogy course were given the assignment of creating a new Wikipedia page for Group Piano. Though thousands of students study piano in a group setting, and textbooks have been written for use in advanced piano pedagogy courses on the topic of group teaching, no Wikipedia page was dedicated to the topic.

The Wiki Education Dashboard provided students with the training necessary to contribute to Wikipedia in a meaningful way. Importantly, the timeline and content of the training was designed in such a way that integrating the project into the class was easy. Students took the project more seriously than they might have if the result was to be read by their instructor and returned.

Within two hours of moving the article from the “sandbox” into Wikipedia’s public main space, a Wikipedia volunteer interested in new articles removed several paragraphs and heavily edited the students’ work. While my response was to thank the volunteer, the students were less enthusiastic. In their time as students, they typically received feedback from instructors and made revisions themselves. I reminded them that they no longer had ownership of the article, and that Wikipedia articles are never considered complete.

Semester-long projects like this one are often used in piano pedagogy courses. They provide a means of familiarizing students with course content, testing students’ abilities to synthesize that content, and providing real-world experience. I’ve often used these kinds of projects in graduate classes, with the goal of students creating a resource they can use long after the class is finished. By adding one extra step – making the result publicly available online – students not only demonstrate competence in synthesizing the content of the course and providing a resource for themselves, they provide resources for anyone interested in learning about the topic. Through these types of outward-facing projects, they develop a sense of community and increased engagement with the content of the class, learning to become contributors in the truest sense.

Interested in incorporating a Wikipedia assignment into your course? Visit teach.wikiedu.org to access our free tools, assignment templates, and systems of support.

Header image by Robby Followell via Wikimedia Commons, CC BY-SA 3.0.

Papers on Rust

09:06, Thursday, 16 May 2019 UTC

I have written about my attempt with Rust and MediaWiki before. This post is an update on my progress.

I started out writing a MediaWiki API crate to be able to talk to MediaWiki installations from Rust. I was then pointed to a wikibase crate by Tobias Schönberg and others, to which I subsequently contributed some code, including improvements to the existing codebase, but also an “entity cache” struct (to retrieve and manage larger amounts of entities from Wikibase/Wikidata), as well as an “entity diff” struct.

The latter is something I had started in PHP before, but never really finished. The idea is that, when creating or updating an entity, instead of painstakingly testing if each statement/label/etc. exists, one simply creates a new, blank item, fills it with all the data that should be in there, and then generates a “diff” to a blank (for creating) or existing (for updating) entity. That diff can then be passed to the wbeditentity API action. The diff generation can be fine-tuned, e.g. only add English labels, or add/update (but not remove) P31 statements.
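
The diff idea can be illustrated with a few lines of Python (this is a sketch of the concept only, not the Rust crate’s actual API; the function name and flat key/value shape are mine):

```python
def entity_diff(current, desired):
    """Minimal diff between two flat key->value maps: what to add or replace,
    and what to remove, to turn `current` into `desired`.
    Creating an entity is the special case where `current` is empty."""
    add = {k: v for k, v in desired.items() if current.get(k) != v}
    remove = {k: current[k] for k in current if k not in desired}
    return {"add": add, "remove": remove}
```

The fine-tuning described above corresponds to filtering this result, e.g. dropping the "remove" half so statements are only ever added, never deleted.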

Armed with these two crates, I went to re-create a functionality that I had written in PHP before: creation and updating of items for scientific publications, mainly used in my SourceMD tool. The code underlying that tool has grown over the years, meaning it’s a mess, and has also developed some idiosyncrasies that lead to unfortunate edits.

For a rewrite in Rust, I also wanted to make the code more modular, especially regarding the data sources. I did find a crate to query CrossRef, but not much else. So I wrote new crates to query pubmed, ORCID, and Semantic Scholar. All these crates are completely independent of MediaWiki/Wikibase; they can be re-used in any kind of Rust code related to scientific publications. I consider them a sound investment into the Rust crate ecosystem.

With these crates in a basic but usable state, I went on to write “papers”, Rust code (not a crate just yet) to gather data from the above sources and inject it into Wikidata. I wrote a Rust trait to represent a generic source, and then wrote adapter structs for each of the sources. Finally, I added some wrapper code to take a list of adapters, query them about a paper, and update Wikidata accordingly. It can already

  • iteratively gather IDs (supply a PubMed ID, PubMed might get a DOI, which then can get you data from ORCID)
  • find item(s) for these IDs on Wikidata
  • gather information about the authors of the paper
  • find items for these authors on Wikidata
  • create new author items, or update existing ones, on Wikidata
  • create new paper items, or update existing ones, on Wikidata (no author statement updates yet)

The adapter trait is designed both to unify data across sources (e.g. use standardized author information) and to allow updating paper items with source-specific data (e.g. publication dates, MeSH terms). This system is open to adding more adapters for different sources. It is also flexible enough to extend to other, similar “publication types”, such as books, or maybe even artworks. My test example shows how easy it is to use in other code; indeed, I am already using it in Rust code I developed for my work (publication in progress).

I see all of this as a seeding of the Rust crate system with easily reusable, MediaWiki-related code. I will add more such code in the future, and hope this will help in the adoption of Rust in the MediaWiki programmer community. Watch this space.

Update: Now available as a crate!

Education in Wales and Wikipedia

17:47, Wednesday, 15 May 2019 UTC
Robin Owain and Aaron Morris at Llangefni, Wales. March 2017 – image by Llywelyn2000 CC BY-SA 4.0

By Robin Owain, Wikimedia UK Wales Manager

Wales has always had more than its fair share of ministers of religion, farmers… and educators!

During the Medieval period, the training of a ‘Prifardd’ (a registered Chief Poet) took ten years, but the education of ordinary folk was by word of mouth. Around 1402, the last Welsh Prince of Wales, Owain Glyndwr, suggested founding a National University of Wales. From the 17th century onwards, education became linked to religion, as in many other nations. In 1674, the Welsh Trust (no article on en-wiki!) was formed in order to establish secondary schools throughout Wales, and by 1681 there were 300 schools. This work was taken over by the S.P.C.K. (Society for the Promotion of Christian Knowledge) early in the 18th century. The S.P.C.K. was formed in 1699 in London by a Welshman, Dr Thomas Bray, and three of its five main drivers were from Wales.

Many schools in Wales were also set up by charities so that the ordinary working-class person was given the opportunity to learn to read and discuss the Bible. Most teachers were curates, of which Griffith Jones, who taught 158,000 children to read in Welsh, must be one of the most famous today. By 1755 Wales had 3,495 schools, nearly all teaching through the medium of Welsh, as only around 5% of adults at that time could speak English.

At this time, Wales was one of the most literate countries in the world. The importance of education to the ordinary Welsh person during this time cannot be overstressed. In England at this time, most ordinary folk were not given the chance to read and write, whilst private school education flourished. Only a handful of private schools have existed in Wales, and as a result of this, today we can read what was written by the working-class people of Wales. For example, diaries kept by farmers are today being digitised and studied as they provide a comprehensive record of the weather at that time, information which is very relevant to those studying global warming.

During the 18th Century, the Welsh aspect within schools was lost: the English language was forced down the throats of children and corporal punishment became a daily routine for those who dared speak their mother tongue. My grandmother, who died 6 years ago, remembered it well! This inhumane practice was also used in Ireland, the Basque Country and other countries.

Today, both Welsh and English are compulsory subjects for all children up to 16 years old. The Welsh Government aims to double the number of Welsh speakers by 2050, and ‘Wicipedia’ and ‘Wicidata’ are mentioned several times in their 2050 development plan.

Wikipedia in Wales

In that context, let’s turn to a few Wikipedia milestones in Wales.

Having been appointed as Wales Manager for Wikimedia UK in July 2013, one of my first tasks was to co-organise the Eduwiki Conference at Cardiff. I invited Gareth Morlais, Digital Media Specialist at the Welsh Government, to open the conference and he spoke about the difficulty of getting minority languages recognised by internet giants such as Google. Gareth delivered his presentation in Welsh with live translation through headsets. The conference, and Gareth’s input, not only placed Wales on a global stage but also laid the foundation for the following work.

I approached the Coleg Cymraeg (the Federal University of Wales), which agreed to appoint a Wikimedian in Residence – the first full-time WiR working in a university, worldwide. Mark Haynes was appointed in March 2014 and advised the Coleg on Creative Commons licences, which resulted in policy change. Since then, most of the academic work which goes on the Coleg Portal is on an open (CC-BY-SA) licence. This was a major breakthrough, and even today I have yet to find a university which has opened its doors quite as wide.

The outcome of this is that when academic work is published, we can use it word for word on Wikipedia.

Edit-a-thon at Swansea University; 28 January 2015 – image by Llywelyn2000 CC BY-SA 4.0

On 28 January 2015, I organised the first ever Wikipedia Edit-a-thon in Wales at Swansea Library, and this was immediately followed by a Swansea University Edit-a-thon titled Women and Justice 1100-1750 in collaboration with Prof Deborah Youngs and Dr Sparky Booker from the Department of History and Classics. More on Swansea, later!

There are 22 ‘Language Ventures’ in Wales, one of which is Menter Iaith Mon (translation: ‘Anglesey language venture’). After a number of meetings with Menter Iaith Mon, in the summer of 2016 they appointed a full-time Wikipedian in Residence, Aaron, with funding from the Welsh Government. Aaron focused on training Wikipedia editing skills and, having been employed in the secondary education sector for a few years, began training pupils on the island of Anglesey.

Menter Iaith Mon and I wrote an application to the main examination board of Wales (WJEC) to formalise the training of wiki-skills as part of the Welsh Baccalaureate, as one of the ‘Community Challenges’. This was successful, and since September 2018 Aaron has worked with six schools in Anglesey and Gwynedd. In the next few years, we will encourage other language ventures and schools to follow suit.

Aaron’s work also dovetails with the Digital Competence Framework as well as an input into the GCSE curriculum: more on this in the next few months.

In 2016, Swansea University Senior Lecturer in Law, Dr Pedro Telles, and Richard Leonard-Davies began using the Wikimedia Dashboard; the project is currently in its 3rd year. Post-graduate students are drafting Wikipedia articles as part of their assessment.

In December 2018, the “Companion to the Music of Wales” was published by Coleg Cymraeg (Federal University) and Bangor University. This is an encyclopedia that covers the history of music in Wales with over 500 articles ranging from early music to contemporary music, from folk singers to orchestras. As a direct result of our WiR at the Coleg, all text is on an open licence and we shall shortly be transferring it to Wikipedia. It is an authoritative encyclopedia and is the result of a collaborative project between the School of Music and Media at Bangor University and the Coleg Cymraeg. Editors: Wyn Thomas, Pro Vice-Chancellor, Bangor University and Dr Pwyll ap Siôn, Professor in Music.

Wicipop Editathon at Bangor University, 2017 – image by Llywelyn2000 CC BY-SA 4.0

Three weeks ago, I delivered a presentation at the first International Eduwiki at Donostia, Basque Country where some of these milestones were shared. I was glad to see that we are not alone in the work we are doing here in Wales. The Basque Country excels in wiki-education work at all levels. It was inspiring to see such wonderful work in many other languages in the field of open education in secondary and tertiary sectors.

The context and groundwork are solid. It is now time to build on this foundation. The Welsh Government are committed to this work in partnership with Wikimedia UK and others.

A new project will start shortly: a pilot on how we can make it easier for children and young people to access Wicipedia Cymraeg.

Watch this space!

It’s the last week to register for our upcoming online course in collaboration with the National Archives! Join a network of scholars who want to hone career skills while making Wikipedia more representative of all history.

What will I learn in the course?

Our course will help you achieve the key career diversity skills that academics with PhDs said they didn’t learn in grad school, but that have been vital to their success beyond the academy: communication, collaboration, intellectual self-confidence, and digital literacy. (Read more about that in another blog post.) These skills are relevant across all careers, not exclusively in academia, so consider taking the course regardless of your industry.

Dr. Rachel Boyle, an alum, remarked that learning how to write Wikipedia articles was an inspiring writing exercise that allowed her to build upon the existing skills she has as a public historian: “Contributing [content] to Wikipedia requires a different mindset than academic writing,” she wrote. She found the process to be “valuable inspiration for any kind of writing.”

Wikipedia is also a powerful tool for public scholarship. For those interested in making sure the public understands key concepts in their field, we offer a chance to learn how to leverage the site for that communication. “Contributing [content] to Wikipedia directly responds to the public’s existing digital habits and browsing patterns,” said Dr. Boyle. This course allows you to share your unique knowledge as widely as possible. After all, the mission of Wikipedia is to capture “the sum of all human knowledge” – let us help you achieve that goal by joining an ecosystem of people passionate about open knowledge.

What is the experience like?

This cohort of Wiki Scholars will meet online for an hour a week over 12 weeks. Learn from our Wikipedia Experts on staff about how Wikipedia works and how you can help close the site’s gender content gap. Collaborate with other newcomers, learn from them, and build your intellectual self-confidence by becoming a part of an online community of Wikipedians.

“I’ve had many advantages in my life, but not even all of that education and privilege has always let me see myself as having authority,” wrote Adjunct Assistant Professor Dr. Erin Siodmak in a post-course reflection. “I owe a part of my new feeling of authority to this course. … It’s great to have ‘experts’ like academics and scholars work to improve Wikipedia, but given that Wikipedia is a tool for spreading knowledge (and given a feminist perspective on epistemology), wouldn’t it be great for everyone to see themselves as experts?”

Many in our course have tried making a difference on Wikipedia before, like freelance writer Eilene Lyon, and found that the barriers to entry were great. In another reflective blog post, Eilene explained how the regular communication with both our Wikipedia Experts on staff and with the other newcomers in the course made for a fulfilling and encouraging learning environment. For Eilene, the course opportunity combined her passion for closing the gender gap in public knowledge and her interest in Wikipedia editing: “I jumped at the chance to learn how to be a Wikipedian while improving women’s suffrage articles.”

How do I register?

Register for this one-of-a-kind professional development opportunity before May 17th. See the course landing page for more information and to read testimonials. Or head straight to our registration page for course details:

Image: File:Woman Suffrage Procession 1913 opening.jpg, public domain, via Wikimedia Commons.

Nova-network is gone!

10:39, Tuesday, 14 May 2019 UTC

A couple of weeks ago we finally moved the last lingering VMs in our OpenStack platform from the nova-network region to the Neutron region (Blog Post: Neutron is here!). The bulk of the work had been done a month earlier, so the final nails in nova-network's coffin felt a bit anticlimactic -- nevertheless, this is a big step that represents a huge amount of work on the part of both staff and volunteers.

VPS and Toolforge users have been extraordinarily patient with all of the downtime, deprecations, and hand-tuning that this change required. The lack of grumbles did not go unnoticed -- it's great to work with such a supportive group of users. Thank you!

Our reliance on the long-obsolete nova-network service was blocking many, many things. Now that it's gone, we can...

  • Officially discontinue support for Ubuntu Trusty throughout the WMF's infrastructure. (some of the last Trusty servers were supporting nova-network)
  • Shut down and decommission half a dozen long-obsolete servers.
  • Rip out hundreds of lines of special-case puppet code that supported the old nova-network OpenStack region.
  • Upgrade our OpenStack Nova services past version Mitaka. (Mitaka was the last release that supported nova-network; there have been SIX upstream releases since Mitaka)
  • Upgrade Horizon.wikimedia.org to run newer versions with fancier panels. (Previously we were stuck running the last version that supported nova-network)
  • Switch to a less buggy Designate/DNS model that should result in fewer VM creation failures and fewer DNS record leaks.
  • Possibly spend some time on other WMCS features that aren't 100% dedicated to repaying technical debt.

Most of those changes will be invisible to the end-users, but will make for much happier operations staff!

Migrating data to BlueSpice MediaWiki

10:19, Tuesday, 14 May 2019 UTC

In short interviews with our employees, we shed light on aspects and questions of our daily work with BlueSpice and our customers. This article deals with the topic of data migration: an interview with Robert Vogel, Team Lead Product and Software Development at Hallo Welt! GmbH.

Robert, what is migration or data migration?

Migration basically describes the transfer of data from one software system to another, in our case from an existing system into BlueSpice MediaWiki. This transfer is semi-automatic and script-based, which means that the human factor always plays a certain role despite all the automation.

So data migration isn’t that easy?

As usual, it is a question of solid planning. On the one hand, migration projects are about structural migration. This is the more complex part, because different software systems usually have different storage logics. On the other hand, we face the issue of content migration: the conversion of content from the source format into the target format, “WikiText” – the format used to display and format content optimally in the wiki.
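
As a toy illustration of what such a content conversion involves, the sketch below maps a few common HTML constructs to their WikiText equivalents. This is a hypothetical, heavily simplified converter for illustration only, not Hallo Welt!'s actual migration tooling, which handles far more cases (tables, images, nested lists, attachments, and so on):

```python
import re

# Minimal, illustrative HTML-to-WikiText mapping rules.
# Each pair is (pattern for an HTML construct, WikiText replacement).
RULES = [
    (re.compile(r"<h2>(.*?)</h2>"), r"== \1 =="),                      # level-2 heading
    (re.compile(r"<h3>(.*?)</h3>"), r"=== \1 ==="),                    # level-3 heading
    (re.compile(r"<(?:b|strong)>(.*?)</(?:b|strong)>"), r"'''\1'''"),  # bold
    (re.compile(r"<(?:i|em)>(.*?)</(?:i|em)>"), r"''\1''"),            # italic
    (re.compile(r'<a href="([^"]+)">(.*?)</a>'), r"[\1 \2]"),          # external link
]

def html_to_wikitext(html: str) -> str:
    """Apply each rule in turn to convert simple HTML into WikiText."""
    for pattern, replacement in RULES:
        html = pattern.sub(replacement, html)
    return html

print(html_to_wikitext("<h2>Migration</h2> <b>bold</b> and <i>italic</i>"))
# -> == Migration == '''bold''' and ''italic''
```

Real migration scripts work along the same principle of rule-based transformation, which is exactly why homogeneous source data matters: every structural variant in the source needs its own rule.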

Further information on the WikiText format can be found here:


Why are data migrated at all? What’s the reason behind it?

The trigger for a migration project in our company is usually the decommissioning of an existing software system and the customer’s switch to BlueSpice MediaWiki. Migration projects make sense when a large amount of content or a large number of documents has to be transferred to BlueSpice MediaWiki; in such cases, the effort of manual data transfer to the new system would be considerable, so we propose a migration project to the customer. Because migration projects involve programming effort on our part, the cost-effectiveness of such a project is clarified in advance. Since data can also be transferred manually, the basic rule is that the migration project must be more economical for the customer than manual data transfer. The cost-benefit ratio must be right. If it is, depending on the size of the migration project, there are considerable time and cost savings for the customer.

What exactly is migrated in a migration project?

On the one hand, there is content from editorial or CMS systems such as WordPress, Lotus Notes or Typo3. Often, however, the content is provided in classic document formats like MS Word or PDF. If the customer wishes to transfer content from an existing wiki system such as MediaWiki, Confluence or TWiki, that is no problem either. Almost everything is feasible.

What should be particularly taken into account when migrating data?

The homogeneity of the customer’s data source is especially important. This means that the documents or contents should be created according to the same logic. If they are not, our automated scripts do not work and many small things have to be fixed manually. To sum up: the more heterogeneous the data, the more complex the migration. It therefore sometimes makes sense to import the homogeneous parts of the data stock via a migration script and to transfer the rest manually. The ideal solution is very customer-specific.

And how exactly does a migration project work for us?

We work along a clearly defined process that has already proven itself in many migration projects. First, within the framework of a feasibility analysis, a fundamental check is made as to whether a migration of the existing data stock makes any technical sense at all (keyword: structural migration and data homogeneity).

If it is feasible, the details are determined in a specification workshop (via screen sharing or on site at the customer). This includes, for example, renaming documents and files, cleaning out obsolete data, and assigning content to wiki categories, namespaces (areas delimited by access rights), pages, sub-pages or wiki instances. The whole thing is an iterative process with several feedback rounds. Once these tasks have been completed, a multi-stage shutdown and handover process follows, with the goal of minimising the time during which the customer cannot work on the documents. At the end comes the activation of BlueSpice and the deactivation of the old system – and a satisfied customer who can now manage their content easily and comfortably in their company wiki: BlueSpice MediaWiki.

Let’s Wiki together!

More information about our migration services can be found here:

Test BlueSpice pro now for 30 days free of charge and without obligation:

Visit our webinar and get to know BlueSpice:

Contact us:
Telephone: +49 (0) 941 660 800
E-Mail: sales@bluespice.com

Author: David Schweiger

The post Migrating data to BlueSpice MediaWiki appeared first on BlueSpice Blog.

Institutions know that Wikipedia is educating the world about the topics they care most about. Many want to understand how they can get involved in curating that information so it paints the most up-to-date and accurate picture of these topics. Thais Morata, a research audiologist at the National Institute for Occupational Safety and Health (NIOSH), and her group of collaborators have done that. And now NIOSH has recognized those efforts by awarding their Bullard-Sherwood Research-to-Practice Award in the category of Knowledge.

Over the last few years, Morata has been part of a team leading NIOSH in a broad, multi-faceted strategy to educate the public by contributing up-to-date, comprehensive, and accurate occupational safety and health information to a variety of Wikimedia projects. She has been using our resources and support to help students write for Wikipedia since 2016. And she works with multiple university classrooms to channel work students are already doing into a space where the public can have access to it.

Thais Morata joins fellow winners and collaborators Max Lum, John Sadowski, Tania Carreon-Valencia, Diana Ceballos, Mary Beth Genter, Erin Haynes, Lilian Jacob Corteletti, Regina Tangerino de Souza Jacob, Adriane Lima Mortari Moret, Natália Barreto F. Lopes, Alexandre Montilla, João Alexandre Peschanski, Douglas Scott, as well as Wiki Education’s Helaine Blumenthal and Ian Ramjohn.

“I am thrilled to have the group recognized for our efforts,” Morata told Wiki Education. “I feel privileged to be in a position that made it possible for me to bring them together. Some in the group knew it was a good idea, but they did not know much about the other contributors. Others knew the contributors and what they had to offer, but they were not totally sure it was a good idea. So it took courage and generosity from each of them, and it feels wonderful to have these traits rewarded. This effort reminded me of the Brazilian poem Weaving the morning by João Cabral de Melo Neto that starts: A rooster alone does not weave one morning: He will always need other roosters. It is the spirit of this group and Wikipedia.”

As NIOSH’s award booklet states, the connection between Wikipedia and NIOSH’s mission is obvious: “NIOSH efforts not only contribute to Wikipedia’s goal to freely share information, but also towards its own mission of translating research into usable information.”

And it’s a positive, important experience for students learning science communication. “A common response among students was that they ‘enjoyed writing to be read’ and not just ‘to be graded.’ All partners involved in this initiative learned something about science communication and digital literacy while contributing solid, verifiable knowledge about health to the public and reducing misinformation.”

“Not only does this work expand the reach of accurate occupational safety and health information, but it helps train a new generation on science communication and digital literacy.”

“It was very generous of Thais to include us in the nomination,” Wiki Education’s Senior Wikipedia Expert Ian Ramjohn said. Ian has supported Morata’s courses through our Student Program, along with Wikipedia Student Program Manager Helaine Blumenthal, since 2016 by answering student and instructor questions, reviewing student work, and navigating any on-Wikipedia conflicts around policy.

“I’m happy to be able to support the great work that so many of our partners are able to turn out, using the tools and support we provide.”

Helaine Blumenthal holding her NIOSH award.

Both Ian and Helaine noted that the award is a great reminder of why we do what we do at Wiki Education.

“It’s so easy to focus on the day-to-day – answering questions, solving problems, picking through drafts to help student editors improve their contributions – and lose perspective on the impact of what we’re doing,” Ian noted. “It’s great to have an outside group like the NIOSH award judges look at the work that Thais and her group have done and see it in the broader context of their mission.”

“I joined Wiki Education because I believe strongly in its mission of making knowledge freely available to all, but it’s easy to forget that lofty goal amid the day to day grind,” Helaine also noted. “When I found out that we won the NIOSH award, I was reminded of what our work means for real people trying to seek or share real information. I was pleasantly surprised and truly touched when the physical plaque showed up in the office. It’s a nice reminder that what we do really does matter.”


The award nomination did not include all the work covered in NIOSH’s Wikipedia strategy, which can be found here.  Some of the classroom efforts were described here. This award was for activities conducted in 2018 only. Not all who participated in previous years could be included.


Interested in teaching a Wikipedia writing assignment? Visit teach.wikiedu.org to access our free assignment management tools, student trainings, and more.

Tech News issue #20, 2019 (May 13, 2019)

00:00, Monday, 13 May 2019 UTC
2019, week 20 (Monday 13 May 2019)
