Tech Lead Digest – Q3/4 2021

07:55, Monday, 6 December 2021 UTC

It’s time for the 5th instalment of my tech lead digest posts. I switched to monthly for 2 months, but decided to drop back to roughly quarterly. You can find the other digests by checking out the series.

🧑‍🤝‍🧑Wikidata & Wikibase

The biggest event of note in the past months was WikidataCon 2021, which took place toward the end of October 2021. Spread over 3 days, the event celebrated Wikidata’s 9th birthday. We are still awaiting the report from the event to know how many folks participated, and recordings of talks will likely not be available until early 2022, at which point I’ll try to write another blog post.

Just before WikidataCon the updated strategy for Linked Open Data was published by Wikimedia Deutschland which includes sub-strategies for Wikidata and the Wikibase Ecosystem. This strategy is much easier to digest than the strategy papers published in 2019 and I highly recommend the read. Part of the Wikidata strategy talks about “sharing workload” which reminds me of some thoughts I recently had comparing Wikipedia and Wikidata editing. Wikibase has a focus on Ecosystem enablement, which I am looking forward to working on.

The Wikibase stakeholder group continues to grow and organize. A Twitter account (@wbstakeholders) now exists, tweeting relevant updates. With over 14 organizational members and 15 individual members, the group’s budget is now public and it is working on getting some desired features implemented. If you are an organization or individual working in the Wikibase space, be sure to check them out! The group recently published a prioritized list of institutional requirements, and I’m happy to say that some parts of the “Automatic maintenance processes and updating cascades should work out of the box” area, which scored 4, have already been tackled by the Wikidata / Wikibase teams.

I also have to give a special shout out to the Learn Wikidata interactive course. I haven’t completed the course myself, but it aspires to teach librarians, library staff members, and other information professionals how to edit Wikidata. The project uses software called Twine, which allows creating sites and learning materials with branching narratives. You can find the code on GitHub. Thanks to the WikiCite grants for making this possible.

At the technical level:

  • Work on the Mismatch Finder continues
  • Federated Properties received another batch of development work and is likely to be announced for testing in the coming days, weeks or months
  • The Search Platform team at the foundation deployed the new Streaming Updater for the Wikidata query service, promising more efficient updating
  • Wikibase.Cloud was announced and has been worked on; this will be the Wikimedia Deutschland owned and operated version of WBStack, using the same codebase. You can read more about what WBStack is in my introductory post.
  • The team worked on removing the need to set up cron scripts for various Wikibase components, such as change dispatching and some database cleanup. This will be fully handled by the job queue in a future Wikibase release.

🌎Wider Wikimedia

New Wikimedia Foundation board members were elected, and have since been formally appointed, in a vote that saw higher turnout than the previous election. Maryana Iskander was appointed as the new CEO of the Wikimedia Foundation with an official start date of January 2022.

A screenshot of suggested edits from growth features

Growth features continue to be rolled out across more Wikipedia projects (it’s almost everywhere now). This feature provides users with a personalized homepage, suggested tasks and help panel. It’s been worked on by the WMF Growth team, and I really like the direction this is taking.

MediaWiki had a 1.37 branch cut, and is likely to be released in the coming days or weeks.

In the future, unregistered editors will be given an identity that is not their IP address. You can read the suggestions for how that identity could work and discuss on the talk page.

The technical leadership community of practice received an updated name.

A Toolhub catalogue has been developed to make it easier to find software tools relating to Wikimedia projects. (announcement)

🔗Links & Reading

  • An open platform for building developer portals (mentioned in the Twitter / Spotify podcast above)
  • Twine, an open-source tool for telling interactive, nonlinear stories
  • The fastest way to share your notebooks (blogpost)
  • skaffold, Local Kubernetes Development. Awesome, and in use for

🧩Did you know?

The post Tech Lead Digest – Q3/4 2021 appeared first on addshore.

Tech News issue #49, 2021 (December 6, 2021)

00:00, Monday, 6 December 2021 UTC


As a hobby I add publications and scholars to Wikidata. I am particularly interested in hydrology, biodiversity and climate change. When I come across scientists, I look them up on Wikidata. I care most about scientists with an ORCiD identifier, and I often add Google Scholar and Twitter identifiers as well.

As there have been many, many papers and scholars for a long time, my impact is mostly in adding scientists who are new to Wikidata and attributing papers to them. This results in an improved representation that can be seen in their Scholia. When there is a Wikipedia article for a scientist, a Scholia template can be added; it will always show the most up-to-date information.

The way it works out is that I am a “browser”: I read Twitter feeds by scholars, find a publication they enthuse about, and may already be off on a tangent adding more papers and scholars. In my job I do not have time to read and comprehend, but I do have time to mechanically add more papers. I read in the morning, make a pick for the day, and I “search” using the Scholia search function with a DOI as the argument. When a paper does not exist, I am presented with the option to add it to Wikidata. When its authors can be found via their ORCiD identifier, they are linked from the start to their Wikidata item.

My personal hierarchy

  • I find a paper, a scholar on Twitter
  • They publish on a subject I am interested in
  • When they are already known to Wikidata, they are linked to the publication I search for
    • When search indicates that an ORCiD identifier exists, I search Wikidata and add the scholar or add an identifier.
    • When Twitter indicates Twitter handles, I add them.
    • When I cannot guess the gender, I look at Google Scholar for a picture, and add that identifier as well.
  • When a scholar is picked for the day, I add their latest papers first because they include more ORCiD identifiers for co-authors.

A Wikidata/Wikipedia hierarchy
  • Most valuable are scholars with a Wikipedia article that includes a Scholia template
  • Scholars with a Wikipedia article
  • Scholars with a Wikidata item with many identifiers including an ORCiD identifier.
  • Scholars with only an ORCiD identifier
  • Scholars with no identifiers
  • Scholars only known because of an "Author string"
When I come across an Australian scientist who has received an award, who has a Twitter account, an ORCiD and a Google Scholar identifier but no Wikidata item, it is someone I will add. So a warm welcome to Heather Neilly. She is of interest to me because of her work as an ecological consultant and in natural resource management for local government. Her latest work is on vegetation change in sheep-grazed chenopod shrublands in South Australia.

Reflection on filling a new Wikidata item

11:19, Saturday, 4 December 2021 UTC

A few days ago I watched a Twitch stream by Molly / GorillaWarfare where they created the Louis W. Roberts English Wikipedia page. I decided to follow along and populate the matching Wikidata item (Q109662645) with as much information as I could from the same references that were being found for the Wikipedia article.

Along the way, I remembered some of the quirks of the manual editing experience for Wikidata and noted some other things that might generally be interesting to folks.

This is a write-up of those thoughts.

Louis W. Roberts

Louis featured on a list of African-Americans in Boston having articles created or improved on English Wikipedia. This list was generated from the content of a book called “African-Americans in Boston: more than 350 years” by Robert C. Hayden, which can be found online. Louis is specifically noted on page 149.

This provides some starting context and some good elements to match against other references. From here, multiple other references expanding what was known about Louis were found using search engines, the Wikipedia Library and more. All of these are now included on the Wikipedia and Wikidata pages.

Wikipedia article formation

Molly started creating the article in their userspace, with the first version including a single line of content. This was gradually expanded while looking through the references. This expansion continued during the first hour, after which there was enough referenced content to warrant a move to the main namespace. It made sense to keep the article in user space while it was being worked on, to avoid unsuspecting readers seeing a work-in-progress article.

After landing in the main namespace, categories were added using the HotCat gadget. The article continued to expand, gaining more references and categories. A person infobox was added to the article using data already found and stated in the article. Categories were sorted using another user script, and content continued to be added.

A few days after editing by Molly was complete, another user came along and added the Short description template copying the description from the Wikidata item.

Screenshot of the Louis W. Roberts articles after creation by Molly

Wikidata item population

While following the stream I tried to add roughly the same information to the Wikidata item as was appearing in the Wikipedia article. Molly had already added the first few basic statements to the Item, such as instance of, sex or gender, given name, family name, occupation and employer. And the single reference to African-Americans in Boston page 149 already existed on one of the statements.

My first changes added the initial reference to another 2 statements that already existed, and added a new alias matching the Wikipedia article title. I then noticed that the existing employer statement could be more specific per one of the newly found references, so I altered the value and added a new reference.

Date of birth and date of death came next, but I immediately followed these up with some new statements at preferred rank for more specific dates that I found in another reference, and then a referenced place of birth.

Many education and employer related changes ensued, followed by the realization that the gadget I had been using to copy references between statements had been copying an incorrect reference that I didn’t want to keep, so I removed it.

More content was added, followed by another realization of some incorrect data (the end time was before the start time). I’ll skip over the rest, but many more statements were added and tweaks made.


I have a feeling that the creation of the Wikipedia article and Wikidata item would probably have been much more interesting to watch than to read after the fact with a bunch of links to diffs. But you’ll have to cope with my poorly written adventure for context.

Edits & Bytes: The Wikipedia article was fairly complete with around 32 edits, totalling 8k bytes of text storage for the final revision. The Wikidata item was fairly complete after around 114 edits, totalling 71k bytes text storage for the final revision. That’s 3-4x the number of edits as the Wikipedia article, and nearly 10x the number of bytes in the final revision.
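As a quick sanity check on those ratios (numbers taken from the paragraph above), a few lines of JavaScript:

```javascript
// Edit and byte counts quoted above for the two final revisions.
const wikipedia = { edits: 32, bytes: 8000 };
const wikidata = { edits: 114, bytes: 71000 };

const editRatio = wikidata.edits / wikipedia.edits; // ≈ 3.6, i.e. "3-4x"
const byteRatio = wikidata.bytes / wikipedia.bytes; // ≈ 8.9, i.e. "nearly 10x"

console.log(editRatio.toFixed(1), byteRatio.toFixed(1)); // "3.6 8.9"
```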

References: The Wikipedia article includes a reference list of 8 sources, and these are referenced 34 or so times in the article text. References need only be defined once using a <ref> tag, and can then be referred to by name for subsequent uses. The Wikidata item contains 51 distinct references (no reuse is available) making use of 7 sources on 27 statements.

Wikidata editing quirks

The DuplicateReferences gadget provides copy links next to references that already exist. This is helpful to avoid retyping things when a reference already exists that is the same as, or very similar to, what you want to add. There is also functionality to drag references from one statement to another (possibly provided by the same gadget). As noted above, sometimes the gadget doesn’t quite hit the spot when you want a similar but slightly different reference. You can end up copying things you don’t want and need to remove them later.

As a long-time Wikidata editor I know when to use the rank feature and what rank means. A couple of edits that I made set a preferred rank, leaving some less specific but still referenced values at normal rank. However, the interface for making this change is not very user friendly, and there is no real guidance in the editing flow covering how and when to use it.

When creating some statements whose values were Items that did not already exist (mother and father), I had to break out of my editing flow and navigate to Special:NewItem in order to create entities to point to. A nicer experience would be being able to create such Items on the fly while creating a statement.

As the item got longer and longer, it became harder to copy references between statements, and also harder to keep an eye on the statements I had already added so as not to add them again. When complete, the Wikidata item spanned 4-6 whole heights of my monitor, meaning lots of scrolling back and forth.

Some information ends up duplicated; for example, educated at statements often contain a qualifier for academic degree. There can, however, also be a top-level academic degree statement. The more I thought about this, the more I thought that editing Wikidata one level higher might be nice, where some community-maintained definitions map higher-level data input to actual statement changes.
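To make the “one level higher” idea concrete, here is a rough sketch of my own (not an existing tool): a mapping that expands a single high-level input into the duplicated low-level statements. The function name and statement shape are hypothetical; P69 (educated at) and P512 (academic degree) are the real Wikidata property IDs.

```javascript
// Hypothetical higher-level editing helper: one input produces both the
// "educated at" statement with its degree qualifier and the duplicated
// top-level "academic degree" statement. Values here are illustrative
// strings; real edits would use Wikidata item IDs.
function expandEducation(institution, degree) {
  return [
    { property: "P69", value: institution, qualifiers: { P512: degree } },
    { property: "P512", value: degree },
  ];
}

const statements = expandEducation("Massachusetts Institute of Technology", "Doctor of Philosophy");
console.log(statements.length); // 2
```

A community could maintain many such expansions, so that an editor fills in one form field and the duplication is handled consistently behind the scenes.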

Wikidata data in use

Some Wikipedia projects have an extension called ArticlePlaceholder enabled. This provides a special page on the Wikipedia that can be found via search results, including some information for a given concept and a prompt to create an article for the topic.

Wikidata and other smaller Wikipedia projects can also link to the Reasonator tool, created by Magnus Manske, which can give a summary of a topic and is viewable in multiple languages. Other elements, such as timelines, are also generated by this tool.

Infoboxes are generally used on all Wikipedia sites. We saw above that the current English Wikipedia article uses a manual infobox created by Molly using Infobox Person. There is a Wikidata powered version of this infobox too called Infobox person/Wikidata.

According to the template transclusion count tool this Wikidata powered infobox is currently used only 3.7k times on English Wikipedia.

At the time of writing this, you can find an example of this infobox on the Aelbert Cuyp article. But you can also see a previewed rendering of the Wikidata infobox for the Louis Roberts article in this tweet.

Final thoughts

I really enjoyed watching Molly’s stream editing Wikipedia, I’ll be sure to join again. It’s a great excuse to relax and do some Wikidata editing too.

I’m not really sure how many folks edit individual Wikidata items in this way anymore. The largest numbers of Wikidata edits come from other bulk editing interfaces, or from bots, but that doesn’t mean that high-quality individual editing should not be possible.

Using some napkin maths I’d say that there are generally 400-900k edits per day. Roughly ~60k edits per day come from the Wikidata UI (so ~10%). ~5k edits a day come from changes on client sites such as Wikipedia (so ~1%).

It’d be really nice to connect the workflows of content creation a bit more. The research done when either writing a Wikipedia article or Wikidata item is ultimately the work that we want to be able to easily share between projects. If when editing Wikipedia you could define facts/statements as you went to then be included in Wikidata, I imagine we would reduce effort everywhere and increase reuse of this research across projects.

The post Reflection on filling a new Wikidata item appeared first on addshore.

Wikipedia’s wiped edit

05:00, Saturday, 4 December 2021 UTC

Yesterday I was surprised to see widespread headlines that Wikipedia’s “first edit” was being auctioned as an NFT. That wasn’t right, and I wondered what this meant for an ongoing auction.

I know people will try to auction almost anything now, but I was taken aback to see this screenshot, which was not the first recorded edit to Wikipedia.

Christie’s screen shot of recollected Wikipedia

Early Wikipedia edits had been lost for a number of years, and when Tim Starling rediscovered them in 2010, I reconstructed them as the 10k redux archive. From this, I knew the first recorded edit to Wikipedia was “This is the new WikiPedia!” and not “Hello, World!” as Christie’s and the news articles were claiming – repeating Wales’s earlier recollections. You can also see this first edit on today’s Wikipedia as the lost edits to the first page, HomePage, were imported in 2019.

Jimmy Wales has clarified that this auction is for an “artistic recreation of the original” based on his recollection of the first thing he typed and immediately wiped – on the server itself, hence no record. I was concerned that even if this was the case, the original time stamp on the recreation (7:29 PM in the image above) confused things about the actual history because it was after the first recorded edit (7:27:13 PM). He’s now updated his NFT recreation to show “6:29 pm,” which is 58 minutes before the first recorded edit.

We all know headlines suck, but an accurate presentation of this would be “Wales’s re-creation of his recollection of the wiped test edit.” That satisfies things on the historical front.

On the NFT front, though, this prompts other questions:

  1. People have already bid on this item, which has been changed! (When does the actual digital fingerprint get taken and added to the NFT blockchain?) Could someone bid on something, and see it undergo extraordinary changes beyond a timestamp? Is that legal?
  2. What kind of provenance due diligence does Christie’s do on NFTs? Do they check whether the NFT is of a copyrighted work someone else owns? (See this discussion of licensing.) Or check that it is what it purports to be?

A first look at Wikidata through Github Copilot

22:16, Friday, 3 December 2021 UTC

I was added to the GitHub Copilot preview in the past few days, and the first thing I tested was code suggestions around Wikidata.

Copilot allows you to start writing code and have a model trained on public GitHub code suggest blocks of code that you might want to use.

For example, if you start defining a method called onePlusOne then it might suggest the body return 1+1;.

function onePlusOne() {
  return 1 + 1;
}

More targeted toward Wikidata, if you start defining a function called whatIsWikidata then it might suggest return "Wikidata";

function whatIsWikidata() {
  return "Wikidata";
}

In some cases copilot suggests multiple code blocks. Some of these seem useful, others less so.

function whatIsWikidata() {
  var wd = new Wikidata();
  var wd_json = wd.getData();
  var wd_json_obj = JSON.parse(wd_json);
  var wd_json_obj_items = wd_json_obj.items;
  var wd_json_obj_entities = wd_json_obj.entities;
  var wd_json_obj_properties = wd_json_obj.properties;
  var wd_json_obj_claims = wd_json_obj.claims;
  var wd_json_obj_labels = wd_json_obj.labels;
  var wd_json_obj_descriptions = wd_json_obj.descriptions;
  var wd_json_obj_aliases = wd_json_obj.aliases;
  var wd_json_obj_sitelinks = wd_json_obj.sitelinks;
  var wd_json_obj_urls = wd_json_obj.urls;
  var wd_json_obj_qualifiers = wd_json_obj.qualifiers;
  var wd_json_obj_datatype = wd_json_obj.datatype;
  var wd_json_obj_datavalue = wd_json_obj.datavalue;
  var wd_json_obj_datavalue_value = wd_json_obj.datavalue.value;
  var wd_json_obj_datavalue_type = wd_json_obj.datavalue.type;
  var wd_json_obj_datavalue_value_entityid = wd_json_obj.datavalue.value.entityid;
  var wd_json_obj_datavalue_value_time = wd_json_obj.datavalue.value.time;
  var wd_json_obj_datavalue_value_string = wd_json_obj.datavalue.value.string;
  var wd_json_obj_datavalue_value_monoling

And sometimes the suggestions appear to differ for the same inputs, potentially depending on other context in the file.

function getWikidataEntity(id)

Of 10 suggestions, 6 used XMLHttpRequest, 3 used jquery, and the final suggestion was fairly useless for what I was aiming for.

Solution 1

When paired with the xmlhttprequest package for Node, plus a quick function call and console.log, the first example actually works, returning JSON that includes the requested entity.

var XMLHttpRequest = require("xmlhttprequest").XMLHttpRequest;

console.log(getWikidataEntity("Q1000"))

function getWikidataEntity(id) {
  var url = "" + id + ".json";
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, false);
  xhr.setRequestHeader("Content-Type", "application/json");
  xhr.send();
  var data = JSON.parse(xhr.responseText);
  return data;
}

Interestingly Copilot here chose to use Special:EntityData instead of the wbgetentities action API module for Wikidata. The function also does not unwrap the result to only provide the single entity. This is the request that is made.
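For illustration, here is a minimal sketch (my own, not part of Copilot’s output) of the unwrapping step the suggestion skips. The helper name is an assumption, but the { entities: { <id>: … } } response shape is what Special:EntityData returns.

```javascript
// Unwrap a Special:EntityData response down to the single requested entity.
function unwrapEntity(response, id) {
  return response.entities[id];
}

// Heavily trimmed example of the response shape (Q1000 is Gabon).
const response = {
  entities: {
    Q1000: {
      id: "Q1000",
      type: "item",
      labels: { en: { language: "en", value: "Gabon" } },
    },
  },
};

console.log(unwrapEntity(response, "Q1000").labels.en.value); // "Gabon"
```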

Solution 2

The second solution correctly retrieves and decodes a single entity’s JSON, using onload for the request. Adding a quick console.log before the return within onload shows the request being made correctly.

var XMLHttpRequest = require("xmlhttprequest").XMLHttpRequest;

getWikidataEntity("Q1000")

function getWikidataEntity(id) {
  var url = "" + id;
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, true);
  xhr.onload = function() {
    if (xhr.status >= 200 && xhr.status < 300) {
      //console.log(xhr.responseText);
      var json = JSON.parse(xhr.responseText);
      console.log(json.entities[id])
      return json.entities[id];
    } else {
      console.log("Error: " + xhr.status + " " + xhr.statusText);
    }
  };
  xhr.onerror = function() {
    console.log("Error: " + xhr.status + " " + xhr.statusText);
  };
  xhr.send();
}

Solution 3

Solution number 3 brings a less ideal block of code, as it doesn’t work out of the box. The request made results in a redirect, so there is no xhr.responseText to output to the user.

var XMLHttpRequest = require("xmlhttprequest").XMLHttpRequest;

console.log(getWikidataEntity("Q1000"))

function getWikidataEntity(id) {
  var url = "" + id;
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, false);
  xhr.send(null);
  return xhr.responseText;
}

Other solutions

The solutions from here become quite a mixed bag. Lots of the code works, and does something, though often not quite what I would have expected.

You can find all suggestions pasted here.

The post A first look at Wikidata through Github Copilot appeared first on addshore.

weeklyOSM 593

18:08, Friday, 3 December 2021 UTC


lead picture

Closest grocery in Helsinki [1] | © Topi Tjukanov | map data © OpenStreetMap contributors


  • Enock4seth complained about changeset comments consisting solely of hashtags. These make it difficult to understand exactly what changes people are making in Ghana, so he is now asking new mappers to provide explanations as well.


  • LuxuryCoop was interviewed about OpenStreetMap in South Korea, as part of the Geomob podcast series.
  • Contributor LySioS has illustrated the ’10 commandments for the OSM contributor’ as a short presentation slide deck (fr). There is also a related discussion (fr) on the OSM-FR forum.

OpenStreetMap Foundation

  • What type of memberships are there in the OSM Foundation? You can apply for a free membership if you have mapped on 42 or more days in the past year. If you contribute to OpenStreetMap in ways other than mapping, but with effort equivalent to the mapping requirement, you may also qualify. Note that new members will not be eligible to vote at this year’s Annual General Meeting.
  • Existing OSMF members have until Saturday 4 December 00:00 UTC (at midnight Friday/Saturday) to renew their membership via the Application form for active mapping contributor membership (i.e., 42 days or more of OSM mapping in the last year).
  • Allan Mustard has published the 2021 Annual Chairperson’s Report and reminded the members of the OSMF that the Annual General Meeting is just around the corner (online in the IRC chat room #osmf-gm on the IRC network, starting Saturday 11 December at 16:00 UTC). Voting to elect four new members of the OSMF Board of Directors (six candidates) will begin Saturday 4 December and there is also a proposed amendment to the Articles of Association, to count time as an associate member for board candidacy requirements. Please be sure to vote!
  • On the OSM-wiki one can find links to the answers from the 2021 OSMF Board election candidates and their manifestos.
  • Did you know that the monthly board meetings of the OSMF are public and broadcast live on the web? The dates are published in an OSS calendar.


  • The State of the Map Africa 2021 took place online from 19 to 21 November. Here is a selection of reflections on the event:
    • Erick Tamba STATE OF THE MAP AFRICA 2021: (SMCoSE YouthMappers)
    • Maria Nabuwembo Celebrating the open mapping culture and its impact across Africa
    • Vickystickz My Experience @ State of The Map 2021
    • Sheila Job Development of solutions using data.


  • stragu published their RStats and OpenStreetMap workshop which was presented both at State of the Map 2021 and ResBazQld (Research Bazaar Queensland).
  • Taiwanese mapper Supaplex pointed community members to the Organised Editing Guidelines, suggesting that they could constrain obvious assignments set by school teachers, and require the registration of contact information and mapping purposes. As mapping requires state permission in China, it might not be a good idea to ask people from China to follow the Organised Editing Guidelines, which would require them to register their contact information and mapping project purposes.


  • Sundellviz showed ‘where to get a drink in Europe’, a map on which all the cafés, bars, pubs and beer gardens in Europe are displayed as different coloured dots. As argued in the comments section, the fact that each type of establishment is called by different terms in each country seems to be the cause of some inconsistencies on the map.
  • Bristow explained (fr) > en how to get aerial views in OSMAnd, in particular those of the French National Geographic Institute.
  • [1] Here are some selected OpenStreetMap-based maps from the final week of the #30DayMapChallenge:
    • Day 23: GHSL: the second challenge based on a non-OSM data source. Dror Bogin felt they had to add some OSM data to ‘make it interesting’ for Iceland.
    • Day 24: Historical map: Heikki Vesanto animated the ongoing effort by the OSM Irish community to map all of the buildings in Ireland.
    • Day 25: Interactive map: Zihan Song demonstrated the new clustering features in ArcGIS with an interactive map of shop locations extracted from OSM.
    • Day 26: Choropleth map: Topi Tjukanov, who started the Challenge, divided Helsinki into segments based on the closest shop of each grocery retail chain.
    • Day 27: Heatmap: Hans van der Kwast pointed to the concentration of bicycle parking in South Holland, the Netherlands, mapped in OSM.
    • Day 28: The Earth is not flat: Fedir Gontsa depicted coverage for the biggest private postal and courier company in Ukraine using OSM data.
    • Day 29: Null: Ryan Hart found a Null Road in the OSM road network of North Carolina, USA.
    • Day 30: Metamapping: on the last day of the Challenge, while most people took a break, Marie Anna Baovola mapped buildings and roads in Kigali, Rwanda, obtained from OSM.

Open Data

  • The Brazilian Institute of Geography and Statistics (IBGE) launched (pt) > en the Continuous Vector Cartographic Base of the State of Rio Grande do Sul, Brazil, on 27 October. These IBGE data are freely available for use in OpenStreetMap.


  • Sarah Heidekorn blogged about the new ways to access the ohsome API, an application that allows non-programmers to analyse the rich data source of OpenStreetMap history. Tools to help analyse OpenStreetMap data are available in packages for Python, R, QGIS, and JavaScript.


  • Colin Angus (VictimOfMaths) has for some time been making available the R code for all the plots he shows on Twitter. A side effect of his participation in the #30DayMapChallenge is that the code for his isochrone map is available on GitHub.

Did you know …

  • … the Web to OSM Opening Hours converter? You can copy and paste website content, including tables, into a form field; it will extract the textual content and try to convert it into values that conform to the opening_hours key.
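The core of such a conversion can be sketched in a few lines. This is a hypothetical simplification of my own, handling only the plainest “days open-close” rows, and not the converter’s actual code:

```javascript
// Turn rows extracted from a website's opening-times table into a value
// following the opening_hours syntax, e.g. "Mo-Fr 09:00-17:00; Sa 10:00-14:00".
function toOpeningHours(rows) {
  return rows
    .map(([days, open, close]) => `${days} ${open}-${close}`)
    .join("; ");
}

console.log(toOpeningHours([["Mo-Fr", "09:00", "17:00"], ["Sa", "10:00", "14:00"]]));
// "Mo-Fr 09:00-17:00; Sa 10:00-14:00"
```

The hard part the real tool has to solve is recognising day names and time formats in free text; this sketch assumes that extraction has already happened.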

Other “geo” things

  • In Australia, ABC News reported on the detailed Lidar analysis that has uncovered failures by the state logging company to abide by strict laws designed to protect valuable water catchments in Victoria.
  • The Laotian Times reported that the government of Laos has introduced large fines for unauthorised mapping or use of unauthorised maps.
  • BBC News profiled Geograph, a website where contributors share geographically located photographs, grouped by 1 km grid squares. Geograph covers Great Britain and Ireland; there is also a German equivalent. Photographs are licensed CC BY-SA.

Upcoming Events

  • MapRoulette Community Meeting (online), 2021-12-07
  • San Jose: South Bay Map Night, 2021-12-08
  • London: Missing Maps London Mapathon, 2021-12-07
  • Berlin: OSM-Verkehrswende #30 (online), 2021-12-07
  • Landau an der Isar: Virtuelles Niederbayern-Treffen, 2021-12-07
  • Stuttgart: Stuttgarter Stammtisch (online), 2021-12-07
  • CASA talk: Ramya Ragupathy, Humanitarian OpenStreetMap Team, 2021-12-08
  • London: London pub meet-up, 2021-12-08
  • Chippewa Township: Michigan Meetup, 2021-12-09
  • Bratislava: Missing Maps mapathon Slovakia online #5, 2021-12-09
  • Großarl: 3. Virtueller OpenStreetMap Stammtisch Österreich, 2021-12-09
  • Berlin: 162. Berlin-Brandenburg OpenStreetMap Stammtisch, 2021-12-10
  • Online: 15th Annual General Meeting of the OpenStreetMap Foundation, 2021-12-11
  • Grenoble: OSM Grenoble Atelier OpenStreetMap, 2021-12-13
  • Taipei: OSM x Wikidata Taipei #35, 2021-12-13
  • OSMF Engineering Working Group meeting (online), 2021-12-13
  • Toronto: OpenStreetMap Enthusiasts Meeting, 2021-12-14
  • Washington: MappingDC Mappy Hour, 2021-12-15
  • Derby: East Midlands OSM Pub Meet-up, 2021-12-14
  • Monthly meeting of the Spanish community (online), 2021-12-14
  • Kyoto: Bakumatsu Kyoto open data-thon #15 – Iwakura Tomomi and Iwakura village, 2021-12-18
  • Bonn: 146. Treffen des OSM-Stammtisches Bonn, 2021-12-21
  • Lüneburg: Lüneburger Mappertreffen (online), 2021-12-21

If you would like to see your event here, please add it to the OSM calendar. Only events entered there will appear in weeklyOSM.

This weeklyOSM was produced by Nordpfeil, PierZen, Polyglot, SK53, SomeoneElse, Strubbl, TheSwavu, YoViajo, derFred.

Wikipedia and Apps: A Love Story

17:19, Friday, 03 December 2021 UTC

By Robin Schönbächler, Carolyn Li-Madeo, Sudhanshu Gautam and Volker Eckl

We love Apps, Robin Schoenbaechler, CC BY-SA 4.0

Designing for mobile apps presents unique challenges and opportunities compared to traditional websites. Mobile apps run natively on a device and have access to system resources that are harder to access within a web based architecture. Key characteristics of apps are:

  • Apps are designed to fit in with the rest of the operating system. When an app fits in with the rest of the OS, it not only looks and feels more at home, it also lowers the user’s learning curve.
  • A deep integration with the OS comes with benefits right out of the box, e.g. accessibility, performance, integration with voice assistants or home screen widgets.
  • Offline capabilities and often reduced data usage: offline capabilities allow users to consume content from anywhere, even when they are not connected to the internet or connectivity is low.

The Wikimedia Foundation’s apps are an essential part of meeting our “this is for everyone” design principle. Wikimedia apps are designed with a mobile-first philosophy in mind. One of the core principles of mobile first is to embrace the constraints of a mobile environment and, with them, prioritize essential information, as there’s simply not enough room for everything.

Limited connectivity of people in certain areas of the world inspires us to create products that are performant and light on data. When building new features for people using mobile apps, we strive for excellence in user experience and aim to break down complex existing flows and processes.

Wikipedia app on Android

Galaxy Article Android Screenshot,
Robin Schoenbaechler (and Wikipedia contributors),
CC BY-SA 4.0
Wikipedia app on iOS

Galaxy Article iOS, Robin Schoenbaechler (and Wikipedia contributors), CC BY-SA 4.0

Wikipedia app on kaiOS

Galaxy Article kaiOS, Robin Schoenbaechler (and Wikipedia contributors), CC BY-SA 4.0


The apps are here to create mobile-first experiences and are not trying to replace existing desktop or community tools. Through the apps, we aim to meet potential users where they are. We are interested in understanding and addressing the barriers of those who have been historically left behind, while not compromising the integrity of the workflows of our long-time users on other platforms – making participation fit naturally into the mobile-first lives we lead.

The apps are a place to experiment. Due to development speed, richer capabilities and the unique needs of our user base, we are able to experiment. It is in the apps where we think the future of mobile editing will be discovered. Notably, the apps are where we piloted micro contributions, our most successful editing intervention to date.

The apps are a forcing function to make our technologies future proof. To provide an example: right now Wikipedia’s web experience works as a website only and cannot be exported to new mediums. By building on the apps, we create technology that is platform independent and enables next-generation experiences, whether these use artificial intelligence, augmented reality or other future technology that is changing our world.


Theming, Robin Schoenbaechler, CC BY-SA 4.0

As mentioned in the introduction, iOS and Android both have platform specific guidelines. When building apps for the global Wikimedia movement, our goal is to create native experiences for the specific platform. When designing for mobile apps, guidelines for the platform sit at the top of the hierarchy. Throughout Wikimedia’s product suite, we follow Wikimedia Design’s visual design principles when providing solutions.

To create a seamless and familiar experience within Wikimedia’s products and services, we apply theming that is based on Wikimedia’s visual style guidelines. Theming allows us to customize the app’s look and feel, to better represent our product’s brand. Theming is reflected in the entire UI, including individual components, like buttons. Here’s an example of applying Material Theming in the Wikipedia Android app:

A standard Material button (image source).
A themed button in the Wikipedia Android app.
Chat toolbar in iOS messaging (image source).
Editing toolbar in the Wikipedia iOS app.
KaiOS standard progress indicator.

Without thumbing kaiOS, Robin Schoenbaechler, CC BY-SA 4.0

We used a planet animation progress indicator to show the vast amount of knowledge available on Wikipedia.

Theming kaiOS, Robin Schoenbaechler, CC BY-SA 4.0

The design style guide’s color palette is also used in the apps. Since both the Android and iOS Wikipedia apps are available in four different reading themes (Light, Sepia, Dark and Black), they use an enhanced color palette for an optimal reading experience. Check out more details about the color palette for Android or iOS here.

The Wikipedia Android app uses Material Icons.
The Wikipedia iOS app uses SF Symbols.



The “Picture of the day” on Wikipedia for Android puts content from Wikimedia Commons in the spotlight.

Robin Schoenbaechler (and Wikimedia Commons contributors), CC BY-SA 4.0

Wikipedia for iOS focuses on the essential in its homescreen widgets.

Robin Schoenbaechler (and Wikimedia Commons contributors), CC BY-SA 4.0

Trending article list for readers to discover regionally relevant content.

Robin Schoenbaechler (and Wikimedia Commons contributors), CC BY-SA 4.0

How clear is the goal? Based on Wikimedia Design’s “content first” principle, we aim to design apps that are easy to understand and focus on the essentials. When designing, content comes first, control comes second. On mobile, screen real estate is limited, places of usage are unforeseeable and the user’s focus is reduced. We strive to reduce information density while not neglecting an interface’s functional essence. Guiding questions include: What is the essence of this feature? What is the purpose of this particular screen? How much information can be deprioritized (or left out) to convey a UI’s purpose? Clarity stands for designing user interfaces with clear calls to action, generous use of white space, accessible contrast and hierarchy when designing with type or icons. Writing concise and suitable multilingual UI copy supports users in reaching their goals more efficiently.

The ‘More’ navigation for Wikipedia on Android is designed to maintain the user’s context.

Robin Schoenbaechler (and Wikimedia Commons contributors), CC BY-SA 4.0

The off-canvas menu on Wikipedia for iOS is one tap away from anywhere in the app.

Robin Schoenbaechler (and Wikimedia Commons contributors), CC BY-SA 4.0

The ‘Options’ menu lets readers easily navigate an article. It avoids unnecessary key presses.

Where am I? Designing for consistency and orientation is key to helping users navigate through an interface in a mobile app. We put explicit effort into communicating where users are and where they can go. Spatial awareness in a digital product helps users achieve goals more directly. This is exemplified by consistent navigation, the usage of depth and the application of motion. Deliberate use of animation and transitions helps users navigate an interface. Visual layers and realistic motion convey hierarchy, emotion and understanding when using a device.

The ‘Thanks’ interaction in the Wikipedia for Android app.

Robin Schoenbaechler (and Wikimedia Commons contributors), CC BY-SA 4.0

The onboarding experience in the Wikipedia for iOS app educates and resonates emotionally.

Robin Schoenbaechler (and Wikimedia Commons contributors), CC BY-SA 4.0

How does the design feel on an emotional level? An often forgotten and invisible theme, yet one of the most impactful: emotional design. Along the lines of our ”trustworthy yet joyful” design principle, we believe that preserving the human touch and showing ourselves and our values in our work is essential – especially on a device as personal as your mobile phone. Instead of creating one more cheap and fast mass feature, we follow a philosophy paved by the artists, designers, and architects of the arts and crafts movement. After all, we design for humans and strive to create humane and emotional experiences. Through design, we can see and connect with other human beings. To design experiences that are emotional, we consider understanding the needs of the people we are designing for to be the core mission.

Example of ergonomically supporting users in the Wikipedia for Android app.

Robin Schoenbaechler (and Wikimedia Commons contributors), CC BY-SA 4.0

Relevant actions are located near the bottom of the screen for easy access.

Robin Schoenbaechler (and Wikimedia Commons contributors), CC BY-SA 4.0

For non-touch interfaces, placement of key actions (e.g. search, read, edit) is important. We linked key actions with commonly used phone keys to make them easy to perform.

Language selector kaiOS, CC BY-SA 4.0

How does the design feel in my hand? Ergonomics, posture, context, and the tactile nature of touch all have implications for how users interact with a design. Our design principle “this is for everyone” is part of our core mission, and designing for touch is different from designing for a keyboard, mouse or trackpad. When designing for apps, we embrace principles of direct manipulation, an interaction model where effects are immediately visible on the screen. Designing for a touch device goes beyond enlarging buttons for bigger fingers. We deeply consider the placement and positioning of elements to achieve an ergonomic user experience while being aware of different device usage types. And, as voice is considered the most natural machine-human input method available, we make sure the interface reflects it. We design experiences that are ergonomic on smartphones, tablets and hybrid laptop/touch devices.

Get Involved

The Wikimedia iOS, Android and kaiOS apps are each designed and developed by a team dedicated to that platform. If you’re interested in learning more about the apps or if you want to get involved, please visit Wikimedia Apps on




Thanks to Lucy Blackwell, Jazmin Tanner & Josh Minor for their contributions.

About this post

This post first appeared in the Wikimedia Diff Blog on 30 November 2021.

Featured image credit

We love Apps, Robin Schoenbaechler, CC BY-SA 4.0

mediawiki-docker-dev in mwcli

22:33, Thursday, 02 December 2021 UTC

The original mediawiki-docker-dev environment was created by accident and without much design back in 2017.

In 2020 I started working on a new branch with some intentional design and quite liked the direction.

And now, finally, all of the mediawiki-docker-dev functionality exists in a new home, with more intentional design, tests, stability, releases and more.

I’ve already written a brief history of the tool in a previous post so now I’ll focus on what mediawiki-docker-dev looks like in the mwcli environment for the current version 0.8.0.

The docker / dev commands

The docker command, which by default will have a dev alias, is the entry point to this docker based development environment.

It lists a few high level commands, as well as all of the services that are by default provided by the environment.

  • custom – interacts with a custom docker-compose file that you may want to provide (similar to docker-compose.overrides)
  • docker-compose – passes commands straight through to the correct docker-compose context, allowing raw command execution
  • env – modify the environment variables that make up part of the environment
  • hosts – automatic modification or building of a hosts file to use locally if needed
  • destroy – deletes all containers and volumes
  • resume – restart suspended containers
  • suspend – suspend containers
  • where – lets you know where the docker-compose files actually are
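As an illustration of the custom service set, a minimal custom docker-compose file might look like the sketch below. The service name, image and layout here are my own assumptions for illustration; check the tool’s built-in documentation for the exact file name and network that mwcli expects.

```yaml
# Hypothetical custom docker-compose file adding one extra service.
# Service name and image are illustrative, not prescribed by mwcli.
version: "3.7"
services:
  memcached:
    image: memcached:1.6-alpine
```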

The services are quite self-explanatory so I won’t dive into detail here.

Each service set

Each set of services is backed by a docker-compose file of the same name, including the custom set. And each set of services comes with a few default commands.

  • create – creates containers related to the set of services
  • destroy – destroys the containers and any persistence
  • exec – execute a command in the main container (including bash)
  • suspend – temporarily suspend the containers
  • resume – resume suspended containers

Some service sets will have additional commands. For example redis also has a dedicated cli command that will run redis-cli.

The MediaWiki service set includes commands for composer, fresh, quibble. These run Wikimedia managed images and services with the MediaWiki code you are developing from mounted. These extra containers also have access to the same MediaWiki services and network. So you can for example use fresh to run browser tests against your development wikis.

Exposed web services

Many of the services contained within the development environment are web services, such as MediaWiki, Adminer and more. These are all exposed at subdomains of localhost for ease of access through modern browsers. An additional hosts command exists to help you update your /etc/hosts file for older browsers or command line tools that do not resolve localhost domains automatically.

Right now you’ll find the current web services are as follows…

  • http://*.mediawiki.mwdd.localhost
  • http://adminer.mwdd.localhost
  • http://eventlogging.mwdd.localhost
  • http://graphite.mwdd.localhost
  • http://mailhog.mwdd.localhost
  • http://phpmyadmin.mwdd.localhost
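For tools that don’t resolve *.localhost automatically, the hosts command writes entries along these lines (the default wiki name below is an assumption; the actual list depends on which service sets you have created):

```
127.0.0.1 default.mediawiki.mwdd.localhost
127.0.0.1 adminer.mwdd.localhost
127.0.0.1 graphite.mwdd.localhost
127.0.0.1 mailhog.mwdd.localhost
```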

Entry level guide

The development environment is intended to be flexible: easy to get to grips with for new users, with a simple wizard and built-in documentation, but also flexible enough to allow advanced users to add custom services, use custom images for existing services and alter all needed settings.

The initial wizard will guide you through the process of locating, and downloading if needed, your MediaWiki code; choosing a port to expose the services on; using a composer cache in images; and more.

The MediaWiki install process is also currently abstracted behind a command (though you can run install.php manually if you want). This abstracted command will ensure the needed LocalSettings.php is in place, run a composer update or install if needed, and then run install.php and update.php.

Multiple databases are supported, and multiple wikis can be installed and running at once. If you create the mysql service set, the same sequence of commands can be used to create more wikis, just use different options!

Continued work

This project is still a work in progress and a collaboration between the Wikimedia Release Engineering team and me. You can find the current documentation, the git repository on the Wikimedia GitLab install, and the task tracker on the Wikimedia Phabricator instance. You can read more about the CI setup for the repository in my previous post.

The mwcli tool currently includes more functionality, such as basic Gerrit integration, a GitLab CLI, Codesearch ability and ToolHub querying. I’ll be following up on that functionality in a new blog post soon.

The post mediawiki-docker-dev in mwcli appeared first on addshore.

mwcli CI in Wikimedia GitLab (docker in docker)

22:33, Thursday, 02 December 2021 UTC

mwcli is a golang CLI tool that I have been working on over the past year to replace the mediawiki-docker-dev development environment that I accidentally created a few years back (among other things). I didn’t start the CLI, but I did write this mediawiki-docker-dev-like functionality.

At some point through the development journey it became clear that one of the ways to set the new and old environments apart would be through some rigorous CI and testing.

This started with CI running on a Qemu node as part of the shared Wikimedia Jenkins CI infrastructure that is hooked up to Gerrit, where the code was being developed. This ended up being quite slow, and involved lots of manual steps.

A next iteration saw the majority of development take place in my own fork on Github, making use of Github Actions. Changes would then be copied over to Gerrit for final review once CI tests had run.

And finally the repository moved to the new Wikimedia GitLab instance (work in progress), where I could make use of GitLab Runners powered by a machine in Wikimedia Cloud VPS.

Screenshot of GitLab pipelines in action for the mwcli project


I have a dedicated Cloud VPS project for the machine used as runners for the mwcli project (T294283). Currently 2 runners are configured, each with 4 cores, 8 GB memory and a 20 GB disk, running Debian Buster.

The runners make use of Docker in Docker, which is one of the documented ways to use the docker executor per the GitLab documentation. I haven’t done a full review of the possible security implications of this approach yet, but it should be noted that the virtual machine only runs CI for this one project, and only members of the project have the ability to run the CI.


You need docker installed. You can follow the docker install guide, or do something like this…

sudo apt-get update
sudo apt-get remove docker docker-engine containerd runc
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
curl -fsSL | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli

And you need code for GitLab runners installed. There is an install guide, and it looks something like this…

curl -LJO ""
sudo dpkg -i gitlab-runner_amd64.deb
rm gitlab-runner_amd64.deb


Once everything is installed, you are ready to register the runner, and connect it to the GitLab instance and project.

Head to Settings >> CI/CD on your project. Under the “Runners” section you should find a “registration token” which you’ll need to use on the runner.

This token can be used with the gitlab-runner register command, along with a user provided name and some other options such as --limit which limits the number of jobs that the runner can run at once.

sudo gitlab-runner register -n \
  --url \
  --registration-token xxxxxxxxxxxxxxxxxxxxxxx \
  --executor docker \
  --limit 3 \
  --name "gitlab-runner-addshore-1012-docker-01" \
  --docker-image "docker:19.03.15" \
  --docker-privileged \
  --docker-volumes "/certs/client"

You should now see the runner appear in the GitLab UI.

Further Configuration


Although we specified a limit of 3 jobs for the runner when registering it, this is only runner-level configuration. A single node can have multiple runners of multiple types (or of the same type), so there is also a node / global concurrency setting that needs to be changed.

sudo sed -i 's/^concurrent =.*/concurrent = 3/' "/etc/gitlab-runner/config.toml"
sudo systemctl restart gitlab-runner
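Putting the two levels together, the relevant parts of /etc/gitlab-runner/config.toml end up looking roughly like this (values illustrative: concurrent is the node-wide cap, limit the per-runner one):

```toml
# Sketch of the resulting runner configuration; paths and values illustrative.
concurrent = 3

[[runners]]
  name = "gitlab-runner-addshore-1012-docker-01"
  limit = 3
  executor = "docker"
  [runners.docker]
    image = "docker:19.03.15"
    privileged = true
```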

Docker mirror

If your CI will make use of images from Docker Hub or any other registry that imposes limits, or if you want to speed up CI, you may want to run and register a local docker mirror.

Again, you can follow a blog post for setup here, or do something like this…

Create the mirror in a container…

sudo docker run -d -p 6000:5000 \
  -e REGISTRY_PROXY_REMOTEURL= \
  --restart always \
  --name registry registry:2

Get the IP address of the host…

hostname --ip-address

Add the mirror to the docker daemon config…

echo '{"registry-mirrors": ["http://<CUSTOM IP>:<PORT>"]}' | sudo tee /etc/docker/daemon.json
sudo service docker restart

And also register it in the runner config, which you should find at /etc/gitlab-runner/config.toml (see these docs for why this is also needed)

[[runners.docker.services]]
  name = "docker:19.03.15-dind"
  command = ["--registry-mirror", "http://<CUSTOM IP>:<PORT>"]

Finally restart the runner one last time…

sudo systemctl restart gitlab-runner

Example CI

You could then configure some very basic jobs using the GitLab CI configuration file for the project.

image: docker:19.03.15

variables:
  DOCKER_TLS_CERTDIR: "/certs"

services:
  - name: docker:19.03.15-dind

docker_system_info:
  only:
    - web
  stage: check
  script:
    - docker system info

docker_hub_quota_check:
  only:
    - web
  stage: check
  image: alpine:latest
  before_script:
    - apk add curl jq
  script:
    - |
      TOKEN=$(curl "" | jq --raw-output .token) && curl --head --header "Authorization: Bearer $TOKEN" "" 2>&1

Gotchas & Reading

  • The Wikimedia GitLab instance is still currently a work in progress.
  • If using images from Docker Hub the limit can be annoying. As well as a mirror there is also documentation for providing a key for Docker Hub or another registry. (T288377)
  • Depending on your CI, 20GB of disk can fill up quite quickly. While running at a concurrency of 4 I would occasionally hit disk limitations.
  • When people open merge requests from forks, CI will not and cannot run using the project’s runners.
  • Default caching is done per project, per runner, per job / concurrency slot. This can lead to a lot of duplication unless a shared cache is used!

The post mwcli CI in Wikimedia GitLab (docker in docker) appeared first on addshore.

addwiki php libraries 3.0.0

22:33, Thursday, 02 December 2021 UTC

Back in 2014 I wrote a small collection of PHP libraries for interacting with MediaWiki and Wikibase, releasing 2.0.0 of the base library in 2015. My goal back then was to create a stable base that PHP bot frameworks could be built on, while also experimenting with some framework-like features in surrounding libraries.

And now, version 3.0.0 has been released, with a couple of new features, such as OAuth authentication, and lots of refactoring to make the libraries easier to work with and contribute to.

Library usage

I find it pretty hard to figure out how many people actually use these libraries and whether it is worth keeping them updated, but there are a few notable projects that make use of them, particularly mediawiki-api-base, which is a simple client wrapping Guzzle.

Packagist and GitHub metrics seem to hint that there is some usage, though I suspect that the download numbers mainly come from usage in other projects’ CI setups.

Package | GitHub stars ⭐ | Packagist downloads in last 30 days
mediawiki-api-base | 33 | 1248 (41 a day)
mediawiki-api | 36 | 745 (24 a day)
wikibase-api | 20 | 103 (3 a day)

And I can easily hunt out some notable usages (even if some are mine):

So bring on a little refresh, in the form of version 3.0!

What’s changed for users?

The libraries now require PHP 7.4+ and have had typing added throughout.

All libraries have updated namespaces to follow PSR-4. The base library uses the namespace Addwiki\Mediawiki\Api, which is shared by mediawiki-api, which builds on top of it. The wikibase-api library takes the namespace Addwiki\Wikibase, etc.

The base library saw some additions, such as a RestApi, and OAuth authentication has been added to all APIs.

The simple username and password based auth has been replaced by an AuthMethod interface. This interface has a few implementations including UserAndPassword, UserAndPasswordWithDomain, NoAuth and OAuthOwnerConsumer. These can be used when creating an API service object to decide how to authenticate with it.

What’s changed for addwiki devs?

If you want to contribute to these libraries, you’ll now find that all development happens as part of a monorepo in addwiki/addwiki.

This should make contribution easier, as no complex inter-library dependencies need to be thought of. This should also lead to more releases as things are iteratively added or improved, as the release process is much simpler.

For an overview of the workflows, make sure you read the README!

The post addwiki php libraries 3.0.0 appeared first on addshore.

Headshot of Nina Nakao
Nina Nakao
Image courtesy Nina Nakao, all rights reserved.

As a fourth generation Japanese American, Nina Nakao says she was drawn to study more about the history of her family and community. As a college student, she studied intergenerational trauma within the Japanese American community, and upon graduation, began to work at the Japanese American National Museum. The COVID-19 pandemic shifted her role to focus on a national virtual visits program for the museum, creating opportunities for students to engage virtually with the Japanese American experience highlighted at the museum. Wikipedia is one such opportunity.

“As a museum educator I think it’s vital to think critically about how knowledge is created, distributed, and used – especially by students – and Wikipedia is a huge part of that!” Nina says.

As part of the Smithsonian’s American Women’s History Initiative, the Smithsonian worked with Wiki Education to host a series of Wiki Scholars courses for staff of museums like Nina’s, Affiliates of the Smithsonian. Nina joined the course to help improve biographies of American women related to her museum’s collection.

“I wanted to take the class to understand the processes and standards that go into published Wikipedia articles – particularly in an era of ‘fake news’,” she says. “As a young millennial, I’ve always grown up in a world with Wikipedia as a classroom resource (whether or not my teachers allowed it!), so now, as an educator, I find it crucial to understand the methods that Wikipedia writers and editors use to guide their work, cite their sources, and ground published knowledge in truth and facts.”

With other Smithsonian Affiliate staff members, Nina learned to edit Wikipedia through Wiki Education’s course. Weekly Zoom sessions were supplemented with online tutorials and out of class activities. She loved learning about the community behind Wikipedia, and fun elements like WikiLove and barnstars.

“The class was incredibly formative and well structured. Without the guidance of Zoom classes, I would not have known where to begin in my research, or how to evoke the tone and structure of a high quality Wikipedia article,” Nina says. “Wiki Education also gave me insight into how articles gain readership, are internally linked to each other, and evaluated. The class gave me an insider’s perspective on all of the complex details that make Wikipedia so great!”

Nina chose to improve the article for artist Estelle Peck Ishigo, one of a few people who was not ethnically Japanese but was incarcerated during World War II because she was married to a Japanese American man.

“I chose her because I’ve always been drawn to her emotional charcoal drawings of camp life,” Nina says. “As I learned more about her, I understood the deep struggles she faced throughout every stage of her life, and how her art became a form of resistance and resilience.”

Nina enjoyed her experience in the course and hopes more museum staff have an opportunity to participate in similar courses.

“I hope that these courses continue to be offered as means to creating pathways of access for a more diverse body of editors to learn how to use Wikipedia,” she says. “In a funny way, both museums and Wikipedia function to serve the public and provide trustworthy knowledge and information. Understanding Wikipedia as a daily resource that is used by visitors is vital as the museum field thinks critically about accessibility, engagement, and education and moves toward a more equitable future.”

She looks forward to continuing to edit articles related to upcoming exhibits at the Japanese American National Museum.

“I see Wikipedia as a great tool for those who are just beginning to learn about the Japanese American WWII incarceration,” Nina says. “I’m especially passionate about building up archives on the involvement of Japanese American women in post-war activism – not only within their own community during the fight for redress and reparations in the 1970s and 80s, but also working in solidarity with other women of color during the Civil Rights Movement.”

Image credit: Justefrain, CC BY 3.0, via Wikimedia Commons

Lessons from seven years of remote work

03:46, Wednesday, 01 December 2021 UTC

The inspiration for this post is Željko Filipin’s post on the same topic.

Nobody worked remotely during the pandemic, but everybody worked from home.

During the pandemic, office workers had to adjust to working out of their homes. But remote work is different: you’re not necessarily working from home; you’re working while separated from a lively, in-person office. You might be in the same city as your co-workers or on the other side of the world.

When you’re physically disconnected from your colleagues, you have to build new skills and adapt your tactics. This is advice I wish I’d had seven years ago when I started working remotely.

Asynchronous communication

Office workers have the luxury of hallway conversations. In an in-person office, getting feedback takes mere minutes. But in a remote position, where you may be on the other side of the planet, a reply may take until the next day.

To be effective, you need to master asynchronous communication. This means:

Timezones suck

I wish this section were as simple as saying: use UTC for everything, but it’s never that easy. You should definitely give meeting times to people in UTC, but you should tie meetings to a local timezone. The alternative is that your meetings shift by an hour twice a year due to daylight saving time.

This all gets more complicated the more countries you have involved.

While the United States ends daylight saving time on the first Sunday in November, many countries in Europe end it on the last Sunday in October, creating a daylight confusion time.

During daylight confusion time, meetings made by Americans may shift by an hour for Europeans and vice-versa.
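The 2021 confusion window ran from 31 October (Europe’s clock change) to 7 November (the US one). GNU date can illustrate what a meeting pinned to 12:00 New York time looked like from Berlin across those weeks (GNU coreutils and tzdata assumed):

```shell
# A recurring meeting pinned to 12:00 America/New_York, viewed from Berlin.
# The TZ="..." prefix inside -d is GNU date syntax.
TZ=Europe/Berlin date -d 'TZ="America/New_York" 2021-10-27 12:00' '+%b %d: %H:%M %Z'  # 18:00 CEST
TZ=Europe/Berlin date -d 'TZ="America/New_York" 2021-11-03 12:00' '+%b %d: %H:%M %Z'  # 17:00 CET
TZ=Europe/Berlin date -d 'TZ="America/New_York" 2021-11-10 12:00' '+%b %d: %H:%M %Z'  # 18:00 CET
```

For one week the meeting really does land an hour earlier for Europeans, then snaps back.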

I think the only thing to learn from this section is: you’ll mess it up.

Space and Nice tools

Function often follows form. Give yourself a context for capturing thoughts, and thoughts will occur that you don’t yet know you have

– David Allen, Getting Things Done

Working from the kitchen table is unsustainable for your mental health and your back. You need a space whose primary function is your work, and that space needs to have tools that are a joy to use.

Splurge a bit on tools you’ll use every day: your chair, keyboard, monitor, headphones, webcam, and microphone. These purchases quickly fade into the background of your life. Still, if you ever have to work outside your home office again, you’ll realize how these tools enable your best work.

Buy a nice notebook, and don’t be afraid to absolutely destroy it. I prefer the Leuchtturm 1917 notebooks, but I’m currently trying out JetPens’ Tomoe River 52 gsm Kanso Noto Notebook. Writing is thinking, and you’ll find your thinking is sharper if you start with pen and paper first.

“The beginning of wisdom,” according to a West African proverb, “is to get you a roof.”

– Annie Dillard, The Writing Life

The most important property of your permanent workspace is that it has ample and appropriate light for video calls.

Apart from that, I prefer having a door, but then again, I have a dog and a cat, so your mileage may vary.

Outreachy report #27: November 2021

00:00, Wednesday, 01 December 2021 UTC

Highlights

  • We finished reviewing all intern selections
  • We finished handling most invoices for this round
  • We planned and started hosting written exercise sessions

Intern selection

We’ve been testing shorter periods for the intern selection for a while. It went from a month to three weeks to two weeks this round, and we learned very quickly that two weeks won’t cut it – the ideal is offering mentors and coordinators three weeks to finalize their intern selections.

Five reasons Wikipedia needs your support

18:00, Tuesday, 30 November 2021 UTC

In the 20 years since Wikipedia was born, it has grown to become a valued and beloved knowledge destination for millions across the globe. Now with over 55 million articles, its growth has been fueled by a global volunteer force and donors who explore and visit the site regularly.  Supported by contributions from readers around the world, the Wikimedia Foundation, the nonprofit that operates Wikipedia, works to ensure the site remains accurate and ad-free, while supporting the free knowledge infrastructure that makes 18 billion monthly visits to the site possible. 

Reader support has allowed Wikipedia to celebrate 20 years as the world’s only major website run by a nonprofit organization. We continue to rely on this generosity as the need for accurate, neutral information, created in the public interest becomes ever-more acute. Here are five reasons why you should support free knowledge work: 

  1. Ensuring the long term independence of Wikipedia and other projects that keep knowledge free 

“Wikipedia is a unique entity that continues to add value in the lives of me and my loved ones. I feel that the crowd funded nature of Wikipedia’s balance substantially contributes to Wikipedia’s immaterial values. Whenever I have money and wikipedia is in need, I will contribute.”

Donor in the Netherlands

Part of the role of the Wikimedia Foundation is to ensure that the independence of Wikipedia and other free knowledge projects is never compromised. The majority of the funding for the Wikimedia Foundation comes from millions of individual donors around the world who give an average of $15 USD. This model preserves our independence by reducing the ability of any one organization or individual to influence our decisions. It also aligns directly with our values, creating accountability with our readers. You can read more about how our revenue model protects our independence in the Wikimedia Foundation’s guiding principles.

Our legal and policy teams also work to uphold our independence, protecting our projects from censorship and advocating for laws that uphold free expression and open up knowledge for anyone to use. Support for this work is essential to securing everyone’s right to access, share and contribute to knowledge-building. 

  2. Keeping Wikipedia safe, secure and inclusive

Wikimedia values — transparency, privacy, inclusion, and accessibility — are built into our technology. Just around 250 engineering and product staff at the Wikimedia Foundation work with our servers to ensure our projects are always available. That means one technical employee for every four million monthly Wikipedia readers! 

As technology platforms increasingly deal with new threats and risks from bad actors, we develop tools and features that protect editor privacy, maintain security, and respond to attacks. We also work to improve our projects, making them more accessible to people with disabilities, or those who primarily access our sites on mobile. Wikipedia projects are built with the intention to keep bandwidth costs low for readers, so that it’s easy for anyone, anywhere to enjoy their value. 

MediaWiki, the open source software maintained by our engineers in cooperation with volunteers around the world, powers our projects and supports more than 300 languages, far more than any other major online platform. This empowers our communities to make content accessible in more languages than you will find on any other top-ten website, and it puts our software at the leading edge of global outreach.

  3. Supporting the global Wikimedia volunteer community to help fill knowledge gaps and improve our projects

Supported by Foundation grants, Wikimedia volunteer and affiliate campaigns continue to make notable contributions to the free knowledge movement. 

For example, some affiliates are working to add new media files to Wikimedia Commons, the world’s largest free-to-use library of illustrations, photos, drawings, videos, and music:

  • In Europe, Wikimedia UK’s partnership with the Khalili Collections, to share more than 1,500 high-resolution images of items from across eight collections, now sees the uploaded images getting more than two million views per month. 
  • Additionally, this year’s Wiki Loves Africa campaign resulted in 8,319 images and 56 video files contributed by 1,149 photographers. The campaign challenges stereotypes and negative visual narratives about Africa. Since the collection began in January 2016, over 72,300 images have been uploaded to the platform under a Creative Commons license and have been viewed 787 million times.

With paywalls and price tags increasingly placed on content, the growing collection of free use files on Wikimedia Commons is becoming even more vital to our efforts to expose people around the world to new sights, art, and cultures.

  4. Building a future for greater knowledge equity

Our vision is to create a world in which every human can share in the sum of all knowledge. We know that we are far from achieving that goal, and that large equity gaps remain in our projects. From content drives to inclusive product design and research, Wikimedia projects work in several ways to advance knowledge equity and to build more diverse, equitable, accessible and inclusive initiatives. Our Wikimedia in Education initiative, for example, promotes equity in education by expanding access to linguistically and culturally relevant open educational resources, and provides opportunities for teachers and students to participate in knowledge production. In 2020, the Foundation joined UNESCO’s Global Education Coalition, allowing us to discover new ways to support education for people and communities most affected by the COVID-19 pandemic.

  5. Making sure you know that we use your donations responsibly

The Wikimedia Foundation has a two-decade-long track record of using resources efficiently, transparently, and in service of impact — which is why independent nonprofit evaluator Charity Navigator gives us its highest overall ratings for accountability and transparency. It’s also why nonprofit research organization GuideStar gives us its Platinum Seal of Transparency. We remain committed to making the best use of donor funds to support Wikipedia. 

We invite you to support our mission by making a donation to Wikipedia. For more information about the Wikimedia Foundation’s fundraising program, please see the 2020–2021 Fundraising Report. For additional information about donating, see our list of Frequently Asked Questions. Thank you!

Joseph Wahba.
Image courtesy Joseph Wahba, all rights reserved.

Joseph Wahba’s interest in health sciences was sparked in high school — but it wasn’t until he was in his third year of his Health Sciences program at McMaster University that he began to question how we know and understand topics of health. That line of inquiry was prompted by a course he took, taught by Denise Smith, called Health Information — a class that had a Wikipedia assignment.

“My eyes had been open to the crucial importance of effective dissemination of health information. We dealt with various topics in health discussed publicly, including vaccination, diseases that carry stigma, etc.,” he says. “Our final project in the course was to utilize Wikipedia as a platform to share health information that had been gathered from critically appraised academic literature on the topics we chose. Although I had brought my preconceived notions of Wikipedia to the course, I learned to truly appreciate it for what it is: accessible. It is the most accessible platform out there to share information that I have seen up to this day.”

Joseph had long known about Canada’s Food Guide; his mother, a physician, even came to his elementary school class to teach about it. Denise’s class was the first time he considered it as a topic of study, however. There he learned how the Food Guide failed to represent people of marginalized communities, including how it had exploited First Nations groups. As part of his deep dive into the topic, he found a reference on Wikipedia to “First Nations nutrition experiments”, in which the Canadian government ran purposeful malnourishment experiments on First Nations people, including children in residential schools. Before he started working on it, the article on the First Nations nutrition experiments was what Wikipedians call a “stub”, in this case a three-sentence article. Joseph set out to improve the information available on Wikipedia.

“I learned that I knew very little about the experiences of First Nations Peoples in this country. I could never say I truly understand having my language, way of life or cultural identity forcibly stripped in the same fashion as many First Nations Peoples have,” he says. “There are many gruesome accounts documented of the nutritional experiments between the 1940s and 1950s in Canada, led by prominent researchers in the fields of medicine and nutrition, where autonomy, consent, and care of the First Nations participants were disregarded in a blatant and quite frankly, cavalier manner.”

Joseph documented all of these in the now-expanded article on Wikipedia, which today has more than 20 paragraphs and citations to 23 sources.

“Although I dove deep into the historical context and events that had transpired during some of these experiments, what struck me more were the transcribed accounts from survivors,” he says. “A personal account that I covered in the article was that of Alvin Dixon, a former residential school survivor who was among many children subjected to a nutrition experiment conducted in Alberni Residential School, on the CBC Radio One series As It Happens. By listening to his recollection of the experience, the suffering borne by him and his fellow survivors was all but evident to me in his solemn voice.”

Denise, he says, was critical in helping him find articles, providing tips on the best literature to cite and working with archivists who helped dig up items from library catalogs to help Joseph expand the article. Another editor, User:SonOfTheHoundOfTheSea, saw Joseph’s work and started collaborating with him, adding a section on the James Bay Survey. Joseph found this interaction with another contributor a really interesting part of his project. He also enjoyed the ability to see page views of his work, to understand its impact.

Joseph hopes more faculty like Denise assign students to edit Wikipedia as a class assignment. He says learning to address feedback from the instructors as well as other contributors is a critical skill to develop. He also learned how to share complex information using simpler language, making more knowledge accessible to the general public. And, he says, it’s motivating to contribute to something bigger than yourself.

“Wikipedia projects serve a purpose outside of class: to serve the platform, and in turn, the community of people who use it for learning,” he says. “I think a major factor to whether I get excited about assignments is if I can find a purpose for putting in the effort. It feels good to serve a community with work you do as a student since it provides you with motivation to continue to learn and develop the skills necessary to improve.”

Joseph graduated this year and is working full-time at one hospital and volunteering at another, while he applies for medical school. These have kept him busy recently, but he intends to continue editing Wikipedia, likely in the health care topic area. And he particularly appreciated the opportunity Wikipedia brings to raise awareness about Indigenous health topics like the First Nations nutrition experiments.

“I believe that the First Nations and Inuit peoples have endured and still do endure injustices that render it difficult to access healthcare facilities and resources,” he says. “I also think that the First Nations nutrition experiments were only one of many instances throughout history during which Canada did not uphold the needs, beliefs and wants of Canada’s Indigenous Peoples, as a collective. I hope that more and more is done in terms of reparations as well as educating Canada’s youth about the Indigenous perspective of this country’s history.”

To learn more about teaching with Wikipedia, visit

Image credit: Captain108, CC BY-SA 4.0, via Wikimedia Commons

weeklyOSM 592

10:15, Sunday, 28 November 2021 UTC


lead picture

Open Indoor Viewer OpenLevelUp [1] | © OpenLevelUp | map data © OpenStreetMap contributors

About us

  • Kai Michael Poppe has extended his multi-purpose-bot to also inform the D-A-CH Telegram group when a new issue of weeklyOSM is published.
    • Kai has invited other interested Telegram groups to make use of this service too, using a new bot.
    • We have created a spreadsheet in which we ask administrators to enter the Telegram group that they want to be served directly when a new weeklyOSM is published.


  • Managing imagery alignment has always been a problem; bdhurkett found a solution for Burnie (Tasmania) as suburb polygons are well aligned with fences and other property boundaries. They outline how they are incrementally adjusting OSM objects to match these alignments more consistently.
  • SherbetS showed a setup for river mapping using a PlayStation controller and JOSM.
  • SK53 continued his examination of how natural=heath is used in Wales with a detailed example of a small hilltop heathland which was previously unmapped.
  • Voting is underway for the following proposals:


  • Canadian cartographers discussed, on the forum, the relevance of special mapping for flood disasters and landslides in various areas of British Columbia.
  • The Local Chapters and Communities Working Group presented the ‘OpenStreetMap Welcome Tool’. The tool is designed to make it easy to welcome new mappers to your country or region.

OpenStreetMap Foundation

  • On the OSMF mailing list Heikki Vesanto suggested a ‘made with OSM logo’ along the lines of those of Natural Earth and QGIS. Note that this is subtly different from an OSM mark, an idea which was recently revived (as we reported earlier), in that it would be additional to, not a replacement for, existing attribution requirements.
  • Michal Migurski, candidate for the OSMF Board, asked on MetaFilter about ‘organisations that form in spite of their grassroots counterparts’ and how ‘new members’ could ‘circumvent a community organisation to form something larger and more organised’.


  • Both State of the Map Africa and the HOT Summit took place last weekend (20 and 22 November), a little too late for their highlights to be included in this week’s weeklyOSM. One talk several people noted on Twitter was by Alazar Tekle from AddisMap, one of the earliest OpenStreetMap startups anywhere in the world. There are no videos at present, but you can view their virtual presentation at State of the Map 2010 on YouTube.


  • Pascal Neis left a message on the changeset of a Chinese newcomer to ask why they had deleted lots of data. Another user in the comments pointed out that this was OSM mapping homework assigned to students by an instructor at Central South University. Other mappers from China also voiced their disappointment (zhcn) > en with these homework changesets. Should teachers be responsible for such improper behaviour by students? How should we deal with teachers assigning OSM editing as homework?
  • Videos of presentations by Heidelberg Institute for Geoinformation Technology (HeiGIT), given at the recent FOSS4G conference, are now available online. Subjects include: an OSM Confidence Index developed using the framework, and an introduction and update on MapSwipe.

OSM research

  • Alexander Zipf’s group has received funding to start a new project related to climate change actions – ‘GeCO – Generating high-resolution CO2 maps by Machine Learning-based geodata fusion and atmospheric transport modelling’.


  • Here are some selected OpenStreetMap-based maps from the penultimate week of the #30DayMapChallenge, which we have been following all month:


  • The website Visionscarto announced (fr) > en the launch of three tools built with OpenStreetMap and uMap to identify available agricultural lands and their uses by the association ‘Terre de liens’.
  • Hauke Stieler created a web application called OSM Open to find POIs that are open at a selected point in time. The data can also be filtered by tags and the source code is available on GitHub. He has chosen to hide the OpenStreetMap attribution by default.

Did you know …

  • … the existence of the xmas:feature key? While scarcely used (about 2000 items) and mainly in Germany, this ‘in use’ tag was introduced in 2009 and grows each year by a few hundred or so. Its use has been debated and a merging proposal with the temporary: namespace was made, but the proposal has been inactive since 2016.
  • [1] … the Open Indoor Viewer called ‘OpenLevelUp‘?
  • … there is a list of new keys recently created in OSM? Most of them involve typos or ignorance of established tagging principles. Comes with direct links to correct them.
  • … OSM Streak, the gamified web application that encourages you to do small tasks in OpenStreetMap every day? There is also a Telegram account named @osm_streak_bot, which you can configure to remind you of your daily task.

OSM in the media

  • The government of France has unveiled (fr) > en its action plan for open source software and digital commons which, as its name suggests, aims to strengthen the use of open source and encourage the opening up of public data. A specific objective is to reference open source software and digital commons such as OpenStreetMap, a free and collaborative digital tool from which other tools such as (fr) > en are derived.

Other “geo” things

  • Archival geodata are now available (pl) > en as WMS layers from the Polish national geoportal.
  • The general secretariat of the German Red Cross (GRC), based in Berlin, is looking (de) for a ‘Manager Geoinformatics’. The position is responsible for the implementation of the project ‘Development of geoinformatics for the international humanitarian aid activities of the GRC – Strengthening the cooperation with the Heidelberg Institute for Geoinformation Technology’. The person chosen will also help develop programmes and basic approaches for the use and support of OpenStreetMap.
  • Reibert wrote (ru) > de about the official state register [uk] > de of permits to harvest timber in Ukraine.
  • SkyNews reported that Mapbox’s route to a stock market listing, through a Softbank SPAC (Special Purpose Acquisition Company), has run out of steam.

Upcoming Events

Where What Online When Country
UCB Brasil + CicloMapa: curso de mapeamento osmcalpic 2021-11-16 – 2021-11-26
UN Map Marathon osmcalpic 2021-11-22 – 2021-11-26
[Online] OpenStreetMap Foundation board of Directors – public videomeeting osmcalpic 2021-11-26
Brno November Brno Missing Maps mapathon at Department of Geography osmcalpic 2021-11-26 flag
HOTOSM Training Webinar Series: Beginner JOSM osmcalpic 2021-11-27
Amsterdam OSM Nederland maandelijkse bijeenkomst (online) osmcalpic 2021-11-27 flag
長岡京市 京都!街歩き!マッピングパーティ:第27回 元伊勢三社 osmcalpic 2021-11-27 flag
Bogotá Distrito Capital – Departamento Resolvamos notas de Colombia creadas en OpenStreetMap osmcalpic 2021-11-27 flag
泉大津市 オープンデータソン泉大津:町歩きとOpenStreetMap、Localwiki、ウィキペディアの編集 osmcalpic 2021-11-27 flag
Biella Incontro mensile degli OSMers BI-VC-CVL osmcalpic 2021-11-27 flag
津山のWEB地図作り~OSMのはじめ方~ osmcalpic 2021-11-28
Chamwino How FAO uses different apps to measure Land Degradation osmcalpic 2021-11-29 flag
OSM Uganda Mapathon osmcalpic 2021-11-29
Salt Lake City OpenStreetMap Utah Map Night osmcalpic 2021-12-02 flag
Paris Live Youtube Tropicamap osmcalpic 2021-12-01 flag
Missing Maps Artsen Zonder Grenzen Mapathon osmcalpic 2021-12-02
Bochum OSM-Treffen Bochum (Dezember) osmcalpic 2021-12-02 flag
MapRoulette Community Meeting osmcalpic 2021-12-07
San Jose South Bay Map Night osmcalpic 2021-12-08 flag
London Missing Maps London Mapathon osmcalpic 2021-12-07 flag
Berlin OSM-Verkehrswende #30 (Online) osmcalpic 2021-12-07 flag
Landau an der Isar Virtuelles Niederbayern-Treffen osmcalpic 2021-12-07 flag
Stuttgart Stuttgarter Stammtisch (Online) osmcalpic 2021-12-07 flag
CASA talk: Ramya Ragupathy, Humanitarian OpenStreetMap Team osmcalpic 2021-12-08
London London pub meet-up osmcalpic 2021-12-08 flag
Chippewa Township Michigan Meetup osmcalpic 2021-12-09 flag
Großarl 3. Virtueller OpenStreetMap Stammtisch Österreich osmcalpic 2021-12-09 flag
Berlin 162. Berlin-Brandenburg OpenStreetMap Stammtisch osmcalpic 2021-12-10 flag
[Online] 15th Annual General Meeting of the OpenStreetMap Foundation osmcalpic 2021-12-11
Grenoble OSM Grenoble Atelier OpenStreetMap osmcalpic 2021-12-13 flag
臺北市 OSM x Wikidata Taipei #35 osmcalpic 2021-12-13 flag
Toronto OpenStreetMap Enthusiasts Meeting osmcalpic 2021-12-14
Washington MappingDC Mappy Hour osmcalpic 2021-12-15 flag
Derby East Midlands OSM Pub Meet-up : Derby osmcalpic 2021-12-14 flag
Helechosa de los Montes Reunión mensual de la comunidad española osmcalpic 2021-12-14 flag

If you would like to see your event here, please put it into the OSM calendar. Only data which is there will appear in weeklyOSM.

This weeklyOSM was produced by Lejun, Nordpfeil, PierZen, RCarlow, SK53, Strubbl, TheSwavu, YoViajo, conradoos, derFred.

Train the Trainer 2022 – call for participants

12:42, Thursday, 25 November 2021 UTC

Would you like to receive training on how to deliver Wikipedia editing events? Wikimedia UK are now inviting expressions of interest in our next round of Train the Trainer, due to take place in early 2022. We are delighted to say that we’ll once again be partnering with Trainer Bhav Patel.    

Volunteer trainers play a key role in the delivery of Wikimedia UK programmes, helping us to achieve our strategic objectives by delivering Wikimedia project training to new and existing editors across the country. Demand for training often outstrips staff capacity to fulfil it, and we’re conscious that our existing networks do not always allow us to reach all the communities with whom we’d like to work.

In the past, we’ve offered our main Train the Trainer programme as a 3-4 day in-person training course, and it has often focussed on training design and pedagogy. This time however, we’re taking a slightly different approach, which we hope will offer more flexibility to our volunteer trainers, and which we have developed in response to feedback from the community, and from partner organisations.  

The aim of this round of training will be to equip Volunteer Trainers with the skills, experience and resources to deliver a standard ‘Introduction to Wikipedia’, such as would take place at a typical online editathon or wiki workshop. Drawing on the experience of a number of trainers and staff, we have developed a set of training slides and exercises which can be delivered without requiring the Volunteer Trainer to do their own course design. In time, and should they so desire, members of this cohort could be supported to deliver training in person, with their own design.

Expressions of interest are welcomed from all, however given the current demographic mix of our training network, we are particularly interested in hearing from women, members of the LGBT+ community, and non-white people.

Dr Sara Thomas and Bhav Patel outline the content of the Train the Trainer course.

Course content and key dates

The course will be organised as follows; all sessions will be held online over Zoom:

  • Briefing: Thursday 27th January, 6-7pm. Introduction session.
  • Experience: Saturday 29th January, 1-5pm. Trainees would attend an Editathon / Wiki Workshop as participant observers.
  • Debrief: Sunday 30th January, 1-5pm. Trainees would discuss and debrief the Saturday session, exploring how and why the training was put together.
  • Practice: February. Trainees will run their own online Editathons / Wiki Workshops in pairs or groups of three. These sessions will be organised by Wikimedia UK, probably with partner organisations who will be aware that they are helping new trainers. We are also open to trainees setting up their own practice sessions, if they know a group with whom they’d like to work.

What we would expect from you if you decide to join 

  • Full attendance at the training course as outlined above.
  • To lead training for a minimum of 2-3 events per year. This would be a mixture of third party events which our Programmes Team would field to you, and those you would organise yourself. Please note that we do receive requests for training to be delivered within office hours.  
  • To be responsive to communication from Wikimedia UK staff and Event Organisers, including in advance of the event, and to complete basic reporting, including returning sign up sheets, afterwards.
  • Familiarity with, or desire to increase your knowledge of the Wikimedia Projects, particularly Wikipedia. Pre-course support can be provided if you feel that you would benefit from this in order to fully participate in the training.
  • To represent Wikimedia UK well during the time in which you are volunteering.
  • To adhere to our Safe Spaces policy, and the Code of Conduct. 

What you can expect from us

  • Full training and support to become a trainer for editathons and similar events.
  • Access to materials for participants.
  • Ongoing support from the Programmes Team.
  • Job references upon request (paper / email / LinkedIn as required).
  • Reasonable volunteer expenses where appropriate.

How to apply

Please fill in the Google Form here. Applications will close on the 9th December, and all successful applicants will be notified by 14th December.

Further background information

Volunteer Trainer is one of two main volunteer roles available at Wikimedia UK, the other being Board Member. In 2020, in light of the demand for online training, we ran an Online Training for Online Trainers course for our existing trainer network, and our last in-person training for new trainers took place in November 2019.

The Wikimedia UK Volunteer Trainer Role description.

The Wikimedia Foundation’s Security team is an often invisible force that works tirelessly to protect the information and software of Wikipedia and our other projects. The internet has changed a lot since Wikipedia was created in 2001, and that change has brought with it myriad new security challenges.

From our vast army of diverse volunteer editors who create and maintain the online encyclopedia and its companion projects, to the millions of people around the world who use them every day, our security experts protect our community’s privacy and ensure safe and secure access to invaluable educational resources, acting in real time to confront cyber attacks.

John Bennett, The Wikimedia Foundation.

The Wikimedia Foundation’s Security team is committed to fostering a culture of security. This includes growing security functions to keep up with ever-evolving threats to the health of Wikipedia and the free knowledge movement at large.

It also includes equipping those who are closest to the challenges with appropriate knowledge and tools so they can make good security and privacy decisions for themselves.

The Wikimedia Foundation’s Director of Security John Bennett recently shared in the following Q&A how the Foundation is getting ahead of changing security vulnerabilities, as well as positioning itself at the cutting edge of championing privacy and security on our collaborative platforms.

Q: Why is the work of the Security team so important right now?

The world has come to rely on Wikipedia’s knowledge. We are also living through a moment in history where we are seeing the greatest number of threats to free and open-source knowledge. As we have seen over the past few years, disinformation and bad actors online can pose huge threats to democracy and public health. Wikipedia volunteers work in real time to fact check and ensure the public has safe, reliable access to critical information.

Wikipedia’s continued success as a top-10 site with hundreds of millions of readers means that it will continue to be a target for vandals and hackers. We have to constantly evolve our security efforts to meet new challenges and the growing sophistication of hacking and malicious behavior.

“We are living through a moment in history where we are seeing the greatest number of threats to free and open-source knowledge.”

Security and privacy are key elements in our work to be champions of free knowledge. Though fundamental, this behind-the-scenes work often goes unnoticed. You don’t recognize how important security systems are until they are broken. Investing in a culture of security now will allow Wikipedia to protect its record of the sum of human knowledge for generations to come.

Q: Craig Newmark Philanthropies recently invested $2.5 million in the Foundation’s security operations. What does this investment mean for your work?

This generous new funding is allowing Wikipedia and the Foundation to evolve with the times and get ahead of ongoing threats from hackers and malicious internet users. Over the next two years, we are boosting our security capabilities to an even more thorough level than where we’ve been before.

To take a step back, this investment from Craig is going to our Security team, which has the mission to serve and guide the Foundation and Wikimedia community by providing security services to inform risk and to cultivate a culture of security.

This donation is actually Craig’s second in support of our work. In 2019, Craig funded efforts to vigorously monitor and thwart risks to Wikimedia’s projects. That first investment allowed us to grow and mature a host of security capabilities and services. These include application security, risk management, incident response, and more. While threats to our operations happen nearly every day, we work proactively to prevent cyber attacks by following best practices, leveraging open source software to aid our security efforts, and by performing security reviews.

But to keep up with changing security threats, we need to do much more, and that’s what this new funding will help us to do — take our security to the next level. We’re very grateful to Craig for facilitating that. As the founder of craigslist, he has been a long-time supporter of the free knowledge movement and the work we do at Wikipedia, or as he calls it, “the place where facts go to live.”

Q: What are the Security team’s priorities for the near future?

We have developed a comprehensive three-year security strategy with three areas of focus:

First, cyber risk. Security risk is a tool that we use to assess potential loss and potential opportunity. It’s a framework for us to evaluate our priorities. We need to create a common language and understanding of risk within the Foundation and our communities. To that end, we will be rolling out a series of “roll your own” risk assessments for our staff and communities to learn about security and privacy best practices and equip them to make the best, informed decisions for themselves.

“Understanding and having an appreciation for security and privacy is in everyone’s best interest.”

Second, security architecture. Through this pillar of work, we will deploy robust security services and capabilities for the Foundation and our community projects, including Wikipedia. There are two projects I am particularly excited about. The first is a new internal differential privacy service for those seeking to safely use and release data. This will enable our staff, volunteers, researchers, and others to consume and share data in a safe and privacy-respecting way. The second project is an effort to move application security practices and tooling closer to those people who are creating code, which will enhance our current security practice and add velocity.
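The interview doesn’t describe how the planned differential privacy service will actually be built, but the core idea can be sketched with the classic Laplace mechanism: add calibrated random noise to a statistic before releasing it, so that no single person’s data noticeably changes the output. The function names and parameters below are purely illustrative, not Wikimedia’s design:

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two independent Exponential(1) draws follows a
    # Laplace(0, 1) distribution; multiplying by `scale` gives Laplace(0, scale).
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def private_count(true_count: float, epsilon: float, sensitivity: float = 1.0) -> float:
    # Laplace mechanism: adding noise with scale = sensitivity / epsilon
    # makes the released count epsilon-differentially private. For a simple
    # count, one person can change the result by at most 1, so sensitivity = 1.
    return true_count + laplace_noise(sensitivity / epsilon)
```

A smaller epsilon means more noise and stronger privacy; the trade-off between accuracy and privacy is the central design question for any such service.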

Third, capabilities management. Our main goal with this area of our work is to get better at what we do. It is essentially an ongoing internal audit of our security work, with the ultimate goal of improving security efficacy and creating solutions for Foundation staff and community members. We will evaluate the effectiveness of all of our security and privacy services, as well as establish standards and practices to modify or end services if needed.

Q: What does a secure culture at Wikimedia look like, and how can other online platforms follow the Wikimedia Foundation’s lead?

Understanding and having an appreciation for security and privacy is in everyone’s best interest. What I mean is that by creating an understanding of risks, threats, and vulnerabilities, we are teaching others how to appreciate and how to apply an appropriate lens to various security and privacy situations.

In a large online community like ours, we want people to be comfortable with their security and privacy practices and in asking questions. In the spirit of Wikimedia, our team conducts this work with a human-first approach. We know we are going to have vulnerabilities and threats to our platforms and technology stack — that’s inevitable; but one of our greatest strengths to mitigate these challenges is our community. Empowering them and others to help understand and promote security and privacy is key to creating the culture of security we are seeking.

Q: Any closing thoughts?

Wikipedia at its core is a bold idea that anyone can access and contribute to the world’s knowledge. Our platforms were built on the notion that security and privacy sustain freedom of expression. Security doesn’t mean policing the community of volunteer contributors that make Wikipedia work, but rather empowering all of our users and staff with security practices and resources that will protect and expand our reach. By making Wikipedia sustainable and safe from cyberthreats, we are setting an example for other online platforms that a culture of security can and should be a collaborative effort.

“We are setting an example for other online platforms that a culture of security can and should be a collaborative effort.”

I am super grateful to be part of this work and for the amazing group of people I get to collaborate with on a daily basis. Maryum Styles, Hal Triedman, James Fishback, Samuel Guebo, Sam Reed, Scott Bassett, Manfredi Martorana, David Sharpe, and Jennifer Cross make up a small but super powerful team. I am a huge believer in this team and what it can do and can’t wait to see what’s next!

Indigenous knowledge on Wikipedia and Wikidata

18:07, Tuesday, 23 2021 November UTC

In a presentation at WikidataCon, Érica Azzellini said something that got me thinking: “A mountain could also be an instance of a divine being”.

I was born in a town built on the slopes of a single hill standing beside the sea on the west coast of Trinidad. Though it rises less than 200 m above its surroundings, the hill is the only high point between a flat plain to the east, and the Gulf of Paria to the west. The hill is also Nabarima, the Guardian of the Waters, and the residence of one of the four Kanobos of the Warao, who are an indigenous people of the area. Knowing this, I headed to Wikidata to try to incorporate Érica’s suggestion.

And I ran into problems immediately. In Wikidata, information is modeled as part of a “triple”, where the thing being modeled (the particular hill that my home town is built on) is associated with a property that takes a specific value. In this case, the property I was interested in is called “instance of”, and it’s straightforward enough to assign that property the value “hill”: San Fernando Hill is an instance of a hill. But it’s also the residence of a Kanobo.
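The triple structure described here can be sketched in a few lines of Python. This illustrates the data model only, not how Wikidata stores statements internally; P31 is Wikidata's real identifier for "instance of", but the string values below stand in for item IDs and are placeholders:

```python
# A Wikidata statement is a triple: (item, property, value).
statements = [
    ("San Fernando Hill", "P31", "hill"),
    # Nothing in the model stops an item from carrying several
    # "instance of" values at once:
    ("San Fernando Hill", "P31", "religious site"),
]

def values_of(triples, item, prop):
    """Return every value the given item has for the given property."""
    return [v for (s, p, v) in triples if s == item and p == prop]

print(values_of(statements, "San Fernando Hill", "P31"))
# ['hill', 'religious site']
```

The model happily accepts multiple "instance of" values; the hard part, as the rest of this post explains, is that the right value items have to exist before they can be linked.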

So what, precisely, is a Kanobo? A divine spirit, of a sort. A grandfather spirit. There’s a part of my brain that handles unstructured, nonlinear information effectively. But that doesn’t help much when you’re trying to add values to Wikidata.

And how do I model “residence of” a divine being? For guidance, I looked at Mount Olympus, the home of the Greek gods. I tried Valhalla. I even checked out the Apostolic Palace in Vatican City. None of these left me the wiser. To model the hill properly I suspect I would have to model it as an instance of a “residence of a Kanobo”, but first I would need to create an item for “residence of a Kanobo”. And to do that, I’d need to create an item for Kanobo. It’s difficult, I’m out of my depth, so I end up going with “instance of” a religious site. Whose religion? Neither Wikidata nor Wikipedia will tell you. And even if you found your way to the Warao people article on Wikipedia, you’d learn that they are an indigenous group in Venezuela, with little hint of their presence in Trinidad. Archaeologists like Arie Boomert believe that the Warao were the original inhabitants of Trinidad, but the borders drawn by the Spanish and the British left the Warao cut off, foreigners in what is by right their homeland.

Across Wikipedia, the connection between indigenous people and their lands is cut off. While land acknowledgements have become common, especially in academic settings, there’s a large gap between knowing whose land you’re on and understanding how those people relate to this land. If we’re lucky, a Wikipedia article will tell us the indigenous name of a particular geological feature, but it’s extremely rare for the article to document more than that.

The Denali article documents seven indigenous names for the mountain and groups them into two categories of meaning — “the tall one” and “big mountain” — but says nothing about how indigenous Alaskans see or relate to the mountain. It’s in the category “sacred mountains”, but the article fails to explain why. Visit the Wikidata item for Denali and you’ll find nothing about sacredness or spiritual meanings.

Wikipedia and Wikidata aren’t notably bad in this regard, but I believe they should be better. Much better. It’s a problem that’s systemic — it’s hard to add content to Wikidata when the statements don’t exist to build the relationships. And it’s harder to add the statements to Wikidata when the relevant articles don’t exist on Wikipedia. But in the end, it’s hard to write about Kanobos when you don’t actually understand what they are.

Non-indigenous contributors can — and should — work to improve the coverage of Indigenous content across Wikimedia projects, but unless the movement includes more Indigenous people writing about their own communities, we will always fall short. That challenge is exacerbated by the fact that Indigenous communities aren’t interchangeable, just as manitō isn’t interchangeable with Kanobo.

Native American Heritage Month is a good time to reflect on our movement’s shortfalls in this regard. How do we work in partnership with Indigenous communities to tell their own stories? And how do we convey an invitation honestly, knowing that our sourcing policies exclude so much knowledge?

But while we grapple with ideas, we also need action. Do you, or a colleague, teach at a Tribal College in the US? Put them in touch with our Wikipedia Student Program. Do you know someone who can sponsor a Wiki Scholars course or a Wikidata course with a focus on Indigenous communities? Please get in touch.

Image credit: Denali National Park and Preserve, Public domain, via Wikimedia Commons

Three months of Connected Heritage

12:10, Monday, 22 2021 November UTC

By Dr Lucy Hinnie, Digital Skills Wikimedian at Wikimedia UK.

As 2021 draws to a close and we begin to look towards 2022, we thought it was a good time to reflect on the first three months of the Connected Heritage project at Wikimedia UK.

The projects so far

In August, Leah and I began our posts as Digital Skills Wikimedians. Our first task was to familiarise ourselves with the cultural heritage landscape in England and Wales, and to identify potential participants for our first series of introductory webinars. Many emails, tweets and messages were sent out into the world, and we were lucky to have a great response to our offering.

September was the month of webinar creation: we worked hard to design an hour of content that was welcoming, informative and engaging, and offered an overview of the project and our vision. We rehearsed with some willing Wikimedia UK colleagues and developed the presentation into something we are very proud of!

The webinars started in earnest in October. We were blown away by the enthusiasm from participants, and the wide variety of groups and organisations represented. We ran four webinars, and engaged with new faces from all over the cultural heritage sector.

November has been busy thus far: we ran an additional webinar for evening participants, and our first Wikithon for potential partners who had attended a webinar and were interested in the next step. We are in the preliminary stages of our first partnerships, and broadening our understanding of what our audience is looking for. The Wikithon in particular was a great success, with over 10 new editors trained and engaging with Wikimedia through Wikipedia and Wikimedia Commons.

What next?

We have another webinar running before the year concludes: if you’re thinking ahead to 2022, and wishing you’d attended one earlier, now is the time! The session will run from 2pm on Thursday 2nd December and we’d love to see you there. We’ve had great feedback from participants saying they are feeling more confident, more engaged and positive about Wikimedia and open knowledge.

Thinking further ahead?

If you’re feeling the end-of-year burnout already, and would rather look towards 2022, we’re one step ahead: we’ve set up four introductory webinars and a Wikithon! Start your 2022 with some Connected Heritage: we’ve got webinar sessions running on 18th January, 2nd February, 17th February and 4th March, and an International Women’s Day Potluck Wikithon on Friday 11th March. You can sign up now via Eventbrite.

I’d like to partner with you – how do I do this?

In short, let’s talk. We have a meeting calendar set up here, and you can book a slot to chat with us about your questions regarding your organisational needs and aims. Or you can email us. We’re looking forward to hearing from you.

Tech News issue #47, 2021 (November 22, 2021)

00:00, Monday, 22 2021 November UTC
2021, week 47 (Monday 22 November 2021)

weeklyOSM 591

10:44, Sunday, 21 2021 November UTC


lead picture

Prof. Leonardo Gutierrez & students of the Colegio Salesiano in Duitama, Colombia [1] | © Colegio Salesiano, Duitama

Mapping campaigns


  • LySioS compared (fr) > en a park before and after mapping it; the park had been recommended to him for children’s activities.
  • SK53 looked at some worldwide solar data (we reported last week) and compared it with OSM data for China.
  • Polish mappers discussed (pl) > en , on the forum, whether any special mapping is required for areas currently under emergency regulations on the border with Belarus.
  • Requests have been made for comments on the following proposals:
    • snow_chains to map where and when you need to use snow chains on your vehicle.
    • defensive_work=* to tag defensive structures in historic/pre-modern fortifications.
    • network:type=basic_network to distinguish nameless connections in the cycle/hiking route network from named routes and numbered node networks.
  • The outlet=* proposal, for tagging culvert or pipeline outlets with more details, was approved with 20 votes for, 0 against and 0 abstentions.


  • [1] Over more than 10 years, successive cohorts of students of the Colegio Salesiano in Duitama, Colombia, under the guidance of Professor Leonardo Gutierrez, have been continually mapping, refining, and expanding the public transport data available to the people of Duitama via the BusBoy app. Professor Gutierrez is a long-time OSM contributor. In late 2014, he organised a live teaching session with his students and Humanitarian OSM contributors.
  • Jaime Crespo provided (es) > en short notes from an informal online meeting of members of the Spanish community, which have been added (es) > en to the wiki.
  • Charlie Plett reports on his effort to map all Primary and Secondary schools in the Corozal District, Belize.
  • Cyberjuan presents a thematic summary of what was discussed at the event Building Local Community in OSM: Tips, Tricks and Challenges, organised by the HOT Community Working Group on November 8th.
  • Jennings Anderson revisited an old graphic exploring OpenStreetMap contributor life spans. The original work, published in 2018 (as we reported), used data up to 2014. The updated graphs cover the period to 2021.
  • On Wednesday 1 December at 20:30 (UTC+1), Adrien, Donat and Florian will hold their first live mapping session (fr) on their new YouTube channel Tropicamap. Live mapping sessions will take place on the first Wednesday of each month and aim to reach a younger public. This first session will focus on the French toy shop cooperative JouéClub.
  • PlayzinhoAgro took (pt) > en his second anniversary as an OSM contributor as an opportunity to report on his OSM activities over the past year.


  • During the UN Map Marathon 2021, which will take place from 22 to 26 November, UN Mappers are offering two online training sessions:
    • ‘Running with JOSM’, on Monday 22, aimed at those who want to approach the use of JOSM for the first time;
    • ‘From marathon to maps: how to use OSM data’, on Friday 26, focused on the use of OSM data with QGIS.

    The sessions will be held in English, French and Portuguese. Once registered, participants will also be able to take part in the mapping competition. The best mapper of the map marathon will be announced on Friday.

OSM research

  • Annett Bartsch was interviewed by Doug Johnson, from ArsTechnica, about the study she and her team published criticising the poor quality of mapping data above the Arctic Circle. The study claims to depict a more accurate representation of the local human footprint and its long-term impact.


  • The #30DayMapChallenge, which we covered last week, and the week before, continues. Once again we have selected some of the maps using OpenStreetMap data:
    • Day 10: Raster – Not surprisingly, given that OSM is a vector dataset, entries were somewhat scarce. A 3-D jigsaw puzzle by D&G Placenames stood out.
    • Day 11: 3D – gonsta’s tactile map of Cherkasy (Ukraine).
    • Day 12: Population – OSM data were used to show where people live in the French commune of Orsay (Come_M_S).
    • Day 13: Natural Earth data – Federica Gaspari managed to include some OSM data as well as those of Natural Earth.
    • Day 14: New tool – Prettymaps (by Marcelo Prateles, reported earlier) was a popular choice for a new mapping tool (Bill Morris, Heikki Vesanto, University of Pretoria Youthmappers). Clare Powells’s 3-D visualisation of buildings of Dubai using Cesium also demonstrated OSM data nicely.
    • Day 15: Without using a computer – needless to say this was one challenge where OSM was not a lot of help, but Justin Roberts worked out a way, mapping a GPS trace of a route for later incorporation in OSM.
  • Did you know it was possible to find all the maps published during the 30DayMapChallenge by searching on Twitter ‘30DayMapChallenge Day‘ followed by the day number?
  • Cartographer John Nelson, of ESRI, provided some general advice as to when to add a north arrow to a map.

Open Data

  • skunk has written (de) > en a small tool that helps to match open government datasets with Wikimedia (Wikipedia, Wikimedia Commons, Wikidata) and OpenStreetMap.


  • Rohner Solutions has (de) > en put up a doner kebab search tool using OSM data, but it requires you to enter or select a place name or to share your location to get a result.

Did you know …

  • …that JOSM is used by fewer than 10% of contributors, but has provided the majority of all OSM edits since 2010? UN Mappers opened a poll, during GISDay, to explore the reasons and to propose free training sessions on this powerful editor. You can still participate this Sunday!
  • … that you can deactivate all quests at once in the quest selection settings of StreetComplete? You can then re-enable just the quests that interest you – for example, quests about incomplete addresses.
  • … that the European OpenGHGMap, by the Norwegian University of Science and Technology and collaborators, shows a city-level carbon dioxide emissions inventory for Europe?
  • … that there is a tool showing the content of relations with type=destination_sign, direction_* tags on guideposts, as well as destination tags on highways and guideposts?

Other “geo” things

  • Google’s flood forecasting system is now live in all of India and Bangladesh, and they are working to expand to countries in South Asia and South America.
  • grin described the setup they have been using to collect real-time kinematic positioning data (we reported earlier).
  • Our favorite #30DayMapChallenge: the Mapping Cube video from ArtisansCartographes. Have fun practising your mapping cubic dexterity.

Upcoming Events

Where | What | When
Черкаси (ua) | Open Mapathon: Digital Cherkasy | 2021-10-24 – 2021-11-20
 | Pista ng Mapa 2021 | 2021-11-13 – 2021-11-20
 | UCB Brasil + CicloMapa: curso de mapeamento | 2021-11-16 – 2021-11-26
 | MSF Geo Week Global Mapathon | 2021-11-19
 | State of the Map Africa 2021 | 2021-11-19 – 2021-11-21
 | Maptime Baltimore Mappy Hour | 2021-11-20
Lyon | EPN des Rancy : Technique de cartographie et d’édition | 2021-11-20
Bogotá Distrito Capital – Municipio | Resolvamos notas de Colombia creadas en OpenStreetMap | 2021-11-20
New York | New York City Meetup | 2021-11-21
 | UN Map Marathon | 2021-11-22 – 2021-11-26
 | HOT Summit 2021 | 2021-11-22
Bremen | Bremer Mappertreffen (Online) | 2021-11-22
San Jose | South Bay Map Night | 2021-11-24
Derby | East Midlands OSM Pub Meet-up : Derby | 2021-11-23
Vandœuvre-lès-Nancy | Vandoeuvre-lès-Nancy : Rencontre | 2021-11-24
Düsseldorf | Düsseldorfer OSM-Treffen (online) | 2021-11-24
 | [Online] OpenStreetMap Foundation board of Directors – public videomeeting | 2021-11-26
Brno | November Brno Missing Maps mapathon at Department of Geography | 2021-11-26
Bogotá Distrito Capital – Municipio | Resolvamos notas de Colombia creadas en OpenStreetMap | 2021-11-27
泉大津市 | オープンデータソン泉大津:町歩きとOpenStreetMap、Localwiki、ウィキペディアの編集 | 2021-11-27
長岡京市 | 京都!街歩き!マッピングパーティ:第27回 元伊勢三社 | 2021-11-27
 | HOTOSM Training Webinar Series: Beginner JOSM | 2021-11-27
Amsterdam | OSM Nederland maandelijkse bijeenkomst (online) | 2021-11-27
Biella | Incontro mensile degli OSMers BI-VC-CVL | 2021-11-27
 | 津山のWEB地図作り~OSMのはじめ方~ | 2021-11-28
Chamwino | How FAO uses different apps to measure Land Degradation | 2021-11-29
 | OSM Uganda Mapathon | 2021-11-29
Salt Lake City | OpenStreetMap Utah Map Night | 2021-12-02
Paris | Live YouTube Tropicamap | 2021-12-01 – 2021-12-03
 | Missing Maps Artsen Zonder Grenzen Mapathon | 2021-12-02
Bochum | OSM-Treffen Bochum (Dezember) | 2021-12-02
 | MapRoulette Community Meeting | 2021-12-07
San Jose | South Bay Map Night | 2021-12-08
London | Missing Maps London Mapathon | 2021-12-07
Landau an der Isar | Virtuelles Niederbayern-Treffen | 2021-12-07
Stuttgart | Stuttgarter Stammtisch (Online) | 2021-12-07
 | CASA talk: Ramya Ragupathy, Humanitarian OpenStreetMap Team | 2021-12-08
Chippewa Township | Michigan Meetup | 2021-12-09
Großarl | 3. Virtueller OpenStreetMap Stammtisch Österreich | 2021-12-09
Gaishorn am See | Dritter Österreichische Online OSM-Stammtisch | 2021-12-09

If you would like to see your event here, please add it to the OSM calendar. Only events entered there will appear in weeklyOSM.

This weeklyOSM was produced by Lejun, Nordpfeil, PierZen, SK53, SeverinGeo, Strubbl, Ted Johnson, TheSwavu, derFred, tordans.

I am oppressed – slam poetry from a Wikipedia sockpuppet

22:43, Saturday, 20 2021 November UTC

I am oppressed
I am oppressed

The mistake was that I was ignorant of your rules,
no more!
and the gentleman from Pakistan,
who was literally from Pakistan!
framed me

The mistake was that I was ignorant!
of your rules, no more,
and the gentleman from Pakistan, who was literally from Pakistan,
framed me?

But I reject this!

I want to talk to a wise person!
you have to
be a judge to be anything but what is happening to me
is a huge injustice

tl;dr – a blocked sockpuppet was complaining on their talk page, and it made for some awesome slam poetry.


idea by Tamzin, formatting by TheresNoTime

The post I am oppressed – slam poetry from a Wikipedia sockpuppet appeared first on TheresNoTime.

2021 Arbitration Committee Elections

17:40, Saturday, 20 2021 November UTC

It’s that time of year again, when we subject those brave few to the criticisms of the community at large, and select those we wish to represent us on the English Wikipedia’s Arbitration Committee.

What even is an Arbitration Committee?

The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve.

Wikipedia:Arbitration Committee

For those of you not familiar with the legalese of Wikipedia, the role of the Arbitration Committee (ArbCom) can be summed up as the final word in the enforcement of policy (i.e. editor conduct) and the last line of appeal.

And y’all elect them?

Members of the committee are elected yearly, and serve either one-year (50% support) or two-year (60% support) terms (for reasons explained here).

Our elections work by appointing candidates in “decreasing order of their percentage of support, as calculated by support/(support + oppose), until all seats are filled or no more candidates meet the required support percentage.” [1] This year, we have eleven candidates for eight vacant seats.
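That selection rule is mechanical enough to express directly in code. Here is a minimal Python sketch with made-up tallies; the candidate names and vote counts are illustrative, not actual 2021 results:

```python
def elect(candidates, seats, threshold):
    """Seat candidates in decreasing order of support/(support + oppose),
    stopping when the seats run out or support drops below the threshold."""
    def pct(c):
        return c["support"] / (c["support"] + c["oppose"])

    elected = []
    for c in sorted(candidates, key=pct, reverse=True):
        if len(elected) == seats or pct(c) < threshold:
            break
        elected.append((c["name"], round(pct(c), 3)))
    return elected

# Hypothetical tallies; 0.50 is the minimum support for a one-year seat.
ballots = [
    {"name": "A", "support": 300, "oppose": 100},  # 75% support
    {"name": "B", "support": 220, "oppose": 180},  # 55% support
    {"name": "C", "support": 150, "oppose": 250},  # 37.5%, below threshold
]
print(elect(ballots, seats=8, threshold=0.50))
# [('A', 0.75), ('B', 0.55)]
```

Note that neutral votes don’t appear anywhere in the formula, which is why candidates sometimes end up with fewer total counted votes than others yet rank higher.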

The lucky 11

Opabinia regalis (t · c)

Cabayi (t · c)

Donald Albury (t · c)

Enterprisey (t · c)

Izno (t · c)

Beeblebrox (t · c)

Wugapodes (t · c)

Worm That Turned (t · c)

Thryduulf (t · c)

Banedon (t · c)

Guerillero (t · c)

So who are you voting for?

Eh, I’m not sure yet really – I was going to do a voters guide (User:TheresNoTime/ACE2021) but frankly that’s a lot of poring over things. I’ll likely make my mind up much nearer the time, after reading through the answers to the questions.

ArbCom needs both new members and institutional memory; ideally a majority of members who have never served on the committee before, lest it get stuck in “the old ways of doing things”.

The post 2021 Arbitration Committee Elections appeared first on TheresNoTime.

Using Wikidata to promote epistemic equity

16:57, Friday, 19 2021 November UTC
Thami Jothilingam
Image by Jordan Kawai, all rights reserved.

As a cataloguer for the University of Toronto Scarborough Library, Thami Jothilingam sees infinite possibilities for Wikidata. That’s why she signed up to take Wiki Education’s Wikidata Institute course.

“Metadata is foundational to knowledge creation, as it forms the building blocks of knowledge infrastructure,” Thami says. “Historically, the form and process of this knowledge creation was performed primarily by the socially privileged groups in a society, which has resulted in epistemic bias in library discovery and access. Wikidata is the largest global free and open knowledge base that enables and democratises the creation of knowledge infrastructure, access, and discovery.”

Thami says she was also drawn to the course on a more personal level.

“As someone who is both BIPOC and queer and thus belongs to multiple marginalised communities, it is important for me to learn the skills to identify the absences/silences/erasures in knowledge creation/infrastructure and to also actively find tools to fill those gaps with a vision to contribute to epistemic equity and inclusivity,” she says. “I think Wikidata is one of those powerful tools, and I wanted to learn more about it.”

The course gave her those skills. Meeting twice a week for three weeks over Zoom, the Wikidata Institute provided practical knowledge about Wikidata and a community of other scholars studying alongside her. She says the combination of practical, hands-on exercises, coupled with engaging and thought-provoking discussions in class, made this a perfect introduction to Wikidata for her.

“I am a cataloguer and a community archivist, and learning and exploring the endless possibilities of Wikidata and open linked data helped me to rethink the metadata creation process,” Thami says. “I strive to be mindful and conscious of the archival praxis with which I engage and what informs that praxis — what we archive, how we archive, and how we share/disseminate it. I believe in open knowledge and open access, and Wikidata helps to realise that vision both individually and collectively. Wikidata also helped me to rethink the entire process — how knowledge is organised and classified, how the ontologies are being made, how to democratise that process, how to make that knowledge creation process open and perform it collaboratively, how to navigate and find tools to address coloniality of knowledge, how can we develop and ensure a praxis with epistemic equity and inclusivity.”

As part of the course, Thami created several new Wikidata items, including one for I. Pathmanabhan Iyer, a collector, publisher, and community archivist from Sri Lanka. His 80th birthday was during the course, so Thami felt inspired to create his item in remembrance of his birthday. She also created other new items related to the Upcountry Tamil community in Sri Lanka.

“Sri Lanka’s Malaiyaka Tamilar, or Upcountry Tamils, are the descendants of nineteenth-century Indian labourers who were brought to work on the country’s British-owned tea, coffee, and rubber plantations. This community has suffered political disenfranchisement and discrimination, while adequate healthcare, education, and economic opportunity remain inaccessible to this day. I have been working closely with some grassroots organisations from the community for over a decade now, and it is important to rethink and see past the colonial and postcolonial traces, and to decolonise the power structures that were built through words and languages,” Thami says. “By creating more data, linked data, particularly metadata in multiple languages related to social and cultural histories of marginalised communities, we can develop ethical, equitable, and inclusive models for ontology development, data creation, access, and discovery.”

Thami’s engagement with Wikidata didn’t end with the conclusion of the Wikidata Institute course. She’s now working with a faculty member to develop a digital history project assignment that involves creating metadata from an archival collection. Students will work in groups to create metadata, and Thami will help move that information to Wikidata. Thami also collaborated with UTSC Library’s Digital Scholarship Unit to contribute about 800 entries to Wikidata from the S. J. V. Chelvanayakam Fonds.  

“I like when you create new items and find other items/instances to be linked, and it’s very thrilling when everything comes together, linked, and you can follow the links as if you’re following a narrative, a data narrative in this context,” she says. “Wikidata is the largest free and open knowledge base in the world, and anyone from any field of study/work can contribute and engage with it to develop it even further.”

Image credit: Loozrboy from Toronto, Canada, CC BY-SA 2.0, via Wikimedia Commons

Talking strategy with Wikimedia UK’s community

12:40, Friday, 19 2021 November UTC

Last week I had the pleasure of facilitating an online meeting for members of our community to help shape the future direction of Wikimedia UK. This was attended by a broad cross section of our community including staff, trustees, partners, editors and donors. I was particularly pleased to see a number of former staff and trustees of the charity, all of whom are still closely involved in the movement. 

Aim of the session

Wikimedia UK works on a three year strategic planning cycle, and we are now developing our new strategy for 2022 to 2025. I gave a brief overview of the process that the organisation is currently engaged in and what’s happened so far. Our schedule is aligned with our application deadline for funding from the Wikimedia Foundation, for which we’ll be applying for multi-year funding for the first time.


As part of the introductions, everyone shared their aspirations for the meeting, with key themes being to make connections, understand Wikimedia UK’s priorities and engage with the wider community. The meeting was also another opportunity (following our AGM in July) to introduce our new Chair of Trustees, Monisha Shah. Monisha shared a little of her own background, and why access to knowledge is so important to her. She explained that she has a portfolio career focused on board roles within the arts, culture and media sectors, following high-level roles at the BBC. Monisha emphasised her interest in hearing from the community. She noted that she is not active on social media but that volunteers were welcome to contact her via LinkedIn or the Wikimedia UK team.

Blue Sky Thinking

After this introduction, we split into three breakout groups to finish the statement “wouldn’t it be fantastic if…” for what we’d like Wikimedia UK to achieve in the next three years. This generated lots of great ideas and objectives which coalesced into some key themes, as follows:


A high proportion of responses to the prompt question above were focused on equitable participation and representation. This ranged from diversifying the UK’s editors, administrators and membership, through to working with small language Wikipedias, delivering diaspora outreach, and supporting initiatives to repatriate knowledge as a form of decolonisation.  


There were several responses focused on the climate crisis, with an aspiration for us to be able to offer wide-ranging and trusted information about the climate crisis across multiple languages. There was a question over whether Wikimedia UK should be applying pressure on the government regarding the crisis. On a practical level, it was felt that in the first instance Wikimedia UK needs to identify what we can do to support editors documenting and sharing information about climate change (including those involved with WikiProject Climate Change).


Many responses to the prompt statement “wouldn’t it be fantastic if” involved the opening up of knowledge and information. Under this general umbrella was an aspiration that all publicly funded institutions should commit to ethical open access as their default position; and that we are able to address copyright law to ensure that publicly funded research has to be made available under an open licence. Other responses included more partnerships with heritage organisations, local history initiatives and archives; more Wikimedians in Residence; and more work with diverse communities and collections. A number of responses were specifically about images – such as every notable structure in the UK having a photo and Wikidata item, and working with external partners to ensure an image for every UK article. 


It’s clear that the Wikimedia UK community remains deeply concerned about misinformation and disinformation. There is a strong commitment to helping young people understand how knowledge is created and shared, and develop information literacy skills. There is also a clear ambition to have an impact on the school curriculum – particularly in England (following our success in Wales) – and to have more residencies in Universities. 


A number of responses were focused on the public’s understanding of Wikimedia. In particular, it was felt that there needs to be more understanding that Wikipedia is a tertiary source that can be edited by anyone, and greater awareness and use of the sister projects, such as Wikisource. It was noted that Wikimedia UK should have sufficient technical development capacity to be actively contributing to MediaWiki development for Wikimedia’s sister projects. The perennial issue of the distinction between Wikimedia UK and the Wikimedia Foundation was also raised.


Two out of the three breakout groups identified an objective to diversify Wikimedia UK’s funding base so as to be less reliant on our core grant from the Wikimedia Foundation. It was also suggested that the role of affiliates will be under more scrutiny with the creation of the Movement Charter and Global Council; and that within that context, Wikimedia UK needs to be clear about its purpose and relevance. Other comments were more focused on community engagement, with a number of responses around a theme of developing closer relationships between the affiliate and online communities, and enabling people who engage with our programmes to become more involved with the work of the organisation, contributing to the movement in broader ways.

Emerging Strategic Themes

After this very productive session, I introduced participants to the key themes which have emerged from the board and staff away days held earlier in the autumn. Once these are finalised, they will form the basis of our programme development and delivery over the next three years:

  • Knowledge Equity
  • Information Literacy
  • Climate Crisis

A number of other areas have been identified, which we believe are essential to delivering an effective programme. These are still in draft form, but include community, advocacy, communications, equity, diversity and inclusion, and organisational resilience and sustainability.

It was encouraging to see the extent of the overlap between the themes that emerged from the board and staff away days, and the priorities identified through this community session. 

Engaging Volunteers

At this point I handed over to Daria Cybulska, Wikimedia UK’s Director of Programmes and Evaluation, to lead the final session of the meeting. This was focused explicitly on community, and asked participants to respond to the following questions, in a plenary discussion:

  1. As a community member, where do you see an opportunity to get involved in the emerging strategy, and what would you need from WMUK to support that?
  2. How could the Wikimedia UK community deliver the ideas generated so far?

These prompted a wide range of responses, contributions and further questions. I’ve summarised the key discussion areas below, all of which have given the team food for thought in terms of volunteer engagement and support:

  • Do we have communities of interest or communities of place? Do volunteers see themselves as aligned with a particular project – e.g. English Wikipedia, Wikimedia Commons – or the chapter? And does this matter?
  • People’s journey into Wikipedia is often through competitions such as Wiki Loves Monuments; other entry points include fixing vandalism and correcting typos. How can we use this knowledge to galvanise more participation and support editor recruitment? There’s something important about small, accessible tasks as a way to start, such as correcting typos or adding categories and references to articles.
  • This led to an interesting discussion about the use of the word ‘editathon’, which might suggest something that’s a slog, requiring stamina and discipline. Should we change the language to focus more on words like workshops, training and introductory sessions? It was noted that, increasingly, work lists for online editing events include tasks across a broad range of activities, reflecting different levels of digital confidence and time constraints.

Wrapping up and next steps

I wrapped up the session by explaining that I would be sharing the draft strategic framework for 2022 – 2025 later this month (November) and would welcome feedback on it. Please watch this space for that! And thanks again to everyone who attended. It was wonderful to see people (even if it was over Zoom) and to hear from our community about what’s important to them in the creation of Wikimedia UK’s next three-year strategy. 

A roadmap for Programs & Events Dashboard

19:48, Wednesday, 17 November 2021 UTC

Programs & Events Dashboard now has a public roadmap. Based largely on the results of the recent 2021 Programs & Events Dashboard user survey, this roadmap sketches out the current plans for improvements to Programs & Events Dashboard.

The roadmap will evolve over time, and you can use it to keep track of what features we’re currently working on and plan to work on next. For anything we’re actively working on, it will also link to the main Issue thread for that feature, which is a good place to ask questions and provide feedback. (The Dashboard’s Meta talk page is also always a welcome place for questions and suggestions.)

I’m looking forward to getting started on the first couple of projects on the roadmap: around January, I’ll be doing user research to develop a clearer idea of what a ‘campaign of campaigns’ feature will look like, and I also hope to mentor an Outreachy intern to work on enhancing the way the Dashboard presents statistics for Wikidata-focused programs.

Programs & Events Dashboard is used for more than 2,000 programs and events each year, with more than 600 active ongoing events at a time. Supporting the needs of program organizers across the Wikimedia movement through supporting and improving the Dashboard is one of Wiki Education’s key priorities. If you have ideas about the future of the Dashboard and how it can better serve Wikimedians around the world, we’d love to hear from you. (And if you’re a Rails or React developer or UX designer interested in helping out as an open source contributor, we’d also love to hear from you!)

Many thanks to P-8 Digital Skills Project “Strengthening Digital Skills in Teaching”, ETH Zürich and ZHAW for inviting me to speak at their OER Conference 21. Slides and transcript of my talk, which highlights the work of Wikimedian in Residence, Ewan McAndrew, GeoScience Outreach students and Open Content Curation Interns, are available here.

Before we get started I just want to quickly recap what we mean when we talk about open education and OER.

The principles of open education were outlined in the 2008 Cape Town Declaration, one of the first initiatives to lay the foundations of the “emerging open education movement”. The Declaration advocates that everyone should have the freedom to use, customize, and redistribute educational resources without constraint, in order to nourish the kind of participatory culture of learning, sharing and cooperation that rapidly changing knowledge societies need.  The Cape Town Declaration is still an influential document that was updated on its 10th anniversary as Capetown +10, and I can highly recommend having a look at this if you want a broad overview of the principles of open education.

There are numerous definitions and interpretations of Open Education, some of which you can explore here.

One description of the open education movement that I particularly like is from the not-for-profit organization OER Commons…

“The worldwide OER movement is rooted in the human right to access high-quality education. The Open Education Movement is not just about cost savings and easy access to openly licensed content; it’s about participation and co-creation.”

Though Open Education can encompass many different things, open educational resources, or OER, are central to any understanding of this domain.

UNESCO defines open educational resources as

“teaching, learning and research materials in any medium, digital or otherwise, that reside in the public domain or have been released under an open license that permits no-cost access, use, adaptation and redistribution by others with no or limited restrictions.”

And you’ll see that this definition encompasses a very wide class of resources, pretty much anything that can be used in the context of teaching and learning, as long as it is in the public domain or has been released under an open licence.

This definition is taken from the UNESCO Recommendation on OER, which aims to facilitate international cooperation to support the creation, use and adaptation of inclusive and quality OER.  The Recommendation states that

“in building inclusive Knowledge Societies, Open Educational Resources (OER) can support quality education that is equitable, inclusive, open and participatory as well as enhancing academic freedom and professional autonomy of teachers by widening the scope of materials available for teaching and learning.”

Central to the Recommendation is the acknowledgement of the role that OER can play in achieving the 2030 Agenda for Sustainable Development, and particularly Sustainable Development Goal 4: to ensure inclusive and equitable quality education and promote lifelong learning opportunities for all. 

OER at the University of Edinburgh

Here at the University of Edinburgh, we believe that open education and the creation of open knowledge and open educational resources are fully in keeping with our institutional vision, purpose and values: to discover knowledge and make the world a better place, while ensuring that our teaching and research is diverse, inclusive, accessible to all and relevant to society. The University’s vision for OER is very much the brainchild of Dr Melissa Highton, Assistant Principal Online Learning and Director of Learning and Teaching Web Services. Our student union were also instrumental in encouraging the University to support OER, and we continue to see student engagement and co-creation as fundamental aspects of open education. This commitment to OER is more important now than ever, at a time of crisis and social change, when we are emerging from a global pandemic that has disrupted education for millions and we’re embracing new models and approaches to teaching and learning.

OER Policy

In order to support open education and the creation and use of OER, the University has an Open Educational Resources Policy, which was first approved by our Education Committee in 2016 and reviewed and updated earlier this year.  Our new policy has adopted the UNESCO definition of OER, and the update also brings the policy in line with our Lecture Recording and Virtual Classroom Policies. The policy itself has been shared under open licence and is available to download along with several of our other teaching and learning policies.

As one of the few universities in the UK with a formal OER policy, this new policy strengthens Edinburgh’s position as a leader in open education and reiterates our commitment to openness and achieving the aims of the United Nations Sustainable Development Goals, which the University is committed to through the SDG Accord. 

It’s important to be aware that our OER Policy is informative and permissive. It doesn’t tell colleagues what they must do, instead its aim is to encourage staff and students to engage with open education and to make informed decisions about using, creating and publishing OERs to enhance the quality of the student experience, expand provision of learning opportunities, and enrich our shared knowledge commons. Investing in OER and open licensing also helps to improve the sustainability and longevity of our educational resources, while encouraging colleagues to reuse and repurpose existing open materials expands the pool of teaching and learning resources and helps to diversify the curriculum. 

OER Service

In order to support our OER Policy we have a central OER Service, based in Information Services Group, that provides staff and students with advice and guidance on creating and using OER and engaging with open education. The service runs a programme of digital skills workshops and events focused on copyright literacy, open licensing, OER and playful engagement. We offer support directly to Schools and Colleges, work closely with the University’s Wikimedian in Residence, and employ student interns in a range of different roles, including Open Content Curation interns. The OER Service also places openness at the centre of the university’s strategic learning technology initiatives, including lecture recording, academic blogging, VLE foundations, MOOCs and distance learning at scale, in order to build sustainability and minimise the risk of copyright debt.

We also manage Open.Ed, a one-stop shop that provides access to open educational resources produced by staff and students across the university. What we don’t have is a single central OER repository, as we know from experience that these are often unsustainable and it can be difficult to encourage engagement. Instead, our policy recommends that OERs are shared in an appropriate repository or public-access website in order to maximise their discovery and use by others. The OER Service provides access to many channels for this purpose on both University and commercial services, and we aggregate a showcase of Edinburgh’s OERs on the Open.Ed website.

Nor do we have a formal peer review system for open educational resources. The review process that materials undergo depends on the nature of the resources themselves. For example, we trust our academic staff to maintain the quality of their own teaching materials. Resources created for MOOCs in collaboration with our Online Course Production Service are reviewed by teams of academic experts. OERs created by students in the course of curriculum assignments are formally assessed by their tutors and peers. And if these resources are shared in public repositories, such as our GeoScience Outreach OERs, which I’ll come on to say more about later, they may also undergo a second review process by our Open Content Curation Interns to ensure all third-party content is copyright cleared and no rights are being breached. Meanwhile, open content shared on Wikipedia is open to review by hundreds of Wikipedia admins, thousands of fellow editors, and millions of Wikipedia users.

OER in the Curriculum

As a result of this strategic commitment to OER, we have a wide range of open education practices going on across the University, but what I want to focus on today are some examples of integrating open education into the curriculum, through co-creation and OER assignments.

 Engaging with OER creation through curriculum assignments can help to develop a wide range of core disciplinary competencies and transferable attributes including digital and information literacy skills, writing as public outreach, collaborative working, information synthesis, copyright literacy, critical thinking, source evaluation and data science.

Wikimedia in the Curriculum

One way that colleagues and students have been engaging with open education is by contributing to Wikipedia, the world’s biggest open educational resource and the gateway through which millions of people seek access to knowledge.  The information on Wikipedia reaches far beyond the encyclopaedia itself, by populating other media and influencing Google search returns. Information that is right or wrong or missing on Wikipedia affects the whole internet and the information we consume. Sharing knowledge openly, globally and transparently has never been more important in building understanding, whether about the Covid pandemic, the Black Lives Matter movement, or other critical issues. And the need for a neutral platform where you can gain access to knowledge online for free has never been more vital in this era of hybrid teaching, remote working, and home schooling.

Working together with the University’s Wikimedian in Residence, Ewan McAndrew, a number of colleagues from schools and colleges across the University have integrated Wikipedia and Wikidata editing assignments into their courses.  Editing Wikipedia provides valuable opportunities for students to develop their digital research and communication skills, and enables them to contribute to the creation and dissemination of open knowledge. Writing articles that will be publicly accessible and live on after the end of their assignment has proved to be highly motivating for students, and provides an incentive for them to think more deeply about their research. It encourages them to ensure they are synthesising all the reliable information available, and to think about how they can communicate their scholarship to a general audience. Students can see that their contribution will benefit the huge audience that consults Wikipedia, plugging gaps in coverage, and bringing to light hidden histories, significant figures, and important concepts and ideas. This makes for a valuable and inspiring teaching and learning experience, that enhances the digital literacy, research and communication skills of both staff and students.

Here’s Dr Glaire Andersen, from Edinburgh College of Art, talking about a Wikipedia assignment that focused on improving articles on Islamic art, science and the occult.

“In a year that brought pervasive systemic injustices into stark relief, our experiment in applying our knowledge outside the classroom gave us a sense that we were creating something positive, something that mattered.

As one student commented, “Really love the Wikipedia project. It feels like my knowledge is actually making a difference in the wider world, if in a small way.”  

Other examples include Global Health Challenges Postgraduate students, who collaborate to evaluate short stub Wikipedia articles related to natural or man-made disasters, such as the 2020 Assam floods, and research the topic to improve each article’s coverage.

History students came together to re-examine the legacy of Scotland’s involvement in the transatlantic slave trade, looking at the sources used to evaluate the contributions of key figures like Henry Dundas, while also presenting a more positive view of Black history by creating new pages such as the one for Jesse Ewing Glasgow.

And Reproductive Biology Honours students work in groups to publish new articles on reproductive biomedical terms. Being able to write with a lay audience in mind has been shown to be incredibly useful in science communication and other subjects like the study of law.

And I want to pause for a moment here to let one of our former Reproductive Biology students speak for herself. This is Senior Honours student Aine Kavanagh talking to our Wikimedian Ewan about her experience of writing a Wikipedia article as part of a classroom assignment in Reproductive Biology in 2016.

And the article that Aine wrote on high-grade serous carcinoma, one of the most common and deadly forms of ovarian cancer, which includes 60 references, and diagrams created by Aine herself, has now been viewed over 130,000 times. It’s hard to imagine another piece of undergraduate coursework having this kind of global impact.

Last year, in collaboration with Wikimedia UK, the UK chapter of the Wikimedia movement, our Wikimedian co-authored the first ever booklet dedicated to UK case studies of Wikimedia in Education, which you can download under open licence here. Many of the resources Ewan has created during his residency, including editing guides and inspiring student testimonies, are also freely and openly available and you can explore them here.

Open Education and Co-creation – GeoScience Outreach

Another important benefit of open education is that it helps to facilitate the co-creation of knowledge and understanding. Co-creation can be described as student-led collaborative initiatives, often developed in partnership with teachers or other bodies outwith the institution, that lead to the development of shared outputs. A key feature of co-creation is that it must be based on equal partnerships between teachers and students and “relationships that foster respect, reciprocity, and shared responsibility.”

One successful example of open education and co-creation in the curriculum is the Geosciences Outreach course. This optional project-based course for final-year Honours and taught Masters students has been running for a number of years and attracts students from a range of degree programmes including Geology, Ecological and Environmental Sciences, Geophysics, Geography, Archaeology and Physics. Over the course of two semesters, students design and undertake an outreach project that communicates some element of their field. Students have the opportunity to work with a wide range of clients, including schools, museums, outdoor centres, science centres and community groups, to design and deliver resources for STEM engagement. These resources can include classroom teaching materials, websites, community events, presentations, and materials for museums and visitor centres. Students may work on project ideas suggested by the client, but they are also encouraged to develop their own ideas. Project work is led independently by the student and supervised and mentored by the course team and the client.

 This approach delivers significant benefits not just to students and staff, but also to the clients and the University.  Students have the opportunity to work in new and challenging environments, acquiring a range of transferable skills that enhance their employability.  Staff and postgraduate tutors benefit from disseminating and communicating their work to wider audiences, adding value to their teaching and funded research programmes, supporting knowledge exchange and wider dissemination of scientific research.  The client gains a product that can be reused and redeveloped, and knowledge and understanding of a wide range of scientific topics is disseminated to learners, schools and the general public. The University benefits by embedding community engagement in the curriculum, promoting collaboration and interdisciplinarity, and forging relationships with clients.

The Geosciences Outreach course has proved to be hugely popular with both students and clients.  The course has received widespread recognition and a significant number of schools and other universities are exploring how they might adopt the model.

A key element of the course is to develop resources with a legacy that can be reused by other communities and organisations. Open Content Curation student interns employed by the University’s OER Service repurpose these materials to create open educational resources, which are then shared online through Open.Ed and TES, where they can be found and reused by other school teachers and learners. These OERs, co-created by our students, have been downloaded over 69,000 times.

Here’s Physics graduate and one of this year’s Open Content Curation Interns, Amy Cook, talking about her experience of creating open education resources as part of the Geoscience Outreach course.


We’re hugely proud of the high-quality open education resources created and shared by our GeoScience students and Open Content Curation Interns, so we were delighted when this collection won the Open Curation Award as part of this year’s OEGlobal Awards for Excellence.


These are just some examples of the way that open education and OER have been integrated into the curriculum here at the University of Edinburgh, and I hope they demonstrate how valuable co-creating open knowledge and open educational resources through curriculum assignments can be to develop essential digital skills, core competencies and transferable attributes.  There are many more examples I could share including academic blogging assignments, open resource lists, student created open journals, open textbooks, and playful approaches to developing information and copyright literacy skills.  Hopefully this will provide you with some inspiration to start thinking about how you can integrate engagement with OER in your own courses, curricula and professional practice. 

The missing bedrock of Wikipedia’s geology coverage

16:56, Tuesday, 16 November 2021 UTC

The Catoctin Formation is a geological formation that extends from Virginia, through Maryland, to Pennsylvania. This ancient rock formation, which dates to the Precambrian, is mostly buried deep under more recent geological deposits, but is exposed in part of the Blue Ridge Mountains. And until a student in Sarah Carmichael’s Petrology and Petrography class expanded it this spring, Wikipedia’s article about the Catoctin Formation was only two sentences long. Now, thanks to this student editor, Wikipedia has a readable, informative, and well-illustrated article that’s almost 2,000 words long.

Despite having almost 6.4 million articles, there are still plenty of topics that are missing from Wikipedia. But it still surprises me when an entire class finds a lane as empty as this one did. In addition to working on two stubs, students in the class created 15 new articles.

The Roosevelt Gabbros is an intrusive igneous rock formation in southwestern Oklahoma. A gabbro is a magnesium- and iron-rich rock formed by the cooling of magma. The Roosevelt Gabbros are named after the town of Roosevelt in Kiowa County, Oklahoma, and are one of the geologic formations that make up the Wichita Mountains. Other new articles created by the class include Red Hill Syenite, an igneous rock complex in central New Hampshire; the Ashe Metamorphic Suite in Ashe County, North Carolina; and the Central Montana Alkalic Province, a geological province occupying much of the middle third of the state of Montana.

Content related to geology and mineralogy on Wikipedia is underdeveloped. From individual minerals to a 600,000 km² geological basin, student editors in past classes have been able to create new articles about broad, substantive topics. And where articles exist, a lot of them are stubs.

Wiki Education’s Wikipedia Student Program offers instructors in geology and mineralogy — and other subjects — the opportunity to fill these content gaps by empowering students to contribute content as a class assignment. For more information, visit

Image credit: Alex Speer, CC BY-SA 4.0, via Wikimedia Commons