In March 2018, Facebook began automatically rewriting links to use HTTPS, based on the HSTS preload list. Now all Wikimedia sites (including Wikipedia) do the same.

If you're not familiar with it, the HSTS preload list tells browsers (and other clients) that a website should only be visited over HTTPS, not insecure HTTP.

However, not all browsers and clients support HSTS, and users stuck on old software may be working from outdated copies of the list.

Following Facebook's lead, we first looked into the usefulness of adding such functionality to Wikimedia sites. My analysis from July 2018 indicated that 2.87% of links on the English Wikipedia would be rewritten to use HTTPS. I repeated the analysis in July 2019 for the German Wikipedia, which indicated 2.66% of links would be rewritten.

I developed the SecureLinkFixer MediaWiki extension (source code) in July 2018 to do just that. We bundle a copy of the HSTS preload list (in PHP), and a hook rewrites each external link to HTTPS at page-render time if its domain is on the list.
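The core check is straightforward: a domain matches if it appears on the list directly, or if a parent domain is on the list with its include-subdomains flag set. Here is a minimal sketch in Python (the extension itself is PHP; the function names and the tiny sample list are illustrative, not SecureLinkFixer's actual API):

```python
from urllib.parse import urlparse, urlunparse

# Bundled snapshot of (a fragment of) the HSTS preload list:
# domain -> whether the entry also covers subdomains.
HSTS_PRELOAD = {
    "wikipedia.org": True,   # include_subdomains set
    "example.com": False,    # exact host only
}

def is_preloaded(host: str) -> bool:
    """Return True if host, or a covering parent domain, is preloaded."""
    labels = host.lower().split(".")
    for i in range(len(labels) - 1):
        candidate = ".".join(labels[i:])
        if candidate in HSTS_PRELOAD:
            # An exact match always counts; a parent-domain match counts
            # only if the entry is flagged include_subdomains.
            return i == 0 or HSTS_PRELOAD[candidate]
    return False

def rewrite_link(url: str) -> str:
    """Upgrade an http:// link to https:// when its host is preloaded."""
    parts = urlparse(url)
    if parts.scheme == "http" and is_preloaded(parts.hostname or ""):
        parts = parts._replace(scheme="https")
    return urlunparse(parts)
```

Links whose host is not on the list are left untouched, which is why the list snapshot has to be kept fresh.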

The HSTS preload list is pulled from mozilla-central (warning: giant page) weekly and committed to the SecureLinkFixer repository. That update is deployed to Wikimedia sites roughly every week, where it takes at worst a month to get through all of the caching layers.

(In the process we (thanks Michael) found a bug with Mozilla not updating the HSTS list...yay!)

By the end of July 2019 the extension was deployed to all Wikimedia sites; the delay was mostly because I didn't have time to follow up on it during the school year. Since then, things have looked relatively stable from a performance perspective.

Thank you to Ori & Cyde for bringing up the idea and Reedy, Krinkle, James F & ashley for their reviews.

This is about understanding data in Wikidata. The article is about understanding what you can and cannot do with incomplete data; it is not so much about the Ecological Society of America.

The most recent work started with the news of a new Wikipedia article. Prof. Cottingham is a 2015 fellow of the ESA, and there is a category for fellows; adding her and other missing fellows to Wikidata showed that one fellow had no Wikipedia article. At the time there were 90 known fellows, and for only two of them it was known when they became a member.

I expected that new fellows would be known to Wikidata not just as an "author string" but as an "item". So I added 14 of the 2019 cohort and found this was not the case. I then looked up the known fellows on the ESA webpage and added their fellowship dates to Wikidata, because I wondered whether it was particularly the older fellows who are represented in Wikipedia.

While adding the dates, I added many alternate names to aid disambiguation, removed one item, and found two false friends: fathers mistaken for their sons. When I was done I had a good impression of the data on the website, and even though I do not have the full numbers, I feel confident in my belief that it is the older ecology and ecologists that are represented in Wikipedia.

When you scrutinize the list of fellows, you will find it includes "Early Career Fellows"; they are "elected for advancing the science of ecology and showing promise for continuing contributions" and hold the title for a limited amount of time. Programs like these exist all over the world, at many scientific organisations. This time I did not spend time on them, but from previous experience I can safely say that "promising" is putting it mildly.

Wikidata is a wiki, and as such the work that I did is of value even though it is incomplete. I did not add all the missing fellows, for instance. The ESA is very much an organisation for America (check the employment of its fellows), yet it takes pride in global attention and solicits membership fees from all over the world. It would take a lot of additional data to assess whether its subject matter is biased towards America, and in what way.

For many of the fellows I added, there are papers with "author strings" waiting to be linked to an author. The same can be said for the fellows that are still missing. The ESA can be compared to other ecological organisations, but how to deal with the differences takes a completely different understanding. It takes more data to make this possible, but the data does not need to be complete; that is the beauty of averages.

Honorable Minister Prasad:

I am writing today on behalf of the Wikimedia Foundation to express our concerns regarding the Ministry’s proposed changes to the intermediary liability rules and their negative impact on access to knowledge and participation for Indians online. The Wikimedia Foundation is the nonprofit that hosts and operates Wikipedia, along with a number of other free, collaborative projects aimed at providing access to knowledge for everyone. We write on behalf of our organization, but also on behalf of the hundreds of millions of volunteer readers and editors who use and contribute free knowledge to Wikipedia and the other Wikimedia projects every month.

This November, readers in India visited Wikipedia over 771 million times, the 5th highest number of views to Wikipedia from any country in the world. Not only do Indians access Wikipedia in large numbers, they are integral and valued contributors to the encyclopedia, which is available in 23 languages spoken across India. Since 2017, the Foundation has sponsored a content creation contest aimed at growing the number of available articles in local languages across India. The 2017–2018 contest resulted in the addition of almost 4,500 new articles to Indic language Wikipedias. Given the prominence and popularity of Wikipedia among internet users and contributors across India, as well as its status as a vital learning resource, we hope that you will take our comments into account when considering and notifying the new draft rules on intermediary liability.

During the consultation on these rules at the beginning of 2019, we joined a coalition of organizations to voice our worries about the negative effects of the draft proposal on freedom of expression and innovation. We appreciated the opportunity to submit that letter and welcome the Ministry’s willingness to consider comments on the bill. Unfortunately, it has been nearly a year since the official consultation on these draft rules was closed, and neither participants in the consultation nor the public have seen a new draft of these rules since that time. We trust the Ministry has taken the valid concerns that we and many others have raised into account when revising the draft rules. We encourage the Ministry to release the current draft and make sure there is a robust and informed debate about how the internet should be governed in India.

We remain concerned that requirements in the bill will hinder our mission to provide free access to knowledge in India, rather than support it. The most negative effects on websites posed by the requirements in the draft bill might be mitigated by the introduction of definitions of social media intermediaries and a layered approach to obligations like those laid out by the Data Protection Bill of 2019 (section 26), which sets different requirements for significant fiduciaries. Yet, even with such an approach, we remain concerned about requirements which encourage or necessitate automated filtering of user uploads, either explicitly or implicitly through short takedown times, and could severely disrupt the availability and reliability of Wikipedia. Both of these qualities depend on a unique and effective content moderation system that allows people to decide openly what information is appropriate for the encyclopedia.

The information that is included on Wikipedia is collected and curated by thousands of global volunteers who work together to make knowledge available for everyone. Wikipedia is structured by individual languages, not geographic markets. People work together in real time to write articles about topics of interest on Wikipedia. Some people make larger changes to the articles and add substantial sections with new information; others focus on incremental improvements by correcting grammar. Together, they work to ensure information stays neutral and is based on reliable sources. They also remove content that does not meet the site’s quality standards. This collaborative system of people would be severely disrupted by obligatory filtering systems that monitor for and automatically remove illegal content across the website.
Short response times for removals that would essentially require the use of automatic systems would interfere with people’s ability to collaborate in real time “on Wiki”, the collaborative, open editing model that has been crucial to Wikipedia’s growth.

Requirements to quickly and automatically remove content that may be illegal in one jurisdiction, without meeting globally accepted human rights standards, are also antithetical to Wikipedia’s global perspective and reach. People around the world see the same content on Wikipedia—someone in New Delhi could collaborate on the same English Wikipedia article alongside an editor in Berlin. This process makes Wikipedia articles richer and more reflective of how the world understands a given topic. As such, it is impossible to make changes inside a Wikipedia article visible in one country but not another. Fulfilling mandatory content removal requirements from one country would leave problematic gaps in Wikipedia for the whole world, break apart highly context-specific encyclopedic articles, and prevent people from accessing information that may be legal in their country. Wikipedia’s broad reach and cross-cultural collaboration are integral to our goal of providing access to knowledge for everyone, and these requirements significantly hinder that goal.

We are also concerned about the material burden that some requirements in the draft bill would place on the Wikimedia Foundation’s nonprofit model, which operates to serve people around the world. While it may be possible for larger companies to comply with local incorporation rules, doing so would be an unrealistic burden for a global nonprofit with limited resources. Rules which require the removal of content or cooperation with law enforcement within short time periods could also prove impracticable without significant additional investments in either new employees or technology. We fear that such burdens will consume vital resources that would otherwise be directed to providing access to knowledge and reliable, neutral information to Indian citizens.

Finally, we believe that imposing traceability requirements on online communication is a serious threat to freedom of expression, as it could interfere with the ability of Wikipedia contributors to freely participate in the project. An important feature of Wikipedia is that the website does not track its users. This is important for data protection reasons and for readers’ and contributors’ autonomy alike. However, it is also crucial for the safety of Wikipedia contributors who contribute or moderate content on sensitive topics, or who contribute from regions where their personal safety could be at risk for editing Wikipedia. Requiring websites to track their users will discourage free communication and has the potential to discourage even legal economic activity on the internet, especially in countries where online censorship is prevalent, but not only there.

Based on the concerns raised above, we are worried that the Ministry may set rules that could have serious implications for Wikipedia and its mission to provide free access to neutral, reliable information. We humbly ask that the Ministry release a new draft of their proposed intermediary liability rules, which we hope have taken into account our comments in the public consultation from last year, and work with civil society groups to address any remaining concerns before notification of these rules.

Amanda Keton, General Counsel
Wikimedia Foundation

Today, the Turkish Constitutional Court has held that the more than two and a half year access ban of Wikipedia in Turkey was unconstitutional. We hope that access will be restored in Turkey soon in the light of this new ruling from Turkey’s highest court and will update this statement if we receive notification that the block has been lifted. We join the people of Turkey, and the millions of readers and volunteers who rely on Wikipedia around the world, to welcome this important recognition for universal access to knowledge.

“Today’s decision from the Turkish Constitutional Court is an important step for the right to knowledge. The country’s highest court has taken a stand in favor of freedom of expression and access to information for the people of Turkey, setting a precedent for countries around the world. We hope access will be restored at the soonest in light of this decision,” said Katherine Maher, CEO of the Wikimedia Foundation. “At Wikimedia, we remain committed to ensuring everyone, everywhere has the right to freely access information.”

The legality of the access ban is currently being considered by the European Court of Human Rights. That case was brought in May 2019, and has been granted priority status by the Court, effectively expediting it. The Turkish Government is due to provide its written observations in the case in January.

A community of volunteers creates and maintains Wikipedia, guided by the belief that all knowledge should be freely accessible to everyone, everywhere. Thanks to their efforts, over its nearly 19-year history Wikipedia has become one of the world’s most popular and beloved websites. It is available in hundreds of languages, including the Turkish Wikipedia (Vikipedi) — written by Turkish speakers for Turkish speakers.

With this new ruling, we hope that the more than 80 million people of Turkey will once again have unrestricted access to all languages of Wikipedia. People of all ages and backgrounds — students, academics, professionals — should be able to easily access information on a wide range of topics, from abiogenesis to the Ottoman Empire to the Süper Lig. And the people of Turkey should be a part of the biggest global conversation about the culture and history of Turkey, and all the world’s knowledge.

Looking ahead, we hope that Wikipedia will continue to be a valuable resource for the people of Turkey, and that Turkish Wikipedia, currently with more than 335,000 articles, will continue to grow and improve. We hope that more people in Turkey will edit Wikipedia and contribute to the collective effort to share knowledge with the world.

While today’s ruling from the Constitutional Court is a welcome development for free knowledge and the people of Turkey, there are other threats to our ability to continue to freely, openly, and collaboratively build the largest collection of knowledge in human history. Despite this, today is a good day for those of us who believe in the power of knowledge and dialogue. We are encouraged by this outcome and will continue to work towards a world in which knowledge is freely accessible to all.

It is not a list when it is the result of a query

09:32, Thursday, 26 December 2019 UTC
A list is a presentation of data. When a list is maintained manually, the list IS the data; when it is the result of a query, it REPRESENTS the data.

The difference is quite important. For a queried list, changing what information is shown means changing the definition of the query, and picking up changed data is just a matter of re-running the query. Changing the information in a manual list is a lot of work, and therefore there is no integrity in the data itself; it is always pot luck what quality the data is.

In the Wikipedia world, Listeria is king of the queried lists. For some its use is controversial, but things are changing for the better. Projects like Women in Red use Listeria a lot; their work is possible because people add notable women to Wikidata. The queries work on the basis of awards, professions, or nationality, enabling volunteers to write the articles they care to write. This works because once an article is written, its subject is automagically removed from the lists.
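A toy sketch of why queried lists stay in sync (this is not Listeria's actual mechanism, and the names and fields are made up): the list is regenerated from the data, so a subject drops off automatically once an article exists.

```python
# Hypothetical data, standing in for Wikidata items.
data = [
    {"name": "A. Example", "occupation": "ecologist", "has_article": False},
    {"name": "B. Sample",  "occupation": "ecologist", "has_article": True},
]

def red_link_list(items, occupation):
    """A Listeria-style list: people matching the query with no article yet."""
    return [p["name"] for p in items
            if p["occupation"] == occupation and not p["has_article"]]

# Before: A. Example still needs an article.
assert red_link_list(data, "ecologist") == ["A. Example"]

# Writing the article updates the data; re-running the query updates the list.
data[0]["has_article"] = True
assert red_link_list(data, "ecologist") == []
```

A manual list would have needed someone to notice the new article and edit the list by hand.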

On the English Wikipedia, consensus has it that manual lists are to be preferred. However, empirically, automated lists perform better {{REF}}, and as data in Wikidata does not suffer from "false friends", even the support for "red links" is vastly superior.

There is no point in anecdotal evidence about which is best. When the English Wikipedia has a black (unlinked) entry for Stephen Fleming on its page for the Spearman Medal, that is an obvious start for a new item on Wikidata, one that is more than just a person who won the Spearman Medal. It then becomes a target for the lists of special interest groups who aim to cover "their" subject matter well.

The next stage of the acceptance of lists relies on the realisation that "consensus" does not serve us well, particularly when it trumps established facts. Consensus serves us well in politics, and in discussing what the Wikimedia projects could be.

What is it about Jess Wade?

09:32, Thursday, 26 December 2019 UTC
It is not only that Jess writes Wikipedia articles; others do as well. It is not only that she engages girls with science, or the way she enthuses about female (STEM) scientists; others do as well. It is not only that her tweets engage us with, for instance, #PhotoHour, or that she wants us to read the (fabulous) books by Angela Saini; she also organises for schools to have Inferior in their libraries, so girls can read it and become scientists too. What makes her special is that she engages people to be part of what she communicates so well.

Take me, for instance: Jess is on Twitter, and I read her daily new article. For the person she writes about, I enrich the information on Wikidata and ensure that "authority control" is set in the Wikipedia article. I add award information, authorities, and employment and education info, and depending on how interesting an award is to me, I add its other recipients as well.

It is not only me; there are many more people inspired by Jess who get involved. They read the books she champions, donate so that more girls can read Inferior, follow her on Twitter, write articles, and get involved themselves. It all happens because of the enthusiasm that Jess brings to us all. This enthusiasm and involvement are what I so cherish. When the inevitable naysayers come along, it dampens the positivity, the sense that we are making a difference.

When you want to know how important the women she writes about can be, consider Joy Lawn; she tweets really effectively as well. It shows how women scientists communicate the relevance of science, and it is vitally important for us to know about the science and the subjects they champion. We may think of her as our Jess, but actually it is Dr Jess Wade: she is a scientist first, she promotes science, and Wikipedia is a vehicle to get the message out.

Ed Erhard famously wrote in 2018, "Why didn’t Wikipedia have an article on Donna Strickland, winner of a Nobel Prize?" A year later we can say that it is extremely likely that a Donna Strickland or a Margaret Nakakeeto is known in Wikidata, if only as a co-author of a paper (technically: an "author string").

When Ed wrote his article, it was to highlight the gender gap we have in Wikipedia. That gap is arguably relevant and important, and it needs the attention it gets. However, it does not follow that it is the only "gap" that needs addressing; it does not even follow that the gender gap is the gap with the biggest impact.

When you consider Africa, and particularly science in Africa, the subjects that matter most in Africa are reflected in, for instance, the Scholia for the members of the South African Academy of Science. As far as I know, its gender ratio is 27%, and this is a list with a mix of Wikipedia articles and Wikidata items. It nicely shows the attention African science gets in Wikipedia.

In Africa there is a huge amount of attention for maternal and neonatal care (e.g. in Uganda), and as such programs impact the health and survival of women, it follows that more women will become notable, notable enough for Wikipedia.

By giving attention to female African scientists, the subjects they are known for gain relevance. Their Scholias are developed, including links to co-authors and papers. It will improve the likelihood that when African science awards are announced, we will at least know the recipients in Wikidata.

In November 2019, Tim Berners-Lee and the Web Foundation launched the Contract for the Web, a set of rules designed to address the challenges facing digital communication and participation, from threats to online privacy and security to connectivity and digital inclusion. The multi-stakeholder effort outlines nine principles for governments, companies, and citizens, designed to safeguard the future of the Web.

The Foundation has not yet signed on to the Contract and we’d like to address why.

The Wikimedia Foundation participated in the Core Group and the Working Group on Principle 6, “Developing technologies that support the best in humanity and challenge the worst,” which aims to support positive technology that puts people first. The Contract aligns with our goal to foster a web where everyone can find and access knowledge freely.

We deeply support the principles of the Contract for the Web. At Wikimedia, we are committed to fostering a digital information sphere that is accessible for everyone, that offers strong privacy protections, that supports free expression and open collaboration, and safeguards the web from bad actors that seek to monopolize and use it for harm. All of these principles align closely with the commitments underlined in the Contract for the Web.

We chose not to sign the Contract at this time because we still have open questions about how the Contract will be implemented to maximize its impact. In particular, we are exploring how each signatory will be held accountable to these commitments. We are especially interested in seeing concrete steps towards enforcement mechanisms that ensure big technology companies that endorse the Contract will change their attitudes and current practices that violate the principles in the Contract.

The world’s biggest challenges, from the global climate crisis to disinformation online, can only be solved if we work together and ensure that everyone is doing their part. Active reporting, transparency, and clear indicators for progress are critical to ensuring the implementation of the Contract for the Web. However, it will take clear, direct, and enforceable systems to ensure we’re all contributing to a better internet for everyone.

We’re happy to see that creators and supporters of the Contract for the Web are considering opportunities for enforcement and accountability as the Contract enters its next phase of planning. We look forward to continuing discussions around implementation of the Contract and remain in full support of the principles it sets out to achieve.

Wikimedia Foundation 

Identity verification on Wikipedia

13:24, Monday, 23 December 2019 UTC

Wikipedia has many active features which are broken. Conventional product development follows the idea of the minimum viable product, in which a scheduled feature rollout gives most users a narrow but working experience. With Wikipedia, anyone can integrate a new product into the user experience, and if there is any oversight, it is by volunteers. Some places in Wikipedia have more oversight and some less, but there are critical high-profile usage patterns where the process lacks any evidence of coordinated design. I will share one: Wikipedia’s identity verification process. Earlier this year I drafted Wikipedia:Identity verification as guidance on this.

“Identity verification” is when a website confirms the identity of the person or organization operating a user account in order to confer some privilege on that account. This comes up often in two contexts in Wikipedia: one is confirming that we are securing compatible copyright licenses from rightsholders when accepting files into Wikimedia Commons, and the other is when a person or organization claims a right to edit an article about themselves or something close to them. One challenge with all of this is that performing reasonable identity verification, of the sort most people would imagine, is complicated and expensive. If Wikipedia had a hard definition for its standard of verification, then either the standard would need to be easy to meet, or costly to achieve, or we could usually achieve it but with frequently flagged exceptions. In practice right now we strive for a high standard but have many exceptions, and the entire process is more labor-intensive than I would like.

Another challenge is that upon confirming an identity, we lack consensus about what rights this should confer. The challenge of implementing an identity verification process is something I would talk about with Wikipedians, but the issue of user rights is a matter of broader public interest, since it affects how everyone’s community relates to Wikipedia. I will say more about that second challenge here.

Commons has a default practice of encouraging high-profile copyright holders to hide their identity when uploading media to Commons. I think this has always been a procedural error. For example, if a famous photographer or an organization wishes to apply a free and open license to media, they would usually have no need to hide their identity: when it is common knowledge who the copyright holder should be, the copyright of the media could obviously only come from that well-known organization or person.

I would like to change the Wikipedia default practice from “hide the identity of the copyright holder of media” to “only by request, hide the identity of the copyright holder of media”. My expectation is that in 90%+ of cases, most people and organizations would not request to have their identity hidden for media uploads. I see no reason to believe that users are even aware that we have a practice of obscuring their identity. We go to a lot of trouble to hide identity, and hiding identity means that people who reuse media can have doubts for 95 years (until the work enters the public domain) about whether the media actually came from the copyright holder.

This confusion starts because there is a long-standing practice of our Wikimedia Commons community performing a number of functions by email to assist with media uploads. The Volunteer Response Team is a global collective serving every language and wiki project for anything that needs to be discussed by email. Many wiki administrators call this service Commons:OTRS, a name I try to avoid because those letters are the brand name of some software we use for part of the process. There are documentation pages about the various language subgroups of this team at the Wikidata item.

A particular problem is the use of the PermissionOTRS template. Its designed use is to mark that, privately, trusted Wikipedia administrators have performed identity verification with the copyright holder of media uploaded to Wikipedia. The identity verification validates the copyright license the uploader has chosen, and all of this indicates that a file has a free and open copyright license. I advocate for this process to happen in public rather than in an email backchannel.

Suppose that the office of a public figure, like a corporate executive, the head of a nonprofit organization or NGO, faculty at universities, a government officer, or anyone similar writes into our email service to share their publicity photo. In the Volunteer Response Team queue, there is a process for identity verification of the rights holder, then if the file passes verification, it gets that template above marking it to communicate that in private email the file has come to Wikimedia Commons with a Wikimedia compatible copyright license from the copyright holder. This typically happens for organizations which publish photos with conventional copyright on their website, but then want some of those photos mirrored in Wikipedia with a compatible copyright license.

The change that I think I want is that, by default, when anyone with a public identity applies a copyright license to content with Wikimedia Commons mediating as witness to their application of the license, we do all of this in public, seeking to publish the identity of the copyright holder. Again, our norm right now is privacy, and I would still allow that privacy for special cases by request, but I believe that in many cases this privacy is an unwanted burden on the rightsholder and our readers, and a labor burden on our administration.

If I were to go further, I wish that we could make this whole OTRS permission process public by default, so that anyone could review the license. The norm for Commons is associating uploads with Wikimedia accounts, which are already pseudonymous and which anyone can have. For people who write in by email, the new default process I want is that we share some aspect of their identity (maybe their name, or their domain name if it is a company, organization, or personal domain), or we offer them some other pseudonymous process, perhaps including the one we have in place now, if they request it.

I know that this is a discussion which needs to include various agents in the Volunteer Response Team. It is still challenging to organize meetings in the Wikipedia space. I already meet monthly for online chats with about 10 Wikipedia groups. I have capacity to meet more, and volunteers enjoy joining these meetups, but no one enjoys administering the meetup rooms, polling for an acceptable time, and performing the hosting functions of the online meeting space. Many such challenges could be solved with a few voice and video meetings.

I was thinking about this issue because on 20 December 2019 Slate published an article about a public relations representative trying to share a publicity photo of a United States presidential candidate. The article generates a lot of journalism out of what ought to be straightforward: uploading a photo. I recognize that there are multiple issues in the article, but the one that has my attention is how we scale up the ability of every public figure and organization in the world to upload publicity photos into the Wikipedia network without requiring so much of our human attention.

Flickr's future

01:58, Monday, 23 December 2019 UTC


Don MacAskill, CEO of SmugMug (Flickr's parent company), sent out an email to all Flickr users a few days ago, talking about how they need more paying customers to make the service sustainable. I thought it was a reasonably honest-sounding email, admitting the trouble of hosting billions of photos, but some bits of the internet took it as either a) Flickr being greedy and wanting to get rid of more free-tier users; or b) another death knell, and a good reason to do whatever's possible to get photos off Flickr.

I use Flickr a bit, because it's an easy way to share photos with family. Through that, people I know sometimes end up using it too, and I'd rather they did that than use Google Photos or Facebook, because Flickr has a better metadata system and keeps the full original files. It'd certainly be better if everyone used open source software and self-hosted their stuff, but if that's not going to happen then I'd rather people used Flickr. The other thing I use it for is as a staging ground for photos I want to put on Wikimedia Commons.

There's a bit of a discussion happening on Commons about whether it's worth bulk-importing public open-licensed photos from Flickr. I sort of think it's a good idea, but also suspect that there are tens of thousands of poor-quality photos and near-duplicate shots. It'd create ever more of a backlog of undescribed things on Commons, which'd be annoying.

These days, I'm finding more and more that there's a distinction in photos I take, broadly in two categories: photos of people and places that are dear to me, which serve as memories and that I want to look back over periodically; and photos of places and objects that strike me as more generally interesting. The former group I add very personal captions to (offline), and they're much more like private journal entries; the latter I try to get onto Commons because I hope that they are useful to other people and crucially I want them to be open to being described by anyone. I don't often want the same photo to be in both categories.

Tech News issue #52, 2019 (December 23, 2019)

00:00, Monday, 23 December 2019 UTC
2019, week 52 (Monday 23 December 2019)

weeklyOSM 491

14:10, Sunday, 22 December 2019 UTC


lead picture

Haiku poems created with OpenStreetMap locations 1 | © satellite studio, Map data © OpenStreetMap contributors

  • We would like to thank all our readers, across the nine different language versions of the newsletter, who show their appreciation of our work through their interest and feedback. This gives the editors confirmation that we are doing many things right, providing the OSM community with the most important new information we observed.
  • Mapping and celestial observation have always been sisters. As civilisations across the earth are about to celebrate the winter solstice in many different ways, we wish all our readers, their families and their friends a peaceful and contemplative holiday. We hope that the next issue will be published just in time for 2020 (as you know, the editors don’t have any holidays or vacations, so if you would also like to participate, please contact us 😉).


  • Rebecca Firth tweeted a graphic suggesting steps in a five-stage process of adding detail on OpenStreetMap. The steps reflect typical HOT projects and, as suggested in the comments, there are likely to be similar steps but with different emphasis in other OSM activities.
  • Kevin Bullock, of Maxar, announced in an OSM diary entry that Maxar background imagery for OSM is being taken off-line. There have been a number of cases which suggest that access to this imagery is being abused. Maxar are working with the developers of at least one map editor to improve security. Comments provide links to various technical discussions on this point.
  • Norway is converting its last 100 outdated phone boxes into small libraries. The tagging for this is amenity=public_bookcase. Have fun tagging.
  • The OpenStreetCam and ImproveOSM platforms are moving to Grab following a new partnership with Telenav. Grab, a Singapore-based ride-hailing company, is an OSMF corporate member. Their remote mapping team works on missing roads in South-east Asia. In the past this has led to some mapping quality issues, and their subcontractor, GlobalLogic, caused controversy with an orchestrated signup of 100 employees to the OSMF in 2018.


  • Valeriy Trubin continues his series of interviews with OSM contributors. He talked to SviMik (ru) (automatic translation) from Estonia about mass imports and to wowik (ru) (automatic translation) from Russia about validators.
  • Ilya Zverev wrote (ru) (automatic translation) a rather sad note about the future of OSM. In short, he says that if nothing changes within 2 to 3 years the project will die. In his blog post he names the lack of control over areas such as tagging, the website, development resources and the licence as causes.
  • The OSM Turkey community arranged its first Mappy-Weekend, focusing on Trabzon and Rize cities in the Black Sea Region, with participation of Yer Çizenler (automatic translation), KTU Mapping Software and Technologies Student Society and Mapeado Colaborativo (automatic translation) on 14 and 15 December. The event’s details were shared on the wiki page (automatic translation).
  • In 2020 Germany will outlaw (de) (automatic translation) apps that warn users of speed cameras. The message sparked a lengthy discussion (de) (automatic translation) in the German forum about the extent of the regulation and the legality of OSM-related apps such as OsmAnd with such functionality.
  • Eugene Alvin Villar, who has been contributing to Wikipedia since 2002 and to OSM since 2007, posted a blog entry about how he held talks with Edward Betts about Wikidata+OSM at the State of the Map 2019 in Heidelberg and WikidataCon 2019 in Berlin. He links to the presentations held at the two conferences and explains why he thinks OSM and Wikidata should be linked together.

OpenStreetMap Foundation

  • Frederik Ramm, outgoing treasurer of the OpenStreetMap Foundation, released the Treasurer’s Report for the December 2019 annual general meeting of the OSMF.
  • The OSM Licensing Working Group has published the minutes of the meeting of 14 November 2019. The largest topic was the draft attribution guidance.
  • Christoph Hormann wrote a brilliant analysis of the OSMF board elections. A must read!
  • In his user blog, Frederik Ramm calls for an end to “leadership nonsense”. OSM is a project of hobbyists, makers and activists. Demanding management experience for the OSMF board does not benefit diversity efforts.
  • Ilya Zverev wrote a blog post provocatively titled “OWG Must Be Destroyed”, in which he outlines the problems he sees with the Operations Working Group. In the comment thread, Andy Allan suggests that readers who want a more practical set of suggestions, from a former OWG member, could read his ideas. The criticism isn’t new. Over five years ago, Ed Freyfogle pointed to organisational deficits and reported that he’d been told: “if you want to contribute as a sysadmin and get into the tech details you need to live in London and go to the pub with those guys.”
  • Heather Leson laments, in a diary entry, the absence of women or people from the global South on the new OSMF board. She makes some suggestions as to how this might be remedied in the future.
  • Manfred Reiter calls in his user blog (automatic translation) to return to the facts in the diversity discussion, to moderate the tone of voice and to carry out a thorough basic investigation with academic participation.


  • After the State of the Map comes the preparations for the next State of the Map. It will be held 3 to 5 July 2020 in Cape Town, South Africa. You can find initial information on the conference website and OSM wiki.
  • The second State of the Map Baltics after 2013 will take place in Riga, Latvia, on 6 March 2020. As Ilya Zverev announced, the conference language will be English.
  • The FOSSGIS 2020 (de) (automatic translation), Germany’s main conference on free and open geosoftware and OSM, is scheduled for 11 to 14 March 2020 in Freiburg. The last day, which is traditionally called “OSM Saturday” (de) (automatic translation), will offer many talks and meetings about OSM-related topics.

Humanitarian OSM

  • The HOT Blog shows six ways by which life and survival in problematic areas of the world can be improved through better maps. Support is provided through the current fundraising projects. Donations are requested.
  • “What we Learnt from Mapping African Megacity Dar es Salaam” – a blog post by Hawa Adinani (HOT).
  • Felix Delattre invited people to sign up if they wanted to help test Tasking Manager 4. Testing will take place in three phases between now and 8 February.


  • Andreas Binder introduced his winter layer, which displays winter hiking trails, snowshoe trails, ski tours, cross-country trails and many more.
  • Hidde Wieringa posted a detailed guide on how to create a cycling map with open data. The toolchain involves Mapnik, PostgreSQL with PostGIS extension, Python and GDAL and makes use of OSM and SRTM elevation data. The scripts he used are available on GitHub.
  • [1] A new website, OpenStreetMap Haiku, is an online poem generator created by Satellite Studio. The geo-fueled generator uses a map location and OpenStreetMap data to create randomised haikus using a database of coordinate-dependent words.
  • The company Targomo launched a tool to visualise POIs. The purpose isn’t to provide another POI map but to provide an API that allows you to find distribution patterns for better knowledge about people’s movements, local preferences and urban planning.
  • According to the Telegram channel “Urban data(ru) (automatic translation), Russian programmer Ilya Aralin created a map of cottage villages (ru) around Moscow (Russia). There are several layers of data: transport, ecology, shops. On this map villages are marked with different colors: blue – good conditions for life, red – bad conditions. Unfortunately, the OSM attribution is not properly specified.


  • Hacker News discusses the “new look”. Since that discussion a number of issues have been addressed, and further contributions are welcome.
  • GoldenCheetah, a data analysis tool for power meters (mainly cycling computers) is gradually switching from Google to OSM. There are currently several candidates for release 3.5. The release is expected in early 2020.
  • Alex Wellerstein describes his experiences with using Google Maps for his NUKEMAP. A lack of support, stagnating API, and a pricing model that is “insane and punishing if you are an educational web developer that builds anything that people actually find useful” made the decision to move to MapBox’s service and OpenStreetMap data an easy one.

Open Data

  • The newly founded, Swiss-based NGO European Water Project, which aims to reduce waste, has written instructions for adding new drinking fountains to OSM and asks for help with it. The NGO wants to make use of the fountain data with a Progressive Web Application that they are developing, which will allow users to refill water bottles.
  • How to distinguish terms like “Open” Data (including OSM), “Open” Source and “Open” Standards is explained quite precisely with many links in an article at GeoSpatial World.


  • Leaflet is probably the most widely used library for displaying maps, but a recent thread on Hacker News is a useful reminder of all the features which have been added to OpenLayers in the recent past. Noteworthy is mourner, Leaflet’s developer, explaining differences between Leaflet and OpenLayers.
  • Richard Fairhurst explains how he achieved a substantial performance improvement in rendering tiles using Mapnik. The key trick is to avoid compositing terrain (hill shading etc.) on the fly.
  • Abdishakur wrote a guide on how to use OSMnx to access OpenStreetMap data with Python using Google Colab.

Did you know …

  • … about the Wikimedia Commons mobile app (Android only) with which you can quickly upload pictures to Wikimedia Commons? OSM is used as a basemap.
  • … about OpenSeaMap? OpenSeaMap is an open source project aiming to create a free nautical chart of the world.
  • … about the MapTourist (ru) (translation) website, where daily maps from OpenStreetMap data for Garmin navigation devices and apps are published?
  • … about OSMHydrant, a Leaflet map that shows all hydrants on OSM? More and more fire brigades are using this map. It is available in 11 languages.
  • … that many cities in the world have real-time public transport maps based on OSM? For example, Tallinn (Estonia), Saransk (Russia) and Murmansk (Russia).
  • … that there are more than 1.5 million objects with a “fixme” tag? Maybe you can have a look and fix some in your vicinity? You can use Overpass-Turbo to find them.
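As a rough sketch of that last tip, the snippet below builds an Overpass QL query for "fixme"-tagged objects in a bounding box, which you can paste into overpass-turbo or POST to an Overpass API endpoint. The helper name and sample bounding box are illustrative assumptions, not part of the original item.

```python
# Sketch: build an Overpass QL query that finds objects tagged "fixme"
# inside a bounding box (south, west, north, east in decimal degrees).
def fixme_query(south, west, north, east, limit=50):
    bbox = f"{south},{west},{north},{east}"
    return (
        "[out:json][timeout:25];\n"
        f'nwr["fixme"]({bbox});\n'  # nwr = nodes, ways and relations
        f"out body {limit};"
    )

# Example: a small box around central Riga (coordinates are illustrative)
print(fixme_query(56.93, 24.08, 56.97, 24.14))
```

Running the printed query in overpass-turbo then shows nearby objects whose "fixme" notes you could work through.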

Other “geo” things

  • According to a blog entry, there are now one billion images on the Mapillary platform, all open and freely available for mapping in OpenStreetMap. The 500,000,000 mark was passed eight months ago.
  • Almost one million Romanian citizens from outside of Romania voted during the second round of the Romanian presidential elections in November 2019. Giorgio Comai used OSRM to work out how best Romanian voters in Italy could reach a polling station.
  • The Google Blog explains how they use various sorts of imagery (including street-level imagery), to create the maps available via Google Maps.
  • Google Maps are testing a new feature for identifying well-lit areas, which in turn may be used for finding routes perceived to be safer.
  • Martin Dobias, a core QGIS developer, talks about new QGIS 3D capabilities and future plans for QGIS 3D.
  • Not so long ago we wrote about an eagle who flew too far and drove Russian scientists into debt because of roaming charges. On this website you can watch how these eagles are flying around the world.
  • Ryan Morrison wrote on the Daily Mail Online that scientists have created “the most precise map yet” of the land underneath Antarctica’s ice sheet, to help them predict the impact of climate change on the continent.
  • At the end of November in Moscow (Russia) “Moscow Central Diameters” (a system of surface railway transport) was launched. “Let’s bike it”, the Russian movement for the rights of cyclists, analysed the scheme of train movement and created a map (ru) (automatic translation) of railway crossings. Most of them are inconvenient and unsafe for pedestrians and cyclists.

Upcoming Events

Where What When Country
Alice PoliMappers Adventures 2019 2019-12-01-2019-12-31 everywhere
London London Xmas Pub meet-up 2019-12-19 united kingdom
Biella Incontro Mensile 2019-12-21 italy
Düsseldorf Stammtisch 2019-12-27 germany
36C3 OpenStreetMap assembly (hosted by Chaos Communication Congress) 2019-12-27-2019-12-30 germany
London Missing Maps London 2020-01-07 united kingdom
London Geomob LDN (featuring OSMUK) 2020-01-08 united kingdom
Stuttgart Stuttgarter Stammtisch 2020-01-08 germany
Berlin 139. Berlin-Brandenburg Stammtisch 2020-01-09 germany
Bochum Mappertreffen 2020-01-09 germany
Nantes Rencontre mensuelle 2020-01-09 france
Riga State of the Map Baltics 2020-03-06 latvia
Valcea EuYoutH OSM Meeting 2020-04-27-2020-05-01 romania
Cape Town State of the Map 2020 2020-07-03-2020-07-05 south africa

Note: If you would like to see your event here, please put it into the calendar. Only data which is there will appear in weeklyOSM. Please check your event in our public calendar preview and correct it where appropriate.

This weeklyOSM was produced by Elizabete, Nakaner, Polyglot, Rogehm, SK53, Silka123, SomeoneElse, Guillaume Rischard (Stereo), SunCobalt, TheSwavu, YoViajo, derFred, geologist, jinalfoflia.

#Science and America first

10:23, Saturday, 21 December 2019 UTC
Several US American science organisations are quite adamant that for them, it is America first. Stupidity has its place, and these days the United States has a lot of it, particularly as those same science organisations expect people from the rest of the world to accept the "pre-eminence" of the USA.

There may be good reasons to be a member of these organisations but from my perspective, it is one thing to be with stupid, it is another to have these organisations argue their case on "your" behalf. So when you are a scientist, chances are that we already know you at Wikidata. We may even know about your science, your co-authors, your memberships.

Take for instance Prof Lise Korsten; she is probably South African, and this is her Scholia. She has many co-authors, and for some we do not know their gender, and for most we do not know their nationality. We do not know if she is a member of any science organisation, and we do not know that for her co-authors either. So you may add your professional memberships at Wikidata, and your nationality, and when you do know the nationality of your co-authors, you may add that as well.

In this way we make obvious to US American stupid that science is global.

Surface Pro X thoughts after a few weeks

03:51, Friday, 20 December 2019 UTC

After a few weeks using the fancy new Windows 10 ARM64 tablet, the Surface Pro X, I’ve got a few thoughts. Mostly good so far, but it remains an early adopter device with a few rough edges (virtually — the physical edges are smooth and beautiful!) Note that my use cases are not everyone’s use cases, so some people will have even more luck, or even less luck, getting things working. :) Your mileage can and will vary.


It’s just gorgeous. Too gorgeous. It’s all black-on-black labeled with black type. Mostly this is fine, but I find it hard to find the USB-C ports on the left side when I’ve got it propped up on its stand. :)

Seriously though, my biggest hardware complaint is that the bezels are too small for its size when holding as a tablet in the hands — I keep hitting the corners with my fat hands and opening the start menu or closing an app. I’m still not 100% sold on the idea of tablets-with-keyboards at this size (13″ diagonal or so).

But for watching stuff, the screen is *fantastic*. The 3:2 aspect ratio is also much better for anything that’s not video, while still not feeling like I’ve wasted much space on a 16:9 letterbox.

The keyboard attachment is pretty good. Get it. GET IT. I got the one that also has the cradle for the pen, which I never use but felt like I had to try out. If I did more art I would probably use it.

Performance and emulation

The CPU is really good. It’s got a huge speed boost over the Snapdragon 835 and 850 in older ARM64 Windows machines, and feels very snappy in native apps like Firefox or the new Edge. With 4 high-power CPU cores and 4 low-power cores, it handles multithreaded workloads fairly well unless they get confused by the scheduler… I’ve sometimes seen background threads get pushed to the low-power cores, where they take a long time to run.

(In Task Manager, you can see the first 4 cores are the low-power cores, the next 4 are high-power.)

x86 Windows software is supported via emulation, both for store apps and regular win32 apps you find anywhere. But not everything works. I’ve generally had good luck with tools and applications – Visual Studio, VS Code, Chrome, Git for Windows, Krita, Inkscape all run. But about 1/2 of the Steam games I tried failed to run, maybe more. And software that’s x64-only won’t run at all, as there’s no emulator support for 64-bit code.

Emulated code in my unscientific testing runs 2-3 times slower than native code on sustained loops, and you can expect loading-time stuff to be slower too, because things have to get traced/compiled on the first run through, or again whenever code is modified in memory.

Nonetheless, 2-3 times slower than really-fast is still not-bad, and for UI-heavy or i/o-heavy applications it’s not too significant. I’ve had no real complaints using the x86 VS Code front-end, but more complaints with, say, compiling things in Visual Studio. :)

Web use case

Most of what I use a computer for these days is in a web browser environment, so “using the web” is big. Firefox has an optimized, native ARM64 build. Works great. ’nuff said.

Oh also Edge preview builds in the Dev and Canary channel are ARM64 native and run great, if you like that sort of thing.

Chrome, however, has not released a native build and will run in x86 emulation. If you need Chrome specifically it *will install and run* but it will be slow. Do not grab custom Chromium builds unless you’re using them only for testing, as they will not be secure or get updated!

Developer use case

I’m a software developer, so in addition to “everything that goes in a web browser” I need to use tools to work on a combination of stuff, mostly:

  • PHP and client-side JavaScript code (MediaWiki, a few other bits)
  • weird science C / JavaScript / emscripten / WebAssembly stuff (ogv.js, which plugs into MediaWiki’s video player extension)
  • research work in Rust (mtpng threaded PNG compressor)

LAMP stuff

I’m used to working in either a macOS or Linux environment, with Unix-like command line tools and usually a separate GUI text editor like Visual Studio Code, and never had good experiences trying to run the MediaWiki LAMP-stack tools on a Windows environment in years past. Even with Vagrant managing a VM, it had proved more fragile on Windows for me than on Mac or Linux.

WSL (Windows Subsystem for Linux) has changed that. I can run a Debian or Ubuntu system with less overhead and better integration to the host system than running in a traditional VM like VirtualBox or Hyper-V. On the Surface Pro X, you get the aarch64 distribution of Ubuntu or Debian (or whatever other supported distro you choose to install) so it runs full speed, with no emulation overhead.

I’ve been using a MediaWiki git checkout in an Ubuntu setup, using the standard PHP/Apache/MySQL/whatevers and manually running git & composer updates. The main downside to using WSL here is that services don’t get started automatically because it doesn’t run the traditional init process, but “service mysql start” etc works as expected and gets you working.

For editing, I use Visual Studio Code. This is not yet available as an ARM64 optimized build (the x86 frontend runs in emulation), but does in 1.41 now include ARM64 support for WSL integration — which means you can run the PHP linter on your code running inside the Linux environment while your editor frontend is a native Windows GUI app. No wacky X11 hacks required.

emscripten stuff

The emscripten compiler for WebAssembly stuff works great, but doesn’t ship ARM or ARM64 builds for any platform yet in the emsdk tool.

You can build manually from source for now, and hopefully I can get builds working from the emsdk installer too (though you still would have to run the build yourself).

The main annoyance I had was that Ubuntu LTS currently ships an old node.js, which I had to supplement with a newer build to get my environment the way I wanted it for my scripts. :) This was pretty straightforward.

Rust stuff

Rust includes support for building code for Windows ARM64 — it has to, to support things like Firefox! — but the compiler & tools distribution comes as x86. I’m sure this will eventually get worked out, but for now if you install Rust on Windows you’ll get the x86 build and may have to manually add the aarch64 target. But it does work — I can compile and run my mtpng project for Windows 10 ARM64 on the device.

Within a WSL environment, you can install Rust for Linux aarch64 and it “just works” as you’d expect, as well.

Final notes

All in all, pretty happy with it. I might have preferred a Surface Laptop X with similar specs but a built-in keyboard, but at a desk or other … “surface” … it works fine for typey things like programming.

Certainly I prefer the keyboard to the keyboard on my 2018 MacBook Pro. ;)

Some little-known bird books from India - M.R.N. Holmer

05:54, Thursday, 19 December 2019 UTC
A fair number of books have been written on the birds of India. Many colonial-era books have been taken out of the clutches of antique book sellers and wealthy hoarders and made available to researchers at large by the Biodiversity Heritage Library but there are still many extremely rare books that few have read or written about. Here is a small sampling of them which I hope to produce as a series of short entries.

One of these is by M.R.N. Holmer (Mary Rebekah Norris Holmer, 6 June 1875 - 2 September 1957), a professor of physiology at Lady Hardinge Medical College who was also the first woman board member in the Senate of Punjab University, a first for any university in India. Educated at Cambridge and Dublin University, she worked in India from 1915 to 1922 and then returned to England. She wrote several pieces on the methods of teaching nature study, and seems to have been very particular about these ideas. From a small fragment, it would appear that she emphasized the use of local and easily available plants as teaching aids and deplored the use of the word "weed". Her sole book on birds was first published in 1923 as Indian Bird Life and then revised in 1926 as Bird Study in India. The second edition includes very neat black-and-white illustrations by Kay Nixon, a very talented artist who illustrated some Enid Blyton books and apparently designed posters for the Indian Railways.

A rather sparse Wikipedia entry has been created - more information is welcome!

A scanned version of her bird book can now be found on the Internet Archive. Holmer came from a Christian Sunday School approach to natural history, which shows up in places in the book. Her book includes many literary references, especially to R.L.S. (R.L. Stevenson). In another part of the series we will look at more "evangelical" bird books.

Improving PDF Annotations from GoodReader

05:00, Wednesday, 18 December 2019 UTC

For many years now, I’ve printed out PDFs and scribbled annotations on them. I then dictate my annotations (i.e., excerpts and comments) into a text file that I can transform and include in my bibliographic mindmap system (see in thunderdell).

With the purchase of an iPad—I gave up on waiting for a decent Android tablet—I’m now annotating PDFs via the GoodReader app. Of course, the accuracy of the text highlighted is only as good as the PDF. The copyable text, generated by OCR, can have conjoined words or suffer from errors resulting from misunderstood ligatures, accents, or cruft. Also, the actual page number of the PDF probably doesn’t correspond to the document’s pagination.

With a short Python script, I use a dictionary to correct OCR errors and transform from the GoodReader format into that used by my bibliographic mindmap system. This doesn’t correct everything (e.g., words with capitals) and can introduce a few errors itself, but it’s a great improvement on the original OCR. The --number argument also lets you correct the page numbers by an offset.
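I haven't reproduced the actual script here, but the core idea can be sketched in a few lines of Python. The corrections table, the helper names, and the sample annotation format are all illustrative assumptions, not the real ones from the script.

```python
# Sketch (not the actual script): fix common OCR errors via a lookup
# table, and shift page numbers by a fixed offset. The sample line
# format and the corrections dictionary are illustrative assumptions.
import re

CORRECTIONS = {
    "ﬁ": "fi",  # misread "fi" ligature
    "ﬂ": "fl",  # misread "fl" ligature
}

def fix_ocr(text, corrections=CORRECTIONS):
    # Apply each correction as a plain substring replacement.
    for bad, good in corrections.items():
        text = text.replace(bad, good)
    return text

def shift_page(line, offset):
    # Assumed annotation format: "Highlight (page 12): some excerpt"
    def bump(m):
        return f"page {int(m.group(1)) + offset}"
    return re.sub(r"page (\d+)", bump, line)

line = "Highlight (page 12): a modiﬁed excerpt"
print(shift_page(fix_ocr(line), -3))
```

A real version would read the exported annotation file line by line and take the offset from a --number command-line argument.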

Semantic Wikibase

00:00, Wednesday, 18 December 2019 UTC

Imagine the data from Wikidata being available in your own wiki for you to query, visualize and enrich. Imagine you could use all the standard Semantic MediaWiki tools such as interactive inline visualizations on this data.

Sadly for now imagination is all we have. This article outlines functionality which we (and everyone we talked to) think would be of great value. We are looking for funding so we can make it a reality.

Project highlights

The goal of this project is to make data from Wikidata easier to consume in other wikis.

The goal is achieved by creating a bridge between Wikibase, the software that runs Wikidata, and Semantic MediaWiki (SMW). This bridge would build on top of Wikibase federation and allow using data from Wikibase in SMW just like native SMW data. Such a connection is relatively simple and brings with it many time-tested features not currently available for Wikibase data.


  • Consumption of data from Wikidata in typical structured wikis
  • Enriching local data sets with data from Wikidata
  • Visualizing data from Wikidata with all SMW visualization formats
  • Using existing SMW tooling to analyze and work with Wikidata data
  • Creating (simplified) projections of Wikidata data
  • Comparing local values with those in Wikidata
  • Doing all the above with data from any Wikibase instance

The project would create a foundation for further integrations like contributing data back to Wikidata.


We would create a new Semantic Wikibase extension for MediaWiki. This extension

  • Supports importing data from Wikidata/Wikibase into Semantic MediaWiki
  • Builds on top of SMW so existing tools and visualization formats can be used
  • Provides tools to select which entities are imported
  • Provides tools to project the data and map to local data models
  • Takes care of (optionally) updating the imported data
  • Allows linking local properties and concepts to Wikidata ones

We would follow Agile practices, meaning that we would be refining and extending the list of functionality to deliver through the development process by testing assumptions and soliciting feedback from stakeholders.


We are uniquely suited to develop this project since no one else has significant technical experience with both Wikibase and SMW. In particular, Jeroen De Dauw is one of the SMW maintainers. He also worked on Wikibase for about 5 years in a senior technical position, making more code changes than anyone else.

We expect that to get the project right we need between half a year and one year with two people (FTEs). The required time will depend on the exact scope and includes paying a proper amount of attention to usability and sustainability. This translates into a budget between 70k EUR and 200k EUR.

We think this is an insanely cheap price tag for the value this project would bring. For comparison, the Wikimedia Foundation spends several million a year on software development, with the full yearly cost of a single employee exceeding 70k EUR.

If you can fund (part of) this project or have ideas on how we can get it funded, please get in touch. We'd also like to hear from you if you have a use case that would greatly benefit from this project being realized, or otherwise have ideas on which functionality would be of greatest value.

On 16 December 2019, Judge T.S. Ellis, III, issued a ruling in Wikimedia Foundation v. NSA, our case against the United States National Security Agency’s Upstream mass surveillance practices. We filed this lawsuit in March 2015 with eight co-plaintiffs, to protect Wikimedia users’ ability to read and edit the projects without being subject to government surveillance. The District Court dismissed the case in October 2015, but in 2017, the Fourth Circuit Court of Appeals reversed that decision as to the Foundation, sending the case back to the District Court. Unfortunately, in this week’s ruling the Court held that the Wikimedia Foundation lacks standing to proceed with its claims. We are disappointed by this outcome, and are consulting with counsel on next steps.

This ruling follows a summary judgment hearing on 30 May 2019. At that hearing, the government argued that the Foundation’s case against the U.S government’s mass surveillance practices should be dismissed for lack of standing. In particular, the government argued that we had not provided sufficient evidence of the government’s surveillance for the case to proceed. It also claimed that the case could not proceed because it would require the Court to consider information the government claims is protected by the state secrets privilege. In other words, the government contended that the case cannot be litigated without disclosing information about Upstream surveillance that would harm U.S. national security—and, accordingly, in its view, the entire case must be dismissed.

Our attorneys at the American Civil Liberties Union explained to the Court why the government’s arguments are wrong. They argued during the hearing that the Foundation had presented sufficient evidence and expert testimony to proceed to the next phase of the case. Additionally, ACLU attorneys explained that the state secrets privilege does not apply, because the Foreign Intelligence Surveillance Act creates a process by which courts must review privileged information in electronic surveillance cases such as ours. If the government believes that the lawsuit implicates sensitive information, it cannot withhold that information from the Court and argue for the dismissal of this lawsuit on that basis. Instead, FISA’s procedures apply, and the Court is required to examine the government’s sensitive evidence behind closed doors.

In yesterday’s ruling, the Court largely accepted the government’s contentions. In particular, it held that the Foundation had not presented sufficient evidence that the NSA was monitoring Wikimedia communications, but that even if we were able to present sufficient evidence, the state secrets privilege would prevent the matter from proceeding. The Court found that further litigation on the standing issue would require the government to disclose classified details of how Upstream surveillance operates, and it refused to conduct a closed-door review of the evidence. We respectfully disagree; we believe that the government’s public disclosures about the program offer more than enough evidence to show that the NSA is using Upstream to surveil the communications of Wikimedia users and Foundation staff.

We’re grateful for the counsel of our attorneys at the ACLU, the Knight First Amendment Institute at Columbia University, and Cooley, LLP, with whom we are currently consulting on next steps. We will continue to provide updates to the case as there are further developments. You can find a timeline, past updates, and more information on our resources page on the case.

Jim Buatti is a Senior Legal Counsel at the Wikimedia Foundation.

Aeryn Palmer is a Senior Legal Counsel at the Wikimedia Foundation.

Leveraging open data at the National Library of Wales

21:20, Monday, 16 December 2019 UTC
Jason Evans at EuropeanaTech 2018 – image by Europeana EU CC BY-SA 4.0

By Jason Evans, National Wikimedian at the National Library of Wales

Over 7 years ago the National Library of Wales made the decision not to claim any rights to digital reproductions of public domain works. I was then employed as a Wikimedian, in partnership with Wikimedia UK, to actively begin sharing this content openly on Wikimedia Commons.

To date we have shared over 17,000 images to Commons. Over 70% of these images are now in use on a Wikimedia project, including Wikipedia, where views of pages containing our images have reached nearly 730 million. This demonstrates the massive reach that can be achieved through sharing with Wikipedia.

About a year into the residency, following an introduction to Wikidata from Histropedia’s Navino Evans, we began exploring the possibility of sharing the rich metadata for our open content on Wikidata. We took the time, with the help of volunteers, to create items for relevant artists and photographers, to map descriptive tags to Wikidata depicts statements and to ensure that data had Welsh language labels wherever possible.

The best way of exploring and visualising this data was Crotos. The Crotos project is a search and display engine for visual artworks, powered by Wikimedia Commons for images and Wikidata for metadata. At its launch in 2014 the site contained just 8,000 images, but by the end of 2019 it held nearly 200,000 paintings, sculptures, drawings, prints, photographs and more.

Crotos allows users to explore visual content shared on Wikimedia platforms

For years Crotos has been the go-to platform for searching, discovering and simply enjoying one of the world’s largest collections of free art. For Wikimedians it is also incredibly useful for demonstrating how structured data can enrich search and discovery of artwork and other visual material, using various Wikidata properties such as depicts, artist, publisher and collection to filter content. Images can be displayed on a map based on the places depicted, and content can be filtered by date of creation. It’s a simple, yet highly effective tool for exploring digital content.

And this got me thinking. Many GLAMs don’t have the resources to produce nice portals for exploring their digital content. The National Library of Wales is in a stronger position than many, but even so we have to focus on our core functions – providing access to all our content, including books and archives, through a central catalogue system. Our online catalogue is very good for searching for books or specific items from the collection, but it is less useful for those who wish to browse or explore our substantial archive of digital content in a visual and engaging way.

So I reached out to Benoît, the founder of Crotos, and he kindly added Welsh as a language to the Crotos interface. This was great, as it allowed us to benefit from the Welsh language labels already in Wikidata to give access to our collections, and others, through the medium of Welsh. Following this, Benoît and I had several discussions at various events about the value of Crotos and the potential for it to form the basis of bespoke interfaces for individual institutions. This would certainly be of benefit to the National Library, but more generally, for many smaller GLAMs such a clear and tangible benefit could help tip the scales towards an open strategic approach.

The new National Library of Wales ‘Dwynwen’ interface

We are incredibly grateful to Benoît for taking this idea forward. He started modifying a version of Crotos especially for National Library of Wales content! Over a few weeks we tweaked the new site to suit our needs and our collection. The website, named Dwynwen (after the Welsh patron saint of lovers), retains all the functionality of the Crotos site and adds a few extras, such as links from each image to our own IIIF image viewer and a ‘Published in’ facet. ‘Cosmos’ and ‘Calisto’ have been renamed ‘Browse’ and ‘Map’ to fit better with our own standards. Our version simply limits results to items that are part of the National Library of Wales collection.

Content can also be explored on a map using location data for places depicted in artworks

Speaking about the project, Benoît said:

Since its origins, the web has provided fantastic opportunities to freely explore digital reproductions of artworks, to get information about them, to link them, to browse collections, for knowledge or simply for the pleasure of the art experience. Little by little, cultural institutions have shared their collections online. At the same time, volunteers throughout the world build or participate in websites about artworks. Wikimedia projects, led by Wikipedia and the goal to share and give access to all knowledge, are major players in this movement, with many contributors and wide diffusion. Wikimedia Commons, the free-use media repository, and Wikidata, the Wikimedia knowledge database, are great places to gather and structure digital heritage assets, and places where institutions and volunteers can work together. With all that has already been gathered and the technologies that come with it, it is possible to create new interfaces, including in the field of art.

Jason Evans had the idea to create a subset dedicated to the collections of the National Library of Wales, and so Dwynwen was born. And what a great idea! The quantity and quality of the metadata make it possible to encourage new explorations of those collections. So, for example, we can see more than a hundred views of Snowdon, discover Wales at the end of the 19th century through John Thomas’ photographs, explore prints and illustrations by publication or see extracts of the Peniarth Manuscripts.

Thanks a lot to the investment of the National Library of Wales team and the Wikidata volunteers that made Dwynwen possible. Enjoy!

For the National Library, this will give our users a new and better way of exploring our digital content. Whether you are looking for something specific – images of donkeys, bearded men or just paintings of women – or you simply want to explore our photographs, prints or artworks, ‘Dwynwen’ makes this easy, fun and intuitive.

Paintings of Women – selecting only paintings of women

But this will also be a fantastic tool for demonstrating the value of our open access activity to management, partners and funders. We hope to use this and other Wiki-powered software, such as the Dictionary of Welsh Biography timeline currently in development, to change the way we think about giving access to our digital content, and to step up efforts to harness the power of linked open data for content delivery.

‘Dwynwen’ is the result of the National Library of Wales’ enthusiasm and belief in open access principles, together with the hard work of numerous volunteers. We are incredibly grateful to Benoît for having the vision to develop Crotos, and for his generosity in adapting the platform for us. We are also greatly indebted to Simon Cobb, our Wikidata Visiting Scholar, who has done so much work to help share our data to Wikidata, and to all the volunteers on Commons and Wikidata who have helped us to share and describe our digital content.

Tech News issue #51, 2019 (December 16, 2019)

00:00, Monday, 16 December 2019 UTC

weeklyOSM 490

14:40, Sunday, 15 December 2019 UTC


lead picture

58 sites dedicated to biodiversity 1 | Map data © OpenStreetMap contributors | Imagery © Mapbox

OSM Foundation elections

  • The results of the elections for four seats on the OSMF board were announced during the Annual General Meeting. The successful candidates were:
    1. Guillaume Rischard (Luxembourg)
    2. Allan Mustard (USA)
    3. Mikel Maron (USA)
    4. Rory McCann (Ireland)

    The voting for the final seat was extremely close between Rory McCann and Michal Migurski (USA).

    weeklyOSM congratulates the successful candidates, and thanks all who took the trouble to stand for election.


  • Michael Behrens has made a request for comments on a proposal for specifying the function of different parts of a hiking trail. Examples include: approaches from public transport stops; and excursions to viewpoints or peaks near the main trail.
  • A request for comments on a proposal to adopt leisure=skatepark for tagging an area designated and equipped for skateboarding, in-line skating, BMXing, or scootering.
  • Markus reported that the vote on the pedestrian lane proposal ended with 62% approval, and as a result was unsuccessful.
  • Sebastian Martin Dicke asked for comments on a proposal for noting whether a lawyer’s office offers notary services.
  • Martin Scholtes has called for comments on a proposal to use a park_drive tag with amenity=parking to indicate if a parking area is designated for people parking to join a carpool journey.
  • Voting has opened on the telecom distribution points proposal. The proposal is for a method to map pieces of equipment, often small boxes, allowing one or more individuals or households to connect to a single telecom local loop upstream cable.
  • France 24 reports about the mapping project that Code for Africa has been doing in Makoko, a floating neighbourhood in Lagos, Nigeria. Read the article and watch the video.
  • The NGO CartOng shares in a blog post their initiative to map St Laurent du Maroni, in French Guiana, with the full involvement of local inhabitants in mapping activities.
  • MapRoulette 3.5 is now available, with much improved task search using the map, and many other smaller improvements along with bug fixes. Check it out!


  • [1] The map Haie magique gave information about 58 sites dedicated to biodiversity in 19 communes of Île-de-France.
  • The local community of contributors to OpenStreetMap in Manila, Philippines organised its first LGBTQI-themed MapBeks mapathon, training newcomers to add data to OSM using MapContrib. Check out Mikko’s OSM Diary post.
  • The Belgian contributor juminet made a case for a “more integrated” OpenStreetMap website; Andy Allan (who currently does most of the website development) encourages him to actually get involved.
  • kartonage posted a quick reflection on his Mapillary contributions this year.
  • thomersch wrote a blog post on his calendar tool.
  • Antara Tithil, one of the core team members of the Bangladesh Open Innovation Lab (BOIL), is featured on the Daily Observer as “Woman in Mapping”.
  • Frederik Ramm is critical of Facebook’s involvement in OpenStreetMap and also of a Facebook employee applying for a board position at OSMF. A lot of interesting comments. 😉


  • krizleebear talks about possible automated edits on admin_centre linking. Detailed documentation of rationale and implementation can be found on our wiki.

OpenStreetMap Foundation

  • Peter Barth asked the candidates running for the OSMF Board some follow up questions, particularly on the subject of conflicts of interest.
  • OSM UK has set up the “OSMUK Talent Pool” to connect companies and organisations with mappers based in the UK who can carry out paid or volunteer work related to OpenStreetMap. weeklyOSM says: great idea – congrats!
  • Mikel Maron outlined some of his ideas on OSMF governance. He discusses splitting the Advisory Board into a group of Local Chapters, and a group for corporates, the need for rules around how the Board and Working Groups work together, and creating a conflict of interest policy.


  • The call for presentations for SotM 2020, to be held in Cape Town, South Africa from 3 to 5 July 2020, is now open.
  • The Ivorian newspaper FratMat reports (fr)(automatic translation) about the success of State of the Map Africa in Grand-Bassam in Ivory Coast.
  • The State of the Map Africa organising committee shared pictures of the successful conference on Flickr.
  • A three day Regional Understanding Risk 2019 conference, sponsored by the World Bank, was held in Abidjan, Côte d’Ivoire from 20 to 22 November. The last day was held jointly with the SotM Africa 2019 conference.

Humanitarian OSM

  • The HOT project #7446 is looking for help with the mapping of Djibouti after a severe flood. Due to the very dense environment the task is limited to intermediate and advanced mappers.
  • Micheal Yani, coordinator for OSM’s new South Sudan community, is looking for donations to fund workshops, the training of South Sudanese refugees in Uganda and elsewhere, and mapping devices.
  • Leon, a GIS specialist and environmental engineer in Bolivia, is asking for donations to build a geographic information system for the country. The system is intended to be a joint project of the local OSM community, public institutions, aid entities, researchers and the private sector. The area suffered from raging forest fires and a very high rate of deforestation recently. So far only $250 of the target $7,000 has been received.
  • Alice Goudie wrote a blog article about Missing Maps’ work after Hurricane Dorian hit the Bahamas in September 2019. The blog covers the Missing Maps contribution, the importance of OSM and offers some illustrative photos of “OSM in action”.


  • A new open access book on Digital Earth has been released. OpenStreetMap gets special attention in the chapter “Citizen Science in Support of Digital Earth” – written by Maria Antonia Brovelli, Marisa Ponti, Sven Schade and Patricia Solís.


  • Christian Reinstorf, aka Spiekerooger, offers (de) (automatic translation) online maps in seven different languages, based on OpenStreetMap data.

Open Data

  • Marco Minghini reports that there is a trainee position available at the European Commission–Joint Research Centre (JRC), in Ispra (Italy). The deadline for applications is January 10, 2020.


  • SomeoneElse felt it was worth summarising existing tools and approaches for tag transformation of OSM data. This was in response to a recent thread on the talk mailing list, started by Sören Reinecke. His cases include both rendering and routing examples of how to make “the ‘super detailed’ tagging” in OSM into something more appropriate for a particular use, e.g., rendering OSM data.


  • The OsmAnd team announced the release of version 3.10 of its navigation software for iOS. The new version comes with a redesigned navigation preparation screen, support for Online SQL maps, improved control of contour lines and much more.

Did you know …

  • … about GeoHipster? An independent online publication dedicated to chronicling the state, issues, and direction of the geospatial industry as seen by the people working in it.
  • … how to write good changeset comments?
  • … how to tag the origin of dishes in restaurants and goods in shops? For restaurants the tag cuisine is used; for goods in shops it is origin.
  • … the environmental zone in your city? Or is it still missing? Have a look at the Wiki and complete your low emission zone.

OSM in the media

  • Watch the video of Steven Johnson giving an introduction to OpenStreetMap at a TEDx talk for an educator’s perspective.
  • Onlinekhabar reports that locals of Budhiganga municipality (Nepal) recently got training on how to map their resources, landmarks and upload the data to OpenStreetMap.

Other “geo” things

  • Christopher Barrington-Leigh and Adam Millard-Ball have written a paper on using OpenStreetMap to calculate a measure of street connectivity. They found that there was a strong association between low connectivity and increased vehicle travel, energy use and CO2 emissions.
  • Weetracker reports that there are only six African countries where flying drones is tantamount to breaking the law. Those countries are: Algeria, Cote d’Ivoire, Kenya, Madagascar, Morocco and Senegal.
  • GeoChicas have created a map of the places around the world where “Un violador en tu camino” (in English “A rapist in your path”) has been performed, with links to reports of the events. The protest performance has the aim of demonstrating against violations of women’s rights.

Upcoming Events

Where What When Country
Alice PoliMappers Adventures 2019 2019-12-01-2019-12-31 everywhere
AoA and other changes Voting on OSMF board elections 2019-12-07-2019-12-14 world
Mannheim Mannheimer Mapathons 2019-12-12 germany
Munich Münchner Stammtisch 2019-12-12 germany
Nantes Réunion mensuelle 2019-12-12 france
Berlin 138. Berlin-Brandenburg Stammtisch 2019-12-13 germany
Berlin DB Open Data XMAS Hack 2019-12-13-2019-12-14 germany
Helsinki OSM Mapathon @ Mapbox 2019-12-13 finland
San Juan OpenStreetMap Workshop for Metro Manila Bikers 2019-12-14 philippines
Lüneburg Lüneburger Mappertreffen 2019-12-17 germany
Nottingham Nottingham pub meetup 2019-12-17 united kingdom
Digne-les-Bains HÉRuDi : l’Histoire Étonnante des Rues de Digne 2019-12-17 france
London London Xmas Pub meet-up 2019-12-19 united kingdom
Biella Incontro Mensile 2019-12-21 italy
Düsseldorf Stammtisch 2019-12-27 germany
hosted by Chaos Communication Congress 36C3 OpenStreetMap assembly 2019-12-27-2019-12-30 germany
Valcea EuYoutH OSM Meeting 2020-04-27-2020-05-01 romania
Cape Town State of the Map 2020 2020-07-03-2020-07-05 south africa

Note: If you would like to see your event here, please put it into the calendar. Only data which is in the calendar will appear in weeklyOSM. Please check your event in our public calendar preview and correct it where appropriate.

This weeklyOSM was produced by Elizabete, Jorieke V, Kleper, PierZen, Rogehm, SK53, SunCobalt, TheSwavu, YoViajo, derFred, geologist.

Visual Map Editor for MediaWiki

00:00, Saturday, 14 December 2019 UTC

We have added a visual editing interface to the Maps extension for MediaWiki. This brings collaborating on geospatial information in your wiki to a new level.

Starting with version 7.13.0, released earlier today, the Maps extension supports inline visual editing of GeoJSON.

GeoJSON is an open standard for representing geospatial information. There are several online editors that allow editing GeoJSON.


Inline visual editing is supported in both #display_map and #ask. The editor shows when the geojson parameter is used. Example: {{#display_map:geojson=Berlin}}
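
The data behind such a map is plain GeoJSON. As an illustration (the coordinates and title below are made up, not taken from the actual GeoJson:Berlin page), a minimal FeatureCollection with a single marker can be built like this:

```python
import json

# A minimal GeoJSON FeatureCollection with a single point marker,
# similar in shape to what a page in the GeoJSON namespace could hold.
# Coordinates and title are illustrative, not taken from the wiki.
feature_collection = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {
                "type": "Point",
                # GeoJSON uses [longitude, latitude] order
                "coordinates": [13.405, 52.52],
            },
            "properties": {"title": "Berlin"},
        }
    ],
}

print(json.dumps(feature_collection, indent=2))
```

The visual editor reads and writes structures like this one, so anything it produces can also be edited by hand or in external GeoJSON tools.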

Assuming you have edit rights, the map will show an edit button on the left, below the zoom control. Clicking this button will make the map go into edit mode. You can tell the map is in edit mode by the edit button being replaced by several edit controls. These new controls allow adding markers and drawing shapes.

After your first edit, a save button will appear below the edit controls. Clicking this button gives you a dialog to enter an edit summary and then causes the map to exit edit mode. This is all done without the page reloading.

Tooltip text for markers and shapes can be edited by clicking the marker or shape while in edit mode.

When entering edit mode the map verifies it has the latest version of the GeoJSON. If not, it will get the latest version and display it. This minimizes the risk that you override someone else's changes when using the standard MediaWiki page caching settings.

A full overview of recent improvements can be found in the release notes. You can also play around with the visual editor on the Semantic MediaWiki sandbox wiki on pages GeoJson:Berlin and Berlin. Note that you need an email-verified account.

A walkthrough of the new features is available in video format:


The editor can only edit information stored in GeoJSON. Markers or shapes defined in wikitext cannot be edited. You can however combine the two.

The editor only supports Leaflet.

The biggest limitations are listed under "Next steps".

Next steps

We are looking for someone to fund development of one or all of the below features.
Contact us now.

Full in-line editing

The biggest usability problem with the current visual editing experience is the need to have a dedicated page in the GeoJSON namespace which is referenced with the geojson parameter.


Things would be simpler if this extra page was not needed. It would also avoid caching problems: right now, edits made directly in the GeoJSON namespace are not reflected on pages where the data is used via the geojson parameter unless those pages are purged.

Ultimately, all the user should need to tell the map is to turn the editor on. Or perhaps not even that, if it is enabled by default for the wiki (which would be configurable).


The map would then store the markers and shapes in GeoJSON inside the page itself. Besides being much more user friendly, this approach integrates with features like watching or protecting the page.

If you'd like to fund this feature, please contact us.

Style editing

While Maps now supports display of styled shapes, based on the simplestyle specification, editing of these styles is not possible via the visual editor. To edit these styles you either need to edit them in the GeoJSON source, or export this source to an off-site editor that does support styling and then import it again.

The same limitations apply to marker styling. Right now it is not possible to change which markers are used in the visual editor.

If you'd like to fund this feature, please contact us.

Semantic MediaWiki integration

Semantic MediaWiki is one of the most useful MediaWiki extensions out there, especially if you are working with data. Maps integrates with Semantic MediaWiki, allowing you to store and query coordinates, possibly combined with other information.

Right now it is not possible to store the geographical information contained in the GeoJSON into Semantic MediaWiki. At least not without manually converting it into wikitext, storing that somewhere and then somehow keeping things in sync.

It is possible to create a new integration that allows automatically storing information contained in the GeoJSON into Semantic MediaWiki. Initially this would just be for markers/locations, though this could be extended to polygons and other shapes as well.
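
To give an idea of what such an integration would do, here is a small Python sketch (a hypothetical helper for illustration only, not part of Maps or Semantic MediaWiki) that pulls Point markers out of a GeoJSON FeatureCollection so they could be stored as coordinate values:

```python
def extract_point_markers(feature_collection):
    """Yield (latitude, longitude, title) for each Point feature.

    Hypothetical helper illustrating the kind of extraction a
    GeoJSON-to-Semantic-MediaWiki integration would perform.
    """
    for feature in feature_collection.get("features", []):
        geometry = feature.get("geometry") or {}
        if geometry.get("type") != "Point":
            continue  # initially only markers/locations would be stored
        lon, lat = geometry["coordinates"][:2]  # GeoJSON is [lon, lat]
        title = (feature.get("properties") or {}).get("title", "")
        yield (lat, lon, title)

# Example input: one marker and one polygon; only the marker is extracted.
example = {
    "type": "FeatureCollection",
    "features": [
        {"type": "Feature",
         "geometry": {"type": "Point", "coordinates": [13.405, 52.52]},
         "properties": {"title": "Berlin"}},
        {"type": "Feature",
         "geometry": {"type": "Polygon",
                      "coordinates": [[[0, 0], [1, 0], [1, 1], [0, 0]]]},
         "properties": {}},
    ],
}

print(list(extract_point_markers(example)))
# → [(52.52, 13.405, 'Berlin')]
```

Extending the same walk to polygons and other shapes would be the natural follow-up step mentioned above.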

If you'd like to fund this feature, please contact us.

Getting Maps

All Professional Wiki hosting plans include the Maps extension. If you want to install the extension on your own, please refer to the installation instructions.

Contact us to fund further Maps development or to get Professional Support.

Overflowing stacks in WebAssembly, whoops!

21:09, Friday, 13 December 2019 UTC

Native C-like programming environments tend to divide memory into several regions:

  • static data contains predefined constants loaded from the binary, and space for global variables (“data”)
  • your actual code has to go in memory too! (“text”)
  • space for dynamic memory allocation (“heap”), which may grow “up” as more memory is needed
  • space for temporary data and return addresses for function calls (“stack”), which may grow “down” as more memory is needed

Usually this is laid out with the stack high in the address space and the heap lower in the address space, if I recall correctly. More heap is allocated when you need it via malloc, and the stack can either grow, or warn of an overflow, by using the CPU’s memory manager to detect accesses to pages beyond the current edge of the stack.

In emscripten’s WebAssembly porting environment, things are similar but a little different:

  • code doesn’t live in linear memory, so functions don’t have memory addresses
  • because return addresses and small local variables also live separately, only arrays/structs and variables with their address taken must be on the stack
  • usable memory is contiguous; you can’t have a sparse address space where the stack and heap can both grow

As a result, the stack is fixed-size and there’s some fragility, though the stack usually uses less space.

Currently the memory layout starts with static data, followed by the stack, and then the heap. The stack grows “down”, meaning that when you run off the end of the stack you end up in static data territory and can overwrite global variables. This leads to weird, hard-to-detect errors.
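
A tiny Python model makes the failure mode concrete (the sizes and addresses here are made up for illustration; real ones are chosen by the linker). In the current layout an overflowing stack write lands in static data; in the proposed stack-first layout the computed address goes below zero, which as an unsigned 32-bit address wraps to the top of the address space and traps:

```python
# Illustrative model of emscripten's linear-memory layout.
STATIC_SIZE = 1024   # bytes of static data (made-up size)
STACK_SIZE = 4096    # fixed-size stack (made-up size)

def stack_write_address(layout, depth):
    """Address written by a stack access `depth` bytes below the stack top.

    "static_first" models the current layout (static data, stack, heap);
    "stack_first" models the proposed layout (stack, static data, heap).
    The stack grows down from its highest address.
    """
    if layout == "static_first":
        stack_top = STATIC_SIZE + STACK_SIZE
    else:  # "stack_first"
        stack_top = STACK_SIZE
    return stack_top - depth

# A write 100 bytes past the end of the stack:
depth = STACK_SIZE + 100

addr = stack_write_address("static_first", depth)
print(addr, 0 <= addr < STATIC_SIZE)  # 924 True: silently clobbers static data

addr = stack_write_address("stack_first", depth)
print(addr)  # -100: as an unsigned i32 this wraps past 0, so the access traps
```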

When making debug builds with assertions there are some “cookie” checks to ensure that some specific locations at the edge of the stack have not been overwritten at various times, but this doesn’t always catch things if you only overwrote the beginning of a buffer in static data and not the part that had the cookie. :) It also doesn’t seem to trigger on library workflows where you’re not running through the emscripten HTML5/SDL/WebGL runtime.

There’s currently a PR open to reduce the default stack size from 5 MiB to 0.5 MiB, which reduces the amount of memory needed for small modules significantly, and we’re chatting a bit about detecting errors in the case that codebases have regressions…

One thing that’s come up is the possibility of moving the stack to before static data, so you’d have: stack, static data, heap.

This has two consequences:

  • any memory access beyond the stack end will wrap around 0 into unallocated memory at the top of the address space, causing an immediate trap — this is done by the memory manager for free, with no false positives or negatives
  • literal references to addresses in static data will be larger numbers, thus may take more bytes to encode in the binary (variable-length encoding is used in WebAssembly for constants)
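
The size cost comes from LEB128 variable-length integer encoding: each byte carries 7 payload bits, so larger address constants need more bytes. A quick sketch of the unsigned variant (WebAssembly encodes i32.const operands as signed LEB128, but the growth pattern is the same):

```python
def uleb128_len(value):
    """Bytes needed to encode a non-negative integer as unsigned LEB128
    (7 payload bits per byte; the high bit is a continuation flag)."""
    count = 0
    while True:
        count += 1
        value >>= 7
        if value == 0:
            return count

# A static-data address in a small module vs. one pushed past a 5 MiB stack:
print(uleb128_len(100))        # 1 byte
print(uleb128_len(5_300_000))  # 4 bytes
```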

Probably the safety and debugging win would be a bigger benefit than the size savings, though potentially the latter could be a win for size-conscious optimizations when not debugging.

In this day and age science is of the utmost importance. When I am pointed to a conference where an African scientist gives the plenary lecture, with the message on display in the picture, I take an interest.

When you want to disseminate research, when you want the science to be known by society, you have to pick your platform. You can do worse than choosing the Wikimedia projects.

Professor Esther Ngumbi is employed at the University of Illinois at Urbana–Champaign. Her ORCiD profile lists only one paper, but at Wikidata we knew of others. Now that she is known at Wikidata with her papers, she has a Scholia. At first there was only one co-author, a bit sparse, so others were added and linked to the papers they have on Wikidata. The same was done for some authors who cited Professor Ngumbi.

When you and your science are known in Wikidata, you are more likely to get a Wikipedia article, and yes, working for an American university helps. An open ORCiD profile is even more potent when you trust organisations like your university or CrossRef to update your ORCiD when they know about your new papers.

In this day and age where our ecology is no longer stable, it is vital to know and respect the science. While we aim for the best we have to be prepared for the worst; we have to see it coming. It is why our Wikimedia projects should inform about all the science and not just what a Wikipedia article has as a reference.

Jack needs help, so do we and, so do our audiences

19:17, Wednesday, 11 December 2019 UTC
Jack penciled his aspirations for Twitter in a tweet. In it he states: "... Second, the value of social media is shifting away from content hosting and removal, and towards recommendation algorithms directing one’s attention. Unfortunately, these algorithms are typically proprietary, and one can’t choose or build alternatives. Yet."

It is good news that Jack seeks a way out: he intends to hire a "small independent team of up to five open source architects, engineers, and designers", and "Twitter is to become a client of this standard".

In the Wikimedia projects we have similar challenges and opportunities. For all kinds of reasons, we cannot expect there to be a Wikipedia article for every scientist who is very much in the news (i.e. relevant); Dr Tewoldeberhan is a recent example. But there is no reason why we cannot have her, her work and the work of any other scientist in Wikidata. With tools like Scholia we already have a significant impact by making more known than just what may be found in a Wikipedia. Jack, we do know many scientists by their Twitter handle; they already make the case for their science on Twitter. This makes it easy for you to link to and expand on Scholia. What we give our readers is more to read, so that they can find confirmation for what they read.

Jack, Wikidata is not proprietary, Scholia is not proprietary and the Wikimedia motto is "to share in the sum of all knowledge". Together we can shift focus from what we have read before in the Wikipedias to what there is to read on the Internet. Put stuff in context and bring the scientists who care to inform about their science in the limelight.
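The non-proprietary point is concrete: what Scholia shows comes from public SPARQL queries against Wikidata, which anyone can reproduce. As a simplified sketch (the endpoint and the author property P50 are real, but this is not Scholia's actual code, and the query is deliberately minimal), here is how one might build such a query in Python:

```python
from urllib.parse import urlencode

# Public Wikidata SPARQL endpoint, the same service tools like Scholia query.
WDQS_ENDPOINT = "https://query.wikidata.org/sparql"

def works_query(author_qid: str) -> str:
    """Build a SPARQL query listing works whose author (P50) is the given item."""
    return f"""
    SELECT ?work ?workLabel WHERE {{
      ?work wdt:P50 wd:{author_qid} .
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }}
    LIMIT 10
    """

def request_url(author_qid: str) -> str:
    """Full GET URL one could fetch to run the query and get JSON results."""
    params = urlencode({"query": works_query(author_qid), "format": "json"})
    return f"{WDQS_ENDPOINT}?{params}"

# Example: Q80 is the Wikidata item for Tim Berners-Lee.
print(request_url("Q80"))
```

Fetching that URL returns the author's works as JSON, with no login, API key, or proprietary client required.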

What we do not have is the pretense that we cover everything well; we do aim to cover everything notable well. What we provide is static, Twitter is much more dynamic, and together we will change the landscape. Great technology combined with both the Twitter and Wikimedia communities has the potential to be awesome.

Supporting the Turkish Wikimedia community from the UK

17:18, Wednesday, 11 December 2019 UTC
A tram on Istanbul’s Istiklal Street in the snow – image by Jwslubbock CC BY-SA 4.0

By John Lubbock, Communications Coordinator

As many of you know, Wikipedia has been blocked in Turkey since 2017. While it’s still possible to access the site from Turkey via proxy sites and VPNs, it’s much harder to edit Wikipedia from Turkey, which means that the content is not being updated and the Turkish language version of Wikipedia is not growing.

I’ve been visiting Turkey regularly since 2012 to visit my partner who used to live there, and have written about the country quite often in my spare time as a freelance journalist. So I care quite a lot about access to information in Turkey, and about supporting the Turkish Wikimedia User Group there.

A couple of months ago, I asked a BBC journalist who had been a correspondent in Turkey if he would be willing to share some of the photos he had taken in Turkey on Wikimedia Commons, because some of them were quite useful images of political events, like government press conferences, political campaign rallies and the aftermath of serious terrorist incidents. Unfortunately, the BBC claimed copyright on these images and asked for them to be removed from Commons. This was because an employee’s content (produced in the course of doing their job) is the copyright of their employer, and because the BBC have an agreement with Getty Images to let them use all their staff’s photos, even those which are low resolution, taken on smartphones and posted on Twitter.

To make up for this setback, I have decided to publish my own photos from my many visits to Turkey over the years. So far I’ve uploaded over 1500 images, far more than the roughly 250 images which were donated by the BBC journalist. You can see them all in the Category:Photos of Turkey by John Lubbock on Commons. Here are just a few of them.

Most of these photos are of Istanbul, but I’ve also visited Fethiye, Adana, Diyarbekir, Antalya, Tatvan and a few smaller towns in the East. There are some good images of the recently opened Adana Archaeological Museum, the Istanbul Archaeological Museum and the Fethiye Archaeological Museum, because, well, I like museums and one of the best things about visiting Turkey is the wide range of cultures and civilisations which have existed in Anatolia over the past few thousand years whose remains are everywhere for you to see.

In the context of the Turkish government’s blocking of Wikipedia and the ongoing European Court of Human Rights case brought by the Wikimedia Foundation to pressure Turkey to unblock the site, I think it’s important to show that the Wikimedia community can still support the Turkish Wikimedia community in various ways. That’s why I’m running a Wikipedia workshop for Turkish speakers in January to improve content on the Turkish Wikipedia about cultural subjects.

I am also working with Wikidata trainer and Histropedia creator Nav Evans to try to improve data about heritage sites in Turkey, which can hopefully be used by the Turkish User Group to run their first Wiki Loves Monuments next year. In the past, they have been unable to do this because the Turkish government’s own data about heritage sites is quite messy and hard to incorporate into Wikidata. Wikidata and Wikimedia Commons are not blocked in Turkey, so we hope that working on these projects will show people in Turkey that Wikimedia projects can be important for promoting and preserving cultural heritage in Turkey, which is such a large factor in their tourism industry.

Wikimedia UK would like to run more events in future for speakers of other languages which can help to improve content in those languages and to meet our commitment to improving the diversity of content and contributors to Wikipedia. If you are a speaker of a language which doesn’t currently have a lot of content on Wikipedia, please consider getting in touch with Wikimedia UK and talking to us about running an event.

This Month in GLAM: November 2019

14:05, Tuesday, 10 December 2019 UTC

At a time of growing polarization, misinformation, ongoing conflict, and limits placed on freedom of speech, assembly, and privacy, understanding our human rights is a critical part of our daily lives. It dictates everything from how we gather in our communities and speak about the issues and causes we care about, to how we pursue freedom and prosperity.

But much of the knowledge about these rights is hidden within institutional systems or specialized publications that make it hard to access and understand them.

To address this challenge, this Human Rights Day, Wikipedia volunteers, the Wikimedia Foundation, and UN Human Rights are collaborating on a global campaign — #WikiForHumanRights — to improve and add articles about human rights on Wikipedia. The campaign will make knowledge of human rights more accessible for all. It will launch today, on 10 December, timed with the 71st anniversary of the Universal Declaration of Human Rights and run through 30 January. Everyone is invited to participate.


To exercise our own human rights and stand up for those of others, we have to first understand them. As a top website viewed by hundreds of millions of people every month, Wikipedia provides a free, trusted, and multilingual resource to help make this information more easily accessible to the world.

“At Wikimedia, we know that free access to knowledge is a fundamental human right—that anyone, anywhere should have the ability to learn more about the world around them. When we have greater access to knowledge, our societies are more informed, just, and equitable,” said Katherine Maher, Executive Director of the Wikimedia Foundation.

The #WikiForHumanRights campaign builds on this commitment to make knowledge about human rights more easily accessible for everyone to learn about their basic human rights and how to uphold them. The campaign focuses on improving, adding, and translating Wikipedia articles about two key topics—the Universal Declaration of Human Rights, the founding document outlining everyone’s fundamental rights, and youth activism, the young people who stand up for human rights every day and the issues they defend.

“To ensure that everyone has access to fundamental human rights, it’s critical that people first know their rights. By teaming up with Wikimedia, we are making critical knowledge about human rights available in as many languages as possible,” said Laurent Sauveur, Director of External Relations at UN Human Rights.


The Universal Declaration of Human Rights was born out of World War II, in recognition of the need to protect and uphold freedom and equality for everyone, everywhere. Drafted by representatives with different legal and cultural backgrounds from all regions of the world, the Declaration was proclaimed by the United Nations General Assembly in Paris on 10 December 1948 as a common standard of achievement for all peoples and all nations. It universalized human rights for the first time, holding that all people are entitled to these rights, regardless of country or government. It also placed on every human being the responsibility to stand up for others when abuses of these rights occur. Volunteer editors will be creating and translating the article about the Universal Declaration of Human Rights on Wikipedia throughout the campaign.

Today there are 1.2 billion youth aged 15-24 years globally, accounting for one out of every six people worldwide. There are more adolescents and young people alive today than at any time in human history. With the rise of such transformational young leaders as Greta Thunberg and  Malala Yousafzai, youth have been major drivers of political, economic, and social change.

There is still so much more knowledge to add, improve, and translate about human rights. We need your help to make more knowledge about this critical topic available.

How to get involved

If you’re interested in getting involved in the campaign, there are several ways you can participate:

  • Join an edit-a-thon

Check out this page to learn about local events near you and online edit-a-thons to add and improve articles about human rights. Many events will provide support with learning how to edit if you’re a newbie and will also provide lists of topics needing articles on Wikipedia. New events are still being added, so please continue to check!

Want to host your own event? Learn how with the event toolkit.

  • Share human rights topics that should have articles on Wikipedia

Tell us which human rights topics are not represented in your local language Wikipedia, and add them to the campaign list of topics.

  • Tell us why human rights are important to you

Help us amplify the campaign from now through the 30th of January on social media using the hashtag #WikiForHumanRights. Tell your followers and the world why you think getting to know your human rights is important. You can also re-tweet messages from @Wikipedia and @Wikimedia throughout the week.

  • Share photos of your events

Have photos of an edit-a-thon you ran with your community? Consider uploading them to Wikimedia Commons or sharing them on social media. Be sure to tag @Wikipedia and use the hashtag #WikiForHumanRights and we’ll share your stories!

This campaign is part of a new partnership between the Wikimedia Foundation and UN Human Rights to expand the availability of knowledge about human rights online. It builds on the impactful work of Wikimedia Argentina, the local Wikimedia chapter dedicated to supporting the Wikimedia projects and mission in the country, and their WikiDerechosHumanos project. Working with partners such as the UN, the project has been expanding Wikimedia’s human rights-related content for several years now through a series of edit-a-thons and events. Wikimedia Argentina is playing a leading role in the #WikiForHumanRights campaign and in helping this wider partnership take shape on a global scale.

By partnering with the UN’s Human Rights Office, we hope to support Wikimedians from around the world to create, improve, and expand content about human rights in all Wikimedia projects and across the nearly 300 languages of Wikipedia.

Follow us on @Wikipedia and @Wikimedia for event details and updates as the campaign continues through the 30th of January and check back for updates on the event page. You can also follow our collaborators @UNHumanRights to learn more about human rights and the campaign!

Jorge Vargas is Senior Partnerships Manager at the Wikimedia Foundation. Follow them on Twitter at @jorgeavargas.

Alex Stinson is a Senior Strategist on Community Programs at the Wikimedia Foundation. Follow them on Twitter at @sadads.

While I was off on strike I was able to spend some time finishing a project I’ve been working on for a couple of months; editing the Wikipedia page for Dunfermline College of Physical Education.  I was inspired to update the existing page by the recent Body Language exhibition at the University of Edinburgh Library which delved into the archives of Dunfermline College and the influential dance pioneer Margaret Morris, to explore Scotland’s significant contributions to movement and dance education. And the reason I was so keen to improve this page, which was little more than a stub when I started editing, is that my mother was a student at Dunfermline College from 1953 – 1956, and when she died in 2011 my sister and I inherited her old college photograph album.  

My mother was not a typical Dunfermline student. Unlike many of her fellow students, who were privately educated and went straight to the college on leaving school, my mother was educated at the Nicolson Institute in Stornoway, and after leaving school she took an office job while working her way through the Civil Service exams.  She’d been working a year or so when the college came to the island to interview prospective students, and her father suggested she apply.  Her interview was successful, and she was awarded a place and a bursary to attend the college, which at that time was in Aberdeen.  Having experienced a degree of independence before going to Dunfermline, my mother chafed at the rigid discipline of the residential college, which expected certain standards of decorum from its “girls”.  She didn’t take too kindly to the arbitrary rules, and it’s perhaps no surprise that her motto in the college year book was “Laws were made to be broken”.  She did however make many life-long friends at college and she went on to have a long and active teaching career.

My mother worked as a PE teacher on the Isle of Lewis, first as a travelling teacher working in tiny rural schools across the island, and later in the Nicolson Institute.  She passionately believed that all children should be able to enjoy physical education, regardless of aptitude or ability, and she vehemently opposed the idea that the primary role of PE teachers was to spot and nurture “talent”.  Her real interest was movement and dance, and many of the children she taught in the small rural schools were convinced she was really just a big playmate who came to play with them once a week.  Sporting facilities were pretty much non-existent in rural schools in the Western Isles in the 1970s. Few schools had a gym or playing field, so she often organised games and sports days on the machair by the beaches. The first swimming pool in the islands didn’t open until the mid 1970s and prior to that she taught children to swim in the sea, on the rare occasions it was sufficiently calm and warm.  None of the schools she taught in had AV facilities of any kind and I vividly remember the little portable tape recorder that she carried around with her for music and movement lessons.  She retired from teaching in 1987, not long after the acrimonious national teachers pay dispute.  Despite being rather scunnered with the education system by the time she retired, it’s clear that the years she spent at Dunfermline played a formative role in shaping not just her career, but also her personal relationships and her approach to teaching. Typically, she was proud to be known as the rule breaker of her “set” and I think she’d appreciate the irony of her old pictures appearing on the college Wikipedia page. 

In order to add these images to Commons, I’m having to go through the rather baroque OTRS procedure, and I’d like to thank Michael Maggs, former Chair of the Board of Wikimedia UK, for his invaluable support in guiding me through the process.  Thanks are also due to colleagues at the Centre for Research Collections, which holds the college archive, for helping me access some of the sources I’ve cited. 

One last thing… when I was producing our OER Service Autumn newsletter, I made this GIF to illustrate a short news item about the Body Language exhibition. 

Garden Dance GIF

Garden Dance, CC BY, University of Edinburgh.

The GIF is part of a beautiful 1950s film featuring students from Dunfermline College called Garden Dance, which was released under open licence by the Centre for Research Collections.  The film is described as “Dance set in unidentified garden grounds, possibly in Dunfermline”; however, when I was looking through my mother’s college album I found this picture of the very same garden, so it appears it was filmed in Aberdeen. If you click through to the film, you can clearly see the same monkey puzzle tree in the background. It was obviously something of a landmark!  I wonder if my mother is one of the dancers? 


Older blog entries