Readable Functions: Minimize State

13:14, Tuesday, 30 October 2018 UTC

Several tricks and heuristics that I apply to write easy-to-understand functions keep coming up when I look at other people's code. In this post I share the first of two key principles for writing readable functions. Follow-up posts will cover the second key principle and specific tricks, often building on these general principles.

What makes functional programming so powerful? Why do developers who have mastered it say it makes them so much more productive? What amazing features or capabilities does the functional paradigm provide to enable this enhanced productivity? The answer is not what you might expect if you have never looked into functional programming. The power of the functional paradigm does not come from new functionality; it comes from restricting something we are all familiar with: mutable state. By minimizing or altogether avoiding mutable state, functional programs skip a great source of complexity, thus becoming easier to understand and work with.

Minimize Mutability

If you are doing Object Oriented Programming you are hopefully aware of the drawbacks of having mutable objects. Similar drawbacks apply to mutable state within function scope, even if those functions are part of a procedural program. Consider the PHP code snippet below:

function getThing() {
    $thing = 'default';
    if (someCondition()) {
        $thing = 'special case';
    }
    return $thing;
}

This function is needlessly complex because of mutable state. The variable $thing is in scope in the entire function and it gets modified. Thus to understand the function you need to keep track of the value that was assigned and how that value might get modified/overridden. This mental overhead can easily be avoided by using what is called a Guard Clause:

function getThing() {
    if (someCondition()) {
        return 'special case';
    }
    return 'default';
}

This code snippet is easier to understand because there is no state. The less state there is, the fewer things you need to remember while simulating the function in your head. Even though the logic in these code snippets is trivial, you can already notice how the Accidental Complexity created by the mutable state makes understanding the code take more time and effort. It pays to write your functions in a functional manner even if you are not doing functional programming.

Minimize Scope

While mutable state is particularly harmful, non-mutable state also comes with a cost. What is the return value of this function?

function getThing() {
    $foo = 1;
    $bar = 2;
    $baz = 3;

    $meh = $foo + $baz * 2;
    $baz = square($meh);

    return $bar;
}
It is a lot easier to tell what the return value is when refactored as follows:

function getThing() {
    $foo = 1;
    $baz = 3;

    $meh = $foo + $baz * 2;
    $baz = square($meh);

    $bar = 2;
    return $bar;
}

To understand the return value you need to know where the last assignment to $bar happened. In the first snippet you need, for no reason at all, to scan all the way up to the first lines of the function. You can avoid this by minimizing the scope of $bar. This is especially important if, as in PHP, you cannot declare function-scope values as constants. In the first snippet you likely spotted $bar = 2 before you went through the irrelevant details that follow. If instead the code had been const bar = 2, as you can do in JavaScript, you would not have needed to make that effort.
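For illustration, here is what the refactored snippet might look like in JavaScript with const, so a reader knows each binding cannot change after its declaration. This is a sketch: square is a stand-in for the undefined helper in the PHP snippets above.

```javascript
// Hypothetical helper standing in for square() from the PHP snippets.
function square(n) {
    return n * n;
}

function getThing() {
    const foo = 1;
    const baz = 3;

    const meh = foo + baz * 2;
    const squared = square(meh); // a new binding instead of mutating baz

    const bar = 2; // declared right before use: minimal scope
    return bar;
}
```

Because every binding is const, a reader who reaches `const bar = 2` is immediately done: no later line can reassign it.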


With this understanding we arrive at two guidelines for the state in functions that you cannot avoid altogether in the first place. Thou shalt:

  • Minimize mutability
  • Minimize scope

Indeed, these are two very general directives that you can apply in many other areas of software design. Keep in mind that these are just guidelines that serve as a starting point. Sometimes a little state or mutability can help readability.

To minimize scope, create state as close as possible to where it is needed. The worst thing you can do is declare all state at the start of a function, as this maximizes scope. Yes, I'm looking at you, JavaScript developers and university professors. If you find yourself in a team or community that follows the practice of declaring all variables at the start of a function, I recommend not going along with this custom, because its harmful nature outweighs the benefits of "consistency" and "tradition".
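A small JavaScript sketch (invented for illustration, not from the original post) contrasting the two styles:

```javascript
// All state declared up front: every variable is in scope for the
// entire function, so the reader must track all of them everywhere.
function totalUpFront(prices) {
    var total, i, price;
    total = 0;
    for (i = 0; i < prices.length; i++) {
        price = prices[i];
        total = total + price;
    }
    return total;
}

// State declared at the point of use: each binding lives only where
// it is needed, and `price` exists for a single loop iteration.
function totalScoped(prices) {
    let total = 0;
    for (const price of prices) {
        total += price;
    }
    return total;
}
```

Both functions return the same sum; the difference is purely in how much a reader has to keep in their head at any given line.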

To minimize mutability, stop every time you are about to override a variable and ask yourself if you cannot simplify the code. The answer is nearly always that you can, via tricks such as Guard Clauses, many of which I will share in follow-up posts. I myself rarely end up mutating variables: less than once per 1000 lines of code. Because each removal of harmful mutability makes your code easier to work with, you reap the benefits incrementally and can start applying this style right away. If you are lucky enough to work with a language that has constants in function scope, use them by default instead of variables.
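As a sketch of that stop-and-simplify habit (again in JavaScript; the pluralize example is invented for illustration), a reassignment can often be replaced by a small pure function, after which every binding in the caller can be a constant:

```javascript
// Before: the mutable-variable version one tends to write first.
function describeCountMutable(count) {
    let label = 'items';
    if (count === 1) {
        label = 'item'; // a mutation the reader has to track
    }
    return count + ' ' + label;
}

// After: the conditional moves into a small pure function and the
// caller needs no mutable state at all.
function pluralize(count) {
    return count === 1 ? 'item' : 'items';
}

function describeCount(count) {
    const label = pluralize(count);
    return count + ' ' + label;
}
```

The behavior is identical, but in the second version no line of describeCount can change the meaning of an earlier line.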


Thanks to Gabriel Birke for proofreading and making some suggestions.

The post Readable Functions: Minimize State appeared first on Entropy Wins.

The Wikimedia Foundation is honored to join the Global Network Initiative (GNI) as an observer, an opportunity we hope will advance our efforts to champion freedom of expression and privacy for the Wikimedia community and beyond. GNI is a channel for collective action, advocating to governments and international institutions for policies and laws which promote free expression and privacy. As an observer to GNI, the Wikimedia Foundation will be able to benefit from this accomplished network of human rights defenders, and apply the insights we gain to our existing worldwide efforts to advance these two fundamental rights.

Central to the Wikimedia movement is the idea of collaboration. Our projects are continually expanded and improved by a global community of volunteers, and we rely on donations from millions of people worldwide to continue operating year after year. Thus, when the opportunity arose to join GNI and collaborate with other leaders in both technology and digital rights, it was only natural for the Wikimedia Foundation to engage. Everyone who joins GNI, from large technology companies to civil society groups fighting for human rights online, has indicated their commitment to addressing and solving the most pressing issues in GNI's focus areas. By working with other member organizations and companies to further the rights of internet users worldwide, our participation as an observer in GNI will be just one of many ways we give back to the global community that supports the Wikimedia movement.

The Wikimedia Foundation understands that the protection and promotion of human rights online is essential to what we do. In order to provide access to knowledge for everyone, we must protect freedom of expression on our projects. In practice, this means we work to keep Wikipedia volunteers free from attempts to block or censor their writing on Wikipedia or requests to remove their content. We hope that by participating in GNI, we can share our experiences not only as a large internet platform, but as a mission-driven organization that holds our values firm even in the face of threats. In return, we are sure that the insights we will gain from the combined experience of GNI’s members will help us to better our initiatives to promote free expression for our communities.

In addition to freedom of expression, GNI and its members believe in respecting user privacy rights, including initiatives to increase transparency and accountability in government surveillance. We believe that strong privacy rules and opposition to surveillance are a fundamental part of the freedom that allows people to read and contribute to Wikipedia.  The Wikimedia Foundation has taken a clear stance defending our users’ right to privacy by taking part in a lawsuit challenging the NSA’s upstream surveillance collection activities and encrypting all traffic to and from Wikimedia projects through the HTTPS protocol. By taking concrete steps to defend user privacy, we are able to offer a unique perspective to other GNI members interested in how to better protect individuals. We hope that the connections we make within GNI will strengthen our own efforts to ensure that people are free to read and write Wikipedia without fear of anyone looking over their shoulder.

The Wikimedia Foundation reflects our commitment to GNI’s mission in both our actions as the host of Wikipedia and our greater advocacy efforts through the Wikimedia movement. We are excited to collaborate with and learn from the variety of committed organizations and individuals who make up GNI in order to further these efforts to promote freedom of expression and privacy worldwide and ensure that everyone, everywhere can have access to knowledge.

Jan Gerlach, Senior Public Policy Manager
Wikimedia Foundation

Earlier this year, the Wikimedia Foundation asked designer George Oates, who has worked for Flickr and the Internet Archive‘s Open Library, among others, to conduct a deep dive into Wikimedia Commons, the free media repository that provides many of the images used on Wikipedia. We wanted a fresh pair of eyeballs, and we were particularly interested in Oates’ observations about possible areas of improvement.

Oates soon realized navigating Wikimedia Commons and finding interesting materials is challenging. She noted how the category system used to organize and tag media files on Commons is confusing and hides—not shows—the richness of content there. Other areas of improvement she mentioned include making both contributors and contributions more prominent and inviting, improving the user experience for new users, and actively recruiting more diverse gender participation in the project. (Oates noted that the group of administrators on Wikimedia Commons is internationally diverse, but not so much in terms of gender; this affects, for instance, the selection of featured images.)

Oates also identified several ways in which Wikimedia Commons can work more fruitfully with GLAMs (Galleries, Libraries, Archives and Museums). This is partially due to her background in the field: over the last decade, she has worked on a series of projects that have reimagined how the world engages with the collections of libraries, archives, and museums on the web.

She created the Commons on Flickr, which to this day stands as one of the most popular shared platforms for the world’s visual archives. She worked with the Internet Archive to lead the Open Library, a universal and openly editable catalog of the world’s books, connected to a digital lending library. More recently, she has worked with a variety of individual institutions to devise new ways of exploring their collections, like this experimental interface to the Wellcome Library collection in the UK.

As we contemplate the future of cultural partnerships and the sharing of open collections in the Wikimedia movement, the Foundation asked George to do a short assignment: look at Wikimedia Commons through the lens of your background and experience, tell us candidly what you see, and suggest areas for improvement. We asked her to focus on the experience of new users, of cultural institutions, and to share with us some ideas of how Commons might evolve in the years ahead. Ben Vershbow (Director of Community Programs) and Sandra Fauconnier (Program Officer for GLAM and Structured Data) sat down with George to speak with her about her findings.


Ben: Please tell us about yourself!

George: I’ve worked on the web since 1996, mostly in design-centered roles. I designed a lot of the user interface of the photosharing service, Flickr, for the first few years, wrote its Community Guidelines (with Heather Champ), and particularly enjoyed creating the privacy and safety tools that helped people coexist. Since 2008, I’ve specialised in the cultural heritage sector, and today, I’m based in London, building an edtech company called Museum in a Box.


Sandra: What were your first impressions of Wikimedia Commons, looking at it as a ‘well-informed newbie’ and with your background of having worked on Flickr, and Flickr Commons?

George: Turns out, Commons is huge, dense, variegated, and alive.

It’s a system that’s about the size of Hong Kong (7.3 million people), but it’s only about 14 years old. Hong Kong has been settled for centuries. Just like the city, Commons is full of nooks and crannies, mazes, rules, signs, and people doing their thing.

Here’s a map of Hong Kong drawn using locations where photos were taken. “Blue pictures are by locals. Red pictures are by tourists. Yellow pictures might be by either.”

For me, this Locals and Tourists project symbolises the way people move around software systems. Software can be learned like a place. You can move around it how you see fit, and inhabit corners you like. You can see other people in it, and follow, copy, like, ignore, and meet them.

This sort of socialising and movement happens on Commons too, but there’s an important difference between Commons, and, say, Facebook, and that’s that the locals are building the city. The ‘old school’ admins know the back alleys and the best noodles, and the Commons newbies head for the tourist hotspots.

So, in terms of my first impressions, I faced three primary challenges to get to a position where I could present ideas, not just observe:

  1. I am a newbie to the platform, unfamiliar with its rules, rituals and traditions
  2. There’s a major usability challenge: Information vs Interface
  3. The rules of engagement are not clear

But, it also feels important to say that maybe there’s nothing wrong with it. The question in itself reveals a quiet tension. The Wikimedia Foundation’s team who support the Commons is in a hard, interesting position. They’re a crew of professional, philosophically aligned staff who find it difficult to help improve Commons. They meet resistance from the old guard, the community of volunteer admins and other long-term Commons folk who are there every day, have probably been there longer than anyone who works at the Foundation, feel deep ownership of the system, and can exercise deep control. They’ve patiently and visibly negotiated the dynamics of the system for the last fourteen years.

So, the question becomes what is it OK for the Wikimedia Foundation to do?


Ben: Yes, you’ve put your finger on a very delicate aspect of our work. Fortunately, we are finding ways to channel great ideas from the community into well-defined efforts to improve the platform. The prime example right now is our work to integrate Commons more tightly with Wikidata to improve discovery and to offer contributors and partners better ways of describing and organizing content (see here and here).

Before we go deeper into your impressions of Commons, can you say a few words about your design philosophy and how you applied it during this assignment?

George: There are two main themes in my work that come to bear here:

Challenging the dominance of search as primary digital cultural interface

Many of the web projects I’ve made recently focus on explorability of gigantic cultural collections. I use the prompt “what happens if there’s no search box?” to help you find things. How can you show the shape and contour of a collection to allow people to follow their nose and sniff out the things that interest them?

Importance of copywriting in interface design

Wiki systems are always a battle of Information vs Interface. The functional pages you need in order to use the thing also take the time to explain and define themselves. It can be so informative it's overwhelming.

Wikis are also written by many. At Flickr, I wrote a lot of the interface copy; the “system voice” was largely mine, and therefore, consistent. On Commons, you don’t know who is telling you what. This opacity can be confusing if you’re expected to follow certain rules, but don’t know who’s setting them.


Ben: So what did you learn as you delved deeper into Commons?

George: It’s huge and arcane. And successful. “Anyone can contribute” means that all sorts do, and there are now about 50 million things there. That’s astonishing.

But, there’s very little sense of the scale or depth of the thing. It’s one thing to say there are all those things that anyone can use, but it’s another to show them all. Newbie explorers are confronted with a vast textual category system including an obscure interface element to show contents of each category… can you tell what it means?

Categories are a beautiful disaster

The various Wikipedias work so beautifully because they embrace the networked nature of information. Pages are stronger (and accepted) when they interconnect across the system. Wikipedians joke about getting “lost down the rabbit hole”, where you’re at one place at one minute then you look up an hour later and you’re somewhere completely different. This is because all of the entries are deliberately and delicately interconnected.

Commons is not like that. Even though there are about 6 million "multi-hierarchical" categories, the majority of files are only in one category, as this 2017 research shows:

If you select Images at the top of the homepage, you see a list of categories of types of image, not types of things in images. That’s tricky if you’re looking for a photo of a flower and not Images by Resolution. There are also Topics on the homepage, which operate a little more like you’d expect. You can click on Mathematics, then explore Chirality in Mathematics or Sets of Mathematical Objects. I was able to find what might be the best category in the history of categories though: Potato truncated from cube to predicate lattice. Does that sound like a category to you?

Showing the actors

I was surprised the thousands of people participating aren’t more obvious. Who’s editing what? Which people like which categories? Who is uploading a lot? Who’s helping other people? I became interested in who the admins were, how/when rules were established, and how decisions are made. If we think in terms of locals and tourists, who are the actors who know what’s happening, who I could ask for advice, or imitate?

Encouraging description

When you get to a media file, like this fabulous The approach of the spirits, there’s nowhere to jump across to; no way to see images like the one you’ve just discovered. Its categories are listed as PD-Art (PD-US-1923) and Illustrations. Nothing like myth, stories, night, spirits, stars, or the night sky.

Not being able to move sideways is a huge weakness of any hierarchical structure. It can also be difficult to navigate a hierarchy unless you know its whole structure. You’re constantly going into a branch then back out again, instead of skipping across an interconnected collection. That said, the new work on interconnected Commons infoboxes looks really promising, e.g. Category:St. Paul’s Cathedral.

There’s no limit to how many categories a digital object could belong to. How could the system be improved to allow more and different descriptions? As with many traditional cultural collections, an object’s metadata is revisited rarely, so instead of a live database of media like Flickr, where objects enjoy a very social existence, a thinly described object in a sea of 50 million is practically invisible.

Rules of engagement

“At the start of my wiki-journey, I was left to fend for myself. It felt akin to being surrounded by hungry beasts and wondering why you’re still alive…” –Pyfan

While it's easy, almost trite, to say that wikis are transparent democracies where anyone can participate, I also read quite a lot that people feel really intimidated when they first "come to the party". The category structure might also have evolved into what's there today because the rules of engagement are dense, and the interface when you try to add or enhance things is obtuse.

As I did my research, I asked a few questions in the IRC channel, or on the Village Pump (which is really well run, by the way). Answers would be presented, and be useful, but, they referenced pages that I bet would have taken me forever to find, if I could find them at all. They often displayed old “votes” or decisions made by a very few, and not revisited, or with no recourse to revisit.


Sandra: Do you have any tips and direct feedback for the Wikimedia Commons community?

It’s amazing!!

Like I said, I remain interested in the administration of the thing. In this self-governing system, how are rules created? How do those rules percolate to the newbies? How visible are they? And most importantly, who is making them?

There are about 225 admins for 7.2 million accounts on Commons. It's a media library built by 7 million, but not all are editing and adding things all the time. In fact, the Wikimedia Foundation suggests about 32,000 folks a month are active, but even so, the system is administered by just 0.003% of the population.

Diversity in administration / representation / arbitration

I looked into who the Commons admins are, with the data to hand.[1] Here’s a graph that shows how many admins were brought on each year, and how many from each year “survived” as admins.

I also looked at each admin’s page, and noted whenever I could see a declared gender (using names or photos etc). Here’s the breakdown of admin community demographics:

The geographical and language spread of the admins seems pretty good. There are people from different countries, who often also participate in their language Wikipedia and/or the EN Wikipedia, and sometimes even work for, or have worked for, the Foundation. But, even in a best case scenario (where the "unspecified" category from the pie chart was actually about 50/50 women and men), that would still be 66% men and 33% women as declared administrators. There also aren't many new folks. This might be because the rules of engagement take a while to take in and operate at a sophisticated level. Perhaps it's also a place to inject diversity, particularly around the things that require some kind of vote or consensus.

Also interesting to note at this point that Hong Kong has a police force of 34,000.

Fostering diversity

The Geena Davis Research Institute was founded in 2004 to study and shift the balance of representation of women and girls in media. Their tagline is “if she can see it, she can be it,” and the institute has done groundbreaking research about how to measure the presence of women and men in films, mostly, to demonstrate the presence of unconscious bias in our media. The stats are telling.

One of the very simple ways the institute suggests to check when you’re making a movie is that all the crowd scenes contain 50/50 women and men. I really like this. It’s easy to check and simple to do. If we apply this idea to Commons, one place to look for an “equal crowd” is the various Highlights areas, like Photo of the Day, or Meet our photographers. Today, those lists are mostly of men, and even some of the older admins.

Those public lists are a place to look for fostering more diverse representation in the community. How could the procedure for creating those lists be realigned with a positive and welcomed diversity agenda?

Changing precedents

During my research, I was shocked to see a photograph of a naked woman show up on the homepage of Commons, marked as Photo of the Day. I went to the Help Desk to ask "Why is it OK to have a picture of a nude woman on the homepage?" I wasn't crying OMG P0RN!, but suggesting that the image may be an alienating first image for some to encounter. It's worth having a read (and a mark of the strength of a wiki's nature that I can retrieve the conversation). For me, it was telling that the rebuttals to my query ranged from tired responses around "pornography or art", or that "galleries are full of naked women", to more resigned responses like "it's already been decided", or, "But this is the Internet, where the opinions of Western young men dominate, so good luck getting any sort of grown-up discussion about that."

The fact that the system is full of old decisions that are no longer questioned is a problem. You can see how it's now handed out as a "it's the way we've always done it" response here, as another helpful person chipped in with support for my query. You can also see the original voting process in action, in November 2013, which pushed the photo of the naked woman into the picture of the day category. (Warning: you'll also see the naked woman on that page.) Of the 18 votes cast, at least 17 of them were by men, 3 of whom were admins, many of whom have been on Commons 5+ years, and a handful of whom are also featured photographers.

I realise that this file and its life on Commons may be an easy target, but, it’s also indicative of the oligarchal participation and representation on Commons.

What if featured photographs votes require a more representative voter group? What if you, as a user, could only have 10 featured photographs? What if the “Featured” lists were removed entirely?


Sandra: Do you have any suggestions on how to improve the design of Wikimedia Commons in general, to make it easier to use?

George: If we’re talking about making it easier to use for individuals using the web interface, then yes, I do.

There are just five user pages and one email in the UI that could be improved to help new folks get grounded. Making them crisp and instructional instead of information vs interface would have a huge effect. It’s about decluttering the copy and trying to help the new person figure out what to do first. The pages are User, (and welcome email), User talk, Watchlist, User contributions, and File list.

For example, here’s the current Welcome email, the first point of contact from the Commons to an individual:

Hi there [Username],

Welcome to Wikimedia Commons! Someone (probably you) from IP address, has registered an account “[Username]” with your email address here. We’re glad you decided to join us.

What next? First you should confirm your account.

To confirm that this account really does belong to you and activate email features on Wikimedia Commons, open this link in your browser:

After that, you’ll see ideas on how to get started and links to help you learn about Wikimedia Commons.

From all of us here at Wikimedia Commons, welcome aboard!

If you did *not* register the account, follow this link to cancel the email address confirmation:

This confirmation code will expire at 13:53, 13 June 2018.

This asks more questions than it answers, and even casts doubt on me being a legitimate user! “Someone (probably you) / “to confirm this account really does belong to you…” come across as defensive and doubtful. Hardly a friendly welcome for someone new.

Have a look at the difference if you make it more concise:

Hi [Username],

Welcome to Wikimedia Commons, the world’s free media library!

Your first step is to confirm your email address, please:

Once that’s done, there are a few ways we recommend getting started:

• Read the Policies and Guidelines of contributing to Wikimedia Commons
• Take a Photo Challenge or Upload your first media file
• Help improve information about Commons resources
• Have a look around your account settings

Good luck! And if you get stuck, please visit the Community Help Centre.

It’s also worth noting that I have absolutely no idea how I—an interested interface designer—could independently contribute effort towards making this part of the system better, just like I can edit regular entries.

Seeing each other

One of the reasons I think Flickr and other social systems are so successful is because we can see each other when we use them. They show people their own stuff, other people’s stuff, allow them to follow/connect/gather, show the activity happening—particularly on your stuff because we all love that—and often, give a short list of things to try next, to show the way through. We learn from the way other people act and operate in the physical world, and that’s the same on a software platform.

The foundation is there on Commons to enhance representations of scale, activity, and actors to help people see each other more clearly. Improving those five core screens with these themes in mind would help locate new people in the system, and give them only as much information as they need, when they need it.


Ben: And what about suggestions on how to invite more and better contributions to Commons?

George: I would try to make it much more obvious that viewers can improve metadata; provide much clearer calls to action to interested people. I mean, it’s somewhat implied, given that it’s a Wikimedia system, but, as far as I can tell, the call to action to improve the metadata about a file is a small Edit link next to the Summary heading on the item’s page.

Here’s the Pied-winged swallow, Picture of the Day on 3 September:

If you have the gumption to click “Edit”, you get this:

So, even if you wanted to add information about the image — e.g. it’s taken outside, there’s blue sky, it’s a bird, etc — you have to figure out the editing code/pseudo code and its UI. I tried to add “tag=bird” for example, and it threw an error. It’s very different from having a UI designed specifically to encourage contributions and conversation. It’s not simple. (I realise that “special” images, like Picture of the Day, have protected edit capacity to prevent vandalism.)

There’s a ton of uploading that happens to Commons that’s not through the UI, but with lots of other tools written to streamline the process. (That’s another strength of the system, and was at Flickr for that matter—that people can contribute to it through all sorts of interfaces.) I found myself wondering if another way to improve contributions to Commons might be to focus on the connective tissue between the various Wikimedia platforms, in the UI. If I upload an image to Commons, I could be actively prompted to create or enhance a Wikipedia page…


Sandra: Do you have suggestions on how to make Wikimedia Commons a more interesting and useful platform for external organizations (e.g. GLAMs) to work with our communities and make their knowledge available?

George: I would look at three main areas around usefulness for institutions:


Demonstrating use of collections, especially in a digital context, is especially difficult for institutions. If their stuff is nestled cosily amongst about 50 million other things, how and when can they know if the effort they’ve put in to sharing their treasures in the Commons is effective? Can any use of materials be reported on? Can that be established as new, legitimate usage of an institution’s materials? How can it be more refined, or indeed, more accurate than the mysterious “page view”. How can Commons help institutions (and individuals) see when their stuff is used? Something like the old school Dopplr Annual Report, perhaps?

Presentation and presence

As I understand it, institutions either create a category or use a totally different system to see all their media in one place. Can it provide a good destination for institutions to share around? What could their “User” page be like?

If you look at the Wikipedia page for a contributing institution, e.g. Nationalmuseum Stockholm, you see this little box at the bottom:

Even just rephrasing that to say “Explore the Nationalmuseum Stockholm’s 2,400 contributions to Wikimedia Commons” or perhaps creating another box that does that might be a start. That link also takes me to Category:Nationalmuseum Stockholm, but it’s not immediately clear which items are from the collection versus about the institution.

There’s obviously a question about whether institutions should be treated differently in the Commons context. I’m not sure about that. Perhaps it would be better to look for a way to enhance anyone’s presence equally. If part of the claimed benefit of participating in Commons is that metadata may be improved, visibility into the material also needs to be improved. What about a link on the home page to a list that shows all of the institutions?

Supporting people doing great stuff

When I was doing research, I attended a Wiki-a-thon at the Wellcome Collection in London. It was a day dedicated to improving the presence of women involved in medicine on Wikipedia. There were about eight people in the room, including the Wellcome Wikimedian in Residence, Alice. It was brilliant, and I could not have created my article on Clara Stone without Alice in the room. The simple act of having a living human who could explain the rules and secret doors (like, you need to make 10 edits before you can make a page) instantly made the whole process more approachable.

Human Wikimedian presence in cultural institutions can really help bridge the gap that’s sometimes perceived between authority and the crowd, which can be difficult for institutions to conquer on their own. This mediation is hugely valuable, and something that the Foundation could continue to support directly, and more. What if there was a Wikimedian in Residence at every national library in the world? (And lots of the smaller ones too?!?)

I was also surprised to discover that lots of the updates to Commons are done en masse, through scripts and other programmes developed by volunteers. What sorts of other support could the Foundation provide for these folks? How can it support this developer community even more than it does now? What new efforts and resources can be amplified to reach more potential users (both outside and within GLAMs)?


Ben: Any last thoughts on what a community-driven media repository like Wikimedia Commons could become in the future?

George: I was a bit surprised that this work turned out to be about representation in the end. Yes, there are simple interface changes that could be made to continue to improve usability, but there’s a huge challenge for Commons (and other huge collections online like Europeana and DPLA) to improve description of their millions of things so they’re easier to see.

As we’ve just seen in the United States, oligarchies don’t like change and giving up control, but what if diversity trumps ability?

These vast online collections will eventually see a return of the power and delicacy of curation. Only machines can consume 50 million things, and even then they might not be sure exactly what those things are, and if they find something interesting they won’t know who to tell. Computers “scan everything and hear nothing,” so we need all kinds of humans from all over the world to help gather our histories into meaningful units, but right now, Commons is a pretty closed system.

Imagine if Commons could become like one of the great old cities of the world, full of all kinds of people from all kinds of places instead of a big city run by a tiny group of people who basically look the same?


Interview by Ben Vershbow, Director, Community Programs
Sandra Fauconnier, Program Officer, Community Programs
Wikimedia Foundation


[1] The admin data consists of two views: (1) The list on the right side of the Commons:Administrators page, and (2) those users listed on the Commons:Administrators/Archive/Successful requests for adminship, which I edited manually to separate users who’d ever been admins from those who remain admins. Note that the total count of admins in those two views differs: (1) 225 admins, and (2) 472. I decided (2) is the more useful, but it should be stated that that’s a manually maintained list, and possibly not accurate.

Moving Plants

14:20, Monday, 29 October 2018 UTC
All humans move plants, most often by accident and sometimes with intent. Humans, unfortunately, are only rarely moved by plants. 

Unfortunately, the history of plant movements is often difficult to establish. In the past, the only way to tell a plant's homeland was to look at the number of related species in a region for clues to its area of origin. This idea was firmly established by Nikolai Vavilov before he was sent off to Siberia, thanks to Stalin's crank-scientist Lysenko, to meet an early death. Today, the genetic relatedness of plants can be examined by comparing the similarity of DNA sequences (although this is apparently harder than with animals due to issues with polyploidy). Some recent studies on individual plants and their relatedness have provided insights into human history. A 2015 study establishing the East African geographical origins of baobabs in India, and a 2011 study of coconuts, are hopefully just the beginnings. These demonstrate ancient human movements that have never received much attention in most standard historical accounts.

Unfortunately there are a lot of older crank ideas that can be difficult for untrained readers to sift out. I recently stumbled on a book by Grafton Elliot Smith, a Fullerian professor who succeeded J.B.S. Haldane but descended into crankdom. The book "Elephants and Ethnologists" (1924) can be found online and it is just one among several similar works by Smith. It appears that Smith used a skewed and misapplied cousin of Dollo's Law: according to him, cultural innovations tended to occur only once and were then carried along with human migrations. Smith was subsequently labelled a "hyperdiffusionist", a disparaging term used by ethnologists. When he saw illustrations of Mayan sculpture he envisioned an elephant where others saw at best a stylized tapir. Not only were they elephants, they were Asian elephants, complete with mahouts and Indian-style goads, and he saw this as definite evidence for an ancient connection between India and the Americas - an idea that would please some modern-day Indian cranks and zealots.

Smith's idea of the elephant as emphasised by him, alongside the actual stela in question.
"Fanciful" is the current consensus view on most of Smith's ideas, but let's get back to plants.

I happened to visit Chikmagalur recently and revisited the beautiful temples of Belur on the way. The "Archaeological Survey of India-approved" guide at the temple did not flinch when he described an object in the hand of a carved figure as being maize. He said maize was a symbol of prosperity. Now maize is a crop that was imported to India, by most accounts only after the Portuguese sea incursions into India at the end of the 15th century. In the late 1990s, a Swedish researcher identified similar carvings (actually another one, at Somnathpur) from 12th century temples in Karnataka as being maize cobs. This was subsequently debunked by several Indian researchers from IARI and from the University of Agricultural Sciences, where I was then studying. An alternate view is that the object is a mukthaphala, an imaginary fruit made up of pearls.
Somnathpur carvings. The figures to the left and right hold the purported cobs in their left hands. (Photo: G41rn8)

The pre-Columbian oceanic trade ideas however do not end with these two cases from India. The third story (and historically the first, from 1879) is that of the sitaphal or custard apple. The founder of the Archaeological Survey of India, Alexander Cunningham, described a fruit in one of the carvings from Bharhut, a fruit that he identified as custard-apple. The custard-apple and its relatives are all from the New World. The Bharhut Stupa is dated to 200 BC and the custard-apple, as quickly pointed out by others, could only have been in India post-1492. The Hobson-Jobson has a long entry on the custard apple that covers the situation well. In 2009, a study raised the possibility of custard apples in ancient India. The ancient carbonized evidence is hard to evaluate unless one has examined all the possible plant seeds and what remains of their microstructure. The researchers however establish a date of about 2000 B.C. for the carbonized remains and attempt to demonstrate that it looks like the seeds of sitaphal. The jury is still out.
I was quite surprised that there are not many writings on the Internet that synthesize and comment on the history of these ideas. Somewhat oddly, I found no mention of these three cases in the relevant Wikipedia article, pre-Columbian trans-oceanic contact theories (naturally, fixed now with an entire new section).

There seems to be value in someone putting together a collation of plant introductions to India along with sources, dates and locations of introduction. Some of the old specimens of introduced plants may well be worthy of further study.

Introduction dates
  • Pithecellobium dulce - Portuguese introduction from Mexico to the Philippines, with India on the way, in the 15th or 16th century. The species was described by William Roxburgh from specimens taken from the Coromandel region (i.e. type locality outside the native range).
  • Eucalyptus globulus? - There are some claims that Tipu planted the first of these (see my post on this topic). It appears that the first person to move eucalyptus plants (probably E. globulus) out of Australia was Jacques Labillardière, who was surprised by the size of the trees in Tasmania: the lowest branches were 60 m above the ground and the trunks were 9 m in diameter (27 m circumference). He saw flowers through a telescope and had some flowering branches shot down with guns! (original source in French) His ship was seized by the British in Java around 1795 and released in 1796. All subsequent movements seem to have been post-1800 (i.e. after Tipu's death). If Tipu Sultan did indeed plant the Eucalyptus here he must have got it via the French through the Labillardière shipment. The Nilgiris were apparently planted up starting with the work of Captain Frederick Cotton (Madras Engineers) at Gayton Park(?)/Woodcote Estate in 1843.
  • Muntingia calabura - when? - I suspect that Tickell's flowerpecker populations boomed after this, possibly with a decline in the Thick-billed flowerpecker.
  • Delonix regia - when?
  • In 1857, Mr New from Kew was made Superintendent of Lalbagh and in the following years he introduced several Australian plants from Kew, including Araucaria, Eucalyptus, Grevillea, Dalbergia and Casuarina. Mulberry plant varieties were introduced in 1862 by Signor de Vicchy. The Hebbal Butts plantation was established around 1886 by Cameron along with Mr Rickets, Conservator of Forests, who became Superintendent of Lalbagh after New's death - rain trees, ceara rubber (Manihot glaziovii), and shingle trees(?). Apparently Rickets was also involved in introducing a variety of potato (kidney variety) which got named "Ricket". - from Krumbiegel's introduction to "Report on the progress of Agriculture in Mysore" (1939). [Hebbal Butts would be the current-day Air Force Headquarters]

Further reading
  • Johannessen, Carl L.; Parker, Anne Z. (1989). "Maize ears sculptured in 12th and 13th century A.D. India as indicators of pre-columbian diffusion". Economic Botany 43 (2): 164–180.
  • Payak, M.M.; Sachan, J.K.S (1993). "Maize ears not sculpted in 13th century Somnathpur temple in India". Economic Botany 47 (2): 202–205. 
  • Pokharia, Anil Kumar; Sekar, B.; Pal, Jagannath; Srivastava, Alka (2009). "Possible evidence of pre-Columbian transoceanic voyages based on conventional LSC and AMS 14C dating of associated charcoal and a carbonized seed of custard apple (Annona squamosa L.)" Radiocarbon 51 (3): 923–930. - Also see
  • Veena, T.; Sigamani, N. (1991). "Do objects in friezes of Somnathpur temple (1286 AD) in South India represent maize ears?". Current Science 61 (6): 395–397.
Dubious research sources
  • Singh, Anurudh K. (2016). "Exotic ancient plant introductions: Part of Indian 'Ayurveda' medicinal system". Plant Genetic Resources 14 (4): 356–369. doi:10.1017/S1479262116000368. [Among the claims here is that Bixa orellana was introduced prior to 1000 AD, on the basis of Sanskrit names assigned to that species; the author does not indicate the basis or original dated sources. The author works in the "International Society for Noni Science"!]
  • The same author has rehashed this content with several references and published it in no less than the Proceedings of the INSA - Singh, Anurudh Kumar (2017). "Ancient Alien Crop Introductions Integral to Indian Agriculture: An Overview". Proceedings of the Indian National Science Academy 83 (3). There is a series of cherry-picked references, many of whose claims were subsequently dismissed by others or remain under serious question. In one case there is a claim for the early occurrence of Eleusine coracana in India, to around 1000 BC. The reference cited is in fact a secondary one - the original work was by Vishnu-Mittre, and the sample was rechecked by another group of scientists who clearly showed that it was not even a monocot; Vishnu-Mittre himself accepted the error. (The original paper was Vishnu-Mittre (1968). "Protohistoric records of agriculture in India". Trans. Bose Res. Inst. Calcutta 31: 87–106, and the re-analysis was Hilu, K. W.; de Wet, J. M. J.; Harlan, J. R. (1979). "Archaeobotanical Studies of Eleusine coracana ssp. coracana (Finger Millet)". American Journal of Botany 66 (3): 330–333.) Clearly INSA does not have great peer review and has gone by claims of authority by virtue of positions held. Even an external researcher who merely examined the references cited would be able to note that all subsequent contrary claims have been dropped.

Tech News issue #44, 2018 (October 29, 2018)

00:00, Monday, 29 October 2018 UTC

PHP Typed Properties

09:32, Sunday, 28 October 2018 UTC

Lately there has been a lot of hype around the typed properties that PHP 7.4 will bring. In this post I outline why typed properties are not as big of a game changer as some people seem to think, and how they can lead to shitty code. I start with a short introduction to what typed properties are.

What Are Typed Properties

As of version 7.3, PHP supports types for function parameters and for function return values. Over the last few years many additions to PHP's type system were made, such as primitive (scalar) types like string and int (PHP 7.0), return types (PHP 7.0), nullable types (PHP 7.1) and parameter type widening (PHP 7.2). The introduction of typed properties (PHP 7.4) is thus a natural progression.
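As a quick recap (a hedged sketch, not from the original post; the lookup table is purely illustrative), the pre-7.4 type system already covers parameters, return values and nullability:

```php
<?php
declare(strict_types=1);

// Scalar parameter and return types (PHP 7.0) plus a nullable
// return type (PHP 7.1): this function either returns a string or null.
function findUserName(int $id): ?string {
    $users = [1 => 'Alice', 2 => 'Bob'];
    return $users[$id] ?? null;
}
```

Here `findUserName(1)` returns 'Alice', while an unknown id yields null rather than a notice.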

Typed properties work as follows:

class User {
    public int $id;
    public string $name;

    public function __construct(int $id, string $name) {
        $this->id = $id;
        $this->name = $name;
    }
}

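Typed properties are enforced by the engine at runtime: assigning a value of the wrong type throws a TypeError. A minimal sketch, assuming PHP 7.4+ (the class is repeated here so the snippet runs on its own):

```php
<?php
declare(strict_types=1);

class User {
    public int $id;
    public string $name;

    public function __construct(int $id, string $name) {
        $this->id = $id;
        $this->name = $name;
    }
}

$user = new User(1, 'Alice');
$user->id = 2; // fine: int assigned to an int property

try {
    $user->id = 'not-an-int'; // wrong type: the engine rejects it
} catch (TypeError $e) {
    echo "caught TypeError\n";
}
```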
You can do in two simple lines what takes a lot more boilerplate in PHP 7.3 or earlier. In these versions, if you want to have type safety, you need a getter and setter for each property.

class User {
    private $id;
    private $name;

    public function getId(): int {
        return $this->id;
    }

    public function setId(int $id): void {
        $this->id = $id;
    }

    public function getName(): string {
        return $this->name;
    }

    public function setName(string $name): void {
        $this->name = $name;
    }
}

Not only is it a lot more work to write all of these getters and setters, it is also easy to make mistakes when not automatically generating the code with some tool.

These advantages are what the hype is all about. People are saying it will save us from writing so much code. I think not, and I am afraid of the type of code those people will write using typed properties.

Applicability of Typed Properties

Let’s look at some of the different types of classes we have in a typical well designed OO codebase.

Services are classes that allow doing something. Loggers are services, Repositories are services and LolcatPrinters are services. Services often need collaborators, which get injected via their constructor and stored in private fields. These collaborators are not visible from the outside. While services might have additional state, they normally do not have getters or setters. Typed properties thus do not save us from writing code when creating services and the added type safety they provide is negligible.

Entities (DDD term) encapsulate both data and behavior. Normally their constructors take a bunch of values, typically in the form of Value Objects. The methods on entities provide ways to manipulate these values via actions that make sense in the domain language. There might be some getters, though setters are rare. Having getters and setters for most of the values in your entities is an anti-pattern. Again typed properties do not save us from writing code in most cases.

Value Objects (DDD term) are immutable. This means you can have getters but not setters. Once again typed properties are of no real help. What would be really helpful, however, is a first-class Value Object construct as part of the PHP language.

Typed properties are only useful when you have public mutable state with no encapsulation. (And in some cases where you assign to private fields after doing complicated things.) If you design your code well, you will have very little code that matches all of these criteria.
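To make that narrow use case concrete, here is a hypothetical example (not from the original post) of the kind of class where public typed properties do fit: a plain data holder with public mutable state and no invariants to encapsulate (PHP 7.4+):

```php
<?php
declare(strict_types=1);

// Hypothetical data-transfer structure: all state is public and mutable
// by design, so typed properties add safety without costing encapsulation.
class Point {
    public float $x = 0.0;
    public float $y = 0.0;
}

$p = new Point();
$p->x = 3.5;
$p->y = -1.25;
```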

Going to The Dark Side

By throwing immutability and encapsulation out of the window, you can often condense code using typed properties. This standard Value Object …

class Name {
    private $firstName;
    private $lastName;

    public function __construct(string $firstName, string $lastName) {
        $this->firstName = $firstName;
        $this->lastName = $lastName;
    }

    public function getFirstName(): string {
        return $this->firstName;
    }

    public function getLastName(): string {
        return $this->lastName;
    }
}

… becomes the much shorter

class Name {
    public string $firstName;
    public string $lastName;

    public function __construct(string $firstName, string $lastName) {
        $this->firstName = $firstName;
        $this->lastName = $lastName;
    }
}

The same goes for Services and Entities: by giving up on encapsulation and immutability, you gain the ability to not write a few lines of simple code.

This trade-off might actually make sense if you are working on a small codebase on your own or with few people. It can also make sense if you create a throw away prototype that you then actually throw away. For codebases that are not small and are worked on by several people writing a few simple getters is a low price to pay for the advantages that encapsulation and immutability provide.


Typed properties marginally help with type safety and in some rare cases can help reduce boilerplate code. In most cases typed properties do not reduce the amount of code needed unless you throw the valuable properties of immutability and encapsulation out of the window. Due to the hype I expect many junior programmers to do exactly that.

The post PHP Typed Properties appeared first on Entropy Wins.

weeklyOSM 431

14:13, Saturday, 27 October 2018 UTC




XCTrails – the map for climbing routes fully based on OSM data [1] | © a.müller | map data © OpenStreetMap contributors

Mapping

  • Dabohamda announced in a tweet (fr) the creation of the first bus line in Conakry following a field survey. OpenStreetMap Guinea will complete the bus network in the coming days. (automatic translation)
  • Satoshi Iida (User nyampire) announced last week on Twitter(1, 2) (ja) about fresh aerial imagery for Japan. OSM now has permission to use aerial imagery published as open data from Nerima-ku (specified districts in Tokyo Metropolis) and Fukaya city. He created map tiles and explains how to use them on the wiki Nerima-ku (ja) (automatic translation) and Fukaya (ja) (automatic translation). The imagery is of very high resolution, which will help us to enrich OSM.
  • Martin Koppenhoefer asks on the mailing list whether the decision of the Data Working Group regarding the Crimean peninsula should be changed. Russia has been in control of the area for four years and changes are not expected.
  • Frederik Ramm of the DWG warns on the Talk-de (de) (automatic translation) mailing list and in the German forum (de) (automatic translation) that users who continue to participate in verbal fights in changeset comments of new users against or for ‘gluing’ landuse areas to other features will be temporarily blocked from editing. He also started a discussion (de) (automatic translation) about this topic to try and resolve one of the oldest contentious issues in OSM, i.e. the gluing of linear features to area features. Mappers pro gluing don’t want to leave blank artefacts in between when rendering at high zoom levels; mappers against find it is extremely time consuming to do further editing when glued together.
  • Researchers at Cardiff University have used OSM data to develop a novel method for finding the safest routes for pedestrians. Their work is to be published in the journal Accident Analysis & Prevention.
  • Joseph Eisenberg suggests bringing some order to the tagging of hot springs and geysers. The current tagging goes back to a proposal from 2008 that was never approved.
  • Gregory Marler presents the 360° cameras which are available for hire from OSM UK for its members in an unboxing video on YouTube.
  • With a mail to the tagging list, Allan Mustard draws attention to the proposal about distinguishing embassies from consulates. The aim of the proposal is to bring structure to the current set of informal tagging rules, to let OSM reflect international law.

Community

  • The user Adrian O'Connor shares a tweet about the launch of OpenStreetMap Ireland at Maynooth University. Ciarán Staunton (aka DeBigC) reports the founding of OpenStreetMap Ireland, which aims to become the local chapter of the OSMF in Ireland.
  • Simon Poole has submitted a pull request that will add links to new OSM Terms of Use in the Welcome box and on the About page.

Imports

  • An import of natural monument trees in the Italian region Friuli Venezia Giulia is planned. Giovanni has created a wiki site and is looking for help.
  • Leif Rasmussen announced a ready-to-go import for missing buildings and address data in Miami-Dade County.
  • An import of electric charging stations for cars in Norway, Sweden and Finland is currently in preparation. As outlined in the wiki page, electric charging stations are quite important in Scandinavia, as nearly 50% of all cars sold in Norway are electric cars or plug-in hybrids.
  • While most imports are carefully prepared, documented and discussed, some aren’t. As reported on the Canadian mailing list, a mapper added administrative boundaries in a questionable way and ignored comments from community members.

OpenStreetMap Foundation

  • The draft meeting minutes for the License Working Group meeting of October 11th were published.
  • The meeting minutes of the recent OSMF Board have been published.

Events

  • Opensaar e.V., a German FOSS organisation, will host a meet&talk on November 15th at Saarbrücken in Germany. In addition to talks about OGC standards, Mapserver, Postgis and QGIS, Guillaume Rischard (OSM: Stereo) will talk about the experience with open geodata in Luxembourg.

Humanitarian OSM

  • The OSM GeoWeek will take place between November 11-17th. OSM GeoWeek is a celebration of geography and map making with OpenStreetMap. Last year there were 230 events in 48 countries. The homepage of the OSM GeoWeek helps you to find or register an event.
  • HOT endorsed the Principles for Digital Development, which comprise nine guidelines designed to help digital development practitioners integrate established best practices into technology-enabled programs.

Maps

  • The association “Atvirasis žemėlapis” (The Open Map) together with SpatialForces released a topographical vector map of Lithuania. The map combines OSM data, the SwissTopo style with some minor adjustments and LiDAR based dynamic high resolution hillshading. Further details are described in a blogpost.
  • XCTrails Map provides a dedicated map for climbing routes fully based on OSM data.

Software

  • OsmAnd seems (de) (automatic translation) to partially meet the needs of firefighters, but favourites with links to PDFs (e.g. floor plans) would be very helpful. A user is looking for help in the German OSM forum.

Programming

  • Coder superDoss provided a pull request for iD that enables it to read and display data in shapefile format.

Releases

  • Daniel Koć announced the release of OpenStreetMap Carto 4.16.0, which brings a bunch of new icons and renderings. There have been a few comments on why ATMs and post boxes are now rendered only at zoom level 19+, and the discussion on the list indicates that it was not a simple decision.
  • Please check the OSM Software Watchlist of Wambacher for the latest releases of all OSM software.

Did you know …

  • … about the possibility to use (de) (automatic translation) mailing lists as a forum?

Other “geo” things

  • A visualisation of rivers in the US based on their average annual flow has been posted on
  • The blog on, a site dedicated to mountain biking, has published a test of the outdoor GPS Wahoo Elemnt, that uses OSM maps.
  • The website published an article about the increasing importance of satellite imagery for journalism. Aerial views have become indispensable for journalists who want to double check the location of a scene, document deforestation, illustrate devastation of urban areas in recent conflicts and many other purposes.
  • The World Food Programme posted an update about the situation in Nepal. The region suffers from extreme food shortage on a large scale. An OpenStreetMap-based pilot project on trail and infrastructure mapping in the Jumla district concluded in September.

Upcoming Events

Where What When Country
Manila 【MapaTime!】 @ co.lab 2018-10-27 philippines
Rennes Recensement des commerces du centre-ville 2018-10-28 france
Melbourne Papua New Guinea Malaria Mapathon 2018-10-31 australia
Toronto Mappy Hour 2018-11-05 canada
Lyon Rencontre mensuelle pour tous 2018-11-13 france
Mumble Creek OpenStreetMap Foundation public board meeting 2018-11-15 everywhere
Bengaluru State of the Map Asia 2018 2018-11-17-2018-11-18 india
Melbourne FOSS4G SotM Oceania 2018 2018-11-20-2018-11-23 australia
online via IRC Foundation Annual General Meeting 2018-12-15 everywhere
Heidelberg State of the Map 2019 (international conference) 2019-09-21-2019-09-23 germany

Note: If you would like to see your event here, please put it into the calendar. Only data which is there will appear in weeklyOSM. Please check your event in our public calendar preview and correct it where appropriate.

This weeklyOSM was produced by Anne Ghisla, Nakaner, Polyglot, Rogehm, SunCobalt, TheSwavu, YoViajo, derFred, geologist, jinalfoflia, k_zoar, keithonearth.

#Library #Science - Prof Dr Frank Huysmans

06:02, Saturday, 27 October 2018 UTC
Mr Huysmans works at the Universiteit van Amsterdam. He teaches library science and, as is usual for a scientist, has a fair share of publications to his name.

The problem is that this field of science is not well represented in Wikidata. There were no publications to his name. Importing them from ORCiD proved problematic; only four of the 22 known there were added. Working from what was known, it was possible to add co-authors and enrich those, seek out their co-authors and enrich them as well. The result is the current 40 publications to Mr Huysmans's name.

Mr Huysmans has both a Twitter and an ORCiD account. Everybody who does will have their Wikidata profile updated, thanks to a job run by Daniel Mietchen. These are the people who publicly promote their science, and in this way they gain some additional credibility.

NB: when you have an ORCiD and a Twitter account, tweet #IcanHazWikidata and you will get your Qid.

When you care about your science, do maintain your ORCiD profile: it will make your papers, your co-authors and the organisations you work for more visible in Wikidata. Your #Scholia profile will get better and better, and the chances of being quoted in Wikipedia improve.

Shocking tales from ornithology

11:20, Friday, 26 October 2018 UTC
Manipulative people have always made use of the dynamics of ingroups and outgroups to create diversions from bigger issues. The situation is made worse when misguided philosophies are peddled by governments that put economics ahead of ecology. The pursuit of easily gamed targets such as GDP is preferable to ecological amelioration since money is a man-made and controllable entity. Nationalism, pride, other forms of chauvinism, the creation of enemies and the magnification of war threats are all effective tools in the arsenal of Machiavelli for use in misdirecting the masses when things go wrong. One might imagine that the educated, especially scientists, would be smart enough not to fall into these traps, but cases from history dampen hopes for such optimism.

There is a very interesting book in German by Eugeniusz Nowak called "Wissenschaftler in turbulenten Zeiten" (Scientists in Turbulent Times) that deals with the lives of ornithologists, conservationists and other naturalists during the Second World War. Preceded by a series of recollections published in various journals, the book was published in 2010, but I became aware of it only recently while translating some biographies into the English Wikipedia. I have not yet actually seen the book (it has about five pages on Salim Ali as well) and have had to go by secondary quotations in other content. Nowak was a student of Erwin Stresemann (with whom the first chapter deals) and he writes about several European (but mostly German, Polish and Russian) ornithologists and their lives during the turbulent 1930s and 40s. Although Europe is pretty far from India, there are ripples that reached afar. Incidentally, Nowak's ornithological research includes studies on the expansion in range of the collared dove (Streptopelia decaocto), which the Germans called the Türkentaube, literally the "Turkish dove", a name with a baggage of cultural prejudices.

Nowak's first paper of "recollections" notes that: [he] presents the facts not as accusations or indictments, but rather as a stimulus to the younger generation of scientists to consider the issues, in particular to think “What would I have done if I had lived there or at that time?” - a thought to keep as you read on.

A shocker from this period is a paper by Dr Günther Niethammer on the birds of Auschwitz (Birkenau). This paper (read it online here) was published while Niethammer was posted to the security detail at the main gate of the concentration camp. You might be forgiven for thinking he was just a victim of the war: Niethammer was a proud nationalist and volunteered to join the Nazi forces in 1937, leaving his position as a curator at the Museum Koenig in Bonn.
The contrast provided by Niethammer, who looked at the birds on one side while ignoring the inhumanity on the other, provided novelist Arno Surminski with the title for his 2008 novel Die Vogelwelt von Auschwitz - i.e. the birdlife of Auschwitz.

G. Niethammer
Niethammer studied birds around Auschwitz and also shot ducks in numbers for himself and to supply the commandant of the camp Rudolf Höss (if the name does not mean anything please do go to the linked article / or search for the name online).  Upon the death of Niethammer, an obituary (open access PDF here) was published in the Ibis of 1975 - a tribute with little mention of the war years or the fact that he rose to the rank of Obersturmführer. The Bonn museum journal had a special tribute issue noting the works and influence of Niethammer. Among the many tributes is one by Hans Kumerloeve (starts here online). A subspecies of the common jay was named as Garrulus glandarius hansguentheri by Hungarian ornithologist Andreas Keve in 1967 after the first names of Kumerloeve and Niethammer. Fortunately for the poor jay, this name is a junior synonym of  G. g. anatoliae described by Seebohm in 1883.

Meanwhile inside Auschwitz, the Polish artist Wladyslaw Siwek was making sketches of everyday life in the camp. After the war he became a zoological artist of repute. Unfortunately there is very little that is readily accessible to English readers on the internet (beyond the Wikipedia entry).
Siwek, an artist who documented life at Auschwitz before working as a wildlife artist.
Hans Kumerloeve
Now for Niethammer's friend Dr Kumerloeve, who also worked in the Museum Koenig at Bonn. His name was originally spelt Kummerlöwe and he was, like Niethammer, a doctoral student of Johannes Meisenheimer. Kummerlöwe and Niethammer made journeys on a small motorcycle to study the birds of Turkey. Kummerlöwe's political activities started earlier than Niethammer's: he joined the NSDAP (German: Nationalsozialistische Deutsche Arbeiterpartei = The National Socialist German Workers' Party) in 1925 and started the first student union of the party in 1933. Kummerlöwe soon became a member of the Ahnenerbe, a think tank meant to provide "scientific" support to the party's ideas on race and history. In 1939 he wrote an anthropological study on "Polish prisoners of war". At the museum in Dresden that he headed, he thought up ways of putting the museum in the service of politics, publishing them in 1939 and 1940. After the war, it is thought that he went to all the European libraries that held copies of this journal (anyone interested in hunting it down should look for copies of Abhandlungen und Berichte aus den Staatlichen Museen für Tierkunde und Völkerkunde in Dresden 20:1-15) and purged them of his article. According to Nowak, he even managed to get his hands (and scissors) on the copies held in Moscow and Leningrad!

The Dresden museum was also home to the German ornithologist Adolf Bernhard Meyer (1840–1911). He translated the works of Charles Darwin and Alfred Russel Wallace into German and introduced evolutionary theory to a whole generation of German scientists. Among Meyer's amazing works is a series of avian osteological studies that used photography to depict birds in nearly life-like positions (wonder how it was done!) - a less artistic precursor to Katrina van Grouw's 2012 book The Unfeathered Bird. Meyer's skeleton images can be found here. In 1904 Meyer was eased out of the Dresden museum because of rising anti-semitism. Meyer does not find a place in Nowak's book.

Nowak's book includes entries on the following scientists (I keep this list here partly for my reference, as I intend to improve the Wikipedia entries on several of them as and when time and resources permit - it would be amazing if others could pitch in!):
In the first of his "recollection papers" (his 1998 article) he writes about the reason for writing them  - the obituary for Prof. Ernst Schäfer  was a whitewash that carefully avoided any mention of his wartime activities. And this brings us to India. In a recent article in Indian Birds, Sylke Frahnert and others have written about the bird collections from Sikkim in the Berlin natural history museum. In their article there is a brief statement that "The  collection  in  Berlin  has  remained  almost  unknown due  to  the  political  circumstances  of  the  expedition". This might be a bit cryptic for many but the best read on the topic is Himmler's Crusade: The true story of the 1939 Nazi expedition into Tibet (2009) by Christopher Hale. Hale writes about Himmler: 
He revered the ancient cultures of India and the East, or at least his own weird vision of them.
These were not private enthusiasms, and they were certainly not harmless. Cranky pseudoscience nourished Himmler’s own murderous convictions about race and inspired ways of convincing others...
Himmler regarded himself not as the fantasist he was but as a patron of science. He believed that most conventional wisdom was bogus and that his power gave him a unique opportunity to promulgate new thinking. He founded the Ahnenerbe specifically to advance the study of the Aryan (or Nordic or Indo-German) race and its origins
From there Hale goes on to examine the motivations of Schäfer and his team, and looks at how much of the science was politically driven. Swastika signs dominate some of the photos from the expedition - as if they provided a natural tie with Buddhism in Tibet. It seems that Himmler gave Schäfer the opportunity to rise within the political hierarchy. The team that went to Sikkim included Bruno Beger. Beger was a physical anthropologist with less-than-innocent motivations, although such motives would be much harder to ascribe to the team's other pursuits like botany and ornithology. One of the results of the expedition was a film made by the entomologist of the group, Ernst Krause - Geheimnis Tibet, or Secret Tibet - a copy of this 1 hour and 40 minute film is on YouTube. At around 26 minutes, you can see Bruno Beger creating face casts - first as a negative in plaster of Paris, from which a positive copy was made using resin. Hale tells how one of the Tibetans, put into a cast with just straws to breathe through, went into an epileptic seizure induced by claustrophobia and fear. The real horror, however, is revealed when Hale quotes a May 1943 letter from an SS officer to Beger - ‘What exactly is happening with the Jewish heads? They are lying around and taking up valuable space . . . In my opinion, the most reasonable course of action is to send them to Strasbourg . . .’ Apparently Beger had to select some prisoners from Auschwitz who appeared to have Asiatic features. Hale shows that Beger knew the fate of his selection - they were gassed for research conducted by Beger and August Hirt.
SS-Sturmbannführer Schäfer at the head of the table in Lhasa

In all, Hale makes a clear case that the Schäfer mission had quite a bit of political activity underneath. We find that Sven Hedin (of whom Schäfer was a big fan in his youth, and himself a Nazi sympathizer who funded and supported the mission) was in contact with fellow Nazi supporter Erica Schneider-Filchner and her father Wilhelm Filchner in India, both of whom were later interned at Satara, while Bruno Beger made contact with Subhash Chandra Bose more than once. [Two of the pictures from the Bundesarchiv show a certain Bhattacharya - who appears to be a chemist working on snake venom at the Calcutta snake park - one wonders if he is Abhinash Bhattacharya.]

My review of Nowak's book must be uniquely flawed, as I have never managed to access it beyond some online snippets and English reviews. The war had impacts on the entire region, and Nowak's coverage is necessarily limited; there were many other interesting characters, including the Russian ornithologist Malchevsky, who survived German bullets thanks to a fat bird observation notebook in his pocket! In the 1950s Trofim Lysenko, the crank scientist who controlled science in the USSR, sought Malchevsky's help in proving his own pet theories - one of which was the idea that cuckoos were the result of feeding hairy caterpillars to young warblers!

Issues arising from race and perceptions are of course not restricted to this period or region. One of the less glorious stories of the Smithsonian Institution concerns the honorary curator Robert Wilson Shufeldt (1850–1934), who in the infamous Audubon affair turned his personal troubles with his second wife, a grand-daughter of Audubon, into a matter of race. He also wrote such books as America's Greatest Problem: The Negro (1915), in which we learn of the ideas of other scientists of the period like Edward Drinker Cope! Like many other obituaries, Shufeldt's is a classic whitewash.

Even as recently as 2015, the University of Salzburg withdrew an honorary doctorate that it had given to the Nobel prize-winning Konrad Lorenz for his support of the political setup and its racial beliefs. It should not be that hard for scientists to figure out whether they are on the wrong side of history, even if they are funded by the state. Perhaps salaried scientists in India would do well to look more carefully at the legal contracts they sign with their employers, especially the state. The current rules make government employees less free than ordinary citizens, but will the educated speak out, or do they prefer shackling themselves?

  • Mixing natural history with war sometimes led to tragedy for the participants as well. In the case of Dr Manfred Oberdörffer, who used his cover as an expert on leprosy to visit the borders of Afghanistan with the entomologist Fred Hermann Brandt (1908–1994), an exchange of gunfire with British forces killed him, although Brandt lived on to tell the tale.
  • Apparently Himmler's entanglement with ornithology also led him to dream up "Storchbein Propaganda" - a plan to send pamphlets to the Boers in South Africa via migrating storks! The German ornithologist Ernst Schüz quietly (and safely) pointed out the inefficiency of it purely on the statistics of recoveries!
  • July 2018 - an English translation of Nowak's book is now available.
The Cloisters at Gloucester Cathedral by Christopher JT Cherrington – Wikimedia Commons CC BY-SA 4.0

The winners of the UK section of the world’s biggest photo contest Wiki Loves Monuments have just been announced, with the judges awarding first prize to this stunning image of Gloucester Cathedral cloisters taken by Christopher JT Cherrington.

Chris has written a short blog post on the Wiki Loves Monuments website explaining how he took his winning image.

The 2018 contest

Wiki Loves Monuments is the world’s biggest photographic competition, with a total of 260,607 images submitted to the 2018 competition from all over the world. In the UK, 13,185 images taken by over 500 photographers were entered.  The competition aims to gather high quality, openly-licensed images of historic sites from all over the world.

The contest is an incredible opportunity to document and preserve our heritage for future generations, and this year saw a particular focus on the capture of internal shots, as well as of those sites which were lacking a freely-licensed image in Wikidata, the knowledge base which sits behind Wikipedia.

Among this year’s winners are three castles (all in Wales), two lighthouses (New Brighton and Bass Rock), and one museum (Arbroath).

This year saw a marked increase in submissions from Scotland, with more than double the number of entries compared with 2017. Wikimedia UK worked with Historic Environment Scotland’s publicly available database of listed buildings and scheduled monuments to add over 27,000 new eligible items to Wikidata, vastly improving the coverage of Scotland.

Pictures submitted to this year’s contest are already being used to illustrate Wikipedia articles, and Wikimedia UK would like to extend their warmest thanks to all those who submitted entries, helping to significantly improve access to this knowledge.

The top ten UK winners now go forward to the international judging stage of the contest, where they will compete against the best images from some 55 other countries. The first, second and third placed UK winners receive £250, £100, and £50 respectively, with seven Highly Commended winners receiving £25 each.

Additional prizes have been awarded for the best three images from England, from Scotland and from Wales. Archaeology Scotland has also sponsored a special prize for the best photograph of a site in Scotland: a free 1-year membership including the Archaeology Scotland Magazine as well as access to their learning resources.

One of the competition’s judges noted that the quality and variety of images submitted continues to increase:

“Each year the standard of entries for Wiki Loves Monuments UK rises. Browsing through the long list of almost 250 images was made enjoyable and easy because of the quality of the images and the variety of locations from across the British Isles on display, narrowing it down to a shortlist of just 10 was a much harder process. It is a real pleasure to have been involved in the judging of this competition and to see the skill and dedication of the winning photographers recognised.”

Find out more about the prizes on the Wiki Loves Monuments website.


The winners are as follows. Click the title for access to more details and high resolution copies on Wikimedia Commons.

UK winners

UK highly commended




Special prize

The most prolific photographer of “new” UK historic sites was Paul the Archivist, who uploaded more than 200 pictures of sites which hadn’t previously been represented in the database.

For the complete list of the UK award winners and shortlisted images, as well as access to high-resolution copies, see the winners’ page on Wikimedia Commons.

Security incident

17:49, Thursday, 25 2018 October UTC

What happened?
On September 24, 2018 a series of malicious edit attempts was detected. In general, these included attempts to inject malicious JavaScript, threatening messages, and porn.

Upon detection, it was determined that while the attacker’s attempts were unsuccessful, there was a vulnerability that, if properly leveraged, could affect users. Because of this vulnerability it was decided to temporarily disable translation updates until countermeasures could be applied.

What information was involved?
No sensitive information was disclosed.

What are we doing about it?
The security team and others at the foundation have been working to add security-relevant checks into the deployment process. While we currently have appropriate countermeasures in place, we will continue to work on more robust security processes in the future. Translation updates will go out with the train while we continue to address architectural issues uncovered during the security incident investigation.

John Bennett
Director of Security, Wikimedia Foundation

When you think about the work of art historians or genetics researchers, installing database software is not the first thing that comes to mind. Yet, from 19 to 21 September, Wikimedians, art curators, and scientists gathered at the New Museum in New York City’s Lower East Side for a three-day workshop to talk about an emerging technology designed to make storing and structuring data free and accessible. The focal point was an increasingly vital piece of the Wikimedia ecosystem that makes linked data possible for everyone: Wikibase.

Wikibase is a little-known standalone piece of software that powers Wikimedia’s popular new linked-data project Wikidata. (This is similar to how Wikipedia is powered by a general wiki software called MediaWiki, which is used everywhere from NASA to MuppetWiki.) Since 2012, Wikidata has been growing to fill an increasingly important role in the Wikimedia community: connecting, sharing, and providing tools for turning Wikipedia’s text strings into useful, searchable, machine-readable data. Wikidata’s content informs research and cultural heritage institutions, as well as digital tools like Google’s Knowledge Graph. None of this would be possible without Wikibase, which makes Wikidata’s linked open data project possible and practical.
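
To make "machine-readable" concrete: a Wikibase installation such as Wikidata exposes its statements as triples that can be queried with SPARQL. Below is a minimal sketch in Python. The endpoint URL and the `wdt:`/`wd:` prefixes are Wikidata's public ones; the helper function name and the example IDs (P31 "instance of", Q39715 which is, at the time of writing, "lighthouse") are our own illustrative choices, not anything from the post.

```python
# Sketch: building a SPARQL query for a Wikibase endpoint (here Wikidata's
# public query service). The query asks for items that are instances of
# (property P31) a given class, with English labels.

WDQS_ENDPOINT = "https://query.wikidata.org/sparql"

def build_instance_query(class_qid: str, limit: int = 5) -> str:
    """Build a SPARQL query for items that are instances of the given class."""
    return (
        "SELECT ?item ?itemLabel WHERE {\n"
        f"  ?item wdt:P31 wd:{class_qid} .\n"
        '  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }\n'
        "}\n"
        f"LIMIT {limit}"
    )

query = build_instance_query("Q39715")  # Q39715 = "lighthouse" on Wikidata

# To actually run it (requires network access), one could do roughly:
#   import urllib.request, urllib.parse, json
#   url = WDQS_ENDPOINT + "?" + urllib.parse.urlencode(
#       {"query": query, "format": "json"})
#   req = urllib.request.Request(url, headers={"User-Agent": "wikibase-demo/0.1"})
#   results = json.load(urllib.request.urlopen(req))
print(query)
```

The same query shape works against any third-party Wikibase instance; only the endpoint URL and the item/property IDs change, since each Wikibase mints its own identifiers.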

In the last few years, an expanding community of researchers, GLAMs, and other knowledge communities have been experimenting with using Wikibase for their own repositories of knowledge, distinct from the central Wikidata knowledge collected by Wikimedians. But this nascent community of Wikibase reusers is just emerging, and deploying an open source technology not originally designed for third-party use can be challenging.

The September Wikibase workshop gathered a subgroup of these pioneering data explorers to better understand how to realize Wikibase as a robust independent project, distinct from the Wikidata platform, and to turn it into a tool that can offer core infrastructure for the work of scholars and GLAMs as they collect, describe and analyze their diverse stores of both facts and artifacts. The key question: how can Wikibase reach and meet the needs of professionals across a broad community of emerging and established institutions?

The gathering was made possible through a generous grant by the Sloan Foundation.  It was hosted and sponsored by one of the early adopters of this technology, Rhizome (documented in our Many Faces of Wikibase blog series), and co-organized with Wikimedia Germany (Deutschland).

What do you talk about in a Wikibase workshop?

The Wikibase workshop began with talks by participants highlighting their work in the realms of Wikidata, Wikibase, and, more generally, representing and working with linked data. Participants came with varying levels of experience with Wikidata and Wikibase, from interested newcomers to intrepid early adopters of the technology. So the presentations created an important baseline understanding of what is happening with Wikibase and what its possibilities and pain points are.

Day one explored both the many opportunities and a number of underlying concerns about exploring and integrating a new technology. Are we the only ones experiencing challenges? Are there tools that could help us solve our problems? What kind of future can we build for increasing adoption of the software?

At the same time, presenters brought striking examples of advanced use. Digital art specialists described how they use Wikimedia tools for art preservation: Rhizome using Wikibase, and SFMOMA using MediaWiki. Michigan State University’s and Pratt University’s Linked Jazz Project showed off the power of Wikibase and linked data for exploring the cultural record. GLAM professionals from the Smithsonian, New York’s METRO library consortium, and York University highlighted the potential applications of Wikidata to their own varied and vast collections.

During the second day, we broke into working groups focused on understanding practical ways in which Wikibase can be better supported for arts and research communities. Tracks focused on UI/UX improvements, making Wikibase easier to install and use, developing a Wikibase community, and improving data modeling on Wikibase. Each group produced pages of documentation and specific recommendations that can be used as part of Wikibase’s development going forward (including, for example, concept sketches for changes to the platform that would help GLAM and humanities researchers use Wikibase).

By day three, the community in the room had a strong sense of solidarity and shared purpose: we can imagine a future in which Wikibase empowers GLAM and research institutions to strengthen the exchange and connection of knowledge. We wrapped up the workshop with a session of sharing and reflecting, and then finished documenting the event, with next steps and links to notes on the conference meetup page, technical feature requests in Phabricator tickets, and this very blog post.

It’s more than just gathering together—it’s about creating a community

An overwhelming theme throughout the workshop was the question: what’s next? In the wrap-up conversations, the answers came back to facilitating a growing community of users who want to apply the Wikibase technology to the work of researchers and GLAMs. Key to forming that community will be figuring out what stories we can tell, and how we can tell them to new audiences.

Most of the participants in the New York workshop were new users of Wikibase - very much still learning. In a community-driven environment like Wikimedia, sometimes the best outcome of an event is developing a larger community of knowledge-holders: individuals able to explain, support and facilitate understanding of a practice or set of projects. With the Wikibase workshop, we helped a larger group of community members share their knowledge and gain knowledge from others. So, the next time a question arises on the Wikibase users mailing list, or a colleague expresses an interest in Wikibase, they know who else to connect with. In short: the workshop helped produce more nodes in our growing Wikibase linked data network.

This budding community would not have been able to form without the amazing facilitation of Dragan Espenschied (Q111053 on Wikidata) from Rhizome and Sandra Müllrick from Wikimedia Deutschland, along with their teams. The organizers facilitated a collegial knowledge exchange that can lead to a growing web of Wikibase users and community participants.

Alex Stinson, Senior Strategist, Community Programs, Wikimedia Foundation
Jake Orlowitz, The Wikipedia Library, Community Programs, Wikimedia Foundation
Jens Ohlig, Software Communications Strategist, Software Development, Wikimedia Germany (Deutschland)

To learn more about Wikibase or the growing community of Wikibase users, join the conversation on the Wikibase community mailing list  or the Wikidata community mailing list, follow the Wikidata Weekly status updates, and share your stories with jens.ohlig[at]wikimedia[dot]org.

What’s something very few people know about PHP?

20:02, Tuesday, 23 2018 October UTC

Answered in Quora:

Q: What’s something very few people know about PHP?

It is mind-bogglingly popular for web development. That popularity hasn’t diminished even though conventional wisdom says otherwise…

Over a decade ago, I said about 40% of the top 100 websites use PHP — a number I pulled out of my ass — but nobody (not even the Ruby on Rails developers I pissed off) argued with that spurious claim. In 2009, Matt Mullenweg, the creator of WordPress, became curious about my claim and did a survey of Quantcast’s top 100 sites — he got almost exactly 40. Even today, among the 10 most important websites, four use PHP as their language of choice — 40% again.

Overall, almost 80% of websites whose server-side language can be detected run PHP, and that share has held steady for years! Newer web languages such as Ruby or NodeJS have only grown at the expense of other languages such as ASP, Java, or Perl.

Just one single application written in PHP, WordPress, is used by over 30% of all websites on the entirety of the internet. That’s more than double the market share it had back when I last worked at Automattic/WordPress in 2011! It grew until it saturated its entire market — over 60% of all CMSs. In the CMS market as a whole, PHP-based CMSs occupy positions 1, 2, 3, 6, 7, and 8 in the top 10. The most popular non-PHP-based CMS is both closed source and sitting at only a 2.5% share.

It was estimated back in 2009 that there were 5 million PHP developers worldwide. It’s difficult to make this estimate today, but it’s obvious that that number has also held steady or grown.

These last few years, I’ve been commercially working in Ruby (on Rails), GoLang, NodeJS (for static servers), and Python (Django), but PHP is still also my love in that love/hate relationship.

Come see my talk in February 2019 at SunshinePHP in Florida!

#Science - Ladies, you work together

14:07, Monday, 22 2018 October UTC
Yesterday I singled out Paola Giardina because she was a co-author of someone who had SO many co-authors that I could not manage the information that was in there. Yesterday Paola had a large number of co-authors shown in white (no gender info). Today there are even more present.

One thing is pretty obvious in what I see: women are more likely to work with women than with men. When you want to analyse this, it is important to know the data it is based on. At this time 31% of the people with an ORCiD identifier are female. When you consider probability, it is likely that some 31% of the people who have not yet been associated with a gender will be female as well.
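
The probabilistic claim can be made concrete with a back-of-the-envelope sketch. Only the 31% share comes from the text; the count of unlabelled profiles below is an invented number, purely for illustration.

```python
# Back-of-the-envelope: if 31% of gender-labelled ORCiD profiles are female,
# and the unlabelled profiles are assumed to follow the same distribution,
# the expected number of women among them is simply the share times the count.

FEMALE_SHARE = 0.31           # share among already-labelled profiles (from the text)
unlabelled_profiles = 10_000  # invented count, purely for illustration

expected_women = FEMALE_SHARE * unlabelled_profiles
print(f"Expected women among unlabelled profiles: {expected_women:.0f}")
```

The assumption doing the work here is that the unlabelled profiles are a random sample of the whole; any bias in which profiles get gender-labelled first would shift the estimate.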

In many universities more than 50% of the students are women. All of them get involved in research; all students are involved in the production of papers and all of them are entitled to an ORCiD and a Wikidata identifier.

So when we want to express the notability of women in modern science, all we have to do is ask any and all scientists to make their publication details part of the open record. Slowly but surely, it will become obvious who and where the best science is produced and who collaborates with whom.
The most important thing religion has over science? Its papers can be read. Sources like the Bible and the Quran can be read for free. You can get *your* copy from many true believers. A copy is in your library. With science, the papers that could prove to you that the goldfish should be classified as endangered are behind a paywall. It is only your common sense that might say: "Hey, wait a minute.."

When Wikipedia insists on its sources, they are only functional when these sources can actually be read. This is why the Internet Archive plays such a vital role in maintaining the validity of stated facts.

Some scientists think that "the public" cannot read scientific papers. They forget that even for scientists, a paper that cannot be read is a paper that does not exist in their contemplations. The public does read scientific papers. The Cochrane crowd, for instance, reads papers and checks particular premises for validity. We know that scientific research on coronary disease was biased toward males and that, as a consequence, women still die. A bias like that is what they look for; it is why they reject many papers as basically *not* valid.

There is a lot to discuss about what scientific publishing should be and how it should be funded. The bottom line is that when a publication is not available for anyone to read, the facts do not matter. Why believe vaccines are safe when the publications that prove it are behind a paywall?

Tech News issue #43, 2018 (October 22, 2018)

00:00, Monday, 22 2018 October UTC
2018, week 43 (Monday 22 October 2018)

weeklyOSM 430

09:36, Sunday, 21 2018 October UTC




A novel way of visualizing the distortion of country size of Mercator projections [1] | © Neil Kaye


  • Satoshi IIda (User nyampire) announced on Twitter (1, 2) (ja) (automatic translation) a new aerial image source in Japan. OSM now has permission to use aerial imagery published as open data by Itoigawa city and Atsugi city. He has published this imagery as map tiles and explains how to use it on the wiki pages for Itoigawa city and Atsugi city (ja) (automatic translation).
  • The voting for the tag telecom=* has started. Values can include exchange, service_device and connection_point. This should enable mappers to map last mile networks and related equipment like DSLAMs (Wikipedia link).
  • A user of StreetComplete spotted complicated opening hours that don’t fit within OSM’s limit of 255 chars. Whilst all agree that the issue is not limited to opening hours, opinions as to whether such a level of detail should be stored in one value differ.
  • User jeisenbe proposes a comprehensive handling of default language format for names and places within a region. The voting on the proposal is currently under way: you’re welcome to review it and express your vote.
  • Nicolas Chavent produced a Twitter moment on the two weeks of capacity-building action on OSM, OpenData and free geomatics in Port-au-Prince (Haiti), organised in coordination with members of the association Communauté OpenStreetMap Haiti Saint-Marc (C.OSMHA-STM) for people from the academic, research and development sectors and the local OSM community, thanks to the support of the Economic and Digital Directorate of the International Organization of La Francophonie.
  • The new value landuse=governmental is proposed to mark land used by government bodies.


  • On October 11 Nomad Maps finished his cartographic cycle tour through the Andes, having cycled 5000 kilometres. As a result, more than 10,000 new objects were mapped and more than 100,000 photos added to Mapillary. He had 17 meetings with OpenStreetMap contributors in Colombia, Ecuador and Peru. As the cherry on the cake, Alban Vivert (@Nomadmapper) participated in SOTM Latam 2018 in Buenos Aires to present the results of the expedition and to meet other members of the OSM community. More info on Nomad’s blog and the website of Nomad Maps.
  • The “Schokofahrt” (chocolate ride) is a decentralized private bicycle tour for the emission-free transport of chocolate, to promote sustainable mobility and CO₂-neutral transport. At this year's edition, some stairs that got in the way of the route suggested by the OSM-based routing app BikeCitizens were complicated to overcome with the heavily loaded cargo bikes. The map error was rapidly corrected.
  • Here are the videos for the State of the Map Latam 2018 sessions on September 24th in Buenos Aires, Argentina.
  • Sev OSM tweeted about two weeks of OSM and OpenData training in Conakry, Guinea.


  • Majka announced that an import of post boxes in the Czech Republic is planned. The word import may be wrong here, as it is mainly intended to update collection times and re-tag currently inactive post boxes as disused:amenity=post_box.
  • Dannykath from Development Seed wrote about importing data into OSM. The write-up covers topics like what types of data make sense to import, what is involved in the import process, what needs to be considered before starting, how to document the import, and community involvement.
  • The Denver Regional Council of Governments is making a dataset available for importing to OpenStreetMap. It contains over 1 million building footprints, with an accuracy of 6 cm. Several import-a-thons will be organised for volunteers to review and ensure consistency with existing OSM data.

OpenStreetMap Foundation

  • Michael Reichert started a discussion on the OSMF list about the next Board elections: he suggested a slight change in the election calendar and a questionnaire for the candidates, instead of email flooding the mailing list. His proposal is getting positive feedback and further input on the subject.
  • Christine Karch from the SotM Working Group wrote a comprehensive summary (de) (automatic translation) about her activity over the past year. The article, titled ‘Way to SotM 2018’, covers the preparations for SotM 2018 and other tasks related to her job in this working group. She details how she worked on the scholarship applications, the organisation and evaluation of the submitted abstracts for the SotM workshops, lightning talks, talks and other events, the work on the tender for the 2019 SotM, and many related tasks. In addition she gives a lot of insight into what goes on behind the scenes. She explains how helpful it was to meet Nicolas Chavent and Séverin Menard to find a better balance and give the French/African part of OSM some more room, and also explains why she thinks that no Code of Conduct for OSM is required. Some minor, perhaps more controversial, points in the article are her feeling that HOT is over-represented in areas like scholarships and diversity.
  • User Stereo (Guillaume Rischard) wrote in his user diary about the new Organised Editing Guidelines that were drafted by the Data Working Group and are on the OSMF board meeting agenda of Oct 18th. In his post he explains the context and motivations, but does not respond to individual criticism of the content of the newly drafted guidelines or of the lack of community involvement during their development.


  • This year’s Chaos Communication Congress (Leipzig, 27th to 30th of December) will host once again an OpenStreetMap-Assembly. More details on this event on the dedicated wiki page (in German).
  • The Europe Direct Office in Cuneo (Piedmont, Italy) has organised (it) (automatic translation) a full day around OSM, filled with talks, workshops and a mapping party. OSM will be presented by Marco Brancolini, OSM Piedmont coordinator for Wikimedia Italia; Alessandro Palmas, OSM Project Manager for Wikimedia Italia; and Cristiano Giovando, World Bank and HOT consultant.
  • A reminder that the upcoming FOSS4G event is on 25 October in Brussels. The planned schedule can be found here.

Humanitarian OSM

  • The 6th GeOnG forum will take place from 29 to 31 October in Chambéry, France. GeOnG is organised by CartONG and covers topics such as mapping and GIS, mobile data collection and many other technology-related subjects of interest to the humanitarian and development sector.
  • The English city of St Albans is seeking crowdfunding to help visually impaired people with content for the app Soundscape. According to the article, the money will be used for a mapathon to train volunteers in OSM, verification and ongoing data maintenance.


  • The OSM-based London cycle parking map got an extra layer that shows locations where bicycles were stolen, based on police reports.
  • The New York Times used aerial images of the city of Mexico Beach taken before and after Hurricane Michael, together with Microsoft’s building footprints, to visualise the extent of the damage. The NYT identified 440 buildings, of which 237 were destroyed and an additional 99 severely damaged.
  • User SK53 wrote a blog post about how he created an OSM based raster map of the Irish Vice Counties with the proper Irish Grid, hill shading and hypsometric tints for use with MapMate, software intended to record, map, analyse and share biological sightings.
  • With Locator Map, you can now easily create locator maps in Datawrapper. A locator map is a simple map showing the position of a particular geographic area within its larger and presumably more familiar context. Locator Map is a new map editor that makes this process as easy as creating bar charts. Try it here.


  • Fabian Kowatsch introduces a new ohsome dashboard prototype as a preview of what is possible with HeiGIT’s ohsome OpenStreetMap history analytics platform, which is deployed on a distributed cloud system using Apache Ignite. You can explore the evolution of any available OSM tag over time in arbitrary areas of Germany and calculate summary statistics.


  • In a blog post Alex-7 pointed to a prototype of an OSM map that generates an Open Location Code (OLC) for every position and, vice versa, lets you look up a position based on an Open Location Code. It works without the need to add OLC-related data to OSM.
  • ENT8R has added the filter parameters user name, user ID and time to the search function of the OSM API for Notes and documented the new functionality in the OSM wiki.
  • Trafford Data Lab developed and documented a new Plugin for the Leaflet JavaScript library to show areas of reachability based on time or distance for different modes of travel using the Openrouteservice isochrones API. Included are various examples to get you started.
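Since a plus code is a purely algorithmic geocode, the Open Location Code conversion mentioned above needs no database at all. Below is a minimal, illustrative Python encoder written from the published OLC specification; it is not the prototype’s own code, and it handles only standard full-precision 10-digit codes.

```python
# Minimal Open Location Code (plus code) encoder, written from the published
# OLC spec as an illustration; this is NOT the code behind the prototype above.
OLC_ALPHABET = "23456789CFGHJMPQRVWX"  # base-20 digits, ambiguous characters removed

def encode_olc(lat: float, lng: float) -> str:
    """Encode a WGS84 position as a standard 10-digit plus code."""
    # Shift into positive ranges: latitude [0, 180), longitude [0, 360).
    lat = min(max(lat + 90.0, 0.0), 180.0 - 1e-9)
    lng = (lng + 180.0) % 360.0
    code = ""
    resolution = 20.0  # the first digit pair covers a 20 x 20 degree cell
    for _ in range(5):  # five pairs of digits -> ten digits
        lat_digit, lat = int(lat // resolution), lat % resolution
        lng_digit, lng = int(lng // resolution), lng % resolution
        code += OLC_ALPHABET[lat_digit] + OLC_ALPHABET[lng_digit]
        if len(code) == 8:
            code += "+"  # the separator always follows the eighth digit
        resolution /= 20.0
    return code
```

Each pair of digits refines the cell twenty-fold in both axes, which is why nearby places share a code prefix; `encode_olc(0.0, 0.0)`, at the intersection of the equator and the prime meridian, comes out as `6FG22222+22`.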

Did you know …

  • … Pascal Neis’ tool that generates a list of suspicious changesets based on different adjustable criteria? The results are also available as an RSS feed.
  • … the OSM changeset analyser OSMCha? It allows you to keep track of edits in changesets. Many different filters can be used when visualising changes in changesets.
  • … that you can invoke a username based filter with user:username when using
  • … the OSM and NASA based map that simulates rising sea levels following a temperature increase? It allows you to visualise land loss caused by global warming.

Other “geo” things

  • Benjamin Schmidt wrote the article Data-driven projections: Darwin’s world, which shows the world map in a projection centred on the route of Charles Darwin’s voyage aboard HMS Beagle. The result is an interesting but very unfamiliar perspective on the globe, with North America and Eurasia looking insignificant at the edge of the map.
  • The Revelator published an article showing the estimated regional change in rainfall and snow up to 2050 caused by climate change. The predicted change is displayed on an interactive map at county level.
  • Microsoft has made a strategic investment in ride-hailing and on-demand services company Grab as part of a deal that includes collaborating on big data and AI projects.
  • [1] Neil Kaye published an animated gif on Twitter and Reddit that demonstrates the size of a country as shown in a Mercator projection compared to its true dimension.
  • A reader of MacRumors took photos of a man walking along a street in San Francisco wearing a rucksack full of measurement devices: LIDAR, GNSS and cameras for Apple Maps. Cars are also used for this purpose. In June 2018 Apple announced that it will no longer use third-party data in the future (we reported).

Upcoming Events

Where | What | When | Country
Karlsruhe | Karlsruher Hackweekend | 2018-10-20 – 2018-10-21 | Germany
Colorado Springs | Denver Importathon | 2018-10-22 – 2018-10-25 | United States
Bremen | Bremer Mappertreffen | 2018-10-22 | Germany
Arlon | Rencontre des contributeurs du Pays d’Arlon | | Belgium
Nottingham | Pub Meetup | 2018-10-23 | United Kingdom
Arlon | Espace public numérique d’Arlon – Formation Consulter OpenStreetMap | 2018-10-23 | Belgium
Cologne | Köln Stammtisch | 2018-10-24 | Germany
Lübeck | Lübecker Mappertreffen | 2018-10-25 | Germany
Manila | 【MapaTime!】 @ co.lab | 2018-10-27 | Philippines
Rennes | Recensement des commerces du centre-ville | 2018-10-28 | France
Toronto | Mappy Hour | 2018-11-05 | Canada
Bengaluru | State of the Map Asia 2018 | 2018-11-17 – 2018-11-18 | India
Melbourne | FOSS4G SotM Oceania 2018 | 2018-11-20 – 2018-11-23 | Australia

Note: If you would like to see your event here, please put it into the calendar. Only data which is there will appear in weeklyOSM. Please check your event in our public calendar preview and correct it where appropriate.

This weeklyOSM was produced by Anne Ghisla, Nakaner, Polyglot, Rogehm, SK53, Softgrow, SunCobalt, TheSwavu, YoViajo, derFred, geologist, jinalfoflia.

#Wikidata - the missing #Elsevier papers

14:31, Friday, 19 October 2018 UTC
It started with a tweet: "There is also a professor Elsevier". A search found that Professor Cornelis J. Elsevier works at the University of Amsterdam. He did not exist on Wikidata, and only one paper could be found for him.

Adding this one paper was done with the "Resolve Authors" tool. The Scholia page for Mr Elsevier showed a few co-authors, and in addition several "missing co-authors" could be found.

In order to show more papers for Mr Elsevier, more papers needed to be imported into Wikidata. This can be done for authors with an ORCID identifier, particularly those with no known gender, who so far have not received much attention. Simply running the "SourceMD tool" for them will add additional papers and associate other authors with these papers as well.
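The link between an ORCID identifier and the papers already on Wikidata can be inspected with a SPARQL query. The sketch below only builds the query string, using the real Wikidata properties P496 (ORCID iD) and P50 (author), for pasting into query.wikidata.org; actually running it needs network access, and the ORCID shown in the usage note is ORCID's documentation example, not one from this post.

```python
# Build a Wikidata SPARQL query listing the works of an author identified by
# an ORCID iD. P496 (ORCID iD) and P50 (author) are real Wikidata properties;
# the resulting query is meant to be pasted into query.wikidata.org.
def papers_by_orcid_query(orcid: str) -> str:
    return (
        "SELECT ?paper ?paperLabel WHERE {\n"
        f'  ?author wdt:P496 "{orcid}" .\n'  # the item whose ORCID iD matches
        "  ?paper wdt:P50 ?author .\n"       # works listing that item as author
        '  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }\n'
        "}"
    )
```

For example, `papers_by_orcid_query("0000-0002-1825-0097")` produces a query that lists every work whose author item carries that ORCID iD; each SourceMD import makes this result set grow.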

This is an iterative process; for no particular reason I focused on Mrs Barbara Milani. Processing her co-authors meant that more co-authors came out of the woodwork; at this point, 13 new authors with an ORCID identifier popped up. Once they are processed, more papers will be known to Wikidata, and given their relation to Mrs Milani there is a reasonable chance that these papers link to Mr Elsevier as well.

At this time, Mr Elsevier is known to have seven publications.

Josh Lovejoy @jdlovejoy, in the first minutes of this video about human-centered machine learning, explains “artificial intelligence is really anything where there is an automated decision being made” and cites, as examples, a toaster and automatic doors. Yes, your toaster is AI! And then: “what’s distinct about machine learning as a subset of AI is that decisions are learned”. As simple as that. Refreshing.

You might also want to check the very interesting articles from Google’s People + AI Research team.

The news that optical physicist Donna Strickland did not have a Wikipedia page before winning the Nobel Prize in Physics brought renewed attention to Women in Red, a long-standing volunteer effort to add more biographies about women to the encyclopedia.

After the announcement, the Women in Red WikiProject had one of their best weeks ever, says Rosie Stephenson-Goodknight, the co-founder of Women in Red.

“Statistics […] show an increase in the number of women’s biographies during the week immediately after the Nobel Prize announcement,” she said. “I’ve also seen a dramatic increase [in interest] on social media.”

The news that more people are interested in contributing biographies about women to Wikipedia is gratifying to Stephenson-Goodknight, who co-founded Women in Red in 2015 after learning that only 15 percent of biographies on Wikipedia at the time were about women. (That number has since jumped to 17.67 percent, meaning an average of 72 new articles on women were added to English Wikipedia every single day for the past 3.5 years.)

In the past three years, Women in Red has collaborated with outside institutions like museums and libraries that want to put on edit-a-thons, and developed lists of “redlinks”—non-existing Wikipedia articles—of women who should have material written about them. But the group doesn’t stop at biographies.

“Our scope includes women’s works (such as the paintings they painted, the schools they founded, the conferences they convened), as well as women’s issues (such as women’s suffrage and women’s health),” says Stephenson-Goodknight.

A helpful space for newcomers

Women in Red is known for helping newcomers navigate the technical rules of Wikipedia—which can be overwhelming for new contributors. Sue Barnum, a volunteer Librarian-in-Residence with Women in Red, points to the group’s friendly and helpful nature as a key reason why it has been so successful.

“We support one another and help each other with our articles, references and technical details. If you need help, we’re going to help or find someone who can,” she says. “I’ll try to help people get access to articles or books behind paywalls. Sometimes I’ll even go to nearby libraries to access the actual book to help people verify their sources if the book or journal is only available offline.”

The focus is on why women matter

Editor SusunW, a founding member of Women in Red, recently wrote an essay about why she focuses on writing about women on Wikipedia.

“I learn as much from writing women’s biographies as I impart from telling their stories,” she writes. “For example, in the pre-internet world, the international links between people and the organizations in which they participated were much stronger than you might imagine. The analytical part of researching the interconnections, and reward of working with editors who want to improve articles, is a motivating factor to me—as is the hope that the women in generations who follow will grow up knowing that women have always been actively involved in the world around them and were not passively allowing the world to go by.”

Making women—and their role in history—more visible is a motivating factor for many participants in Women in Red.

“I grew up believing that women didn’t do anything interesting,” says Barnum. “It’s sad that I believed that, because it isn’t true. I believed this because women’s contributions become invisible, especially after their deaths. There are many articles I’ve worked on where a woman was quite nationally famous during her lifetime, but after she dies, she somehow “fails” to make it into the canon that describes the subject she was involved in. This is really tragic and I’m glad that there are historians out there writing about women who have been hidden in history.”

You can make a difference

Inspired to join Women in Red? There are many ways to get involved. For starters, Women in Red has put together a “primer for creating women’s biographies” and “Ten Simple Rules for Creating Women’s Biographies”.

If you are looking for a place to jump in, Barnum recommends starting with the lists that Women in Red has put together of women who are notable by Wikipedia’s standards and who need biographies written about them.

“Pick someone from a subject area you enjoy writing about and then make sure you have several reliable sources to back up your writing,” she says.

SusunW adds that you don’t necessarily need to start with a full biography. “If there do not appear to be sufficient sources to add a standalone biography on a woman, she can be added to events she participated in and organizations she was involved with, provided reliable sourcing can confirm her activities.”

For example, if a woman participated in an academic conference, adding her name to the list of participants—with proper sourcing—may help someone write a biography about her at a later date.

And Stephenson-Goodknight says you can always update an existing article. “Add a reference, add an internal link to another article, fix the punctuation, or improve the opening paragraph,” she says.

If you need help

Don’t be overwhelmed if you start to edit and find yourself confused. There are many ways to get help! You can post on the talk page of Women in Red, as well as the talk pages for Rosie, Sue, and SusunW. (A talk page is like a message board where people communicate about a topic or article.)

“If you get stuck or have questions, remember that all of us were new once and others helped us, so we’re glad to respond to your questions,” says Stephenson-Goodknight. “You can also reach me via Twitter or Facebook or email.”

And if you’d rather work with people in person, there are opportunities for that too.

“If you live near a city which has a Wikimedia Affiliate (Chapter, Thematic Organization, or User Group), check in with them regarding their events schedule. There’s an international list available on this Meetup page. Or ask us at Women in Red to assist you in the search,” says Stephenson-Goodknight.

Helping beyond editing

And one last thing: you don’t need to edit Wikipedia to meaningfully contribute to Women in Red. You can tweet or post about new articles on social media (including the handle @WikiWomenInRed), help find archival material, or help access photos that are licensed properly.

“Having people who are willing to reach out to individuals to secure a license for a picture to be used on Commons is very helpful,” says Barnum. “Others may have access to archives or books that are offline and can help provide access to other editors.”

And your work—whether an edit, a tweet, or a photo—is meaningful, and makes a difference on Wikipedia.

“We are making a difference, one article at a time, making for incremental corrections to the systemic bias depicted in the written canon,” says Stephenson-Goodknight. “I can’t imagine doing anything more worthwhile with my free time.”

Interview by Melody Kramer, Senior Audience Development Manager, Communications
Wikimedia Foundation

Wikipedia is a mirror of the world’s gender biases

17:00, Thursday, 18 October 2018 UTC

This post ran in the Los Angeles Times on 18 October 2018.

When Donna Strickland won the Nobel Prize this month, she became only the third woman in history to receive the award in physics. An optical physicist at the University of Waterloo, Strickland is brilliant, accomplished and inspiring. To use Wikipedia parlance, she is very clearly notable.

Except that, somehow, she wasn’t. Despite her groundbreaking research on a method of generating laser beams with ultrashort pulses, Strickland did not have a Wikipedia page until shortly after her Nobel win.

Perhaps more disconcerting, a volunteer Wikipedia editor had drafted a page about Strickland in March only to have it declined in May. The reason: There wasn’t enough coverage of Strickland’s work in independent secondary sources to establish her notability. Her achievements simply weren’t documented in enough news articles that Wikipedia editors could cite.

Before Wikipedia points a finger that might rightly be pointed back at us, let me acknowledge that Wikipedia’s shortcomings are absolutely real. Our contributors are majority Western and mostly male, and these gatekeepers apply their own judgment and prejudices. As a result, Wikipedia has dozens of articles about battleships and not nearly enough on poetry. We’ve got comprehensive coverage on college football but significantly less on African marathoners.

At the same time, Wikipedia is by design a living, breathing thing—a collection of knowledge that many sources, in aggregate, say is worth knowing. It is therefore a reflection of the world’s biases more than it is a cause of them.

We are working to correct biases in Wikipedia’s coverage. For instance, in 2014, Wikipedia editors evaluated all the biographies on English Wikipedia and found that only about 15% of them were about women. To rectify the imbalance, groups of volunteers, including the WikiProject Women Scientists and WikiProject Women in Red, have been identifying women who should have pages and creating articles about them.

Today, 17.82% of our biographies are about women. This near 3% jump may not sound like much, but it represents 86,182 new articles. That works out to 72 new articles a day, every single day, for the past three and a half years.

But signs of bias pop up in different ways. A 2015 study found that, on English Wikipedia, the word “divorced” appears more than four times as often in biographies of women as in biographies of men. We don’t fully know why, but it’s likely down to a multitude of factors, including the widespread tendency throughout history to describe the lives of women through their relationships with men.

Technology can help identify such problems. Wikipedia articles about health get close attention from our community of medical editors, but for years, some articles on critical women’s health issues, such as breastfeeding, languished under a “low importance” categorization. An algorithm identified this mistake.

But there is only so much Wikipedia itself can do. To fix Wikipedia’s gender imbalance, we need our contributors and editors to pay more attention to the accomplishments of women. This is true across all under-represented groups: people of color, people with disabilities, LGBTQ people, indigenous communities.

Although we don’t believe that only women editors should write pages about other women, or writers of color about people of color, we do think that a broader base of contributors and editors—one that included more women and people of color, among others—would naturally help broaden our content.

Wikipedia is founded on the concept that every individual should be able to share freely in the sum of all knowledge. We believe in “knowledge equity,” which we define as the idea that diverse forms of knowledge should be recognized and respected. Wikipedia is not limited to what fits into a set of encyclopedias.

We also need other fields to identify and document diverse talent. If journalists, book publishers, scientific researchers, curators, academics, grant-makers and prize-awarding committees don’t recognize the work of women, Wikipedia’s editors have little foundation on which to build.

Increasingly, Wikipedia’s content and any biases therein have ramifications well beyond our own website. For instance, Wikipedia is now relied upon as a major source in the training of powerful artificial intelligence models, including models that underlie common technologies we all use.

In such training processes, computers ingest large data sets, draw inferences from patterns in the data and then generate predictions. As is well understood in the programming world, bad or incomplete data generate biased outcomes. This phenomenon is known by the acronym GIGO: garbage in, garbage out.

People may intuitively understand that Wikipedia is a perennial work in progress. Computers, on the other hand, simply process the data they’re given. If women account for only 17.82% of the data, we may find ourselves with software that thinks women are only 17.82% of what matters in the world.

It is true that Wikipedia has a problem if Donna Strickland, an accomplished physicist, is considered worthy of a page only when she receives the highest possible recognition in her field. But this problem reflects a far more consequential and intractable problem in the real world.

Wikipedia would like to encourage other knowledge-generating institutions to join us in our efforts to balance this inequity. We may not be able to change how society values women, but we can change how women are seen, and ensure that they are seen to begin with. That’s a start.

Katherine Maher, Executive Director
Wikimedia Foundation

Image adaptation by Eryk Salvaggio/Wiki Education Foundation, CC BY-SA 4.0; underlying photo by Darapti, CC BY-SA 3.0.

‘Can my business have a Wikipedia page?’

14:57, Thursday, 18 October 2018 UTC
Image from UK Black Tech’s stock photo project to increase Open Licensed photos of black people in business and tech – Wikimedia Commons CC BY-SA 4.0

So you’re a business. You’ve got a company that’s number 3 in the UK at making spoons, or something like that, and you want to make sure that when people search for your company, they can see you’re legit, because a Wikipedia page confers an aura of legitimacy on your noble pursuit of creating the best spoons in the land.

You tried to make a page for your spoon business before, but for some reason it disappeared. No doubt the anti-spoon lobby have got their knives out for you in their cynical attempt to stop people using your quality products. You’ve found the charity responsible for Wikipedia in the UK (that’s us!) and you want to know how you can get your spoon business listed on Wikipedia.

I’m afraid that we may have some bad news for you. You see, Wikipedia is not a business directory. It’s not the Yellow Pages, or whatever website has put the Yellow Pages out of business. So you probably need to stop and think: ‘is my business notable enough to be in an encyclopaedia?’ It’s estimated that there are somewhere around 200 million companies in the world, so only a very few of these will be famous enough to appear in an encyclopaedia.

Maybe you don’t know the answer because you’re not sure what makes something notable enough to be on Wikipedia. Well, luckily we have a set of Notability guidelines for that.

The basic criterion for notability is that “a topic has received significant coverage in reliable sources that are independent of the subject”. So I’m afraid that links to your own site, quotes in articles about another subject, or references to other self-published sources like blogs, petitions or social media posts just won’t meet this standard.


A presentation on verifiability and notability – Wikimedia Commons CC BY-SA 4.0

This standard isn’t supposed to be easy to meet. Your business might be doing really well, it might make the biggest spoons in Britain, but if you’ve not had The Times, or at least the local newspaper, down to cover your amazing spoon production in an article which is specifically about your business, then as far as Wikipedia is concerned it’s not going to be notable enough. But don’t be disappointed. If you want your spoons to be famous, you need to concentrate on getting some media coverage for those spoons. Wikipedia can only cover what has already been published elsewhere.

If your company is notable, it’s likely that someone will eventually get around to creating a page for it. You’re just going to have to be patient. If you try to create the page yourself, without really understanding the core rules of Wikipedia, you might make some mistakes, like putting in Non-Neutral Point of View language, which will show others that you might be connected to the subject matter, and result in the article’s deletion for Conflict of Interest (CoI) editing.

You should also most definitely not pay someone to create a page for you. Paid editing, without a declaration that someone is being paid to edit, is against the rules. If the page for a company keeps getting made and then deleted, editors may ban the creation of the page indefinitely. In 2015 Wikipedia editors uncovered a group trying to make money by scamming businesses by telling them they could make and protect their company’s Wikipedia articles.

The main lesson in this is that if you are going to use Wikipedia properly, you really have to understand how it works. You can’t just stumble into it and start changing important things without appreciating what you’re allowed to change and what kinds of edits are acceptable. On English Wikipedia, you can’t even create new articles any more without a registered account with a certain number of edits.

We recognise that this can be frustrating and off-putting to some businesses which could theoretically have good reason to interact with Wikipedia. However, there are things your company could consider doing to make it more likely that someone will create a page for you. You could consider releasing photos of your company or its products under an open Creative Commons licence, meaning that these photos can be used on Wikipedia.

All the content on Wikipedia is shared under open licenses, so we can’t use any media about your company unless you publish it specifically under an open license. The Welsh music label Sain Records released the cover art of many of their Welsh-language records under open licenses, along with 30-second clips of some of their artists’ songs. This means there is now much better coverage of the company and its products on Wikipedia.

A guide to the different types of Creative Commons Open License, and what you are allowed to do with the content published on each one. Image via ANDS.

I have been trying to do outreach to the music industry to encourage them to donate content, like photos of their artists, which Wikipedia editors can use to improve pages on notable musicians. There are lots of black and ethnic minority musicians who don’t have pages on Wikipedia, and we would like to change that. Again, we don’t encourage people who work for music companies to make pages about their artists, but if those companies would like to work with Wikimedia UK, we could organise Wikipedia editing workshops for fans of the artists, and use photos donated by the artists’ companies to create pages for notable people who deserve to be on the encyclopaedia.

We’ve already had a very fruitful collaboration with the Parliamentary Digital Service, who released official parliamentary photos of MPs in 2017, and you will now see that most MPs’ pages use their official photograph in the infobox on the right of the page.

The best way to learn how Wikipedia works is to get involved. Come to our events. Come to meetups to talk to other Wikimedians and ask their advice. The community is huge, and has over the past 18 years created a complex set of rules to govern the living, constantly changing nature of Wikipedia. We think it’s an amazing achievement, and that’s why we treat it as so much more than an advertising platform.

The three kinds of data scientists

05:00, Thursday, 18 October 2018 UTC

A little-known bird artist

03:06, Thursday, 18 October 2018 UTC

One of the hazards of contributing to Wikipedia is that one does not read enough of what is on it. Bumping into a series of interesting paintings of South African birds, I looked up the artist marked as Sergeant C. G. Davies. It turns out that he was Claude Gibney Finch-Davies, a somewhat lesser-known artist. Born in Delhi in 1875, he went to England and joined the army in South Africa. Somewhere along the line he picked up an interest in birds and art. A couple of biographies have been written about him by A. C. Kemp, but it seems he has remained largely unknown, partly due to something he did that blemished his career and perhaps led to his death/suicide. His keen interest in illustration led him to remove plates from books in the museums and libraries that he referred to. Today there are probably art collectors who must be eager to steal this man's paintings.

The Natural History Museum in London holds some of his unpublished notebooks and paintings. Fortunately for us, his paintings are out of copyright, since 70 years have passed since his untimely death. Some of his paintings can be found here on Wikimedia Commons.

His biography on Wikipedia is interesting but some of the details seem to be untraceable - it says [emphasis mine]:
He was born in Delhi, India, the third child and eldest son of Major-General Sir William and Lady Elizabeth B. Davies née Field. His father later became Governor of Delhi and was awarded the Order of the Star of India, while his mother was said to be an expert on Indian snakes.
The names of the mother and father are confirmed elsewhere as well. But it is odd that no further information is found on his father in the ODNB. Does anyone know further details and sources?

What insulates Wikipedia from the criticisms other massive platforms endure? We explored some answers—core values, lack of personalization algorithms, and lack of data collection—in last week’s “How Wikipedia Dodged Public Outcry Plaguing Social Media Platforms.”

But wait, there’s more:

Wikipedia moderation is conducted in the open.

“The biggest platforms use automated technology to block or remove huge quantities of material and employ thousands of human moderators.” So says Mark Bunting in his July 2018 report Keeping Consumers Safe Online: Legislating for platform accountability for online content. Bunting makes an excellent point, but he might have added a caveat: “The biggest platforms, like Facebook, Twitter, and YouTube, but not Wikipedia.”

Wikipedia, one of the top websites worldwide for about a decade, works on a different model. The volunteers writing Wikipedia’s content police themselves, and do so pretty effectively. Administrators and other functionaries are elected, and the basic structure of Wikipedia’s software helps hold them accountable: actions are logged, and are generally visible to anybody who cares to review them. Automated technology is used, but its code and its actions are transparent and subject to extensive community review. In extreme cases, Wikimedia Foundation staff must be called in, and certain cases (involving extreme harassment, outing, self-harm, etc.) require discretion. But the paid moderators certainly don’t number in the thousands; the foundation employs only a few hundred staff overall.

More recently, a Motherboard article explored Facebook’s approach in greater depth: “The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People.” It’s a long, in-depth article, well worth the read.

One point in that article initially stood out to me: presently, “Facebook is still making tens of thousands of moderation errors per day, based on its own targets.” That’s a whole lot of wrong decisions, on potentially significant disputes! But if we look at that number and think, “that number’s too high,” we’re already limiting our analysis to the way Facebook has presented the problem. Tech companies thrive on challenges that can be easily measured; it’s probably a safe bet that Facebook will achieve something they can call success…that is, one that serves the bottom line. Once Facebook has “solved” that problem, bringing the errors down to, say, the hundreds, Facebook execs will pat themselves on the back and move on to the next task.

The individuals whose lives are harmed by the remaining mistakes will be a rounding error, of little concern to the behemoth’s leadership team. On a deeper level, the “filter bubble” problem will remain; Facebook’s user base will be that much more insulated from information we don’t want to see. Our ability to perceive any kind of objective global reality—much less to act on it—will be further eroded.

As artificial intelligence researcher Jeremy Lieberman recently tweeted, we should be wary of a future in which “…news becomes nearly irrelevant for most of us” and “our own private lives, those of our friends; our custom timelines become the only thing that really matters.” In that world, how do we plan effectively for the future? When we respond to natural disasters, will only those with sufficiently high Facebook friend counts get rescued? Is that the future we want?

It’s not just moderation—almost all of Wikipedia is open.

“If you create technology that changes the world, the world is going to want to govern [and] regulate you. You have to come to terms with that.” —Brad Smith, Microsoft, May 2018. As quoted in Bunting (2018).

From the start, Wikipedia’s creators identified their stakeholders, literally, as “every single human being.” This stands in stark contrast to companies that primarily aim to thrive in business. Wikipedia, on the whole, is run by a set of processes that is open to review and open to values-based influence.

This point might elicit irate howls of frustration from those whose ideas or efforts have been met with a less-than-respectful response. Catch me on another day, and the loudest howls might be my own. But let’s look at the big picture, and compare Wikipedia to massive, corporate-controlled platforms like YouTube, Facebook, or Google.

  • Wikipedia’s editorial decisions are made through open deliberation by volunteers, and are not subject to staff oversight.
  • Most actions leading up to decisions, as well as decisive actions themselves, are logged and available to public review and comment.
  • It’s not just the content and moderation: the free software that runs Wikipedia, and the policies that guide behavior on the site, have been built through broad, open collaboration as well.
  • The Wikimedia Foundation has twice run extensive efforts to engage volunteers in strategic planning, and in many instances has effectively involved volunteers in more granular decision-making as well.

There is room for improvement in all these areas, and in some cases improvement is needed very badly. But inviting everyone to fix the problems is part of what makes Wikipedia thrive. Treating openness as a core value invites criticism and good faith participation, and establishes a basic framework for accountability.

“While principles and rules will help in an open platform, it is values that [operators of platforms] should really be talking about.” — Kara Swisher in the New York Times, August 2018.

Wikipedia lacks relentless public relations & financial shareholders.

There’s another frequently overlooked aspect of Wikipedia: financially speaking, the site is an ant among elephants.

The annual budget of the Wikimedia Foundation, which operates Wikipedia, is about $120 million. That may sound like a lot, but consider this: Just the marketing budget of Alphabet (Google’s parent company) is more than $13 billion.

In terms of the value Wikipedia offers its users, and the respect it shows for their rights, Wikipedia arguably outstrips its neighbors among the world’s top web sites. But it does so on a minuscule budget.

Wikipedia doesn’t have armies of public relations professionals or lobbyists making its case. So part of the reason you don’t hear more about Wikipedia’s strategy and philosophy is that there are fewer professionals pushing that conversation forward. The site just does its thing, and its “thing” is really complex. Because it works fairly well, journalists and policymakers have little incentive to delve into the details themselves.

Wikipedia also doesn’t have armies of stockholders exerting pressure, forcing the kind of tension between profit and ethics that often drives public debate.

Wikipedia is driven by philosophical principles that most would agree with; so the issues that arise are in the realm of implementation. There is little pressure to compromise on basic principles. Tensions between competing values, like business interests vs. ethical behavior, drive the debate over corporate-controlled platforms; but those tensions basically don’t exist for Wikipedia.

In 1973, video artist and provocateur Richard Serra produced the short film “Television Delivers People.” It suggested that those consuming “free” television were not the customers, but the product…being sold to advertisers. In the Internet era, the notion has been frequently applied to media companies. Reasonable people might debate how well this line of thinking applies to various media and social media companies. But with Wikipedia, unique among major Internet platforms, this particular criticism clearly does not apply.

Concluding thoughts

The reasons you don’t hear much about Wikipedia’s governance model are that it is rooted in clearly articulated principles, works fairly well, is reasonably open to benevolent influence, and lacks a public relations campaign.

Those are all good things—good for Wikipedia and its readers. But what about the rest of the Internet? The rest of the media world, the rest of society? If the notion of objective truth is important to you, and if you’re concerned about our access to basic facts and dispassionate analysis in today’s rapidly shifting media landscape, you might want to challenge yourself to learn a bit more about how Wikipedia has been made and how it governs itself…even if you have to dig around a bit to do so.

This article was also published on LinkedIn and Medium.

Everybody has an opinion about how to govern social media platforms, mostly because the platforms have shown they’re not very good at governing themselves. We see headlines about which famous trolls are banned from what sites. Tech company executives are getting called before Congress, and the topic of how to regulate social media is getting play all over the news.

Wikipedia has problematic users and its share of controversies, but as web platforms have taken center stage in recent months, Wikipedia hasn’t been drawn into the fray. Why aren’t we hearing more about the site’s governance model, or its approach to harassment and bullying? Why isn’t there a clamor for Wikipedia to ease up on data collection? At the core, Wikipedia’s design and governance are rooted in carefully articulated values and policies, which underlie all decisions. Two specific aspects of Wikipedia inoculate it from some of the sharpest critiques endured by other platforms.

Wikipedia exists to battle fake news. That’s the whole point.

Wikipedia’s fundamental purpose is to present facts, verified by respected sources. That’s different from social media platforms, which have a more complex project…they need to maximize engagement, and get people to give up personal information and spend money with advertisers. Wikipedia’s core purpose involves battling things like propaganda and “fake news.” Other platforms are finding they need to retrofit their products to address misinformation; but battling fake news has been a central principle of Wikipedia since the early days.

1. Wikipedia lacks “personalization algorithms” that get other kids in trouble.

The “news feed” or “timeline” of sites like Facebook, Twitter, or YouTube is the source of much controversy, and of much talk of regulation. These platforms feed their users content based on…well, based on something. Any effort to anticipate what users will find interesting can be tainted by political spin or advertising interests. Each social media company keeps its algorithm private, guarding it as valuable intellectual property even as it tinkers and tests new versions.

That’s not how Wikipedia works. Wikipedia’s front page is the same for all users. Wikipedia’s volunteer editors openly deliberate about what content to feature. Controversies sometimes spring up, but even when they do, the decisions leading to them are transparent and open to public commentary.

Search within Wikipedia is governed by an algorithm. But relative to a Twitter feed, it’s fairly innocuous; when you search for something, there are probably only a handful of relevant Wikipedia articles, and they will generally come up in the search results. Much of the work that guides Wikipedia search is open, and is generated by Wikipedia’s user community: redirects, disambiguation pages, and “see also” links. And the MediaWiki software that drives the site, including the search function, is open source.

But even so, an ambitious Wikimedia Foundation executive tried to take bold action around the search algorithm a few years ago. The “Knowledge Engine” was conceived as a new central component of Wikipedia; artificial intelligence and machine learning would have taken a central role in the user experience. The plan was hatched with little regard for the values that drive the Wikipedia community, and was ultimately scuttled by a full-blown revolt by Wikipedia’s users and the Foundation’s staff. Would an algorithm-based approach to driving reader experience have exposed Wikipedia to the kind of aggressive scrutiny Twitter and Facebook now face? Perhaps the problems Wikipedia dodged in that tumultuous time were even bigger than imagined.

The Wikimedia Foundation’s fund-raising banners are driven by algorithms, too. These spark frequent debates, but even the design of those algorithms is somewhat transparent, and candid discussion about them is not unusual. Those of us who care deeply about Wikipedia’s reputation for honesty sometimes find significant problems with the fund-raising messages; but the impact of problems like these is limited to Wikipedia’s reputation, not the public’s understanding of major issues.

2. Wikipedia isn’t conspiring to track your every move.

Most web sites collect, use, and sell a tremendous amount of data about their users. They’ve gotten really sophisticated, and can surmise an incredible amount of information about us. But that’s a game that Wikipedia simply doesn’t play.

In 2010, the Wall Street Journal ran a series on how web sites use various technologies and business partnerships to track all kinds of information about their users. Journalists Julia Angwin and Ashkan Soltani were nominated for a Pulitzer Prize, and won the Loeb Award for Online Enterprise. It’s still relevant in 2018.

Even back then, coverage of the issue managed to neglect one vital fact: Wikipedia, unlike all the other top web sites, does not track your browsing history. The site barely captures any such information to begin with, and its operators don’t share it unless legally compelled. When considered by the Electronic Frontier Foundation in their “Who Has Your Back” report (and I’ll claim a little credit for their considering Wikipedia to begin with), the Wikimedia Foundation has earned stellar marks.

Why Wikipedia’s principled design matters

Wikipedia avoids scandal through two core aspects of how it functions: it doesn’t try to predict and guide what you encounter online, and it doesn’t capture and analyze user data.

It might be possible for social media platforms to constrain their approach to those activities enough to satisfy their critics. Just like it might be possible for a heroin addict to limit their use enough to function in society, or for a cabbie to minimize the possibility of a car wreck through attentive driving.

But it would have been safer for the heroin addict to avoid using heroin to begin with, or for the cabbie to have taken a desk job. That’s how it is with Wikipedia. The site has relentlessly kept its focus on its main goal of providing information, even to the exclusion of chasing money from advertisers or reselling user data.

One benefit of that clarity of vision among the designers and maintainers of Wikipedia is that we’ve been able to govern ourselves reasonably well. Which means the government and media pundits aren’t trying to do it for us.

This article was also published on LinkedIn and Medium.

Concern about social media and the quality of news is running high, with many commentators focusing on bias and factual accuracy (often summarized as “fake news”). If efforts to regulate sites like Facebook are successful, they could affect the bottom line; so it would behoove Facebook to regulate itself, if possible, in any way that might stave off external action.

Facebook has tried many things, but they have ignored something obvious. It’s something that has been identified by peer reviewed studies as a promising approach since at least 2004…the same year Facebook was founded.

Instead of making itself the sole moderator of problematic posts and content, Facebook should offer its billions of users a role in content moderation. This could substantially reduce the load on Facebook staff, and could allow its community to take care of itself more effectively, improving the user experience with far less need for editorial oversight. Slashdot, once a massively popular site, proved prior to Facebook’s launch that distributing comment moderation among the site’s users could be an effective strategy, with substantial benefits to both end users and site operators. Facebook would do well to allocate a tiny fraction of its fortune to designing a distributed comment moderation system of its own.

Distributed moderation in earlier days

“Nerds” in the late 1990s or early 2000s—when most of the Internet was still a one-way flow of information for most of its users—had a web site that didn’t merely keep them informed, but let them talk through the ideas, questions, observations, or jokes that the (usually abbreviated and linked) news items would prompt. Slashdot, “the first social news site that gained widespread attention,” presented itself as “News for Nerds. Stuff that Matters.” It’s still around, but in those early days, it was a behemoth. Overwhelming a web site with a popular link became known as “slashdotting.” There was a time when more than 5% of all traffic to sites like CNET, Wired, and Gizmodo originated from Slashdot posts.

Slashdot featured epic comment threads. It was easy to comment, and its readers were Internet savvy almost by definition. Slashdot posts would have hundreds, even thousands, of comments. According to the site’s Hall of Fame, there were at least 10 stories with more than 3,200 comments.

But amazingly—by today’s diminished standards, at least—a reader could get a feel for a thread of thousands of messages in just a few minutes of skimming. Don’t believe me? Try this thread about what kept people from ditching Windows in 2002. (The Slashdot community was famously disposed toward free and open source software, like GNU/Linux.) The full thread had 3,212 messages; but the link will show you only the 24 most highly-rated responses, and abbreviated versions of another 35. The rest are not censored; if you want to see them, they’re easy to access through the various “…hidden comments” links.

As a reader, your time was valued; a rough cut of the 59 “best” answers out of 3,212 is a huge time-saver, and makes it practical to get a feel for what others are saying about the story. You could adjust the filters to your liking, to see more or fewer comments by default. As the subject of a story, it was even better; supposing some nutcase seized on an unimportant detail, and spun up a bunch of inaccurate paranoia around it, there was a reasonable chance their commentary would be de-emphasized by moderators who could see through the fear, uncertainty, and doubt.

At first blush, you might think “oh, I see; Facebook should moderate comments.” But they’re already doing that. In the Slashdot model, the site’s staff did not do the bulk of the moderating; the task was primarily handled by the site’s more active participants. To replicate Slashdot’s brand of success, Facebook would need to substantially modify the way their site handles posts and comments.

Going meta

Distributed moderation, of course, can invite all sorts of weird biases into the mix. To fend off the chaos and “counter unfair moderation,” Slashdot implemented what’s known as “metamoderation.” The software gave moderators the ability to assess one another’s moderation decisions. Moderators’ decisions needed to withstand the scrutiny of their peers. I’ll skip the details here, because the proof is in the pudding; browsing some of the archived threads should be enough to demonstrate that the highly-rated comments are vastly more useful than the average comment.
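The core idea can be sketched in a few lines. The following is a hypothetical simplification in Python, not Slashdot's actual implementation: metamoderators judge individual moderation decisions as "fair" or "unfair," and moderators whose decisions are routinely judged unfair lose access to moderation. The `fairness` threshold and the benefit-of-the-doubt default are assumed values for illustration.

```python
# Hypothetical sketch of Slashdot-style metamoderation: ordinary
# moderation decisions are themselves reviewed by other users, and
# moderators whose decisions are routinely judged unfair lose influence.
from collections import defaultdict

class MetaModeration:
    def __init__(self):
        # moderator -> list of verdicts (True = "fair", False = "unfair")
        self.verdicts = defaultdict(list)

    def record(self, moderator, fair):
        """A metamoderator judges one of this moderator's past decisions."""
        self.verdicts[moderator].append(fair)

    def fairness(self, moderator):
        """Fraction of this moderator's reviewed decisions judged fair."""
        v = self.verdicts[moderator]
        return sum(v) / len(v) if v else 1.0  # unreviewed moderators get the benefit of the doubt

    def eligible(self, moderator, threshold=0.75):
        """Moderators below the fairness threshold stop receiving mod points."""
        return self.fairness(moderator) >= threshold

mm = MetaModeration()
for fair in (True, True, True, False):   # alice: 3 of 4 decisions judged fair
    mm.record("alice", fair)
for fair in (False, False, True):        # bob: 1 of 3 decisions judged fair
    mm.record("bob", fair)

print(mm.eligible("alice"))  # True  (0.75 fairness, still eligible)
print(mm.eligible("bob"))    # False (~0.33 fairness, loses moderation access)
```

The design point is that no central staff reviews every decision; the reviewing is itself distributed, so bad moderators are filtered out by the same crowd that does the moderating.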

Some Internet projects did study Slashdot-style moderation

For some reason, it seems that none of the major Internet platforms of 2018—Facebook, Twitter, YouTube, etc.—have ever experimented with meta-moderation.

From my own experience, I can affirm that some projects intending to support useful online discussion did, in fact, consider meta-moderation. In its early stages, the question-and-answer web site took a look at it; so did a project of the Sloan Foundation in the early days of the commentary tool.

If Facebook ever did consider a distributed moderation system, it’s not readily apparent. Antonio García Martínez, a former Facebook product manager, recently tweeted that he hadn’t thought about it at length, and expressed initial skepticism that it could work.

There are a few reasons why Facebook might be initially reluctant to explore distributed moderation:

  • Empowering people outside the company is always unsettling, especially when there’s a potential to impact the brand’s reputation;
  • Like all big tech companies, Facebook tends to prefer employing technical, rather than social, interventions;
  • Distributed moderation would require Facebook to put data to use on behalf of its users, and Facebook generally seeks to tightly control how its data is exposed;
  • Slashdot’s approach would require substantial modification to fit Facebook’s huge variety of venues for discussion.

Those are all reasonable considerations. But with an increasing threat of external regulation, Facebook should consider anything that could mitigate the problems its critics identify.

Subject of academic study

If you’ve used a site with distributed moderation, and a meta-moderation layer to keep the mods accountable, you probably have an intuitive sense of how well it can work. But in case you haven’t, research studies going back to 2004 have underscored its benefits.

According to researchers Cliff Lampe and Paul Resnick, Slashdot demonstrated that a distributed moderation system could help to “quickly and consistently separate high and low quality comments in an online conversation.” They also found that “final scores for [Slashdot] comments [were] reasonably dispersed and the community generally [agreed] that moderations [were] fair.” (2004)

Lampe and Resnick did acknowledge shortcomings in the meta-moderation system implemented by Slashdot, and stated that “important challenges remain for designers of such systems.” (2004) Software design is what Facebook does; it’s not hard to imagine that the Internet giant, with annual revenue in excess of $40 billion, could find ways to address design issues.

The appearance of distributed moderation…but no substance

In the same year that Lampe and Resnick published “Slash(dot) and burn” (2004), Facebook launched. Even going back to the site’s earliest days, the benefits of distributed meta-moderation had already been established.

Facebook, in the form it’s evolved into, shares some of the superficial traits of Slashdot’s meta-moderation system. Where Slashdot offered moderators options like “insightful,” “funny,” and “redundant,” Facebook offers options like “like,” “love,” “funny,” and “angry.” The user clicking one of those options might feel as though they are playing the role of moderator; but beneath the surface, in Facebook’s case, there is no substance. At least, nothing to benefit the site’s users; the data generated is, of course, heavily used by Facebook to determine what ads are shown to whom.

In recent years, Facebook has offered a now-familiar bar of “emoticons,” permitting its users to express how a given post or comment makes them feel. Clicking the button puts data into the system; but it’s only Facebook, and its approved data consumers, who get anything significant back out.

When Slashdot asked moderators whether a comment was insightful, funny, or off-topic, that information was immediately put to work to benefit the site’s users. By default, readers would see only the highest-rated comments in full, would see a single “abbreviated” line for those with medium ratings, and would have to click through to see everything else. Those settings were easy to change, for users preferring more or less in the default view, or within a particular post. Take a look at the controls available on any archived Slashdot post.
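That default view boils down to simple score thresholds. Here is an illustrative sketch in Python; the particular cutoffs are assumptions for the example, not Slashdot's exact values:

```python
# Illustrative sketch of threshold-based comment display, in the spirit
# of Slashdot's reader controls: highly rated comments are shown in full,
# mid-rated ones are abbreviated to one line, and the rest are collapsed
# behind a "hidden comments" link. Nothing is deleted; only the default
# presentation changes. Scores and cutoffs here are assumed values.

def render_mode(score, full_at=3, abbreviate_at=1):
    """Decide how to present a comment given its moderation score."""
    if score >= full_at:
        return "full"
    if score >= abbreviate_at:
        return "abbreviated"
    return "hidden"

comments = [("insightful reply", 5), ("decent point", 2), ("flamebait", -1)]

for text, score in comments:
    print(f"{render_mode(score):11s} {text}")
```

Because the thresholds are per-reader parameters rather than global deletions, a persistent reader can still lower them and see everything, which is exactly the "de-emphasize, don't erase" property the all-or-nothing delete model lacks.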

Where Facebook’s approach falls short

Facebook’s approach to evaluating and monitoring comments falls short in several ways:

  1. It’s all-or-nothing. With Slashdot, if a post was deemed “off topic” by several moderators, it would get a low ranking, but it wouldn’t disappear altogether. A discerning reader, highly interested in the topic at hand and anything even remotely related, might actually want to see that comment; and with enough persistence, they would find it. But Facebook’s moderation—whether by Facebook staff or the owner of a page—permits only a “one size fits all” choice: to delete or not to delete.
  2. Facebook staff must drink from the firehose. When the users have no ability to moderate content themselves, the only “appeal” is to the page owner or to Facebook staff. Cases that might be easily resolved by de-emphasizing an annoying post either don’t get dealt with, or they get reported. Staff moderators have to process all the reports; but if users could handle the more straightforward cases, the load on Facebook staff would be reduced, permitting them to put their attention on the cases that really need it.
  3. Too much involvement could subject Facebook to tough regulation as a media company. There is spirited debate over whether companies like Facebook should be regarded as media companies or technology platforms. This is no mere word game; media companies are inherently subject to more invasive regulation. Every time Facebook staff face a tricky moderation decision, that decision could be deemed an “editorial” decision, moving the needle toward the dreaded “media company” designation.

Facebook must learn from the past

Facebook is facing substantial challenges. In the United States, Congress took another round of testimony last week from tech executives, and is evaluating regulatory options. Tim Wu, known for coining the term “net neutrality,” recently argued in favor of competitors to Facebook, perhaps sponsored by the Wikimedia Foundation; he now says the time has come for Facebook to be broken up by the government. In the same article, antitrust expert Hal Singer paints a stark picture of Facebook’s massive influence over innovative competitors: “Facebook sits down with someone and says, ‘We could steal the functionality and bring it into the mothership, or you could sell to us at this distressed price.’” Singer’s prescription involves changing Facebook’s structure, interface, network management, and dispute adjudication process. Meanwhile in Europe, the current push for a new Copyright Directive would alter the conditions in which Facebook operates.

None of these initiatives would be comfortable for Facebook. The company has recently undertaken a project to rank the trustworthiness of its users; but its criteria for making such complex evaluations are not shared publicly. Maybe this will help them in the short run, but in a sense they’re kicking the can down the road; this is yet another algorithm outside the realm of public scrutiny and informed trust.

If Facebook has an option that could reduce the concerns driving the talk of regulation, it should embrace it. According to Lampe and Resnick, “the judgments of other people … are often the best indicator of which messages are worth attending to.” Facebook should explore an option that lets them tap an underutilized resource: the human judgment in its massive network. The specific implementation I suggest was proven by Slashdot; the principle of empowering end users also drove Wikipedia’s success.

Allowing users to play a role in moderating content would help Facebook combat the spread of “fake news” on its site, and simultaneously demonstrate good faith by dedicating part of its substantial trove of data to the benefit of its users. As Cliff Lampe, the researcher quoted above, recently tweeted: “I’ve been amazed, watching social media these past 20 years, that lessons from Slashdot moderation were not more widely reviewed and adopted. Many social sites stole their feed, I wish more had stolen meta-moderation.”

All platforms that feature broad discussion stand to benefit from the lessons of Slashdot’s distributed moderation system. To implement such a system will be challenging and uncomfortable; but big tech companies engage with challenging software design questions routinely, and are surely up to the task. If Facebook and the other big social media companies don’t try distributed moderation, a new project just might; and if a new company finds a way to serve its users better, Facebook could become the next Friendster.

This article was also published on LinkedIn and Medium.

Maybe you’ve already heard the story of how the global edit-a-thon known as Art+Feminism got started. It goes something like this:

Five years ago, four friends—Siân Evans, Jacqueline Mabey, Michael Mandiberg, and Laurel Ptak—gathered together to discuss an idea for promoting Wikipedia as a place to challenge one of the ways women are silenced: through the preservation of information. That discussion became the Art+Feminism campaign.

Our goals today still revolve around combating gender inequity on the internet, using Wikipedia as a tool for correcting the written record on cis and trans women. And in the last year, all of us—from the leadership collective to the thousands of organizers, artists, librarians, activists, and editors who make up our global Art+Feminism community—have experienced many lessons, challenges, and triumphs.

Take the month of March, for example. Over 4,000 people at more than 315 Art+Feminism events around the world came together to create or improve nearly 22,000 pages on Wikipedia, with a total of 43,000 content pages created or improved! That’s four times the output of our 2017 events. Four times. Out of 357 initiatives across 80 countries, we were named as 1 of 5 finalists in the #EQUALSinTech Leadership Award Category.

Our fifth-annual Wikipedia edit-a-thon at the Museum of Modern Art, an all-day event designed to generate coverage of feminism and the arts on Wikipedia, took place on 3 March 2018 with hundreds of partner events around the world. It featured tutorials for the beginner Wikipedian, ongoing editing support, training on combating implicit and explicit biases, reference materials, childcare, and refreshments, with the leading panel “Careful with Each Other, Dangerous Together,” about the relationship between structures of inequality on the internet, the emotional labor of internet activism, and creating inclusive online communities with Caroline Sinders, Sydette Harry, Salome Asega, and Sarah Jaffe.

Art+Feminism’s regional organizers continue to amplify the way the project resonates with and reaches people all over the world. This year, we focused on Latin America, where Melissa Tamani has done concentrated outreach, quadrupling the number of events in that region compared to last year. These nearly 30 events stretched from Laboratorio Cultural del Norte, Chihuahua, Mexico, to the Museo Nacional de Bellas Artes de Santiago.

Panel discussion about Art+Feminism at The Museum of Modern Art, New York City, 3 March 2018.

Our success is because of the commitment that we share with hundreds of organizers around the world to see the voices of cis and trans women made visible, and their achievements shared just as widely as those of their male-identified peers. We want to see justice done, and each year we work to refine our strategies, our organizing, and our materials so that they are even more accessible than the year before. With that goal in mind, we launched our Quick Guide for Organizers and our Quick Guide for (New) Editors; both guides are available on our training materials page in English and Spanish. On 30 September of this year, we are launching our redesigned training guides for organizers in PPT form in English and Spanish. More guides will follow, translated into French and at least one African language, with the goal of translating our materials for new editors and organizers into at least five additional languages over the next three years of our campaign. Beyond quick tips for organizing edit-a-thons and editing Wikipedia for the first time, we’ll be launching even more materials focused specifically on advancing gender justice on the internet, designing brave/safe space policies through a lens of knowledge equity and anti-harassment, and more.

Beginning this project, we knew that our role would not only be to empower cis and trans women to edit online but to stand with them as they are challenged by those who do not see value in their perspective nor value in them. Art+Feminism is about making Wikipedia a more complete and representative source of information, but it doesn’t end there for us. It’s about dismantling systems of thought that diminish or erase entirely the place marginalized people and their communities have in our history.

As we have addressed many glaring omissions about women on Wikipedia, we have seen our focus turn towards improving existing articles: for example, the first year we created 101 new articles, and improved roughly the same number, while this year we improved 7 times as many articles as we created.

Our task is to take what we’ve made to the next level: to leverage the reach and impact of our social justice community, recognizing that Wikipedia is just one of many tools that can be used to combat gender inequity and biased content on the internet. We have so much to do, and we’re ready to take on the task of continuing to iterate and improve as our campaign evolves and adapts.

Look at what we’ve done—and there’s even more to come.

McKensie Mack, Director

We couldn’t do this work without our supporters, our partners, and our Regional Ambassadors: AfroCROWD; Black Lunch Table; Women in Red; the Professional Organization for Women in the Arts (POWarts); and The Museum of Modern Art. Art+Feminism receives support from Qubit New Music and Wikimedia New York City. The Art+Feminism leadership collective includes Mohammed Sadat Abdulai; Stacey Allan; Amber Berson; Sara Clugage; Richard Knipel; Stuart Prior; Melissa Tamani; and Addie Wagenknecht.

We are pleased to announce a $2 million gift to the Wikimedia Endowment from George Soros, one of the world’s leading philanthropists.

Soros is known for his extensive philanthropy to support ideals underpinning a free and open society, including access to knowledge, education, economic development and policy reform. He is also known for founding the Open Society Foundations, one of the preeminent international grantmaking networks supporting civil society groups around the world, and giving over $32 billion of his personal wealth to the organization.

“George’s generous gift to the future of free knowledge is reflective of his deep commitment to supporting openness in all its forms,” said Katherine Maher, Executive Director of the Wikimedia Foundation. “His gift will help us ensure the sum of all knowledge remains free and open for the benefit of generations to come.”

This gift provides vital momentum to the Wikimedia Endowment Campaign. Wikimedia believes that free knowledge is the foundation for human potential, opportunity, and freedom. Since the launch of the endowment in January 2016, the campaign has raised over $26.5 million from generous donors, philanthropists, and Wikimedia community members.

“The Endowment is not just a practical way to support Wikipedia,” Soros says. “My gift represents a commitment to the ideals of open knowledge—and to the long-term importance of free knowledge sources that benefit people around the world.”

“The Wikimedia Endowment guarantees that the next generation of Wikipedia readers and contributors will have even better educational opportunities than the previous generations had,” said Peter Baldwin, co-founder of the Arcadia Fund, Wikimedia Endowment Board member, and long-time supporter. “Time and again George has been a philanthropic leader in ensuring access and opportunity around the world, and his gift to the endowment furthers that for generations to come.”

Kaitlin Thaney, Endowment Director
Wikimedia Foundation

Tech News issue #42, 2018 (October 15, 2018)

00:00, Monday, 15 October 2018 UTC
