en.planet.wikimedia

November 01, 2014

Wikimedia Foundation

A Tale of Two Copyrights: The (im)probable reform in Europe

Dimitar Dimitrov has been Wikimedian in Brussels since July 2013. In assorted blog posts he shares his experiences with the EU.

Charles Dickens: A Tale of Two Cities. With Illustrations by H. K. Browne. London: Chapman and Hall, 1859. First edition. Photography Hablot Knight Browne, Heritage Auctions, Inc. Dallas, Texas. Public Domain

It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair…

No, this title is not an original. It is largely copied. A derivative work that is legally unproblematic only because Mr. Dickens has been dead long enough. If I were to remix something newer, let’s say if I came up with “Pirates in the Copyright: Disney’s Chest” and included a picture and quotes from that particular work, well, that might get me into all kinds of trouble.

But copyright term lengths and how we deal with remixed content are just two of the fundamental questions we can no longer postpone. Information technology allows for sharing at virtually no cost. That is the positive promise the digital revolution has brought about. We must admit that this is a genuinely good thing and an opportunity for sustainable global development and improvement of people’s lives.

The other tale is more ambiguous. It retells the old story that every revolution brings about a new culture and new economy, but also puts out of business those who cannot adapt.

Position paper on EU Copyright Reform

The Wikimedia movement has read these two tales. We’ve suffered them, we’ve enjoyed them. We’ve experienced the practicalities, patches and peculiarities. We’ve thought, debated and worked with and around these issues for more than a decade now.

Recently, the European Wikimedia Chapters, together with a group of 18 further civil society organisations, published a Position Paper, initially drafted by our EU Policy work group, to be sent to the European Commission units responsible for intellectual property. We made four proposals that we’re convinced must be included in any meaningful copyright reform if it is to be fit for the so-called “digital age”. These four points have one thing in common: they would drastically increase the commons and our ability to share content while leaving economic interests and thus financial profits virtually untouched. These four changes are:

  1. Harmonising copyright legislation, thereby making rules clearly understandable and reducing current legal risk
  2. Enshrining a universal Freedom of Panorama exception guaranteeing the right to use and re-use images taken in public spaces
  3. Clearly stating that publicly funded content must be public domain
  4. Growing the public domain by reducing copyright terms by 20 years (i.e. to the length set out in the currently binding international treaties)

Meanwhile in Brussels…

Even the new European Commission seems to have drawn the political conclusions from realising the inevitability of changing rules that were made with paper presses and horse-drawn carriages in mind. We are hearing that writing an actual reform proposal will take anything from 6 to 18 months in Brussels. This means they are hurrying, which can only be interpreted as political pressure, at least for the moment.

After years of postponing tough decisions, the new President of the European Commission, who put copyright reform in his list of top priorities, moved the dossier and unit responsible for it to another directorate. It will no longer fall under the responsibility of the “internal market” (DG Markt), but is now housed by the Directorate-General responsible for the Digital Economy and Society and its Commissioner, the German Günther Öttinger. Overseeing Öttinger’s work will be Vice-President of the Commission Andrus Ansip from Estonia. His role is dedicated to establishing a “digital single market”, which can only mean harmonisation, which in turn is hardly possible without reforming copyright. The new composition of the European Commission and a recent Twitter hearing the Vice-President agreed to participate in give rise to some reasonable expectations that change might indeed be coming.

“Then tell Wind and Fire where to stop, but don’t tell me.”

Opponents of a copyright reform (which include, but are not limited to, publishers) are in fact not against the four points outlined above. The legal re-balancing we are proposing wouldn’t hurt the industry. They are simply against any change whatsoever, out of fear that it might be a slippery slope to abolishing copyright. And while it isn’t much of an intellectual challenge to argue that a lack of change is far more likely to eventually kill copyright than a few sensible updates, this “I will block anything that comes my way” attitude might turn out to be poisonous for reform. The only thing law-makers shy away from more than bad law is an unsuccessful legislative proposal.

It takes really strong-minded, shrewd and resolute politicians aided by a dedicated civil society to make things happen.

Tell them!

The good news: you can be part of that dedicated civil society that pushes its policy-makers to be resolute.

Wikimedians are working on being represented at the EU level to be part of the conversation when decisions about us and our daily work are made. By providing volunteers and supporters with the necessary background knowledge, personal support and infrastructure, we’re trying to involve you in our advocacy activities!

If you prefer starting off solo, you can try contacting one of our European representatives from your region or country and warning them that a copyright reform is coming their way in about a year, while counselling them on digital culture and intellectual property. They are likely to be very busy people who have a hard time keeping track of every issue headed their way ;)

If you are a team player, please don’t hesitate to contact the coordinator from your country (and/or the Brussels project lead) to figure out what you can do together.

Alone or as an organisation, you can follow Wikimedia UK’s example and snail mail decision-makers. Their response rate was impressive: snail mail is less common these days, so it shows you’re willing to make that little bit of extra effort to gain their attention.

If you are not from or living in Europe, but you wish to engage in advocacy activities, there’s plenty to do globally. Please drop us a line and we will find a way to help each other!

Let’s lobby!

Dimitar Dimitrov, Wikimedian

by wikimediablog at November 01, 2014 12:02 AM

October 31, 2014

Wikimedia Foundation

Pang-start 2014: A collaboration between the Oslo National Academy of the Arts and Wikimedia Norge

Illustration for the article “metaphor”. In Norwegian, the expression “the King of the forest” is a metaphor for the moose.
“Skogens konge” by Synne A. Salvesen, under CC-BY-SA-4.0

100 illustrations in 2 days?

In June 2014, Wikimedia Norge was contacted by Andreas Berg, a professor of illustration at the department of graphic design and illustration at the Oslo National Academy of the Arts. He had an idea. In preparation for the upcoming fall semester, he drafted a workshop that centered on Wikipedia. The aim of this workshop was to teach students how to design and make use of freely licensed illustrations.

Wikimedia Norge adopted the proposal and compiled a list of 100 Wikipedia articles that were without illustrations. An academic program comprising six lectures was created. Over the course of the series, the topics ranged from the transition from print to online encyclopedias, through gender issues in dictionaries, to the use of cultural “big data”. Students would be taught how to use free licences, and how to deconstruct the elements that make a successful illustration.

Ruth E. Vatvedt Fjeld, professor at the University of Oslo, and Siv Frøydis Berg, PhD, research librarian at the National Library of Norway, both gave lectures at the workshop.
“Making illustrations for Wikipedia 09” by WMNOastrid, under CC-BY-SA-4.0

We reached out to a variety of people, some from past collaborations and others we had heard speak at conferences. Our requests were well received; however, it was difficult to narrow down specific articles. After extensive creative help and good advice from Svein Nyhus, both an award winning illustrator and a Wikipedian, we chose a list that consisted mostly of articles about abstract words. Here are some examples: inner peace, power nap, master suppression techniques, shame, political asylum and rapid eye movement sleep.

Why illustration?

  Andreas Berg and Martin Egge Lundell, professors at the Oslo National Academy of the Arts.

“Making illustrations for Wikipedia 08” by WMNOastrid, under CC-BY-SA-4.0

Encyclopedias are a pillar of Western culture. The French Encyclopédie has represented the ideal of an inseparable basic structure of text and images in a national framework for over 200 years. This alone, with its complex implications, is good enough reason for professors Berg and his colleague Martin Egge Lundell to deal with encyclopedias: “Up until fifteen years ago, major projects were in process all over the world to create national encyclopedias. It was considered a state responsibility. Nationalism was empowered by authority. The Internet has completely reshaped the concept of encyclopedias, and Wikimedia projects have been a driving force in that development.”

The students hung all the Wikipedia articles up on the wall and, as they got ideas, made drafts and hung them up next to the articles.

“Making illustrations for Wikipedia 07” by WMNOastrid, under CC-BY-SA-4.0

On the second day, the best drafts were selected for uploading to Wikimedia Commons.
“Making illustrations for Wikipedia 07” by WMNOastrid, under CC-BY-SA-4.0

Berg and Lundell approached Wikimedia Norge with the idea of a two-day workshop for 50 students of illustration and graphic design, an experiment aimed at producing one hundred illustrations for Wikipedia. Designing scientific images for encyclopedias was a sharp contrast to the work the students were used to creating. “With Pang-Start 2014 we wanted to get the students, as a group, to playfully use their abilities and skills, taking into account the responsibility that comes with any kind of interference in the media,” say Berg and Lundell.

Pang-Start 2014 dealt with general questions within the field of graphic design and illustration: Who decides the content? What is the difference between user and producer? What is possible to communicate and what is not? And specifically, when it comes to the free encyclopedia: What is national and what is international on Wikipedia? Who is in charge? Who really cares about the design and the images? And, last but not least, how to contribute?

The workshop resulted in the publication of 72 distinct illustrations, which have so far been put to use on the Bokmål and Nynorsk Wikipedias (the two Norwegian Wikipedias), the Indonesian Wiktionary, the English Wikipedia and the Arabic Wikipedia. You can see all the illustrations by clicking here.

Contributing can take many forms and, most importantly, a wide range of meanings for the person who decides to take part. Help us expand this project and take its stories further: share, contribute, and use the images we brought to the Wikimedia projects!

Astrid Carlsen, project leader at Wikimedia Norge
Andreas Berg, professor of illustration at the Oslo National Academy of the Arts
Martin Egge Lundell, professor of graphic design at the Oslo National Academy of the Arts

by wikimediablog at October 31, 2014 10:04 PM

Amir E. Aharoni

Link Wikipedia Articles in Different Languages

OK THIS IS AWESOME, and “awesome” is not a word that I use lightly.

As a gift for the second birthday of the Wikidata project, nice people at Google created a tool that helps people link articles in different languages that are not linked yet. They prepared a list with thousands of pairs of articles in different languages that are supposed to be about the same subject according to their automatic guesswork. The tool only shows such articles; a human editor must check whether they actually match and, if they do, confirm it so the link is made automatically.

There were thirty-six such article pairs for Hebrew–English. About four of them were unrelated, and I fixed the linking between the rest of them. Some of them required manual intervention, because there were interfering links to unrelated subjects. Some simple cases took me just a few seconds, and a few complicated ones took a few minutes.

I also tried doing the same for Russian–English, but there are over a thousand article pairs there, so I only did a few. I also did a few for Catalan and Greek, and I finished all ten pairs for Bengali, even though I don’t actually know Greek or Bengali. I just used a bit of healthy intuition and Google Translate, and I’m pretty sure that I did it well.

You can help!

Here are my suggested instructions for doing this.

Preparation:

  1. Log in to mediawiki.org. This account is also used for the tool.
  2. Now go to the tool’s site. Click Login, and allow the tool to use your mediawiki.org account.
  3. Go to settings, and choose your pair of languages.
  4. Go to “Check by list” and you’ll see a list of article pairs. If there are no suggested article pairs for the language pair you selected, go back to step 3 and choose some other languages. As I wrote above, from my experience, you don’t need to know a language thoroughly to perform this useful work ;)

Now click a link to a pair of articles that looks reasonable. Articles in both languages will open side by side.

  1. If the articles are definitely not about the exact same subject, click “No” in the list and find another pair.
  2. If the articles are about the same subject and one of them doesn’t have any interlanguage links, click “Add links” in the interlanguage area. In the box that opens, write the name of the other language in the first field and the title of the article in the second field, and then click the “Link with page” button. A list of articles in other languages will be shown. If it looks reasonable, click “Confirm”, and then “Close dialog and reload page”. That’s it, the pages are linked! Click “Yes” in the list in the linking tool and proceed to another article pair.
  3. If the articles are about the same subject, but both of them appear to have links to other languages, it’s possible that explicit interlanguage links are written in the source code of the articles. To resolve this, do the following:
    1. Open both articles for editing in source mode.
    2. Scroll all the way down and find whether they have explicit interlanguage links.
    3. If these are correct links to articles about the same subjects in other languages, go to those articles, and link them using Wikidata. Note that it often happens in such cases that these are links to redirects, so the actual current title may be different.
    4. If these are links to articles about other subjects, even if they are related, remove those links. For example, if the article in Bengali is about an island, and the article in Dutch is about a city on that island, remove the link – these subjects are distinct enough. Ditto if the article in English is about an American human rights organization and the article in French is about a French human rights organization.
    5. If you were able to remove all the explicit links from the source, go back to point 2 above and link the articles using Wikidata.
    6. If it’s too complicated to remove these links for any reason, feel free to go to another article, but it would be nice to leave a note about this on the articles’ talk pages so that other editors would clean this up some time.

That’s it. It may get a tad complicated for some cases, but if you ask me, it’s a lot of fun.
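For anyone who likes to see the procedure laid out compactly: the steps above boil down to a three-way decision per article pair. This is an unofficial sketch, not part of the Google tool; the function name and the assumption that you already know which Wikidata item (if any) each article is attached to are mine.

```python
# Hypothetical helper mirroring the manual procedure above.
# An item ID is a string like "Q42", or None if the article has
# no Wikidata item (and thus no interlanguage links) yet.
# The "not the same subject" case is decided by the human reader
# before this point, as in step 1.

def classify_pair(item_a, item_b):
    """Decide which of the three cases applies to a pair of articles."""
    if item_a is not None and item_a == item_b:
        return "already linked"      # nothing to do
    if item_a is None or item_b is None:
        return "add links"           # case 2: link via the interlanguage area
    return "resolve conflict"        # case 3: check the source for explicit links

print(classify_pair("Q42", None))    # add links
print(classify_pair("Q42", "Q42"))   # already linked
print(classify_pair("Q42", "Q123"))  # resolve conflict
```

Only the third case ever requires editing article source code; the other two are a click or nothing at all.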


Filed under: language, Wikipedia Tagged: Wikidata

by aharoni at October 31, 2014 10:36 AM

October 30, 2014

Mike Linksvayer

Wikidata II


Wikidata went live two years ago, but the II in the title is also a reference to the first page called Wikidata on meta.wikimedia.org, which for years collected ideas for first-class data support in Wikipedia. I had linked to that Wikidata I when writing about the most prominent of those ideas, Semantic MediaWiki (SMW), which I later (8 years ago) called the most important software project and said would “turn the universal encyclopedia into the universal database while simultaneously improving the quality of the encyclopedia.”

SMW was and is very interesting and useful on some wikis, but turned out to be not revolutionary (the bigger story is wikis turned out to be not revolutionary, or only revolutionary on a small scale, except for Wikipedia) and not quite a fit for Wikipedia and its sibling projects. While I’d temper “most” and “universal” now (and should have 8 years ago), the actual Wikidata project (created by many of the same people who created SMW) is rapidly fulfilling general wikidata hopes.

One “improving the encyclopedia” hope that Wikidata will substantially deliver on over the next couple of years, and whose importance I only recently realized, is increasing trans-linguistic collaboration and the availability of the sum of knowledge in many languages. When facts are embedded in free text, adding, correcting, and making facts available happens on a one-language-at-a-time basis. When facts about a topic are in Wikidata, they can be exposed in every language so long as labels are translated, even if on many topics nothing has ever been written in, nor translated into, many languages. Reasonator is a great demonstrator.
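The idea can be illustrated with a tiny sketch. The dictionary below imitates, in simplified form, the labels of a Wikidata item (real label data comes from Wikidata itself; the values here are illustrative, and the fallback function is my own, not a Wikidata API):

```python
# Simplified imitation of a Wikidata item's labels: one fact store,
# many language labels. Values are illustrative, not fetched live.
entity_labels = {
    "en": "Douglas Adams",
    "he": "דאגלס אדמס",
}

def label_for(labels, lang, fallback="en"):
    """Return the label in `lang`, falling back to another language
    when no translation exists yet."""
    return labels.get(lang, labels.get(fallback))

print(label_for(entity_labels, "he"))  # Hebrew label
print(label_for(entity_labels, "ru"))  # no Russian label yet: English fallback
```

The point is that the fact itself is stored once; adding a single translated label exposes it to a whole new language community, instead of someone having to write a new article in that language.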

Happy 2nd to all Wikidatians and Wikidata, by far the most important project for realizing Wikimedia’s vision. You can and should edit the data and edit and translate the schema. Browse Wikidata WikiProjects to find others working to describe topics of interest to you. I imagine some readers of this blog might be interested in WikiProjects Source MetaData (for citations) and Structured Data for Commons (the media repository).

For folks concerned about intellectual parasites, Wikidata has done the right thing — all data dedicated to the public domain with CC0.

by Mike Linksvayer at October 30, 2014 09:35 PM

Wikimedia Suomi (WMFI - English)

Bringing Cultural Heritage to Wikipedia

Photo by: Teemu Perhiö, CC-BY-SA 4.0

Course participants editing Wikipedia at the first gathering at the Finnish Broadcasting Company Yle.

The Bring Culture to Wikipedia editathon course is already more than halfway through. The course, co-organised by Wikimedia Finland, Helsinki Summer University and six GLAM organisations, aims to bring more Finnish cultural heritage to Wikipedia.

The editathon gatherings are held at the various organisations’ locations, where the participants get a “look behind the scenes”: the organisations show their archives and present their fields of expertise. The course also provides a great opportunity to learn the basics of Wikipedia, as experienced Wikipedian Juha Kämäräinen gives a lecture at each gathering.

Photo by: Teemu Perhiö, CC-BY-SA 4.0

Yle personnel presenting the record archives.

The first course gathering was held at the Archives of the Finnish Broadcasting Company Yle on 2 October. The course attendees got familiar with the Wikipedia editor and added information to Wikipedia about the history of Finnish television and radio. The representatives of Yle also gave a tour of the tape and record archives. Quality images that Yle released earlier this year were added to articles.

Course attendee Maria Koskijoki appreciated the possibility to get started without prior knowledge.

”The people at Yle offered themes of suitable size. I also got help in finding source material.”

Cooperation with GLAMs

Finnish National Gallery personnel presenting the sketch archives at the Ateneum Art Museum.

This kind of course is a new model of cooperation with GLAM organisations. The other cooperating organisations are Svenska litteratursällskapet i Finland, the Finnish National Gallery, Helsinki City Library, the Finnish Museum of Photography and Helsinki Art Museum. Wikimedia Finland’s goal is to encourage organisations to open their high-quality materials to a wider audience.

There are many ways to upload media content to Wikimedia Commons. One of the newer methods is using the GLAMWiki Toolset for batch uploads. Wikimedia Finland invited the senior developer of the project, Dan Entous, to hold a GW Toolset workshop for representatives of GLAMs and the staff of Wikimedia Finland in September, before the beginning of the course. The workshop was the first of its kind outside the Netherlands.

Course coordinator Sanna Hirvonen says that GLAM organisations have begun to see Wikipedia as a good channel to share their specialised knowledge.

“People find the information from Wikipedia more easily than from the homepages of the organisations.”

This isn’t the first time that Wikimedians and cultural organisations in Finland have co-operated: last year the Museum of Contemporary Art Kiasma organised a 24-hour Wikimarathon in collaboration with Wikimedia Finland. Over 50 participants added information about art and artists to Wikipedia. Wiki workshops have also been held at the Rupriikki Media Museum in Tampere and at the Ateneum Art Museum in Helsinki.

A Wikipedian guiding a newcomer at the Ateneum Art Museum.

Images taken on the course can be viewed in Wikimedia Commons.
All Photos by Teemu Perhiö. CC-BY-SA 4.0.

by Teemu Perhiö at October 30, 2014 07:22 PM

Wiki Education Foundation

Avoiding Plagiarism and Paraphrasing Problems

 

"Copyright-problem paste" by Rugby471 and Cronholm144 - Own Work & OpenClipart Library. Licensed under Creative Commons Attribution-Share Alike 3.0 via Wikimedia Commons

One of the most important lessons in education is that the work you submit must be your own. Wikipedia is no different, but some students seem to stumble into plagiarism when they embark on their course assignments.

Finding good resources is a crucial step in most Wikipedia assignments. These resources serve as the backbone of an article or as the sources that verify its facts. While these resources are critical, many students fail to recognize the distinction between “putting it into their own words,” which is encouraged, and “close paraphrasing,” which is not. At the same time, students are told that Wikipedia forbids original research — that it’s wrong to connect two pieces of information from different sources and derive a conclusion in their articles.

For students used to citing sources as a means of contributing their own ideas, this may be well outside their comfort zone, which can lead to fundamental misunderstandings of what plagiarism on Wikipedia means. It’s important to stress that a Wikipedia article or edit has a different set of goals and norms than most traditional writing assignments.

Key points:

  • You won’t see large blocks of quotations on Wikipedia very often. That’s because Wikipedia favors a paraphrase format, in which texts are understood and their ideas restated in an otherwise original way (though short quotations are allowed). This re-articulation of the author’s idea should be followed by a citation.
  • Taking notes for a Wikipedia article is different. Copy key ideas, not key passages of text. Examine a variety of explanations in the topic area, and synthesize them into your own understanding. From there, you can write about your topic by generating your own text, rather than relying on the sources for direct quotations. When you have articulated the idea, cite the sources used to inform that writing. When you’ve finished your draft, read it with your sources close by, and ensure that nothing is too similar.
  • Material in your sandbox is still subject to Wikipedia’s policies, so don’t copy and paste sources into a sandbox, either.

Another common problem is close paraphrasing. This means copying a sentence, changing a few words to words that mean more or less the same thing, but otherwise keeping the structure, grammar, and flow of the original text.

For example:

Original text: Because the weather forecast called for rain, the league decided to switch the location of the game to an indoor facility.

Close paraphrasing: The league switched the game’s location to an indoor facility due to a weather forecast calling for rain. (Unacceptable).

Ideal paraphrasing: Forecasted rain caused the league to move the game indoors. (Acceptable).

You don’t need to strike fear into the hearts of your students to get these points across. It may be useful to frame this discussion in terms of writing for the real world, with a public audience that will hold them accountable, as opposed to the closed hub of academic writing. This approach also reiterates the value of the assignment: They’re producing information for the public, and need to take responsibility for the information they share. Hopefully, it will inspire students to take their assignments seriously, and raise questions about their own approach to writing and understanding the material they encounter in academia.

Students should be encouraged to truly grasp the information they read and share. The result can be transformative not only for their assignment, but in how they understand and approach their academic work in general.

Additional Resources: the “Citing Sources” and “Avoiding Plagiarism” handouts.

by Eryk Salvaggio at October 30, 2014 05:33 PM

Wikimedia UK

“It’s a great way to engage a wider audience”: John Cummings and the Natural History Museum and Science Museum

This post was written by Joe Sutherland.

[Embedded video: an interview with John Cummings]

John Cummings is not one to shy away from large-scale projects. Fresh from helping build one of the largest Wikipedia endeavours ever – converting the Welsh town of Monmouth into the world’s first “Wikipedia town” – John moved on to become the Wikimedian in Residence at the Natural History Museum and Science Museum.

His upbringing has played a key role in developing his interests, steering him onto a path towards the role. “I’ve always had an interest in natural history,” he says. “I didn’t study science at university, but my mum’s a garden designer, I grew up in the countryside… This is one of those roles that probably doesn’t happen that often.”

He held the role at the museums in South Kensington between 2013 and 2014, and helped to promote a culture of openness there as well as exploring what the institutions could do with Wikipedia.

One major aspect of this was looking into content donations, and how they could be beneficial for the museums in promoting their content.

“One of the main ways I encouraged content release under an open licence was just to tell people how Wikimedia projects are made and how many people see the information. It’s amazing – it’s such a wide audience and not just in Britain.

“You can reach people in lots of languages, and amazing projects like Wikipedia Zero [a project to allow free access to Wikipedia in developing countries] give people access to information they can’t get in another way.”

John says that working with the museums provided an avenue to improving Wikipedia by simply tapping into the tremendous resources there.

“It was a wonderful opportunity not only to engage with the public,” he says, “but also with research scientists who have a specialist contribution to make to Wikipedia, built over a whole lifetime of knowledge.

John Cummings at the Natural History Museum in South Kensington
Photo: User:Rock drum, CC-BY-SA 4.0

“The thing about the Natural History Museum is you work there because you care about the natural environment, and people are very willing to spend their time educating the public,” John adds. “Wikipedia is just one more avenue for that, but the great thing about Wikipedia is that it has such a large audience, so that contribution can have a wide impact.”

One major event John helped to organise was with the Office for National Statistics (ONS), the non-ministerial department tasked with collecting and collating statistics on various aspects of politics and life in England and Wales.

“Working at the museums gives you a lot of opportunity to connect with other organisations,” John explains. “One of those was the ONS. They produce all their content under the Open Government Licence which is compatible with Wikimedia projects.

“They produce wonderful infographics about all sorts of subjects that help people easily understand complicated statistics, and we’ve been able to put them straight onto Wikipedia with no change of licence; it’s completely compatible.”

He is also keen to take ONS data and feed it into Wikidata, a relatively new Wikimedia project focused on open data collection. “This would allow the ONS to reach a wide audience in many languages very easily,” he says.

This attitude of helping to promote the museums’ work to as many people as possible has been the driving factor behind John’s time in the role. He argues the interaction has given them a chance to reach millions by learning to tap into the global reach of Wikipedia.

“Having a Wikimedian in Residence is a great way to engage with a wider audience that is potentially quite hard to engage with without this kind of bridge into the movement,” he says. “Wikipedia is not the easiest thing to edit in a lot of ways. It’s great to have the understanding of licensing, the rules around conflicts of interest, and other guidelines that the Wikimedia movement has.

“It’s really helpful to have someone internally within the organisation, who’s easily accessible, who’s able to get people started with engagement,” he continues.

“It’s like learning to ride a bike or play an instrument – it’s hard to start off with, but once you get going you kind of feel your way through… it takes practice.”

by Stevie Benton at October 30, 2014 02:30 PM

Free Knowledge Advocacy Group EU publishes copyright reform paper

Logo of the Free Knowledge Advocacy Group EU

The Free Knowledge Advocacy Group EU (FKAGEU), of which Wikimedia UK is a member, has this week published a paper on copyright reform across the European Union.

This is in response to the Report on the responses to the Public Consultation on the Review of the EU Copyright Rules and the draft white paper on a copyright policy for creativity and innovation in the European Union.

The position paper has already been shared with key members of the European Commission. The main recommendations from the paper are that:

  • The Commission should clarify the European copyright framework by harmonising legislation and creating a single EU Copyright Title
  • The Commission should ensure everyone has the liberty to freely use and share images taken in public spaces by introducing Freedom of Panorama universally (currently optional under Directive 2001/29/EC Article 5 Point 3.H)
  • The Commission should ensure that all works created by officials within the EU administration and institutions are open for use and re-use by everyone. Such works should hence not be subject to copyright protection.
  • The Commission should re-balance the current culturally and economically harmful mismatch between public commons and private property and close the “20th century gap” by shortening copyright terms to the minimum term possible under existing international treaties and conventions.

The FKAGEU is a grouping of European Wikimedia chapters and other open knowledge organisations from throughout the EU. The work has largely been co-ordinated by Dimitar Dimitrov, the movement’s Wikimedian in Brussels.

This landmark paper has been signed by 33 parties from 17 European countries, of which 16 are Wikimedia chapters, thematic organisations or user groups. You can see a full list of signing partners here.

by Stevie Benton at October 30, 2014 12:03 PM

October 29, 2014

Wiki Education Foundation

Updated Handouts Ready for Classrooms

As part of our commitment to improving the quality of the resources we offer to instructors and students, we’ve revised and updated seven of our classroom handouts. The new versions explain Wikipedia’s policies and procedures in a way student editors can understand.

All of these handouts can be downloaded as single .pdf documents by clicking on the links to the right of each title. You can also download the full collection as a single .pdf at the end. They will be permanently linked alongside the other materials on our Instructor’s page, too.

Using Talk Pages (pdf)
Talk Pages are a student's gateway to the Wikipedia community. Since your students will use talk pages to interact with your course page and each other, as well as with other Wikipedia editors, this handout addresses the technical aspects of how to find and use talk pages, and how to maintain basic etiquette on their own talk pages or those of articles they're working on. Topics include setting up a new talk page, commenting on an existing one, and setting up e-mail alerts.

Choosing an Article (pdf)
This handout offers collected advice from students and instructors on how to find an article topic worth adding or expanding. Divided into "Do" and "Don't" columns, it covers comparing the available literature to what is presented on Wikipedia, finding articles related to a topic area, and advice on starting articles from scratch or from stubs.

Citing Sources (pdf)
Citations are the backbone of Wikipedia, and of most Wikipedia-based classroom assignments. This is a practical and advice-driven guide on identifying good sources, and how to cite those sources using Wiki markup. The handout introduces the citation toolbar and other areas to check for help.

Avoiding Plagiarism (pdf)
This guide introduces plagiarism policies on Wikipedia, with examples of appropriate and inappropriate (“close”) paraphrasing. We find students are often confused by Wikipedia’s paraphrasing policies. This handout contextualizes and offers examples for doing it right.

Moving Out of Your Sandbox (pdf)
Once students have created a few well-sourced paragraphs for their article with a good overview, they’re encouraged to move it out of the safety of their sandbox and into Wikipedia’s article namespace. This guide provides a technical description of how to do that through Wikipedia’s interface, and offers some guidance on what to expect once it enters the article namespace.

Polishing Your Articles (pdf)
This brochure explains some final steps for making a Wikipedia article better. Two topics are tackled: Uploading images to their article from Wikimedia Commons, and adding links to and from their article and other articles on Wikipedia.

“Did You Know” submissions (pdf)
Advanced classes may want to submit their article to Wikipedia’s “Did You Know” (DYK) process. Articles selected for DYK must be well-sourced and follow several other guidelines, all outlined in the handout. Successful submissions enjoy front-page status on Wikipedia.

Download the full series (pdf)
Download the full suite of our handouts for your course in a single .pdf file.

Let us know how you make use of these handouts, and tell us what changes or future topics you’d like us to cover.

by Eryk Salvaggio at October 29, 2014 08:34 PM

Wikimedia UK

What does Fraser Hobday tell us about notability on Wikipedia?

The photo shows a football goalkeeper catching a ball during a game

Fraser Hobday in action

There has been an interesting story circulating on the internet this week about a young Scottish amateur footballer, Fraser Hobday, who had a longer Wikipedia article than Brazilian World Cup star Neymar. The article has since been nominated for deletion by the Wikipedia community and this case raises some interesting questions.

How do you decide what goes into an encyclopedia? It’s a tricky question and one Wikipedia and its millions of editors have debated since the site was created in 2001. What they settled on was the concept that to be included, a topic had to be ‘notable’. In short, a subject needs to “have gained sufficiently significant attention by the world at large and over a period of time”.

In many cases ‘notability’ is clear cut. Leaders of countries should obviously be included in an encyclopedia and will have innumerable people writing about them. The chances are your next door neighbour doesn’t have this kind of coverage. What happens when opinions differ on a subject’s ‘notability’? A discussion is opened, and Wikipedia’s writers voice their opinions.

We hope that by teaching people how to edit we can lessen the cases in which new editors find their articles deleted. Sometimes articles which should be included are deleted because an inexperienced editor is not fully aware of how ‘notability’ is measured. What Wikipedia looks for is independent third-party sources. Newspaper articles and books are great examples.

By and large, the people who fall foul of the ‘notability’ guideline are newer, less experienced editors. They may spend a great deal of time and effort crafting their article only to see it deleted. No matter how valid the reasons, and how understanding the people discussing the article are, feelings can get hurt. This is especially true when the subject is a person, not least because people sometimes end up writing about themselves. If you write about yourself or someone you know – though Wikipedia actively discourages this – it can feel insulting to be told that you are not notable. It is important to keep in mind that the discussions are not about the value or worth of a person, or whether they ‘deserve’ an article, but about whether the subject belongs in an encyclopedia.

A lot of people learn what goes into Wikipedia through trial and error. Wikimedia UK is a UK registered charity, and one of its branches of activity is training people how to edit. In part this involves the how-to aspect: which buttons to press to make changes. That's the easy part. The more nuanced aspect is helping people understand what goes into an article, and which articles go into Wikipedia!

Wikipedia is the encyclopedia that anyone can edit, but it helps to have someone friendly and knowledgeable on hand. If you’re interested in editing but haven’t taken the plunge yet, why not take a look at the charity’s event page and see what’s going on in your area?

And what of Fraser Hobday? There is a specific notability guideline for footballers – to be considered notable they must have played or managed in a strictly professional league, or played or managed a senior international. We hope that one day Fraser’s career will reach that point and his article can be reinstated. We wish him the very best of luck.

by Stevie Benton at October 29, 2014 04:19 PM

Magnus Manske

The way is shut

So I saw a mail about the new, revamped Internet Archive. Fantastic! All kinds of free, public domain (for the most part) files to play with! So I thought to myself: Why not celebrate that new archive.org by using a file to improve Wikidata? After all, I just have to upload it to Commons!

Easy, right? Well, I did write a tool to directly upload a file from a URL to Commons, but the IA only offers mp3, so I don't know how that would work. Let's do it the old-fashioned way, as every newcomer would: download it to disk, and upload it to Commons. Except Commons barfs at mp3 uploads. Commons is the domain of free formats, after all. And we could not possibly set non-free formats free by converting them automatically, oh no! I am sure there is a good reason why the WMF can't turn non-free mp3 into free formats during upload; that reason just escapes me at the moment, as it surely will escape everyone else who tries this. Maybe they would have to (gasp!) license an mp3 decoder? Not sure if that is actually required, but it would surely irk the free-only purity of the organization. Never mind that the Foundation relies heavily on non-free software and services like Google internally; if they can't get things done with free software and open-source services alone, obviously non-free ones are made available. Just not for the community.

The mp3 refusal surely means that there are well-documented ways to deal with this issue, right? The Upload Wizard itself is not very helpful, though; the dialog box that pops up says:

This wiki does not accept filenames that end in the extension ".mp3".

That’s it. No reason why, no suggestion what to do about it, no links, nothing. Just “bugger off”, in so many words. Never mind; after all, there is a prominent, highlighted link in the Wizard to Upload help. Which, one would assume, offers help with uploading files. I search the page for “mp3” – no result. Ah well, this seems to be a list of questions rather than an actual help page, but there is a “search archive” function; surely, this problem must have been discussed before! Nope. Neither does the FAQ cover the topic of mp3. But lo and behold, searching for “audio” gets me here, which tells me (finally!) that Commons accepts OGG and FLAC; OPUS is not mentioned, probably because there are “issues” with uploading OPUS to Commons (no, really?!?). There are some links to software and online converters, but I had already found some of those on my own by now.

I tried the Miro converter, but it “only” creates OGG, not FLAC, which I wanted to use in order to avoid re-encoding losses. Then I tried online-convert, which returned a 10MB FLAC file for my 1.6MB mp3. So I upload the FLAC. And by that, I mean, I try. The Wizard takes the file and starts “encoding”. And never finishes. Or at least, it’s been at it for >10 minutes now, showing no sign that it’s alive.

This is my experience; I could probably get it to work, if I cared enough. I shudder to think how a newbie would fare with this task. Where audio (and, most likely, video) is concerned, Commons is, in effect, a community-driven media site that does not accept media files. It has been for years, but we are approaching 2015; it is time we did something about that. Merely preaching more free-format ideology is not a solution.

by Magnus at October 29, 2014 12:28 PM

Gerard Meijssen

#Wikimedia - Men at work; preparing a #presentation IV - #WCN2014

The Dutch community has one question to answer: what to do with the information available in Dutch? How will we make it available? Currently there are 3,054,955 items [1] with Dutch labels and 1,890,905 items [1] that link to the Dutch Wikipedia. It follows that for every 100 items with a Dutch article, roughly another 62 known items have no article in Dutch.

This is a substantial amount of information that could be presented in Dutch. Similar numbers can be given for any language; for English the figure is 39% and for German 121%.
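These percentages can be sanity-checked with a few lines of Python. Note that the interpretation is an assumption on my part: the figure is read as the number of labelled items lacking an article, expressed relative to the number of items that have one (which is what makes a value such as 121% possible).

```python
labelled = 3_054_955    # items with a Dutch label, as quoted above
sitelinked = 1_890_905  # items linked to the Dutch Wikipedia
missing = labelled - sitelinked

# Labelled items without a Dutch article, relative to those with one:
print(round(100 * missing / sitelinked))  # 62
```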

Arguably, these items fulfill notability requirements somewhere. Arguably the Swedes have demonstrated that having more information available revitalised their community. Arguably, allowing for search results from Wikidata is an easy first step towards opening up all our available knowledge.
Thanks,
     GerardM

[1] these links take a few minutes to load; they provide real time information

by Gerard Meijssen (noreply@blogger.com) at October 29, 2014 10:19 AM

Luis Villa

Understanding Wikimedia, or, the Heavy Metal Umlaut, one decade on

It has been nearly a full decade since Jon Udell’s classic screencast about Wikipedia’s article on the Heavy Metal Umlaut (current text; Jan. 2005). In this post, written for Paul Jones’ “living and working online” class, I’d like to use the last decade’s changes to the article to illustrate some points about the modern Wikipedia.1

Measuring change

At the end of 2004, the article had been edited 294 times. As we approach the end of 2014, it has now been edited 1,908 times by 1,174 editors.2

This graph shows the number of edits by year – the blue bar is the overall number of edits in each year; the dotted line is the overall length of the article (which has remained roughly constant since a large pruning of band examples in 2007).

Edits-by-year

The dropoff in edits is not unusual — it reflects both a mature article (there isn’t that much more you can write about metal umlauts!) and an overall slowing in edits in English Wikipedia (from a peak of about 300,000 edits/day in 2007 to about 150,000 edits/day now).3

The overall edit count — 2000 edits, 1000 editors — can be hard to get your head around, especially if you write for a living. Implications include:

  • Style is hard. Getting this many authors on the same page, stylistically, is extremely difficult, and it shows in inconsistencies small and large. If not for the deeply acculturated Encyclopedic Style we all have in our heads, I suspect it would be borderline impossible.
  • Most people are good, most of the time. Something like 3% of edits are “reverted”; i.e., about 97% of edits are positive steps forward in some way, shape, or form, even if imperfect. This is, I think, perhaps the single most amazing fact to come out of the Wikimedia experiment. (We reflect and protect this behavior in one of our guidelines, where we recommend that all editors Assume Good Faith.)

The name change, tools, and norms

In December 2008, the article lost the “heavy” from its name and became, simply, “metal umlaut” (explanation, aka “edit summary”, highlighted in yellow):

Name change

A few takeaways:

  • Talk pages: The screencast explained one key tool for understanding a Wikipedia article – the page history. This edit summary makes reference to another key tool – the talk page. Every Wikipedia article has a talk page, where people can discuss the article, propose changes, etc. In this case, this user discussed the change (in November) and then made the change in December. If you’re reporting on an article for some reason, make sure to dig into the talk page to fully understand what is going on.
  • Sources: The user justifies the name change by reference to sources. You’ll find little reference to them in 2005, but by 2008, finding an old source using a different term is now sufficient rationale to rename the entire page. Relatedly…
  • Footnotes: In 2008, there was talk of sources, but still no footnotes. (Compare the story about Motley Crue in Germany in 2005 and now.) The emphasis on footnotes (and the ubiquitous “citation needed”) was still a growing thing. In fact, when Jon did his screencast in January 2005, the standardized/much-parodied way of saying “citation needed” did not yet exist, and would not until June of that year! (It is now used in a quarter of a million English Wikipedia pages.) Of course, the requirement to add footnotes (and our baroque way of doing so) may also explain some of the decline in editing in the graphs above.

Images, risk aversion, and boldness

Another highly visible change is to the Motörhead art, which was removed in November 2011 and replaced with a Mötley Crüe image in September 2013. The addition and removal present quite a contrast. The removal is explained like this:

remove File:Motorhead.jpg; no fair use rationale provided on the image description page as described at WP:NFCC content criteria 10c

This is clear as mud, combining legal issues (“no fair use rationale”) with Wikipedian jargon (“WP:NFCC content criteria 10c”). To translate it: the editor felt that the “non-free content” rules (abbreviated WP:NFCC) prohibited copyright content unless there was a strong explanation of why the content might be permitted under fair use.

This is both great, and sad: as a lawyer, I’m very happy that the community is pre-emptively trying to Do The Right Thing and take down content that could cause problems in the future. At the same time, it is sad that the editors involved did not try to provide the missing fair use rationale themselves. Worse, a rationale was added to the image shortly thereafter, but the image was never added back to the article.

So where did the new image come from? Simply:

boldly adding image to lead

“boldly” here links to another core guideline: “be bold”. Because we can always undo mistakes, as the original screencast showed about spam, it is best, on balance, to move forward quickly. This is in stark contrast to traditional publishing, which has to live with printed mistakes for a long time and so places heavy emphasis on Getting It Right The First Time.

In brief

There are a few other changes worth pointing out, even in a necessarily brief summary like this one.

  • Wikipedia as a reference: At one point, in discussing whether or not to use the phrase “heavy metal umlaut” instead of “metal umlaut”, an editor makes the point that Google has many search results for “heavy metal umlaut”, and another editor points out that all of those search results refer to Wikipedia. In other words, unlike in 2005, Wikipedia is now so popular, and so widely referenced, that editors must be careful not to (indirectly) be citing Wikipedia itself as the source of a fact. This is a good problem to have—but a challenge for careful authors nevertheless.
  • Bots: Careful readers of the revision history will note edits by “ClueBot NG”. Vandalism of the sort noted by Jon Udell has not gone away, but it now is often removed even faster with the aid of software tools developed by volunteers. This is part of a general trend towards software-assisted editing of the encyclopedia.
  • Translations: The left hand side of the article shows that it is in something like 14 languages, including a few that use umlauts unironically. This is not useful for this article, but for more important topics, it is always interesting to compare the perspective of authors in different languages.

Other thoughts?

I look forward to discussing all of these with the class, and to any suggestions from more experienced Wikipedians for other lessons from this article that could be showcased, either in the class or (if I ever get to it) in a one-decade anniversary screencast. :)

  1. I still haven’t found a decent screencasting tool that I like, so I won’t do proper homage to the original—sorry Jon!
  2. Numbers courtesy X’s edit counter.
  3. It is important, when looking at Wikipedia statistics, to distinguish between stats about Wikipedia in English, and Wikipedia globally — numbers and trends will differ vastly between the two.

by Luis Villa at October 29, 2014 06:02 AM

October 28, 2014

Gerard Meijssen

#Wikidata - #algorithm for updating labels

Amir is the #pywikibot guru; he runs dexbot and it is the only bot with more than 20.000.000 edits. Amir regularly tinkers with the routines that he uses. Sometimes he gets better performance, sometimes he gets a better result.

The algorithm for adding labels has changed several times, and the result of the latest change can be seen in the statistics below. You may notice several spikes; the last one is captured in the latest dump and resulted in many more labels for items that already had one label.
It is people like Amir who make a real difference. One bot request of his for Commons will help the Commoners see that Wikidata knows about the people mentioned in the Creator templates. Jobs like this are essential if the wikidatification of media files is to succeed.
Thanks,
      GerardM

by Gerard Meijssen (noreply@blogger.com) at October 28, 2014 02:52 PM

Jamie Thingelstad

Updated Dynamic Questy Captchas

A little over a year ago I shared a method of generating dynamic Questy Captchas for the MediaWiki ConfirmEdit extension. This method has been awesome for stopping registration spam on the thingelstad.com wiki farm, and many other wiki admins have used it with success. Unfortunately it was more useful in its novelty than in its difficulty to solve, and eventually some spammers wrote the logic to solve it and the registration spam started flooding in.

I decided to put a new method in place that is based on the same question. The previous question generated 8 characters and asked the user to provide one of them based on a random index. I’ve now changed this to generating a number between 100,000,000 and 999,999,999, turning that into spelled-out words, and then asking the user to identify one digit. It looks like this:

What is the sixth digit of the number nine hundred fifty-one million eight hundred ninety-eight thousand four hundred twenty-seven?

That turns out to be a somewhat hard question for a human, too. I find I typically have to type out the number as I read it. The benefit of this approach is that the solution isn’t in the text of the page. And while I’m sure there are great libraries for turning written numbers back into digits, solving it automatically is not immediately obvious.

Implementation

I had no interest in implementing my own code to convert a number into words, and happily there is a PHP package called Numbers_Words that does just that. The URL and install information are in the comments right before the require line. Everything else is pretty simple stuff.

To implement this I used the same technique I did previously. Here is what this looks like in LocalSettings.php.

# Let's stop MediaWiki registration spam
require_once( "$IP/extensions/ConfirmEdit/ConfirmEdit.php" );
require_once("$IP/extensions/ConfirmEdit/QuestyCaptcha.php");
$wgCaptchaClass = 'QuestyCaptcha';
 
# Set number question for questy
# sudo pear install channel://pear.php.net/Numbers_Words-0.16.2
# http://pear.php.net/package-info.php?package=Numbers_Words 
require_once("Numbers/Words.php");
 
$myChallengeNumber = rand(0, 899999999) + 100000000;
$myChallengeString = (string)$myChallengeNumber;
$myChallengeStringLong = Numbers_Words::toWords($myChallengeNumber);
$myChallengeIndex = rand(0, 8) + 1;
 
$myChallengePositions = array (
    'first',
    'second',
    'third',
    'fourth',
    'fifth',
    'sixth',
    'seventh',
    'eighth',
    'ninth'
);
$myChallengePositionName = $myChallengePositions[$myChallengeIndex - 1];
 
$wgCaptchaQuestions[] = array (
    'question' => "What is the $myChallengePositionName digit of the number <strong>$myChallengeStringLong</strong>?",
    'answer' => $myChallengeString[$myChallengeIndex - 1]
);

Initial results of this are very solid.

The Numbers_Words package also supports localization into over a dozen languages. I didn’t explore this but clearly this should work for multiple languages pretty easily as well.
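For readers who want to experiment outside PHP, the same selection logic can be sketched in Python. This is an illustration, not the original implementation: the sketch skips the spelled-out rendering that Numbers_Words provides, which is the step that keeps the answer digit out of the page text.

```python
import random

ORDINALS = ['first', 'second', 'third', 'fourth', 'fifth',
            'sixth', 'seventh', 'eighth', 'ninth']

def nth_digit(number: int, position: int) -> str:
    """Answer to 'what is the Nth digit of this number?' (1-based)."""
    return str(number)[position - 1]

def make_challenge(rng: random.Random) -> tuple:
    # Mirror the PHP: a nine-digit number and a digit position from 1 to 9.
    n = rng.randint(100_000_000, 999_999_999)
    idx = rng.randint(1, 9)
    question = "What is the %s digit of the number %d?" % (ORDINALS[idx - 1], n)
    return question, nth_digit(n, idx)

# The worked example from the post: the sixth digit of 951,898,427 is 8.
print(nth_digit(951_898_427, 6))  # 8
```

A real deployment would, like the PHP above, render the number in words before embedding it in the question.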

Related

by Jamie Thingelstad at October 28, 2014 12:17 PM

Gerard Meijssen

#Wikimedia - Men at work; preparing a #presentation III - #WCN2014

The bane of every live demonstration is software that just does not work. My intention is to show #Wikidata in action and demonstrate the Reasonator and AutoList2. If the experience of the last few weeks is anything to go by, I have a 50% chance of a reasonable result on the day.

There are many factors that can act up. Time-outs at Wikidata are no exception at the moment, and when Wikidata does not play ball, everything downstream from it suffers as a consequence. It means that I may not have a recent list of recent deaths because ToolScript does not function.

AutoList2 relies on WIDaR, which in turn relies on being able to contact Wikidata reliably. Without this, AutoList2 does not run.

The subject of my presentation is firmly solution-oriented. I can always fall back on screenshots, but that feels like cheating.
Thanks,
      GerardM

by Gerard Meijssen (noreply@blogger.com) at October 28, 2014 09:29 AM

October 27, 2014

Wikimedia Foundation

Wikicamp 2014 in Serbia and Hungary brings chapters together!

Participants of Wikicamp Palić – Szeged 2014
“Wiki Camp Palic-Szeged 2014 001” by Mickey Mystique, under CC-BY-SA-4.0

Wiki Camp Palic-Szeged (Вики Камп Палић-Сегедин) was held for the first time ever this summer. It was organized by Wikimedia Serbia and Wikimedia Hungary in order to promote networking and develop stronger relationships among the chapters. On the last weekend of August, 17 volunteers from Serbia and 3 volunteers from Hungary gathered at Palic and Szeged. Three days, two cities, and 20 highly motivated participants resulted in a large number of photographs and 32 articles on the Serbian language Wikipedia about topics related to Serbia and Hungary. All articles and attendees are listed here (in English and in Serbian). Photos are available on Commons.

Workshop of editing Wikipedia
“Palić, Radionica obuke o Vikipediji, 01” by Mickey Mystique, under CC-BY-SA-4.0

Among the participants of Wikicamp there were some who had not previously worked on Wikipedia or weren’t familiar with the organization’s projects, which made the camp an opportunity to train new people and potential Wikipedians and Wikimedians. “Diversity is what we always work on and support. We are pleasantly surprised by the number of interested people and we wanted to give everyone a chance to participate, so the first part of the camp was set for training of editing Wikipedia for newcomers, and then we proceeded with the planned edit-a-thon,” said Ivana Madžarević, project and community manager from the office of Wikimedia Serbia.

Edit-a-thon in Szeged
“Segedin, Uređivački maraton, 03” by Mickey Mystique, under CC-BY-SA-4.0

At the first edit-a-thon there were 20 Wikipedians. Eight new articles were written and five were improved, all covering topics related to Serbia. Participants from Hungary joined us a little later. Apart from the workshop and edit-a-thon, we had two photo tours in Subotica and Palic, where we visited the Serbian Orthodox Church, the Synagogue of Subotica and the City House. After the second tour at Palic Lake, we successfully ended the first day of the camp with all participants full of positive impressions.

The next day, we arrived in Szeged around 12pm. We first visited the city and its attractions, as well as the museum, where we took a large number of photos. After lunch, we went back to the accommodation, where we held a workshop about editing Wikipedia. Once again we were surprised by the motivation of the attendees – some of them stayed to edit until 11pm. 16 new articles were created and three were improved. This time, the topics were related to Hungary.

The conference part of the camp was planned for the third day. Filip Maljkovic, the president of Wikimedia Serbia, presented the chapter and its projects, and Andrea Toth, Wikimedia Hungary Office Manager, talked briefly about Wikimedia Hungary. We then divided the participants into groups and gave them short case studies to consider, which involved improving cooperation between WMRS and WMHU. Representatives of the groups presented their creative solutions and then we officially finished the first Wikicamp 2014. We presented certificates to participants, and three of them joined Wikimedia Serbia. Maljkovic, Toth, and Ivana Madžarević, Project and Community Manager, each gave statements to reporter Milenko Radić, a Serbian journalist from a Hungarian radio station.

Madžarević said, “Wikicamp 2014 was successful and the first project organized by two chapters in our region. We’re definitely planning on doing this again, bearing in mind the importance of connection of chapters and strengthening the community in general. We learned that all we need is a little effort to strengthen the community by tapping the interests of existing members and attracting new volunteers. Wikicamp proved to be a successful way to enhance cooperation between the two chapters to enrich content on both Wikipedias and to show to new volunteers the idea and the importance of Wikimedia movement and the fun of participating in it.”

Ivana Madžarević, project and community manager, Wikimedia Serbia

by wikimediablog at October 27, 2014 10:09 PM

Gerard Meijssen

#Wikidata - #dead in #2014

A milestone is often a reason for celebration. Wikidata now knows about more than 10.000 people who died in 2014. That is more than it knew for 2013 at the comparable point, although in total we "know" about 4292 more people who died in 2013. For 2014, the deaths of some 329 humans are still waiting to be registered, and obviously there are two more months to go.

People wonder what the attraction is in killing people off. Registering a death is not nice; it is only worthwhile because of the potential it has:
  • Reasonator displays the latest information
  • Wikipedias can compare what they know and what Wikidata knows
  • External sources can compare what they know and what we know
  • It can trigger attention for the people who died
It takes time for such effects to be realised.
Thanks,
     GerardM

by Gerard Meijssen (noreply@blogger.com) at October 27, 2014 08:02 PM

Wikimedia Foundation

On Philadelphia’s birthday, a look at how it came alive on Wikipedia

David Thomsen has written dozens of articles about Philadelphia, Pennsylvania.

This user profile is part of a series about history and geography on Wikipedia. Today, October 27th, 2014, is the 332nd anniversary of the founding of Philadelphia, Pennsylvania.

Fulfilling your civic duty can mean a lot of different things. David Thomsen, a retired programmer and member of the Historical Society of Pennsylvania, sees it as his civic duty to get the facts straight on almost everything Philadelphia-related on Wikipedia.

“I want people to know who their council people are [and] who they can come to,” says Thomsen. “I see it as a role to encourage democracy, political participation [and] I think anybody looking for information should have a place to go and Wikipedia is one of the places that they can go.”

With over 160,000 edits to his name and counting, the 75-year-old Philadelphian has been an active Wikipedia editor since early 2009. Thomsen says his first Wikipedia edit was inspired by his connection to Lafayette College (he earned a degree there), and since then has continued to specialize in editing and writing pages mostly related to either Philadelphia or Pennsylvania.

Thomsen’s first edit was when he decided to improve the page on Francis March, the first professor of English in the U.S. Then he went on to create a disambiguation page for the surname March.

“I really got started and did the first thing; an article about March and then I got into other things,” says Thomsen. “Particularly things about Philadelphia when I found [out] there was nothing [on] various things in Philadelphia or in Pennsylvania.”

One of Thomsen’s favorite edits was correcting an article that claimed that Independence Hall was a property of the United States federal government when actually it was owned by the City of Philadelphia. Thomsen also found himself correcting a rumor that fast-food chain Taco Bell, which has a bell logo, was planning on purchasing the Liberty Bell, an iconic symbol of American independence located in Philadelphia.

“It would have been unlikely to start out with, but also impossible because they thought they were buying it from the federal government also and not so, Philadelphia,” says Thomsen.

The avid Wikipedian worked as a programmer at Sunoco for 27 years and found that his experience as a web developer helped him ease into the nooks and crannies of Wikipedia.

Thomsen has written dozens of articles on Wikipedia (he lists them on his user page), and fondly remembers writing about the “First City Troop”, a unit of the Pennsylvania Army National Guard and the oldest military unit in U.S. history. He wrote about how the first captain of the troop was actually Abraham Markoe, who left his position after the King of Denmark issued an edict that Danes were not to fight the British.

“Even Philadelphians may know about the First City Troop [and] know about the fighting captain, [but] not the first captain,” says Thomsen.

Thomsen believes that contributing to Wikipedia is a democratic process that does not require technical know-how.

“I think that the software [has] been organized so people are free to add, but in a disciplined way, an organized way,” says Thomsen.

Thomsen recognizes that most Wikipedia users have not edited a single article, and editing isn’t the only way to give to the Wikimedia community.

“You don’t have to become an editor, but give some money so that the whole enterprise goes along and give some respect to all those tens of thousands of editors who are busy giving up their own time, money, effort to make it better,” says Thomsen.

Profile by Yoona Ha, Communications Intern

Interview by Jacob Wilson

by wikimediablog at October 27, 2014 07:00 PM

User:Sj

Soft, distributed review of public spaces: Making Twitter safe

Successful communities have learned a few things about how to maintain healthy public spaces. We could use a handbook for community designers gathering effective practices. It is a mark of the youth of interpublic spaces that spaces such as Twitter and Instagram [not to mention niche spaces like Wikipedia, and platforms like WordPress] rarely have architects dedicated to designing and refining this aspect of their structure, toolchains, and workflows.

Some say that ‘overly’ public spaces enable widespread abuse and harassment. But the “publicness” of large digital spaces can help make them more welcoming in some ways than physical ones – where it is harder to remove graffiti or eggs from homes or buildings – and niche ones – where clique formation and systemic bias can dominate. For instance, here are a few ‘soft’ (reversible, auditable, post-hoc) tools that let a mixed ecosystem review and maintain its own areas in a broad public space:

Allow participants to change the visibility of comments:  Let each control what they see, and promote or flag it for others.

  • Allow blacklists and whitelists, in a way that lets people block out harassers or keywords entirely if they wish. Make it easy to see what has been hidden.
  • Rating (both average and variance) and tags for abuse or controversy can allow for locally flexible display.  Some simple models make this hard to game.
  • Allow things to be incrementally hidden from view.  Group feedback is more useful when the result is a spectrum.

Increase the efficiency ratio of moderation and distribute it: automate review, filter and slow down abuse.

  • Tag contributors by their level of community investment. Many who spam or harass try to cloak themselves in new or fake identities.
  • Maintain automated tools to catch and limit abusive input. There’s a spectrum of response: from letting only the poster and moderators see the input (cocooning), to tagging and not showing by default (thresholding), to simply tagging as suspect (flagging).
  • Make these and other tags available to the community to use in their own preferences and review tools.
  • For dedicated abuse: hook into penalties that make it more costly for those committed to spoofing the system.
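The cocoon/threshold/flag spectrum described above can be sketched as a tiny scoring function. This is purely illustrative: the function name, the thresholds, and the idea of discounting the abuse score by a contributor's trust level are assumptions of mine, not any real platform's moderation API.

```python
# Hypothetical sketch of the cocoon/threshold/flag response spectrum.
# All names and threshold values here are illustrative assumptions.

def review_state(abuse_score, trust):
    """Map an automated abuse score (0..1) and the author's community
    trust level (0..1) to one of four response tiers."""
    # Trusted, invested contributors get more benefit of the doubt.
    score = abuse_score * (1.0 - min(trust, 0.9))
    if score >= 0.8:
        return "cocoon"      # visible only to the poster and moderators
    if score >= 0.5:
        return "threshold"   # tagged, hidden by default
    if score >= 0.2:
        return "flag"        # shown, but tagged as suspect
    return "show"            # displayed normally
```

In practice such scores would feed the per-reader preference and review tools described above, rather than force a single global decision.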

You can’t make everyone safe all of the time, but you can dial down behavior that is socially unwelcome (by any significant subgroup) by a couple of orders of magnitude.  Of course these ideas are simple and only go so far.  For instance, in a society at civil war, where each half is literally threatened by the sober political and practical discussions of the other half, public speech may simply not be safe.

by metasj at October 27, 2014 06:56 PM

Wikimedia Tech Blog

Structured Commons project launches in Berlin

How can we make multimedia data easier to use on Wikimedia Commons, Wikipedia and sister sites?

Today, information about media files on Wikimedia sites is stored in unstructured formats that cause a range of issues: for example, file information is hard to search, some of it is only available in English, and it is difficult to edit or re-use files to comply with their license terms.

To address these issues, members of the Wikidata and Multimedia teams met with community volunteers for a week-long bootcamp in Berlin from October 5 to 10, 2014.


The Multimedia and Wikidata teams met with community volunteers in Berlin to discuss structured data on Commons.
(Photo: Structured Data Bootcamp Group Photo – Closeup by Christopher Schwarzkopf, under CC-by-sa 2.0)

The focus of this event was to investigate how to structure data on Wikimedia Commons, reusing the same technology as the one developed for Wikidata. Participants collaborated in small workgroups to explore a range of problems and solutions, in parallel sessions focused on community, design, engineering, licensing and product management challenges.

Each workgroup produced concrete examples of how these ideas could be implemented, including:

  • first data models for structuring file information, to make it machine-readable and license-compliant
  • first user interface designs for viewing and editing structured data seamlessly
  • a working prototype of a high-level API, for reading and updating metadata about media files
  • improvements to a prototype dashboard identifying files missing machine-readable metadata.
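As a rough illustration of what machine-readable, license-aware file information might look like, here is a sketch loosely inspired by Wikidata-style statements. Every field name below is a guess for illustration only, not the project's actual data model, which was still being designed at the time.

```python
# Illustrative sketch of structured file information for a media file.
# Property names are hypothetical; the real schema was under discussion.

file_info = {
    "title": "File:Structured Data Bootcamp Group Photo.jpg",
    "statements": {
        "creator": {"value": "Christopher Schwarzkopf"},
        "license": {"value": "CC BY-SA 2.0"},
        "captions": {  # multilingual captions, not English-only
            "en": "Group photo at the Structured Data bootcamp",
            "de": "Gruppenfoto beim Structured-Data-Bootcamp",
        },
    },
}

def attribution_line(info):
    """Build a license-compliant attribution string from structured data —
    the kind of reuse that unstructured wikitext makes hard today."""
    s = info["statements"]
    return f'{info["title"]} by {s["creator"]["value"]}, under {s["license"]["value"]}'
```

With data in a form like this, search, translation, and license-compliant reuse all become simple queries rather than wikitext parsing.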

These preliminary ideas are now being documented on Commons so they can be discussed and improved with community members. For a project overview, check out this development page and these project slides.

The bootcamp was very productive, but many questions remain unanswered. Next steps include community discussions, design, prototyping, testing and a series of experiments — before starting actual development and data migration next year.

Everyone is invited to contribute to this important project. Your ideas and comments are most welcome, and developers would love your active participation in defining and guiding this project.

We look forward to working with our community to modernize our multimedia infrastructure and better support the needs of our users.

For the Structured Data project team at the Wikimedia Foundation:

Fabrice Florin – Product Manager, Multimedia (WMF)
Keegan Peterzell – Community Liaison (Product) (WMF)
Gilles Dubuc – Tech Lead, Multimedia (WMF)

by Guillaume Paumier at October 27, 2014 05:00 PM

Gerard Meijssen

#Wikimedia - Men at work; preparing a #presentation II - #WCN2014

Mr van Asselt was a prominent professor at the Utrecht University. He is one of many professors known to Wikipedia. Given that I regularly harvest data from categories, it makes sense for me to use the English Wikipedia as it has an article about Mr van Asselt.

The equivalent category on the Dutch Wikipedia knows about more faculty members and then there are categories in several other languages as well. All of them may know about even more faculty members.

As we aim to share the "sum of all available knowledge" with our readers, Mr van Asselt is a timely reminder to the audience of the Dutch Wikimedia conference that no Wikipedia knows it all.
Thanks,
     GerardM

by Gerard Meijssen (noreply@blogger.com) at October 27, 2014 01:33 PM

#Wikimedia - Men at work; preparing a #presentation I - #WCN2014

This Saturday I will be presenting on #Wikidata at the annual conference of the Dutch Wikimedia chapter. As I have a day job too, I have started preparing. I want my presentation to be factual, challenging and inspiring.

The facts are simple: Wikidata is almost two years old. It started by incorporating all interwiki links. The development team is really small, it does an awesome job, and typically Wikidata is available, responsive and up to the job. The ambitions are huge; the challenge is to add to the existing workload while keeping the ship afloat.

If there is to be a challenge in my presentation, it will be that our aim is "to share in the sum of all knowledge". We should share all the knowledge available to us with our readers. At this time only a few Wikipedias go the extra mile and inform readers that we have information available in one of our other projects. This is done by adding results from searching Wikidata and showing as much text as is available in the local language.

One challenge is to do this for the Dutch Wikipedia as well.
Thanks,
     GerardM

by Gerard Meijssen (noreply@blogger.com) at October 27, 2014 01:30 PM

Tech News

Tech News issue #44, 2014 (October 27, 2014)

2014, week 44 (Monday 27 October 2014)

October 27, 2014 12:00 AM

October 26, 2014

Andy Mabbett (User:Pigsonthewing)

It was twenty years ago today…

According to Google, it was twenty years ago today, that I made my first comment in an on-line forum (that doesn’t link to my comment which, it seems, has escaped the archives, but to one which quotes it).

Champagne uncorking photographed with a high speed air-gap flash

It was a post to the then-active alt.music.pink-floyd Usenet newsgroup. It includes the obligatory typo (PInk) and an embarrassingly-mangled signature (I shared a dial-up account with my then boss, Graham). The content was relatively trivial.

But even so, I had no idea where it would lead me. It was the first step on a life-changing journey; being online effectively became my career, first as a website manager, then as a freelance consultant, and as a Wikipedian (and Wikimedian) in Residence. It greatly enhanced my life experiences, created opportunities for travel, and is the foundation of many long-lasting friendships, with people from all around the world.

So I’m using this anniversary as an excuse to ask you all to call for an open and fair internet. Join the Open Rights Group or a similar organisation. Let your MP or other representative know what you support and what you oppose. Don’t let vested interests spoil what we have made.

And please, forgive my twenty years of awful typing.

by Andy Mabbett at October 26, 2014 04:05 PM

October 25, 2014

Priyanka Nag

My last two cents to the Mozilla India Community

This should not be coming as a big surprise to anyone who has seen my last post on Facebook regarding my decision to leave all community activities in Mozilla India. Well, this is a followup on that.

Abiding by my previous decision, here I am, taking voluntary retirement from all community activities related to the Mozilla India community. I had taken up some responsibility for this year's MozFest and certainly didn't want to spoil that job.

MozFest has always been awesome... it's awesome even now, except for the cold war I am unfortunately having to face with my own community people here. I had to set up a Hive India booth at the Maker Party here at MozFest, and I was determined to do it to the best of my capabilities. I had been almost begging my own community members who are here at MozFest to help me plan and execute the session. But, not to my surprise, I received no help at all from my entire team here; I was simply ignored.

Of course, I am not forgetting the only rockstar who rescued me and my session today, Umesh Agarwal. I also shouldn't forget to thank Sayak for remotely putting in all possible help and support to keep me from breaking down into tears amid all the struggles here.

Well, fortunately for all of us, the session didn't go that bad. Umesh and I did manage to grab some attention and engage with the kids during the Maker Party at MozFest'14.


She was making a Halloween card for her mother at our booth

Yes....we helped her decorate her hat

Well...this was probably the best make at our booth

Creativity at its max

Different people... different ideas... but we all had a lot of fun

True creative artists at the Hive India booth at Maker Party

That's Ioana with her vampire teeth


A big thanks to the entire Hive team for giving me the opportunity to represent Hive India here at MozFest this year.

Now that the last responsibility is completed, I would like to peacefully walk out of the community.

p.s - I am definitely not trying to start any revolution here. Nor am I complaining against anyone. Also, I am not leaving Mozilla. I will continue my contribution as a volunteer in all other possible ways. I am just gonna keep myself safely away from all community activities in India.

by priyanka nag (noreply@blogger.com) at October 25, 2014 04:55 PM

Gerard Meijssen

#Wikipedia - The Manley-O.-Hudson medal

One of the recipients of the Manley-O.-Hudson medal died. The article prominently mentions that Mr Lowenfeld was a recipient, and it refers to the article about the award, where all the recipients are listed. Both articles exist only in German.

Wonderful news is that Magnus did it again; his Linked Items allowed me to associate many humans with this award.

When you consider international law to be important, all the recipients of this award are important. That is a great reason to have at least the basic information available in every language… including English.
Thanks,
      GerardM

by Gerard Meijssen (noreply@blogger.com) at October 25, 2014 06:56 AM

#Wikidata - #vaccines


For Wikidata, those items that are not known to be "something" are the worst. There are many of them; the last processable dump had some 3,758,186 items without any statement. Injecting them with a healthy dose of substance makes it easier to process them.

As people increasingly read about ebola, vaccines developed for ebola gain attention as well. In Wikidata they are now known as vaccines. I have no clue how to indicate that a vaccine is intended for a specific condition like whooping cough, measles or ebola.

PS I loved the cartoon produced by the "Anti Vaccine Society". Do note the golden cow in the picture. :)
Thanks,
       GerardM

by Gerard Meijssen (noreply@blogger.com) at October 25, 2014 06:53 AM

Wikimedia Foundation

Wikimedia Logos Have Been Freed!

A representation of the red, green, and blue of the Wikimedia logos. “ThreeCircles” by Samuel Johnson, under CC-BY-SA-3.0

We are proud to announce that starting today, the Wikimedia logos will be freely licensed on Wikimedia Commons under the Creative Commons Attribution-ShareAlike 3.0 license. After all, Wikimedia Commons’ mission is to disseminate free and public domain content to everyone. We are thrilled that the copyright status of Wikimedia logos will now be fully aligned with that goal.

As you may have noticed, many of the Wikimedia logos on Commons did not carry the CC BY-SA 3.0 license for historic reasons. Over the past year, we have done an extensive review of their copyright status and worked with many of the logo designers to get a complete history. That review is now complete, and we have begun to re-licence the Wikimedia logos on Commons so that they can be freely used, subject to the terms of the CC BY-SA 3.0 license and the Wikimedia trademark policy.

We would really appreciate your help in replacing the {{Copyright by Wikimedia}} templates for all Wikimedia logos on Commons! Each of the Wikimedia logos in every language version should instead carry the {{Wikimedia trademark}} and {{cc-by-sa-3.0}} templates. The only logos that will not be licensed under CC BY-SA 3.0 are the MediaWiki and the Community logos, which were originally released under free licenses and do not need to be changed.

Yana Welinder, Legal Counsel

Many thanks to former legal interns Joseph Jung, Matthew Collins, and Lukas Mezger for their work on the review of the copyright status of the Wikimedia logos. I would also like to thank Joseph and Natalie Kim for their help in preparing this post.

by nkimwmf at October 25, 2014 12:21 AM

October 24, 2014

Wikimedia UK

Using Wikipedia to open up science

The image is a series of drawings showing various parts of a newly discovered animal species

A description of a new species of Brazilian Paraportanus, uploaded by Open Access Media Importer

This post was written by Dr Martin Poulter, Wikimedia UK volunteer and Wikipedian

As part of Open Access Week, I’d like to explore some overlaps between Open Access and what we do in Wikimedia, and end with an announcement that I’m very excited about.

We who write Wikipedia do not expect readers to believe something just because Wikipedia says so. We cite our sources and hope that readers will follow the links and check for themselves. This is a kind of continuous quality control: if readers verify Wikipedia’s sources, then bias and misrepresentation will be winnowed out. However, we do not yet live in that ideal world. A huge amount of research is still hidden behind “paywalls” that charge startlingly high amounts per paper.

Here in the UK, a lot of progress is being made in opening up research, thanks to the policies of major funding bodies including Research Councils UK and the Higher Education Funding Council for England. This is a difficult cultural change for many researchers, but Wikipedia and its sister sites show that a totally open-access publishing system can work. These sites also provide platforms that give the greatest exposure and reuse for open access materials.

Open Access in the Broadest Sense

There is much more to open access than being able to read papers without paying. The OA agenda is about getting the full benefits of research, removing technical or legal barriers that restrict progress. You may sometimes hear about “Budapest” OA, referring to the 2002 declaration of the Budapest Open Access Initiative which said that open access would “accelerate research, enrich education, share the learning of the rich with the poor and the poor with the rich, make this literature as useful as it can be, and lay the foundation for uniting humanity in a common intellectual conversation and quest for knowledge.”

Open Access is ideally about unrestricted access to all the outputs of research, not just the finished research paper. Can the expert community get hold of the data and run their own analysis to check the conclusions? Can a lecturer use a paper’s figures to make educational materials? If not, it is arguably not open.

Openness is not just about whether you can access research outputs, but whether you can repurpose and reuse them. On Wikipedia, we want to use diagrams with text labels and translate those labels into other languages for our global audience. Some image formats make this easy while others make it difficult. Researchers will not just want to look at data tables but want them in a format that can be copied into their software for analysis.

We can also ask for open access to information about the review process: what faults did reviewers identify in the submitted paper, and what editorial changes were made as a result? We could also include open access to measures of impact: the metrics that help to show if a new finding is significant for its field or for public debate.

Metascience, the study of the scientific process, is all but impossible without open access. If we want to test whether different funders of research get different results, we need to mine large amounts of data about research studies. This requires not just the research outputs themselves but data about how, when, and by whom the studies were funded. To study biases in publication, you need to know not only what was published but also what trials have been conducted.

Wikipedia and the Open Agenda

Wikipedia and its sister projects embrace all aspects of “open” in the Budapest sense, not just that readers do not pay. The articles themselves can be copied, analysed, and reused by anyone, for any purpose. An article’s evolution, including any reviews it has gone through, is publicly examinable. Many kinds of data are available: about users’ contributions, about the number of edits, and about the readership of articles. These data give us ways to assess the reach and significance of experts’ contributions to Wikipedia.

For scientists, improving Wikipedia is not just a way to feed public curiosity about their work: it could improve science itself. A team at the Wellcome Trust Sanger Institute in Cambridge have for years been sharing their database of proteins on Wikipedia. Not only does this combine their data with other knowledge about the proteins, but it allows a new audience to improve the database.

Wikimedia sites offer new models for academic publishing. A few weeks ago saw the first peer-reviewed paper to be authored on Wikipedia: a clinical review about dengue fever. Among the new challenges for the journal was how to credit a paper with 1,373 contributors. Alongside this “Wiki-to-Journal” publication there is “Journal-to-Wiki”, exemplified by several articles published on Wikipedia by the journal PLoS Computational Biology.

A software “robot” called the Open Access Media Importer takes photos, diagrams, and video clips from suitable research papers and uploads them to Wikimedia Commons, with full attribution to the original authors and paper. From Commons they can be used to illustrate Wikipedia articles or materials on any other site.

Wikidata, the newest Wikimedia project, has many millions of facts and figures about everything from Ebola virus disease to the Hubble Space Telescope. At Wikimania this summer, Peter Murray-Rust, the University of Cambridge chemist who coined the term “Open Data”, said “Wikidata is the future of science data. [...] We [Wikimedians] are going to change the world.”

So there is a rapidly expanding overlap between science and Wikimedia. How will the scientific community – including researchers, educators, publishers, funders, and scholarly societies – keep up? A vital next step is to get people together in the same room: professionals and volunteers; bold innovators and curious newcomers.

This is why Wikimedia UK is working towards holding the first ever Wikipedia Science Conference. This will take place in September 2015 in London. It is a chance to explore how all aspects of openness – including open access, open data, open scholarship, and open source software – can transform the world’s understanding of, engagement with, and even practice of science. Details are still being worked out, but we have a long time to prepare and to make this a landmark event. We hope to see you there.

by Stevie Benton at October 24, 2014 11:41 AM

October 23, 2014

Wikimedia Tech Blog

Do you know what’s around you? Let Wikipedia tell you!

Screenshot of the Nearby feature in the Wikipedia iOS App.


The Wikimedia App team has just added the first native “Nearby” functionality to the new Android and iOS Wikipedia apps. Using this feature, you’ll be able to retrieve a list of Wikipedia articles near your current location and see their relative distance to you. You’ll even notice a handy compass arrow that points to the direction for each location and updates as you move.

Simply single tap an entry to read the article, or long-press an entry to open in map view.

With this feature, we’re bringing Wikipedia into the world around you and enabling you to explore and learn more about your surroundings. Perhaps you’ve always wondered about that monument that you pass during your commute home, been curious about an architecturally interesting building, or simply wanted a to-do list while traveling. Now, the new Wikipedia app can surface those for you, and maybe it’ll even inspire you to add your own.

Screenshot of the Nearby feature in the Wikipedia Android App.


Possible things to come

We have some exciting and ambitious ideas of where we could go next:

  • Filtering nearby items by category, so that you could read more about specific things you’re interested in near you, such as museums or historic buildings.
  • Searching for other articles that are near the article you’re currently reading.
  • Letting you drop a pin on a map so you can see articles tagged near that location.

What do you think?

Don’t hesitate to send us feedback about this and make sure to download our latest Android or iOS beta. We want to know what you’d like to see in future updates, and to hear your ideas for making the apps even more awesome!

And if you love to code, do take a look at our GeoData API and show us what you’ve built.
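To give a flavour of the GeoData API mentioned above, here is a minimal sketch of a `geosearch` query from Python. The `gscoord`, `gsradius` and `gslimit` parameters are real API parameters; the helper function names and the example coordinates are mine.

```python
# Minimal sketch of querying the MediaWiki GeoData "geosearch" API,
# the kind of backend that powers a Nearby feature.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://en.wikipedia.org/w/api.php"

def nearby_url(lat, lon, radius_m=1000, limit=10):
    """Build a geosearch query URL for articles near (lat, lon)."""
    params = {
        "action": "query",
        "list": "geosearch",
        "gscoord": f"{lat}|{lon}",
        "gsradius": radius_m,   # search radius in metres
        "gslimit": limit,
        "format": "json",
    }
    return API + "?" + urlencode(params)

def nearby_titles(lat, lon, **kw):
    """Fetch nearby article titles (requires network access)."""
    with urlopen(nearby_url(lat, lon, **kw)) as resp:
        data = json.load(resp)
    return [page["title"] for page in data["query"]["geosearch"]]
```

A caller would simply pass the device's current coordinates, e.g. `nearby_titles(52.52, 13.405)`, and render the returned titles as a list.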

Dmitry Brant, Software Engineer,
Monte Hurd, Software Engineer

by montehurd at October 23, 2014 09:47 PM

Tony Thomas

Manually authenticating a MediaWiki user e-mail id

While testing with e-mails and user accounts, you will probably hit a scenario where you have to create a fake account and make the wiki send e-mails to it – so that you can analyse the results. Something similar came up for me today, and thanks to Legoktm & Hoo, here we go: * […]

by tonythomas01 at October 23, 2014 05:09 PM

Wikimedia UK

Guest post: MozFest 2014 – Spotlight on “Community Building”

 

This guest blog is an interview with Bekka Kahn, Open Coalition Project Co-ordinator, and Beatrice Martini of Open Knowledge. They will be leading a track at MozFest about community building – a great fit for the Open Coalition. It was originally published on the Mozilla Webmaker blog here

What excites you most about your track?

In the early days of the web, Mozilla pioneered community building efforts together with other open source projects. Today, the best practices have changed and there are many organisations to learn from. Our track aims to convene these practitioners and join forces to create a future action roadmap for the Open Web movement.

Building and mobilising community action requires expertise and understanding of both tools and crowd. The relationships between stakeholders need to be planned with inclusivity and sustainability in mind.

Our track has the ambitious aim to tell the story about this powerful and groundbreaking system. We hope to create the space where both newcomers and experienced community members can meet, share knowledge, learn from each other, get inspired and leave the festival feeling empowered and equipped with a plan for their next action.

The track will feature participatory sessions (there’s no projector in sight!), an ongoing wall-space action and a handbook writing sprint. In addition to this, participants and passers-by will be encouraged to answer the question: “What’s the next action, of any kind/ size/ location, you plan to take for the Open Web movement?”

Who are you working with to make this track happen?

We’ve been very excited to have the opportunity to collaborate with many great folks, old friends and new, to build such an exciting project. The track was added just a few weeks before the event, so it’s very emergent, just the way we like it!

We believe that collaboration between communities is what can really fuel the future of the Open Web movement. We put this belief into practice through our curatorship structure, as well as the planning of the track’s programme, which is a combination of great ideas that were sent through the festival’s Call for Proposals and invitations we made to folks we knew would have the ability to blow people’s minds with 60 minutes and a box of paper and markers at their disposal.

How can someone who isn’t able to attend MozFest learn more or get involved in this topic?

Anyone will be welcome to connect with us in (at least) three ways.

  • We’ll have a dedicated hashtag to keep all online/remote Community conversations going: follow and engage with #MozFestCB on your social media platform of choice; we’ll record a curated version of the feed on our Storify.
  • We’ll also collect all notes, resources of documentation of anything that will happen in and around the track on our online home.
  • The work to create a much-awaited Community Building Handbook will be kicked off at MozFest, and anyone who thinks they could enrich it with useful learnings is invited to join the writing effort, from anywhere in the world.

by Stevie Benton at October 23, 2014 01:21 PM

Gerard Meijssen

#Wikipedia - One size does not fit all


In Wikipedia we are used to seeing our readers as one big group. They all read the same article, they all get the same info-boxes and they all get the same categories. It is a reasonable approach when Wikipedia is only a pile of text, without data to separate out potential differences in interest.

One obvious consequence is that reasonable expectations decide what is shown and what it looks like. When there are too many categories, they no longer get attention. So what categories should be shown? The problem is that this "one size fits all" approach shows too much for some and too little for others.

Thanks to Wikidata it is possible to allow for preferences. For many categories Wikidata knows what they are about; they show, for instance, humans and their alma mater, their sports club, their gender... When our public has the option to choose which kinds of categories they are interested in, there is no longer a "need" to choose which categories to keep. It is just a matter of choosing which categories to show by default.

Any and all other kinds of categories are then selectable by the reader.
Thanks,
      GerardM

by Gerard Meijssen (noreply@blogger.com) at October 23, 2014 06:39 AM

October 22, 2014

Wiki Education Foundation

Copy Right: Tips on Explaining Copyright and the Commons to Students

We’re well into the fall term now, and student editors will be breaking out of sandboxes and into Wikipedia’s article namespace to edit and create articles. As they do so, many will want to add images or other illustrations to the articles they’re working on. One of the first issues instructors may encounter as students begin editing Wikipedia for their course assignments is copyright in regard to these images.

It can help to survey student assumptions about copyright before you begin. Most students have never had to really understand copyright rules, especially within the global context where Wikipedia operates. Students may assume that if they don’t see a copyright claim on an image on the Web, it isn’t copyrighted. Others may be resistant to putting their own images into Wikimedia Commons, assuming it means they give up their rights to the work.

Wikipedia is based on the idea that the knowledge it contains can be used freely by anyone. Students should know that this is the result of creators, writers, and other content producers providing these resources to Wikipedia — in this sense, the original authors must “donate” their work — and, with few exceptions, nobody else can do this on their behalf.

Making this rule clear to students early on, and reiterating it throughout the course, is useful in preventing material from being uploaded to Wikipedia or Wikimedia Commons that doesn’t belong there. Encourage students to ask a very simple question: Did I make this? If the answer is “No,” they probably don’t have the right to upload it. This includes:

  • Logos
  • Promotional materials (such as for bands or movies)
  • CD or DVD covers
  • Screenshots or images of software, web sites or film stills
  • Images of artistic works (unless those works are in the public domain)

A great, easy resource for explaining all of this is on page 3 of the Illustrating Wikipedia guide [PDF] (also available in print from the Wiki Education Foundation for instructors participating in our program).

From there, things can get more complicated: “Remixing” or enhancing work found on Wikimedia Commons, the media repository that powers Wikipedia and sister sites, is generally allowed. Government data, public domain resources, and similarly free-licensed materials are fair game, with certain restrictions.

Photographs students take themselves, or illustrations such as charts, graphs or diagrams they create, are also shareable. Instructors may want to familiarize themselves with the Creative Commons license that governs the use of Wikipedia content so they can offer better guidance to students, and again, the Illustrating Wikipedia guide [PDF] has an excellent summary of these licenses on page 10.

Some quick references for Copyright questions:
The Illustrating Wikipedia guide [PDF], our guide to creating and sharing images, has some excellent material explaining the Creative Commons license, and simple, illustrated outlines of what is OK and what isn’t.

Wikipedia’s Frequently Asked Questions list about copyright gets into the nitty-gritty details of many different copyright scenarios. There is also a good, quick guide specifically regarding the copy and pasting of text — a subject we’ll dedicate another blog post to in the future.

What are your ideas for navigating copyright and the commons with students?

by Eryk Salvaggio at October 22, 2014 05:37 PM

Wikimedia UK

Ada Lovelace Day – a women in science editathon

Image shows a black and white drawn portrait of Ada Lovelace in an oval shaped border with her name across the bottom

Ada Lovelace, considered to be the world’s first programmer

This post was written by Sarah Staniforth, Wikipedian and Wikimedia UK volunteer

Tuesday 14th October was this year’s Ada Lovelace Day, with people around the world dedicating events to Ada Lovelace, the mathematician who is often described as the world’s first computer programmer, as well as to other women in science.

Volunteers from Wikimedia UK took part in the festivities by hosting a women in science-themed editathon at the University of Oxford (specifically at Banbury St IT Services, a boon for those without laptops). Being a woman who is interested in addressing the deficit of females working on Wikimedia projects (around 90% of Wikipedia editors are men) and in STEM fields, I thought it’d be good to come along and help out with the event.

The afternoon began with an introduction by Oxford computer scientist Ursula Martin, followed by a training session to familiarize all attendees with the basics of editing Wikipedia. One special surprise during the tea break was an Ada Lovelace cake! The break was followed by the body of the editathon. Using the reference books provided, attendees were encouraged to work on the pages of Oxford-related women in science, including Rosa Beddington, Marian Dawkins, Dorothy Hodgkin, and Louise Johnson. Before I knew it, it was the end of the editathon, which was unfortunate as I’d like to have stayed for longer! It was a pleasure to meet other Wikimedians at the event, as well as to see people without prior editing experience get involved.

Hopefully there’ll be more (and longer) get-togethers devoted to improving Wikipedia coverage of women in science, technology, maths, and engineering very soon!

by Stevie Benton at October 22, 2014 04:17 PM

Sue Gardner

Why I’m in favour of online anonymity

A while back I was startled, while researching someone in a work context, to come across a bunch of NSFW self-portraits she’d posted online under her real name. She was mid-career in compliance-related roles at big, traditional companies, and the photos raised questions for me about her judgement and, honestly, her competency. Didn’t she realise the images were public? Hadn’t she ever thought about what could happen when somebody – a colleague, a boss – randomly googled her? Was she making a considered decision, or just being clueless?

I was surprised because nowadays, that lack of caution is so rare. That’s partly because people have gotten a little more sophisticated about privacy controls, but mostly I think we’ve just given up. We can’t be confident our stuff is private today or will stay private tomorrow — if we didn’t know that already, we know it now from The Fappening and the Guardian’s uncovering that Whisper tracks its users.

And so I think that most people, most of the time, have decided to just assume everything we do online is public, and to conduct ourselves accordingly. It’s a rational decision that’s resulted in a tone and style we all recognize: we’re cheerful about work, supportive of friends, proud of family; we’ve got unobjectionable hobbies and we like stuff like vacations and pie. Promotions and babies and parties yes, layoffs and illnesses and setbacks not so much.

Secret, the app that was super-hot last winter, was briefly an exception. People talked on Secret about bad sex, imposter syndrome, depression and ADD, their ageing parents, embarrassments at work. You may remember the engineer who posted that he felt like a loser because he, seemingly alone in Silicon Valley, was barely scraping by financially. It was vulnerable and raw and awesome.

But I ended up uninstalling it pretty fast, after one too many humble-brags showed up in my feed. (The final straw was a guy boasting about how he’d bought a new iPad for a kid at the airport, after watching her mom get mad at her for dropping and breaking theirs. Blah.) I couldn’t bear seeing people diligently polishing up their self-presentation as confident and fun and generous and successful, on a service whose whole point was to enable risk-free vulnerability.

Reverse-engineering user behaviour on Secret, it read to me like people were hedging their bets. Secret users seemed to be operating (maybe without even thinking much about it) on the assumption that one day, due to a data breach or change in privacy policy or sale of the company, their activity on Secret might be available, linked to them, to their friends or insurance provider or boss or mom or bank. They didn’t trust their activity was permanently private, and so they acted as though it wasn’t.

That feeling of always being potentially in a spotlight leads us to relentlessly curate how we self-present online. And that is bad for us.

It’s bad for individuals because we run the risk of comparing our own insides to other people’s outsides, which makes us feel crappy and sets us up to make decisions based on flawed assumptions. Brene Brown: “If you trade your authenticity for safety, you may experience the following: anxiety, depression, eating disorders, addiction, rage, blame, resentment, and inexplicable grief.” Erving Goffman: “To the degree that the individual maintains a show before others that he himself does not believe, he can come to experience a special kind of alienation from self and a special kind of wariness of others.”

It’s bad for society because it makes people feel alienated and disconnected from each other, and also because it has the effect of encouraging normativity. If we all self-monitor to hide our rough edges, our unpopular opinions, our anxieties and ugly truths, we’re participating in the narrowing of what’s socially acceptable. We make it less okay to be weird, flawed, different, wrong. Which sucks for young people, who deserve to get to freely make the stupid mistakes of youth. It sucks for people who’ve been abused or poor or sick, and who shouldn’t have to hide or minimize those experiences. And it sucks for anybody with an opinion or characteristic or interest that is in any way unconventional. (Yes that is all of us.)

Anonymity was one of the great things about the early internet, and although we benefit enormously from the ability today to quickly find and research and understand each other, as individuals we also need private spaces. We need, when we want to, for our own reasons, to get to be predictably, safely, unbreakably anonymous/pseudonymous, online. That’s why I use Tor and other FLOSS services that support anonymity, and it’s why I avoid the closed-source, commercially-motivated ones. I trust Tor, like a lot of people do, because it has a track record of successful privacy protection, and because it’s radically transparent in the same way, and presumably for the same reasons, that Wikipedia is.

I’ve got nothing to hide (and oh how I hate that I feel like I need to type out that sentence), but I value my privacy, and I want to support anonymity being understood as normal rather than perverse or suspect. So I’m increasingly using tools like Tor, ChatSecure, TextSecure, RiseUp, and DuckDuckGo. I’ve been talking about this with friends for a while and some have been asking me how to get started with Tor, and especially how to use it to access the deep web. I’m working on a post about that — with luck I’ll get it done & published within the next few weeks.


Filed under: Social Movements

by Sue Gardner at October 22, 2014 04:14 PM

October 21, 2014

Wikimedia Foundation

What we learned from making book grants on Arabic Wikipedia

Wikipedians and Wikimedia Foundation partner to experiment with microgrants

Launching Microgrants

Wikimedia Foundation Grants teamed up with The Wikipedia Library to open an Arabic, community-run branch

In early 2014 the Wikimedia Foundation began an experiment to better support individual contributors to Wikimedia projects, by giving out smaller grants to more individuals (complementing our existing grants to organizations, which mainly fund offline activities). We started by selecting a global south community that did not already have a local chapter meeting its needs: Arabic Wikipedia. We wanted to make grants that the community would find useful, so we asked them in a consultation, what kinds of small resources do you need? “Books!” was the primary answer we got, so we focused the pilot in that direction.

At this point, WMF staffers connected with the organizers of The Wikipedia Library, a community project (also WMF-funded) that helps editors access reliable sources. The Wikipedia Library already had experience delivering journal access to many editors on English Wikipedia, but it had not yet set up similar programs for other language communities, nor experimented with offering resources besides journals. Its community-coordinator model appeared to offer a scalable way of distributing small resources to many editors, and its organizers were looking for ways to expand beyond serving the needs of English Wikipedians. Partnering on an Arabic pilot was a natural fit.

Mohamed Ouda and عباد ديرانية set up and coordinated the Arabic Wikipedia Library.

The next step was to find local partners in the Arabic community to lead the Arabic Wikipedia Library. We ran signups for local community coordinators to vet requests and purchase and track books, and selected two: User:Mohamed Ouda and User:عباد ديرانية.

Creating Infrastructure

To buy and globally ship books requested on Arabic Wikipedia, we needed pages where editors could ask for a book, payment options that volunteers could securely use to purchase books, and a way to track everything as it happened.

We made our first test purchases using Amazon.com and Neelwafurat.com (a popular Arabic bookseller). It was surprisingly difficult to get money to the local Arabic Coordinators for purchasing books in ways that were both user-friendly and easy to track. Providing them with prepaid cards, our first strategy, seemed like a good direction, but we weren’t able to find a card that WMF could purchase in the US for use by coordinators internationally. We ultimately employed a very old strategy – bank wire transfers – and worked with WMF’s finance team to add standardized processes for two other payment transfer options – Paypal and Western Union – to meet our needs for controls and flexibility.

Building on the existing journal access program run by The Wikipedia Library, we developed a page design that could expand globally through a more modular set of pages. If The Wikipedia Library was going to serve many different communities, all with different needs, then its portal needed to be clear and distinct while its options remained adaptable and flexible. We translated the new kit into Arabic Wikipedia Library pages: a portal page, one for book purchases, one for journal requests, and one for sharing sources between editors.

The Arabic Wikipedia Library Homepage

 

The kit pages used a customizable request template which let volunteers make requests and then interact with the local coordinators to facilitate on-wiki tracking of which books they wanted, when they received them, and how they used them.

Measuring Impact

Over the four months that the program was running, we purchased 14 of the 19 books that were requested, shipping them to Spain, Saudi Arabia, Egypt, and Tunisia. Requested titles included Turks and Moroccans and Englishmen in the Age of Discovery and July Revolution: Pros and cons after half a century. On average, books cost $20 and shipping cost $10.

Our biggest challenge by far in purchasing books was shipping. It has been difficult to get booksellers (even regional ones) to ship books from the country where the book is stocked to many of the countries where Arabic Wikipedians have requested them. In the case of Amazon, postal codes are required for shipping, and it turns out that some editors in the MENA region do not have postal codes. We failed to have books shipped to Palestine, Jordan, Morocco, and in one case Egypt. In the first month of the pilot, this prevented about half of the requests from being successfully processed. Trouble with shipping a significant portion of requests made us hesitant to broadcast signups more widely. As a result, we fell short of our target of having 40 books successfully purchased, shipped, and used to improve or create new encyclopedia content during this pilot.

Another significant challenge was reporting itself. It was hard to know whether books were received: despite pinging from the volunteer coordinators, only 2 books were ever marked as having reached the editor to whom they were shipped. At this point, we still don’t have enough data to understand whether the books had any impact on Wikipedia, as no editors came back to update their request with a short list of, or links to, the articles they improved or created.

What we learned

We originally set out to learn more about supporting the needs of individuals in the global south, test WMF grantmaking systems for making many small grants to individuals around the world, raise awareness of WMF grantmaking in communities outside English Wikipedia and Meta, and expand The Wikipedia Library beyond its English home base. Here are some of our findings:

1. Moving money to individuals globally is even harder than could reasonably be expected, and multiple options are needed to fit different users and countries.

For processes to scale easily, they need to be consistent. But the global financial reality is not particularly consistent. At the start of this pilot, we knew that trying to process lots of small money transfers to individual contributors would increase the burden on our finance department. We also knew that WMF’s standard method for sending money via bank wire transfers can take up to 2 weeks for an individual to receive, involves a lot of back and forth with individuals and banks to confirm details like SWIFT codes, and that bank transfer fees can eat up large portions of small grants. So we were hoping to find some new methods for sending a few hundred dollars at a time to our coordinators.

Over the course of this pilot, our finance team added standardized processes for sending money to individuals via both Western Union and Paypal, which we’d had only limited use of in the past. These are great options to add to our toolkit because they tend to move money to individuals in many countries more quickly than bank transfers. And we’ve also confirmed we still need a variety of other options, because individuals and countries come in all shapes and sizes. Paypal, for example, is the best option for many contributors to receive money in many countries, but Paypal doesn’t work in Egypt.

2. Moving physical things to individuals globally isn’t easy either.

It turns out that tangible objects aren’t easily transferred between countries either – unsurprisingly, we ran into regional infrastructure problems. Over the course of the pilot, we tried several bookselling websites, and we even considered having a book shipped to point A and then forwarded on to point B so that requests could be filled. Ultimately, though, shipping tangible items globally is a barrier to scale. For future experiments, it may be better to focus on transactions that can be entirely completed online.

3. Community volunteers and WMF staff have complementary strengths that make us great partners and can lead the way to scale if done right.

Community members know their communities! They understand the local processes (and policies), they speak the language, and they have built relationships with other editors. But, coordinating planning and timing can be a challenge, and it wasn’t always easy to know when to involve which members of the team, balanced with a desire to keep things moving forward as quickly as possible. Engaging all team members early and often is an area we can still improve on, to help everyone maintain a sense of shared ownership of the project.

4. Community-building and impact measurement takes time.

Nine months into the project and four months into the active pilot, we still don’t know much about the ultimate effect we’ve had on contributors or on Wikipedia. We will need to follow up with measurement again in future months, and we may also need to come up with better ways to collect data to determine impact (see the next learning).

5. Microgrant reporting may not be a feasible means for collecting data on impact.

Coordinators were more successful at handling requests than they were at getting recipients to report on how they used books, or even to confirm that they got them! Reporting is always a challenge for grants (or even surveys). In this case, we aimed for very small and lightweight reports (linking to an article that had been improved), but still lack this data. A requirement that editors coming back for a second book need to report back on their first book may gradually bring in this data, but it remains to be seen if that will be enough motivation in the long run to get people to respond, or if the program will lose steam before this happens.

6. It’s important to design for scalability, but easy to get caught up in over-designing it before it is needed.

We put a lot of initial effort into setting up book-purchasing accounts with controls for reconciling purchases. Some of that infrastructure ultimately went unused when we ran into purchasing and shipping issues different from those we expected. On the other hand, we also put effort into building the kit for local satellite Wikipedia Library branches, which will be used well beyond the initial Arabic test case. Our effort was better harnessed in that case, perhaps because we understood the community needs we were designing for there, and we left things open-ended in cases where we didn’t yet understand the needs.

7. Having a well-defined target community to partner with is a clear benefit to your experiment.

We were not designing an experiment in a vacuum. Rather, we piloted via a program that had already demonstrated a working community model, connected to a new target community expressing a need to expand this model in new directions. This helped us better target our efforts and waste less time figuring out how to approach the pilot.

8. When the costs of your experiment start to outweigh the benefits, it’s time to wrap up and turn your ‘failure’ into learning.

Ultimately, we learned a lot from this experiment, and it has pushed our thinking, processes, and relationships forward in useful ways. At this point, we’ve learned enough about what doesn’t work to recognize that it is time to change direction. The tendency for all participants involved in a struggling pilot is to blame themselves and then try harder. But knowing when to stop trying to ‘make it work’ helps us conserve the most important resources we have: the time, energy, and morale of volunteers and staff — which deserve to be spent on future projects with brighter chances to succeed.

What’s next

The Wikipedia Library remains on Arabic Wikipedia, but we’re taking focus off making book requests work. Editors can still request books for the time being, and if they’re easy to send we’ll still ship them, but the Arabic coordinators are resetting expectations to clarify that not all requests can be met, and we’re not going to waste more volunteer time on complicated workarounds or invest further in solving these issues. If/when there is sufficient data on successfully received book requests at some point in the future, we’ll still aim to analyze the impact of book grants on the encyclopedia, to continue learning from this project.

This report will now be used as a starting point to go back to the Arabic community again for further consultation. We leave it to the Arabic community to decide whether to continue the Wikipedia Library and attempt to focus on providing other types of resources, and/or move in some other direction for supporting Arabic editors.

The pilot participants:

Siko Bouterse (Head of Individual Grants), Haitham Shammaa (Learning Strategist), Asaf Bartov (Global South Advisor), Janice Tud (Grants Administrator), Ocaasi (heading The Wikipedia Library), Patrick Earley (WMF Community Advocate), Mohamed Ouda (Arabic Library Coordinator), Abbad Diraneyya (Arabic Library Coordinator)

by wikimediablog at October 21, 2014 09:58 PM

More Than 40 Million People Await the Launch of Odia Wikisource

This blog post was first published at Rising Voices on October 18.


(“Odia Wikisource incubator project screenshot” by Wikimedia Foundation. Licensed under CC BY-SA 3.0, except the Wikisource logo which is (c) Wikimedia Foundation)

Speakers of Odia will soon have mountains of books to read online in their mother tongue, following the launch of the Odia Wikisource, which will make accessible many rare books that have entered the public domain. Authors and publishers are also invited to donate their copyrighted work, possibly bringing open access to large volumes of books and manuscripts, creating a vast archive of educational resources. And everything will be in Odia. 

One of the biggest advantages of Wikisource is that all its books are available in Unicode, meaning that search engines such as Google can index the texts in their entirety, and readers can easily copy what they wish. (Most conventional archival systems lack this feature.) A volunteer community administers Wikisource. To upload a book’s content, volunteers either retype the book word for word or, when possible, use Optical Character Recognition (commonly known as “OCR”), which converts scanned images into editable text. Available at or.wikisource.org, Odia is Wikisource’s eleventh Indic language. 

There are more than 40 million native Odia speakers in the world. Most live in the Indian state of Odisha and its neighboring states, but there is a large diaspora in countries like the US, UK, UAE, and across South and East Asia. Despite being spoken by so many people, Odia's online presence is relatively small.

As of October 2014, Odia Wikipedia hosted 8,441 articles. The state government's websites have Odia-language content, naturally, but none of the text is in Unicode, making the materials invisible to search engines and difficult to share. Thanks to individual and organizational efforts, some Odia-language websites have recently emerged with Unicode content. 

With support from the non-profit organization Pragati Utkal Sangha and the National Institute of Technology Rourkela, a Bhubaneswar-based outfit has digitized about 740 books through the Open Access to Oriya Books (OAOB) project. Most of these texts were published between 1850 and 1950. The OAOB project is the largest existing digital archive of Odia literature, but the archived books are only available as scanned PDFs, restricting readers’ ability to search within the texts.

As a Wikimedia project, Odia Wikisource underwent a long approval process, after running as an active incubator project for nearly two years. Both the Language Committee and the Wikimedia Foundation's Board reviewed and endorsed the project. 

Odia Wikisource has already digitized and fully proofread three books. In collaboration with the Wikimedia-funded Centre for Internet and Society’s Access to Knowledge program, the Kalinga Institute of Social Sciences (KISS) has partially digitized another book as well. KISS is also busy digitizing another nine books by Odia-language author Dr. Jagannath Mohanty that were relicensed under CC BY-SA 3.0 earlier this year.

In response to posts on Twitter and Facebook, four new contributors recently joined Wikisource to help digitize “The Odia Bhagabata,” a literary classic compiled in the 14th century. “Content that has already been typed in fonts with non-Unicode encodings can be converted by converters, which was the case for the Odia Bhagabata. New contributors did not face the problem of retyping the text, as the book was already available on the website Odia.org and is out of copyright,” says Manoj Sahukar, who (along with yours truly) designed a converter that helped to transcribe the “Bhagabata”.
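Converters like the one described above generally work by mapping each character code used by a legacy font to its Unicode equivalent. Here is a minimal sketch of that general technique; the mapping entries are purely illustrative, not the actual Sahukar converter table (a real converter needs a full mapping compiled for the specific legacy font):

```python
# A minimal sketch of legacy-font-to-Unicode conversion.
# The mapping below is ILLUSTRATIVE ONLY: real legacy Odia fonts
# each define their own glyph codes, so a production converter
# carries a complete table compiled per font.

LEGACY_TO_UNICODE = {
    # hypothetical legacy glyph codes -> Odia Unicode letters
    "A": "\u0b05",  # ODIA LETTER A
    "k": "\u0b15",  # ODIA LETTER KA
    "L": "\u0b33",  # ODIA LETTER LLA
}

def convert(text, table):
    """Replace legacy glyph codes with Unicode text.

    Longer keys are substituted first, so multi-character legacy
    sequences are matched before any of their single-character prefixes.
    """
    for key in sorted(table, key=len, reverse=True):
        text = text.replace(key, table[key])
    return text
```

Because the substitution is a plain table lookup, already-Unicode text (or characters with no mapping entry) passes through unchanged, which is what makes batch-converting a whole book such as the Bhagabata practical.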

Rising Voices contacted some of those whose efforts made this happen.

Mrutyunjaya Kar (MK), Long time Wikimedian who has proof-read the books on Odia Wikisource
Rising Voices (RV): You have been with Odia Wikisource since its inception. How do you think it will help other Odias?
MK: Odias around the globe will have access to a vast amount of old as well as new books and manuscripts online, at their fingertips. Knowing more about the long and glorious history of Odisha will become easier.

Nihar Kumar Dalai (NKD), Wikisource writer
RV: How does it feel to be one of the few contributors digitizing the Odia Bhagabata? How do you want to get involved in the future?
NKD: It is a proud opportunity for me to be a part of digitizing such old literature. At times I wonder whether I could get involved with this full time!

Nasim Ali (NA), Oldest active Odia Wikimedian and Wikisource writer
RV: Do you think any particular section of society is going to benefit from this?
NA: Books contain the gist of all human knowledge. The ease of access to and spread of books are markers of the intellectual status of a society. And in this e-age, Wikisource can help by not just providing easy access to a plethora of books under free licenses but also aiding the spread of basic education in developing economies. Together with cheaper internet, Wikisource could catalyze a renaissance of the 21st century.

Pankajmala Sarangi (PS), Wikisource writer
RV: You have digitized almost two books, are the top contributor to the project, and are one of the main reasons Odia Wikisource got approved. What are your next plans to grow it and take it to the masses?
PS: I would be happy to contribute by typing more books in Odia so that they can be stored and made available to all. We can take this to the masses through social, print, and audio-visual media, and by organizing meetings and discussions.

Amir Aharoni (AA), Wikimedia Language Committee member and Software Engineer at the Language Engineering team at the Wikimedia Foundation
RV: What do you feel Wikisource could do for a language like Odia, with more than 40 million speakers?
AA: In schools in Odisha, are there lessons of Odia literature? If the answer is yes, then it can do a very simple thing – make these lessons more fun and help children learn more! Everybody says that in Kerala this worked very well with Malayalam literature.

Clearly, strong passions motivate Odia Wikisource's volunteers, like Nihar Kumar Dalai, who writes on Facebook:

Hindi and English are fine, but our native language is a bit more special! Who among us does not know about the art, culture, noted personalities, tourist spots and festivals of Odisha? But if you search online for all of these, there is very little available. There is a simple and easy solution – Odia Wikipedia. Like Odia Wikipedia, Odia Wikisource is another great place, and this is my small contribution: bringing the Odia Bhagabata to Odia Wikisource.

Subhashish Panigrahi is a volunteer contributor to Wikipedia and in the past worked as a community and program support consultant for the Wikimedia Foundation.

by wikimediablog at October 21, 2014 07:13 PM

Wikimedia UK

Spotlight on the residency – York Museums Trust WIR 2013-14

A painting of Monk Bar in York, painted in shades of yellow, gold and brown, it looks like a classical city gate.

A 19th century painting of Monk Bar, York – just one of the diverse range of images donated during the residency

This post was written by Pat Hadley and Daria Cybulska and was written with excerpts from the final case study report 

With three large, historically important museums in their care, York Museums Trust (YMT) have overwhelmingly rich and diverse collections – and an incredibly exciting range of opportunities to work with Wikipedia.

From October 2013 to April 2014 York Museums Trust (YMT) hosted Pat Hadley as a Wikipedian in Residence in partnership with Wikimedia UK. The project offered Wikimedia UK a chance to work with a regionally important institution with internationally significant collections. Further weight was lent by YMT’s potential to affect several institutions in the area. Recently Pat has written up a case study of this cooperation, which gives a chance to reflect on what was done over the six months.

Looking back on the project from the perspective of Wikimedia UK, there were several outstanding achievements:

Content improvement. Several of the Trust’s collections were targeted after consultation with the curators. Over 400 high-quality images were delivered to Commons, and many have contributed to the quality of Wikimedia projects. Some of the collections were previously hardly used by the museum, so the uploads made them more widely known. The programme originally aimed for a more extensive upload schedule; however, Pat had to adapt to technical delays and obstacles.

An example of a project worked on is the W.A. Ismay Studio Ceramic collection. William Alfred Ismay spent his life building an enormous collection of Studio pottery. It is now held by YMT and was subject to a Google Cultural Institute project in November 2013. Brand new high quality photographs were taken for this and Pat was able to upload these images to Commons. These have now been used on the biographical articles for 17 of the potters. The Ismay article was also created from scratch by a Wikipedia editor.

External partnerships. Committed to the idea of engaging with many cultural organisations in the region, YMT explored scoping the project out to reach more than just the institutions in the Trust. This resulted in the idea of a Yorkshire-wide Wikimedia ambassador linked to Museum Development Yorkshire, a project YMT have shaped and plan to run in the second half of 2014 and beyond.

Training and advocacy. All key curators at YMT were trained to edit Wikipedia. Pat also delivered a range of external talks reaching c. 80 people, including one to the Museum Development Yorkshire.

Outreach and events. Pat delivered 3 training sessions for staff and volunteers, and a high profile public editathon themed around the lives and works of Yorkshire’s 19th Century luminaries.

It was the idea of external partnerships that resonated especially strongly with Wikimedia UK and YMT during the cooperation, and the institutions worked on setting up a ‘phase 2’ project that would take these ideas forward.

Spreading the net: What’s next for GLAMwiki in Yorkshire?

One of the most positive elements of working at YMT was the opportunity to work in a network of museums with such diverse collections and such breadth of knowledge among curators, staff and volunteers. This was a key inspiration in the design of a follow-up project, running for a year from July 2014. The Yorkshire Network Project, with Pat Hadley as Regional Wikimedia Ambassador, is a unique chance to work with the region’s Museum Development Officers (MDOs) and offer Wikimedia partnerships and collaboration to the region’s 150 registered museums.

Want to learn more?

Explore the full case study report written by Pat. It includes interviews with York Museums Trust staff and further insights.

Pat also talks about his project in the GLAM-Wiki Revolution video here.

by Stevie Benton at October 21, 2014 01:42 PM

Priyanka Nag

My story featured on Yourstory

Before a big launch, facing a few new bugs and needing to fix them with the utmost urgency is something I guess every developer at a startup has to go through. I would rather say this adrenaline rush makes our work life more exciting. We had a similar firefighting night at Scrollback yesterday, and by the time I reached home it was 8.30am. My system (by that I mean my brain) was probably overheated already, and I crashed as soon as I hit the bed.

When I woke up at around 4pm, my phone showed a few too many notifications. There were innumerable congratulation messages, and I was still wondering about them till I saw Aditya's post on my Facebook timeline.

I had met Aditya on a Sunday evening over a cup of coffee. We had met for an interview (where I had expected Aditya to be a professional interviewer, expecting me to answer all his questions), but the so-called interview felt nothing like I had imagined. It felt more like meeting a friend over a cup of coffee. We had a long (almost 3-hour) chat, and it was never just Aditya asking questions and me answering. For all the different topics he wanted me to talk about, he told me his side of the story as well... which made it all informal and friendly. Given a chance, I could probably write a small article about him too (of course I don't have his technical skills, but I have a few facts for sure).

The impact of this one article was huge indeed. In less than 12 hours, I received some 10+ emails from different people... some asking me to speak at their event, some asking me to help them set up their startup, some simply congratulating me, and some even saying how their life story had been very similar to mine.

One honest confession: I feel a little embarrassed to read the article myself. It's like too many nice things being said all at the same time! Well, either I suffer from impostor syndrome or Aditya was a bit too generous in painting such a good image of me :P

One more person I shouldn't forget to credit is Santosh. It was he who thought all my work was worth being covered in TechieTuesday and got me connected to Aditya.

Link to the Yourstory article: http://yourstory.com/2014/07/priyanka-nag-techie-tuesdays/

by priyanka nag (noreply@blogger.com) at October 21, 2014 12:29 PM

Mozilla and WeTech Women's Maker Party, Delhi

Well, I love the name Larissa came up with for today's event. It is a little long, but it defines the event best: "Mozilla and WeTech Women's Maker Party".

We landed in Delhi on the 22nd of July 2014 and, as Larissa put it, Delhi was indeed a 'steam sauna'. We spent most of that day going around and visiting a few famous places like the Red Fort, the India Gate, the Parliament House etc. In the evening, we met the local Mozillians in Delhi. Well, it was an informal meeting of Mozillians, talking all 'sh!t Mozillians say' ;)

The 23rd morning began with great excitement. It was a small crowd, but a really awesome crowd in that conference room. Right from the introduction session, we could feel the high intellectual capability these young ladies brought. After a small game of spectrogram, we moved on to introducing Mozilla as an organization as well as all the Mozilla projects. To my surprise, most of the participants already knew about open source and had a fair idea about Mozilla. To my greater surprise, all of our participants had used Firefox at some point (even if it was not their default/regular browser). It was thus easy to introduce the different Mozilla projects and contribution pathways to them.

Serious hacking in progress...
The confidence these dynamic ladies showcased was beyond appreciation.
One thing everyone in the room agreed on was that "being a woman in technology is indeed tough". But these girls were ready to face the tough world and fight it out for themselves!

Post lunch, we got to some webmaking. So much hacking, so much remixing... it was tough to believe that many of these people were "not from a technical background".
Some of the awesome makes can be found listed on this spreadsheet.

Well, it goes without saying that these superstars definitely deserved some awards for their awesomeness, and so we gave them some Webmaker badges.

Very few events have given me the happiness of being able to convert almost all participants into Mozillians and this was one of those rare ones.

The awesome women Webmakers of Delhi :)


by priyanka nag (noreply@blogger.com) at October 21, 2014 12:29 PM

Maker Party Bhubaneshwar

Last weekend I had a blast in Bhubaneshwar. Over two days, I was there at two different colleges for two Maker parties.

On Saturday (23rd August 2014), we were at the Center of IT & Management Education (CIME), where we were asked to address a crowd of 100 participants whom we were supposed to teach webmaking. Trust me, very rarely do we get such a crowd at events, one where we got the opportunity to be less of a teacher and more of a learner. We taught them webmaking, true, but in return we learnt a lot from them.

Maker Party at Center of IT & Management Education (CIME)

On Sunday, things were even more fabulous at the Institute of Technical Education & Research (ITER), Siksha 'O' Anusandhan University, where we were welcomed by around 400 participants, all filled with energy, enthusiasm and the willingness to learn.

Maker Party at Institute of Technical Education & Research(ITER)

Our agenda for both days was simple... to have loads and loads of fun! We kept the tracks interactive and very open-ended. On both days, we covered the following topics:
  • Introduction to Mozilla
  • Mozilla Products and projects
  • Ways of contributing to Mozilla
  • Intro to Webmaker tools
  • Hands-on session on Thimble, Popcorn and X-ray goggles and Appmaker
On both days, we concluded our sessions by giving away some small tokens of appreciation like T-shirts, badges, stickers etc. to the people who had been extra awesome among the group. We concluded the awesomeness of the two days by cutting a very delicious cake and fighting over it till its last pieces.
Cake.....
Bidding goodbye after two days was tough, but after witnessing the enthusiasm of everyone we met during these two events, I am very sure we are going to return soon to Bhubaneshwar for even more awesomeness.
A few people who are to be thanked for making these events successful and very memorable are:
  1. Sayak Sarkar, the co-organizer for this event.
  2. Sumantro, Umesh and Sukanta for travelling all the way from Kolkata and helping us out with the sessions.
  3. Rish and Prasanna for organizing these events.
  4. Most importantly, the entire team of volunteers from both colleges, without whom we wouldn't have been able to even move a desk.
P.S. - Not to forget, we did manage to grab the media's attention as well. The event was covered by a local newspaper.
The article in the newspaper next morning

by priyanka nag (noreply@blogger.com) at October 21, 2014 12:28 PM

Debutsav'14 at God's own country

Kerala... God's own country. I had always wanted to visit, but never had the chance of being there. One evening, I suddenly received a very unexpected call, inviting me to be a part of Debutsav'14 at Amrita college, Kerala.

The poster of Debutsav'14
Though it was Kerala, I wasn't entirely enthusiastic about this event. I had not been keeping too well for the last few days, and the doctor had strictly asked me to work less and rest more! Attending one more event at this time would definitely mean another hectic trip. But somehow, the idea of introducing Scrollback to another community of open source lovers, of talking about this awesome project to a new group of people, was one I couldn't entirely ignore. After some discussion with the rest of the Scrollback team, I decided to take this event up.

13 hours to reach Ernakulam, 3 hours from there to reach Kayankulam and another 30-minute ride to finally reach Amritapuri... it wasn't a very easy or comfortable journey. But once I reached Amritapuri, I realized that the long trip was totally worth it. Kerala is rightly called God's own country. The campus was beautiful: backwaters, boats, the sea, loads of coconut trees, a very clean beach and a very peaceful environment, the campus had it all.


Amritapuri...an example of true beauty

When I left for this event, I was really wondering about my next few days. I knew I was not going to be welcomed by too many known faces! But once I was there, I realized how like minds often don't need much time to get along. I was meeting almost everyone for the first time, but it didn't feel like that after the first 5 minutes of conversation. Sometimes strangers don't feel strange at all... and that is exactly what happened to me here.

On the first day of the event, I took the stage for some 15 minutes to give a quick demo of Scrollback, so that every participant could use Scrollback as the communication platform during as well as after the event, to keep the network alive. On the second day, I occupied the podium for a little longer, talking about why Scrollback was built in spite of there being so many other communication media.

Me, talking about the next generation IRC
The three days actually passed way faster than expected. I had to leave a little early, before the event closed, but the time I spent with everyone at Amritapuri is totally unforgettable.

by priyanka nag (noreply@blogger.com) at October 21, 2014 12:27 PM

User:Sj

Gerard Meijssen

#Wikidata - Thank you Magnus


Mr A. H. Halsey is the first person who can be put to rest now that the ToolScript works again. Mr Halsey was a sociologist; he died on 14 October 2014.

Thank you Magnus, you are wonderful.
Thanks,
     GerardM

by Gerard Meijssen (noreply@blogger.com) at October 21, 2014 06:26 AM

Wikimedia Foundation

Free as in Open Access and Wikipedia

This post by Yana Welinder (Legal Counsel at the Wikimedia Foundation and Non-Residential Fellow at Stanford CIS) was first published on the blog of the Electronic Frontier Foundation (EFF), as part of Open Access Week – a week to acknowledge the wide-ranging benefits of enabling open access to information and research, as well as exploring the dangerous costs of keeping knowledge constrained by copyright restrictions and locked behind publisher paywalls.

Wikipedia and the other Wikimedia sites are closely connected to open access goals of making scholarship freely available and reusable. Consistent with these goals, the Wikimedia sites make information available to Internet users around the world free of charge in hundreds of languages. Wikimedia content can also be reused under its free licenses. The content is complemented by citations to open access scholarship, and the Wikimedia sites play a unique role in making academic learning easily available to the world. As the next generation of scholars embraces open access principles to become a Generation Open, we will move closer to “a world in which every single human being can freely share in the sum of all knowledge.”

To write and edit Wikipedia, contributors need to access high quality independent sources. Unfortunately, paywalls and copyright restrictions often prevent the use of academic journals to write Wikipedia articles and enrich them with citations. Citations are particularly important to allow readers to verify Wikipedia articles and learn more about the topic from the underlying sources. Given the importance of open access to Wikipedia, Wikipedia contributors have set up a WikiProject Open Access to increase the use of open-access materials on the Wikimedia sites, improve open access-related articles on Wikipedia, and signal to readers whether sources in Wikipedia articles are open access.

<iframe allowfullscreen="allowFullScreen" frameborder="0" height="338" mozallowfullscreen="mozallowfullscreen" src="http://commons.wikimedia.org/wiki/File:Reusing_Open_Access_materials_on_Wikimedia_projects.ogv?embedplayer=yes" webkitallowfullscreen="webkitAllowFullScreen" width="600"></iframe>

Link to video on Wikimedia Commons // CC BY-SA 3.0: Reusing Open Access materials on Wikimedia projects, Jesse Clark, Max Klein, Matt Senate, Daniel Mietchen.

Great potential lies in the reciprocal relationship between the open access scholarship that enriches Wikipedia and Wikipedia’s promotion of primary sources. As a secondary source, Wikipedia does not publish ideas or facts that are not supported by reliable and published sources. Wikipedia has tremendous power as a platform for relaying the outcomes of academic study by leading over 400 million monthly visitors to underlying scholarship cited in articles. Just as a traditional encyclopedia would, Wikipedia can make the underlying research easier to find. But unlike a traditional encyclopedia, it also provides free access and reuse to all. In that sense, Wikipedia is an ideal secondary source for open access research.

In light of this, we are thrilled to see Generation Open grow. The Digital Commons Network now boasts 1,109,355 works from 358 institutions. The Directory of Open Access Journals further has over 10,000 journals from 135 countries. Esteemed law journals such as the Harvard Journal of Law and Technology, Berkeley Technology Law Journal, and Michigan Law Review subscribe to the Open Access Law Program, which encourages them to archive their articles under open access principles. But while all these initiatives enable free access to academic scholarship, some of them still provide limited ability to reuse that work, falling short of the definition of open access:

[F]ree availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. The only constraint on reproduction and distribution, and the only role for copyright in this domain, should be to give authors control over the integrity of their work and the right to be properly acknowledged and cited.

Wikipedians are also contributing to the body of published open access scholarship. Earlier this month, four Wikipedians published an article on Dengue fever in Open Medicine (an open access and peer-reviewed journal) based on a Wikipedia article that was collaboratively edited by over 1,300 volunteers and bots. In addition to providing an open access scholarly article on this important topic, this publication validated that Wikipedia’s editorial process can produce high quality content outside traditional academia. Many Wikipedia articles incorporate text from openly licensed scholarship and some scholars write and publish openly licensed scholarship specifically to have it reused in Wikipedia articles.

Placing scholarship behind paywalls and copyright restrictions has the effect of relegating new advances in human knowledge to small academic communities. We have previously joined many open access groups to demand that scholarship be not only freely accessible, but also freely reusable. As more academics allow their work to be shared and used freely, online secondary sources like Wikipedia will play a large role in disseminating the knowledge to more people in new regions and on different devices.

Yana Welinder, Legal Counsel

Many thanks to Hilary Richardson and Camille Desai for their help in preparing this post. I would also like to thank Stephen LaPorte, Manprit Brar, Daniel Mietchen, and other members of WikiProject Open Access for their helpful feedback.

by wikimediablog at October 21, 2014 12:27 AM

October 20, 2014

Alex Druk

Will you die today?

I have always wondered why articles like “Lists of deaths by year” were among the top 10 most popular Wikipedia pages. (This year the popularity of this article fell dramatically – another puzzle.)

Maybe because thoughts of death and eternity visit each of us? Maybe Wikipedia data can show the probability of my sudden death today? (Or, more seriously, can Wikipedia data be used for some population statistics?)

So, I decided to do a little bit of research. I pulled out over 200,000 Wikipedia person profiles with death dates between 1950 and 2014 using a DBpedia SPARQL query.
Clearly, Wikipedia coverage of famous persons (and their death dates) increases over the years.
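The extraction step can be sketched as a query against the public DBpedia SPARQL endpoint. The post does not include the actual query, so this is a hypothetical reconstruction using the standard `dbo:Person` and `dbo:deathDate` terms from the DBpedia ontology:

```javascript
// Hypothetical sketch of the kind of SPARQL query used (not the author's actual query).
// dbo:Person and dbo:deathDate are standard DBpedia ontology terms.
const query = `
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT ?person ?deathDate WHERE {
  ?person a dbo:Person ;
          dbo:deathDate ?deathDate .
  FILTER (?deathDate >= "1950-01-01"^^xsd:date &&
          ?deathDate <= "2014-12-31"^^xsd:date)
}`;

// The public endpoint accepts the query as a URL parameter:
const url = 'http://dbpedia.org/sparql?format=application/json&query=' +
            encodeURIComponent(query);
```

Fetching `url` (paging with `LIMIT`/`OFFSET`, since the endpoint caps result sizes) would return the person/death-date pairs that the day-of-year and day-of-week tallies below are built from.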

Death_dates_by_year

There is no significant correlation between weekday and death, but some correlation between day of year and mortality exists.
As you can see, the probability of dying is highest on New Year's Eve and New Year's Day (which is a well-known fact from population statistics). The next high-risk dates are January 28, February 2 and September 11, followed by bad days in November (25 and 29) and December (14, 22).
Mortality in summer is significantly lower than in winter. The safest month is August (especially August 4, 31, 29 and 7).
I would like to compare these data with official mortality rates, but was unlucky in my search. I would appreciate any advice.

Mortality_by_day_of_year

So, thank God, today is not New Year and I am not famous.

by Alex Druk at October 20, 2014 09:53 PM

Wiki Education Foundation

Notes and Slides from Quarterly Reviews now available

Own work, licensed under Creative Commons Zero (Public Domain Dedication), via Wikimedia Commons.

Sara at her Quarterly Review.

The Wiki Education Foundation values transparency as a means of encouraging evaluation and reflection on our work. Our Quarterly Review meetings are an opportunity for every team member to discuss accomplishments and goals every three months. To be certain that our stakeholders are also aware of these goals and achievements, we have shared the notes and slides from these Quarterly Reviews on Meta.

Each Quarterly Review allows our staff to showcase their work, collect feedback from colleagues, and to incorporate ideas from every team member into the work taking place across the organization. Creating the reports offers individuals the time to reflect on their accomplishments and their challenges, and sharing these presentations gives everyone an opportunity to reflect on and contribute to the shared goals of this organization.

In addition to the Communications Quarterly Review already posted, we’ve recently added notes from three others from last quarter.

During the Digital Infrastructure Quarterly Review, Sage discussed the outlook for our website, our on-wiki user experience, and analytics tools. A key tool currently in development is an Assignment Design Wizard, which will help instructors easily create a custom course syllabus. Looking forward, we discussed improving plagiarism detection and creating a suite of additional course tools, such as student portfolios and course dashboards, which could streamline access to student activity and ease the grading process for instructors. We discussed the future of data collection and analysis, including metrics for measuring article quality for student contributions. Finally, Sage discussed testing the Assignment Design Wizard and deploying the activity feed to monitor student contributions. These tools will constitute a tremendous leap forward in creating and monitoring course content for new instructors.

During the Classroom Program and Educational Partnerships Quarterly Review, Jami discussed the current state and future goals of our Classroom Program. She discussed the work she has done in the past, including goals for number of classes and how she tracks challenges. Jami also explained the differences between the roles of classroom program manager and educational partnerships manager, now that her job will be divided for the fiscal year’s second quarter.

Finally, in the Fundraising Quarterly Review, Sara gave us an overview of fundraising goals and outreach efforts, with the intention of introducing her work and challenges to the rest of the staff. She discussed the value of strong internal communications as a way of reducing roadblocks to goal achievement, which is crucial to our organization and to our funders. But she also spoke about the importance of addressing mistakes openly and transparently.

I’m extremely proud of what our organization has accomplished in our first quarter, and look forward to sharing our next round of achievements.

Frank Schulenburg
Executive Director

by Frank Schulenburg at October 20, 2014 09:08 PM

Frank Schulenburg

Gerard Meijssen

#Charkop - a Vidhan Sabha constituency

Data about politics and politicians regularly finds its way to Wikidata. When an item gets my attention, I often add all associated items to Wikidata as well. Charkop is a constituency in Maharashtra; according to an associated category there are many more.

Given that the software I use is broken at this time, I can blog about one dilemma.

Charkop is a Vidhan Sabha constituency; it is part of the Mumbai North Lok Sabha constituency. The question is whether Charkop "is in the administrative territorial entity" of Mumbai North or of Maharashtra.
Thanks,
      GerardM

by Gerard Meijssen (noreply@blogger.com) at October 20, 2014 06:02 AM

#Google - Let us #share in the sum of all #knowledge

Dear Google, in our own ways, we share the aspiration to share in the sum of all knowledge. We are really happy to share everything we have with you. Our licenses are designed to share widely.

Dear Google, could you please help us make sure that our Labs webservices survive your bots? We do not want your bots to stop running. What we want is for our webservers to serve our own needs first and give all the spare capacity to you. As it is, our software dies.

We really want you to have our data, and there are several other ways whereby you can get all our data anyway. For this reason, please help us with our software so that we can continue to share the sum of all our available knowledge with you.
Thanks,
     GerardM

by Gerard Meijssen (noreply@blogger.com) at October 20, 2014 05:34 AM

Tech News

Tech News issue #43, 2014 (October 20, 2014)

2014, week 43 (Monday 20 October 2014)

October 20, 2014 12:00 AM

October 19, 2014

Gerard Meijssen

#Wikidata - P1472, the #Commons #Creator #Template

The work of many artists is represented on Commons. Having great information available for all of them is a Herculean job. Having all that information and more available in all the languages supported by the Wikimedia Foundation is very much an aspiration. Once Commons is wikidatified, all information needs to be understood in all our languages.

France Prešeren is one of 13,481 people who currently have a Creator template and are known as such in Wikidata. All the data in those templates can be harvested and included in a Wikidata item. For all the templates NOT known in Wikidata, an item can be found or created to make them known in Wikidata as well.
A lot is already known about Mr Prešeren in Wikidata, and much of that data can be expressed in multiple languages. The same can be said for the Creator template itself; as you can see, the template already shows its labels in multiple languages. With Wikidata we can show the information in all our languages as well.

Realising this will introduce Wikidata to the Commons community in a positive way and remove one obstacle that needs to be overcome during the wikidatification of Commons.
Thanks,
      GerardM

by Gerard Meijssen (noreply@blogger.com) at October 19, 2014 11:27 AM

Brion Vibber

ogv.js MediaWiki integration updates

Over the last few weekends I’ve continued to poke at ogv.js, both the core library and the experimental MediaWiki integration. It’s getting pretty close to merge-ready!

Recent improvements to ogv.js player (gerrit changeset):

  • Audio no longer super-choppy in background tabs
  • ‘ended’ is no longer unreasonably delayed
  • various code cleanup
  • ogvjs-version.js with build timestamp available for use as a cache-buster helper

Fixes to the MediaWiki TimedMediaHandler desktop player integration (gerrit changeset):

  • Post-playback behavior is now the same as when using native playback
  • Various code cleanup

Fixes to the MediaWiki MobileFrontend mobile player integration (gerrit changeset):

  • Autoplay now working with native playback in Chrome and Firefox
  • Updated to work with current MobileFrontend (internal API changes)
  • Mobile media overlay now directly inherits from the MobileFrontend photo overlay class instead of duplicating it
  • Slow-CPU check is now applied on mobile player — this gets ogv.js video at 160p working on an old iPhone 4S running iOS 7! Fast A7-based iPhones/iPads still get 360p.
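The point of these player integrations is graceful fallback: use the browser's native Ogg playback where it exists, and load ogv.js otherwise. A minimal sketch of that detection, assuming the helper name and the stub elements are mine for illustration, not TimedMediaHandler's actual code:

```javascript
// Hedged sketch: decide whether to fall back to ogv.js for Ogg Theora/Vorbis.
// needsOgvFallback is a hypothetical helper, not part of TimedMediaHandler.
function needsOgvFallback(videoEl) {
  // HTMLMediaElement.canPlayType returns "", "maybe" or "probably".
  return videoEl.canPlayType('video/ogg; codecs="theora, vorbis"') === '';
}

// Stub objects standing in for real <video> elements:
const safariLike  = { canPlayType: () => '' };         // no native Ogg (Safari/IE)
const firefoxLike = { canPlayType: () => 'probably' }; // native Ogg support

needsOgvFallback(safariLike);  // true  → instantiate an ogv.js player instead
needsOgvFallback(firefoxLike); // false → keep the native <video> element
```

On a browser without native support, the integration would then swap in ogv.js's JavaScript decoder, with the slow-CPU check above deciding between 160p and 360p sources.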

While we’re at it, Microsoft is opening up a public ‘suggestion box’ for Internet Explorer — folks might want to put in their votes for native Ogg Vorbis/Theora and WebM playback.

by brion at October 19, 2014 09:51 AM

Wikimedia Foundation

First GLAM collaboration in Canada with BAnQ

1941: Two employees at a bottling plant of Coca-Cola Canada Ltd. in Montreal, Canada
Photo: Conrad Poirier, PD-Canada, BAnQ Vieux-Montréal

1945: Two young women read the front page of The Montreal Daily Star announcing the German surrender and the impending end of World War II in Europe
Photo: Conrad Poirier, PD-Canada, BAnQ Vieux-Montréal

The Bibliothèque et Archives nationales du Québec (BAnQ) and Wikimedia Canada are announcing a pilot project to upload public domain images from the Conrad Poirier collection at BAnQ Vieux-Montréal.[1]

Freelance photographer Conrad Poirier (1912-1968) sold his photographs to various newspapers and magazines including The Montreal Gazette, La Patrie and La Presse. A follower of the “new vision” (Nouvelle Vision, a photographic movement in the first half of the 20th century), he did social photography early on. He was interested in the working world, in street life and in popular events. Poirier’s work shows the development of Montreal, and more widely the province of Quebec, through historical photographs. With more than 20,000 photographs, the collection covers the period between 1932 and 1960 and shows the evolution of the Quebec metropolis – especially during the 1930s and 1940s. More broadly, Poirier’s work reflects the social changes underway in Quebec in the middle of the last century.

To date, approximately 700 photographs have been uploaded to Wikimedia Commons. In the coming months, an equivalent number of photographs will be added to the selection.

This collaboration between a GLAM institution and Wikimedia is a first in Canada.

Visit the BAnQ GLAM page on the English Wikipedia and the Category:BAnQ-Projet Poirier on Commons.

Thank you to the archives diffusion team of BAnQ Vieux-Montréal.

Benoit Rochon, Project Manager, Wikimedia Canada.

  1. Fund Conrad Poirier description, Pistard catalogue, Bibliothèque et Archives nationales du Québec.

by wikimediablog at October 19, 2014 05:14 AM

October 18, 2014

Wikimedia Tech Blog

Wikimedia engineering report, August 2014

Major news in August includes:

Engineering metrics in August:

  • 160 unique committers contributed patchsets of code to MediaWiki.
  • The total number of unresolved commits went from around 1640 to about 1695.
  • About 22 shell requests were processed.

Technical Operations

Dallas data center

On August 21, our first connectivity to the new Dallas data center (codfw) came online, connecting the new site to the Wikimedia network. The following week, all network equipment was configured to prepare for server installations. The first essential infrastructure services (install server, DNS, monitoring etc.) were brought online in the days following August 25, and we are now working on deploying the first storage and database servers to start replication and backups from our other data centers.

Labs metrics in August:

  • Number of projects: 170
  • Number of instances: 480
  • Amount of RAM in use (in MBs): 2,116,096
  • Amount of allocated storage (in GBs): 22,600
  • Number of virtual CPUs in use: 1,038
  • Number of users: 3,718

Wikimedia Labs

Andrew fixed a few sudo policy UI bugs (68834, 61129). Marc improved the DNS cache settings and resolved some long-standing DNS instability (70076). He also set up a new storage server for wiki dumps. This should resolve some long-term storage space problems that led to out-of-date dumps.
Andrew laid the groundwork for wikitech to be updated via the standard WMF deployment system. We’re investigating the upstream OpenStack user interface, ‘horizon’.

Features Engineering

Editor retention: Editing tools

VisualEditor

In August, the team working on VisualEditor presented about VisualEditor at Wikimania 2014, worked with a number of volunteers at the hackathon, adjusted key workflows for template and citation editing, made major progress on Internet Explorer support, and fixed over 40 bugs and tickets.

Users of Internet Explorer 11, whom we were previously preventing from using VisualEditor due to some major bugs, will now be able to use VisualEditor. Support for earlier versions of Internet Explorer will be coming shortly. Similarly, tablet users browsing the site’s mobile mode now have the option of using a mobile-specific form of VisualEditor. More editing tools, and the availability of VisualEditor on phones, are planned for the future.

Improvements and updates were made to a number of interface messages as part of our work with translators to improve the software for all users, and VisualEditor and MediaWiki were improved to support highlighting links to disambiguation pages where a wiki or user wishes to do so. Several performance improvements were made, especially to the system around re-using references and reference lists. We tweaked the link editor’s behaviour based on feedback from users and user testing. The deployed version of the code was updated three times in the regular release cycle (1.24-wmf17, 1.24-wmf18 and 1.24-wmf19).

Editing

In August, the Editing Team presented at Wikimania 2014 on better ways to develop and manage front-end software, improved the infrastructure of the key user interface libraries, and continued the planned adjustments to the MediaWiki skins system.

The TemplateData GUI editor was significantly improved, including updates to use the new types and recursive importing of parameters where needed, and was deployed on Norwegian Bokmål Wikipedia. The volunteers working on the Math extension (for formulæ) moved closer to deploying the “Mathoid” server that will use MathJax to render clearer formulæ than the current versions.

The Editing team, as usual, did a lot of work on improving libraries and infrastructure. The OOjs UI library was modified to make the isolation of dialogs using <iframe>s optional, and its theme system was re-organised as part of implementing a new look-and-feel for OOUI, consistent with the planned changes to the MediaWiki design, in collaboration with the Design team. The OOjs library was updated to fix a minor bug, with two new versions (v1.0.12 and then v1.1.0) released and pushed downstream into MediaWiki, VisualEditor and OOjs UI.

Parsoid

In August, we wrapped up our face-to-face off-site meetup in Mallorca and attended Wikimania in London, which was the first Wikimania event for us all. At the Wikimania hackathon, we co-presented (with the Services team) a workshop session about Parsoid and how to use it. We also had a talk at Wikimania about Parsoid.

The GSoC 2014 LintTrap project wrapped up; we hope to develop it further over the coming months and go live with it later this year.

With an eye towards supporting Parsoid-driven page views, the Parsoid team worked on a few different tracks. We deployed the visual diff mass-testing service, added Tidy support to parser tests and updated the tests, which makes it easy for Parsoid to target the PHP parser + Tidy combination found in production, and continued to make CSS and other fixes.

Services

Services and REST API

August was mostly a month of travel and vacation for the Services team. We deployed a first prototype of the RESTBase storage and API service in Labs. We also presented on both Parsoid and RESTBase at Wikimania, which was well received. Later in August, computer science student Hardik Juneja joined the team as a part-time contractor. Working from Mumbai, he dived straight into complex secondary index update algorithms in the Cassandra back-end. At the end of the month, design work resumed, with the goal of making RESTBase easier to extend with additional entry points and bucket types.

Core Features

Flow

In August, the Flow team created a new read/unread state for Flow notifications, to help users keep track of the active discussion topics that they’re subscribed to. There are now two tabs in the Echo notification dropdown, split between Messages (Flow notifications) and Alerts (all of the other Echo notifications). Flow notifications stay unread until the user clicks on the item and visits the topic page, or marks the item as read in the notifications panel. The dropdown is also scrollable now, and holds the 25 most recent notifications. Last, subscribing to a Flow board gives the user a notification when a new topic is created on the board.

Growth

Growth

In August, the Growth team vetted CirrusSearch as back-end for personalized suggestions and prepared its first A/B test of the new task recommendations system. This test will deliver recommendations to a random sample of newly-registered users on 12 Wikipedias: English, French, German, Spanish, Italian, Hebrew, Persian, Russian, Ukrainian, Swedish, and Chinese. Several Growth team members also attended Wikimania 2014 in London. At Wikimania, the team shared presentations on its work and conducted usability tests of the recommendations system. Last but not least, design work began on the third major iteration of the team’s anonymous editor acquisition project.

Mobile

Wikimedia Apps

In August, the Mobile Apps Team focussed on bug fixes for the recently released iOS app and for the Android app, as well as gathering user feedback from Wikimania. The team also had unstructured time during Wikimania, in which the engineers were free to work on whatever they fancied. This resulted in numerous code quality improvements on both iOS and Android. On iOS, the unstructured time also spawned a preliminary version of the feature “Nearby”, which lists articles about things that are near you, tells you how near they are, and points towards them. On Android, the unstructured time spawned a preliminary version of full-text search, an improved searching experience which aims to present more relevant results.

Mobile web projects

This month the mobile web team, in partnership with the Editing team, launched a mobile-friendly opt-in VisualEditor for users of the mobile site on tablets. Tablet users can now choose to switch from the default editing experience (wikitext editor) to a lightweight version of VE featuring some common formatting tools (bold and italic text, the ability to add/edit links and references). We also began building a Wikidata contribution game in alpha that will allow users to add metadata to the Wikidata database (to start, occupations of people) directly from the Wikipedia article where the information is contained. We hope to graduate this feature to the beta site next month to get more quantitative feedback on its usage and the quality of contributions.

Wikipedia Zero & Partnerships

Wikipedia Zero page views held steady at around 70 million in August. We launched Wikipedia Zero with three operators: Smart and Sun in the Philippines (related companies) and Timor Telecom in East Timor. That brings our total to 37 partners in 31 countries. Smart has been collaborating with Wikimedia Philippines for months, and they previously offered free access to Wikipedia on a trial basis. Smart has now officially joined Wikipedia Zero and brought in their sister brand Sun, covering a combined 70 million subscribers in the Philippines. Timor Telecom launched Wikipedia Zero with a press event including the Vice Minister of Education and much promotion. Timor Telecom is keen to support growth in the Tetun Wikipedia by raising awareness in universities, with resources from the Wikipedia Education Program. In Latin America, we made progress toward app preloads by completing testing for the Qualcomm Reference Design (QRD) program. The Wikipedia Android app is now certified for preload on QRD. We made terrific connections with Global South community members at Wikimania, which will lead to more direct local collaboration between partners and Wikimedia communities. Smriti Gupta, partnerships manager for Asia, moved to India where she will work remotely. We’re recruiting our third partnerships manager to cover South East Asia and tech partnerships.

Language Engineering

Language tools

Niklas Laxström (outside his WMF job) completed most of the work needed in Translate to recover gracefully from session expiration, a known pain point for translators. The PageMigration feature (a GSoC project mentored by Niklas) was released. The team also worked on session expiry checking (to prevent errors in long translations), updated YAML handling, and deployed auto-translated screenshots for the VisualEditor user guide (a GSoC project by Vikas Yaligar, mentored by Amir). They did internationalization testing of the new Android and iOS apps, as well as internationalization testing and bug fixes in VisualEditor, MobileFrontend and Flow.

Milkshake

Webfonts were enabled on the English Wikisource and Divehi wikis, following requests from the respective communities.

Language Engineering Communications and Outreach

The team was at Wikimania in London. Santhosh Thottingal and Amir Aharoni presented on Machine-aided machine translation, and Runa Bhattacharjee and Kartik Mistry on Testing multilingual applications. They conducted user testing for ContentTranslation in several languages (Catalan, Spanish, Kazakh, Russian, Bengali, Hebrew, Arabic), continued conversations with translators from Wikipedias in several languages, and published a retrospective on ContentTranslation and Wikimania.

Content translation

The machine translation abuse algorithm was redone. The team also worked on reference adaptation improvements, refactoring the front-end event architecture, and rewriting the cxserver registry to support multiple machine translation engines.

Platform Engineering

MediaWiki Core

HHVM

We migrated test.wikipedia.org to HHVM in early August and saw very few issues. Giuseppe shared some promising benchmarks. Re-imaging an app server was surprisingly painful: Giuseppe and Ori had to perform a number of manual, poorly automated steps to get the server up and running. Doing this much manual work per app server isn’t viable.

Mark submitted a series of patches to create a service IP and Varnish back-end for an HHVM app server pool, with Giuseppe and Brandon providing feedback and support. The patch routes requests tagged with a specific cookie to the HHVM back-ends. Tech-savvy editors were invited to opt-in to help with testing by setting the cookie explicitly. The next step after that will be to divert a fraction of general site traffic to those back-ends. The exact date will depend on how many bugs the next round of testing uncovers.
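The report doesn’t include the Varnish VCL itself; as a rough sketch of the routing rule it describes (the cookie name, pool names and opt-in value here are invented for illustration), the decision looks something like this:

```python
import random

# Hypothetical sketch of the HHVM routing rule: requests carrying the
# opt-in cookie always go to the HHVM pool; later, a fraction of general
# traffic can be diverted there as well. Names are illustrative only.
def pick_backend(cookies, hhvm_fraction=0.0, rng=random.random):
    if cookies.get("HHVM") == "true":
        return "hhvm-pool"       # tech-savvy editor opted in explicitly
    if rng() < hhvm_fraction:
        return "hhvm-pool"       # next step: divert a slice of all traffic
    return "zend-pool"           # default: the existing PHP back-ends

print(pick_backend({"HHVM": "true"}))  # opt-in tester -> hhvm-pool
print(pick_backend({}))                # default traffic -> zend-pool
```

Raising `hhvm_fraction` from 0 corresponds to the planned follow-up step of sending a fraction of general site traffic to the HHVM pool.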

Tim is looking at modifying the profiling feature of LuaSandbox to work with HHVM; it is currently disabled.

Admin tools development

Most admin tools resources are currently directed towards SUL finalisation. There was a roundtable at Wikimania with developers and admins/tool users discussing some issues they’ve had, and feature requests they would like to see implemented. The GlobalCssJs extension was deployed to all public Wikimedia wikis, allowing for proper user global CSS and JS.

Search

We started deploying Cirrus as the primary search back-end to more of the remaining wikis, and found what looks like our biggest open performance bottleneck. Next month’s goal is to fix it and deploy to more wikis (probably not all). We’re also working on getting more hardware.

SUL finalisation

The SUL finalisation team continues to work on building tools to support the finalisation. There are four ongoing streams of work, and the team is on track to have the majority of the work completed by the end of September.

The ability to globally rename users was deployed a while ago, and is currently working excellently!

The ability to log in with old, pre-finalisation credentials has been developed so that users are not inadvertently locked out of their accounts. From an engineering standpoint, this form is now fully working in our test environment. Right now, the form uses placeholder text; that text needs to be ‘prettified’ so that the users who have been forcibly renamed get the appropriate information on how to proceed after their rename, and more rigorous testing should be done before deployment.

A form to globally merge users has been developed so that users can consolidate their accounts after the finalisation. From an engineering standpoint, this form is now fully working in our test environment. The form needs design improvements and further testing before it can be deployed.

A form to request a rename has been developed so that users who do not have global accounts can request a rename, and also so that the workload on the renamers is reduced. From an engineering standpoint, the form to request a rename has been implemented, and implementation has begun on the form that allows renamers to rename users. Once the end-to-end experience has been fully implemented and tested, the form will be ‘prettified’.

Security auditing and response

We performed security reviews of the Graph, WikibaseQuery and WikibaseQueryEngine extensions. Initial work was done to enable regular dynamic security scanning.

Release Engineering

Quality Assurance

Having completed the migration of our Continuous Integration infrastructure from a third-party host to Wikimedia’s own Jenkins instance, we are thinking about improvements and changes for future work. We aim to improve performance for Jenkins and also for beta labs. We are looking into creating other shared test environments alongside beta labs, to better support changes like this month’s HHVM work and a security and performance test project. We also continue to improve the development experience with Vagrant and other virtual machine technologies.

Browser testing

This month, we continued to build out and adjust the new browser test builds on Jenkins. We saw updates to tests and issues identified for UploadWizard, VisualEditor, Echo, and MobileFrontend. New tests for GettingStarted pointed out a need to update our Redis storage on the beta cluster. We are currently monitoring an upstream problem with Selenium/Webdriver and IE11 on behalf of VisualEditor, as VE support for IE11 is coming soon.

Multimedia

Multimedia

Media Viewer’s new ‘minimal design’.

In August, the multimedia team had extensive discussions with community members about the various projects we are working on. We started with seven different roundtable discussions and presentations at Wikimania 2014 in London, including sessions on Upload Wizard, Structured Data, Media Viewer, Multimedia, Community and Kindness. To address issues raised in recent Requests for Comments, we also hosted a one-week Media Viewer Consultation, inviting suggestions from community members across our sites.

The team also worked to make Media Viewer easier to use by readers and casual editors, our primary target users for this tool. To that end, we created a new ‘minimal design’ including a number of improvements such as a more prominent button linking to the File: page, an easier way to enlarge images, and more informative captions. These new features were prototyped and carefully tested this month to validate their effectiveness. Testers easily completed most of the tasks we gave them, suggesting that the new features are usable by our target users and ready for development in September.

This month, we prepared a first plan for the Structured Data project, in collaboration with many community members and the Wikidata team: we propose to gradually implement machine-readable data on Wikimedia Commons, starting with small experiments in the fall, followed by a wider deployment in 2015. We also continued our code refactoring for the UploadWizard, as well as fixed more bugs across our multimedia platform. To keep up with our work, join the multimedia mailing list.

Engineering Community Team

Bug management

Daniel made Bugzilla use ssl_ciphersuite to add HSTS and removed a superfluous STS header setting. Andre worked around a Bugzilla XML-RPC API issue that created problems for exporting Bugzilla data for a Phabricator import. Some smaller changes were made to Bugzilla’s taxonomy (components, descriptions, default CCs, etc.).

Phabricator migration

The project is getting close to Day 1 of a Wikimedia Phabricator production instance. For better overview and tracking, the Wikimedia Phabricator Day 1 project was split into three projects: Day 1 of a Phabricator production instance in use, Bugzilla migration, and RT migration. Furthermore, the overall schedule was clarified. In the last month, security/permission-related requirements were implemented (granular file permissions and upload defaults, enforcing that policy, and making file data inaccessible rather than merely undiscoverable). In upstream, Mukunda added an API to create projects and Chase added support for mailing lists as watching users. Chase worked on and tested the security and data migration logic. Mukunda continued to work on getting the MediaWiki OAuth provider merged into upstream. Chase and Mukunda also worked on the Project Policy Enforcer action for Herald, providing a user-friendly dropdown menu to restrict ticket access when creating the ticket. A separate domain for user content was purchased. Chase also worked on the scripts to export and import data between the systems, on support for external users in Phabricator, and on the related mail setup. Chase and Chad also took a look at setting up Elasticsearch for Phabricator.

Mentorship programs

All Google Summer of Code and FOSS Outreach Program for Women projects were evaluated by their mentors as passed, although many were still awaiting completion, code review and merging. We hosted a wrap-up IRC meeting with the participation of all teams except one. We are still waiting for some final reports from the interns. In the meantime, you can check their weekly reports:

Technical communications

In August, Guillaume Paumier attended the Wikimania conference and the associated hackathon. He gave a talk about Tech News (video available on YouTube) and created a poster summarizing the talk. He also continued to write and distribute Tech News every week, and started to contribute to the Structured data project.

Volunteer coordination and outreach

We ran the Wikimania Hackathon in an unconference manner together with the Wikimania organizers. The event went well in a unique venue, and we are compiling a list of lessons learned to be applied at future events. Together with other former organizers of hackathons, we decided that the next Wikimedia Hackathon in Europe will be organized by Wikimedia France (details coming soon). Also at Wikimania, Quim Gil gave a talk about The Wikimedia Open Source Project and You (video, slides).

Analytics

Wikimetrics

Following the prototype built for Wikimania, the team identified many performance issues in Wikimetrics for backfilling Editor Engagement Vital Signs (EEVS) data. The team spent a sprint implementing some performance enhancements as well as properly managing sessions with the databases. Wikimetrics is better at running recurring reports concurrently and managing replication lag in the slave DBs.

Data Processing

The team continued monitoring analytics systems and responding to issues when non-critical alarms went off. Packet losses and Kafka issues were diagnosed and handled.

Hadoop worker nodes now automatically set memory limits according to what is available. Previously all workers had the same fixed limit. This allows for better resource utilization.
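The report doesn’t state the formula used; a minimal sketch of the idea (the function name and the size of the OS reservation are invented for illustration) is to derive the container memory limit from the RAM actually present on each worker, rather than using one fixed value fleet-wide:

```python
# Hypothetical sketch: per-node YARN memory limits derived from available
# RAM instead of a single fixed limit for all Hadoop workers.
def yarn_memory_mb(total_ram_mb, reserved_for_os_mb=8192):
    """Leave headroom for the OS and daemons; give the rest to containers."""
    return max(total_ram_mb - reserved_for_os_mb, 1024)

print(yarn_memory_mb(65536))  # a 64 GiB worker gets 57344 MB for containers
print(yarn_memory_mb(24576))  # a 24 GiB worker gets 16384 MB
```

Larger nodes automatically contribute more container memory, which is what allows the better resource utilization mentioned above.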

Logstash is now available at https://logstash.wikimedia.org (Wikitech account required). Logs from Hadoop are piped there for easier search and diagnosis of Hadoop jobs.

Some uses of udp2log were migrated to kafkatee, which is not prone to packet loss. In particular, Webstatscollector was switched over, and error rates dropped drastically. Eventually, the “collecting” part of Webstatscollector will be implemented in Hadoop, a much more scalable environment for such work.

Editor Engagement Vital Signs

The team implemented the stack necessary to load EEVS in a browser and has a rough implementation of the UI according to Pau’s design. The team also made two metrics already implemented in Wikimetrics available to EEVS: number of pages created and number of edits.

Research and Data

This month we hosted the WikiResearch hackathon, a dedicated research track of the Wikimania hackathon. Three demos of research code libraries were broadcast during the event and several research ideas were filed on Meta. Highlights from the hackathon include: Quarry (a web client to query Wikimedia’s slave databases on Labs); wpstubs (a social media bot broadcasting newly categorized stubs on the English Wikipedia); and an algorithmic classification of articles due to be re-assessed from the English Wikipedia WikiProject Medicine’s stubs.

We gave or participated in 8 presentations during the main conference.

We published a report on mobile trends expanding the data presented at the July 2014 Monthly Metrics meeting. We started work on referral parsing from request log data to study trends in referred traffic over time.

We generated sample data of edit conflicts and worked on scripts for robust revert detection. We published traffic data for the Medicine Translation Taskforce, with a particular focus on traffic to articles related to Ebola.

We wrote up a research proposal for task recommendations in support of the Growth team’s experiments on recommender systems. We analyzed qualitative data to assess the performance of the CirrusSearch “morelike” feature for identifying articles in similar topic areas. We provided support for the experimental design of a first test of task recommendations. We performed an analysis of the results of the second experiment on anonymous editor acquisition run by the Growth team.

We hosted the August 2014 research showcase with a presentation by Oliver Keyes on circadian patterns in mobile readership and a guest talk by Morten Warncke-Wang on quality assessment and task recommendations in Wikipedia.

We also gave presentations on Wikimedia research at the Oxford Internet Institute, INRIA, Wikimedia Deutschland (slides) and at the Public Library of Science (slides). Aaron Halfaker presented at OpenSym 2014 a paper he co-authored on the impact of the Article for Creation workflow on newbies (slides, fulltext).

Wikidata

The Wikidata project is funded and executed by Wikimedia Deutschland.

August was a very busy month for Wikidata. The main page was redesigned and is now much more inviting and useful. A lot of new features were finished and deployed. Among them are:

  • Redirects: allowing you to turn an item into a redirect.
  • Monolingual text datatype: allowing you to enter new kinds of data like the motto of a country.
  • Badges: allowing you to store badges for articles on Wikidata. This includes “featured article” and “good article”. More will be added soon.
  • In other projects sidebar as a beta feature: allowing you to show links to sister projects in the sidebar of any article.
  • Special:GoToLinkedPage: allowing you to go to a Wikipedia page based on its Wikidata Q-ID. This is especially useful for creating links to articles that keep working even if the article is moved.
  • Wikinews: Wikinews has been added as a supported sister project. Wikinews can now maintain their sitelinks on Wikidata. Access to the other data will follow in due time.
  • Wikidata: Sitelinks to pages on Wikidata itself can now also be stored on Wikidata. This is useful, for example, to connect its help pages with those on the other projects.
  • Change of the internal serialization format: The internal serialization format changed to be consistent with the serialization format that is returned by the API.
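As a small illustration of the Special:GoToLinkedPage item above (the exact URL layout is assumed from the feature’s description, not taken from documentation in this report): because the link addresses the article by its Wikidata Q-ID rather than its title, it survives page moves.

```python
# Hypothetical sketch of building a stable Special:GoToLinkedPage link.
# The site code and Q-ID identify the target; the article title never
# appears in the URL, so renames don't break the link.
def goto_linked_page(site, item_id):
    return ("https://www.wikidata.org/wiki/Special:GoToLinkedPage/"
            + site + "/" + item_id)

print(goto_linked_page("enwiki", "Q42"))
# -> https://www.wikidata.org/wiki/Special:GoToLinkedPage/enwiki/Q42
```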
In addition, the team worked on a lot of under-the-hood changes towards the new user interface design and started the discussions around structured data support for Commons. The log of the IRC office hour is available.

Future

The engineering management team continues to update the Deployments page weekly, providing up-to-date information on the upcoming deployments to Wikimedia sites, as well as the annual goals, listing ongoing and future Wikimedia engineering efforts.

This article was written collaboratively by Wikimedia engineers and managers. See revision history and associated status pages. A wiki version is also available.

by Guillaume Paumier at October 18, 2014 05:35 PM

Gerard Meijssen

Bringing #Wikidata to #Commons, one step at a time

There is a big project underway to bring structured data to the 23,422,581 media files that make up one of the biggest resources of freely usable media files.

It will bring many different benefits to the users of Commons. To accomplish this, many steps have to be taken. Many of these steps can already be taken now, and they indicate why this project is being done and what its benefits are.

Take for instance Mr Daniel Havell, an English engraver born in Reading. There is no Wikipedia article about him, but there is information about him in Wikidata. It includes all the information that is in his "Creator" template and in the category about him on Commons.

Having such information on Wikidata for all the "Creators" is easy and obvious. Having all those templates refer to Wikidata builds anticipation of things to come. The next steps are making sure that the information looks good on Wikidata and is informative. Currently the best we can offer is showing the information in Reasonator.

Using tools like Reasonator for now establishes that the WMF and the Wikidata team appreciate all the efforts that promote the use of Wikidata, and accept them as indicative of the type of information it will have to bring.

This can all be done today. No waiting is necessary, and it makes data from Commons available in multiple languages. This is Mr Havell in Russian. Bringing the benefits of Wikidata to Commons today helps. It brings awareness of the inherent benefits to our public. It allows them to comment and get involved, slowly but surely. It will prevent a "big bang" announcement of "this is it, take it or leave it". It will even bring more information in more languages to Commons sooner rather than later.
Thanks,
      GerardM

by Gerard Meijssen (noreply@blogger.com) at October 18, 2014 06:53 AM

Wikimedia Foundation

First editathon on the Spanish language and literature in Madrid

“Wiki Editatón Madrid 2014 – 04” by Carlos Delgado, under CC-BY-SA-4.0

Group photograph of participants in the editathon at the National Library of Spain.

On Saturday, September 27th, Wikimedia España co-organized the first editathon in Madrid focused on improving content about the Spanish language and literature in the Spanish Wikipedia. This editathon was fostered by three relevant institutions strongly committed to the promotion and dissemination of the Spanish language and culture around the world: the Cervantes Institute, the Royal Spanish Academy and the National Library of Spain. The meeting was hosted in the Board’s Hall (Salón del Patronato), an emblematic room inside the museum of the National Library of Spain, and it was primarily aimed at participants without prior experience editing Wikipedia. The directors of the three institutions were present at the start of the event to welcome all attendees and thank them for their participation. Progress of the meeting could be tracked through Twitter, Facebook and other social media platforms by following the hashtag #WikiEditatonMadrid. This facilitated the participation of other virtual editors who could not attend the meeting in person.

“Wiki Editatón Madrid 2014 – 14” by Carlos Delgado, under CC-BY-SA-4.0

From left to right, José Manuel Blecua Perdices (director of the Royal Spanish Academy), Ana Santos Aramburo (director of the National Library of Spain) and Víctor García de la Concha (director of the Cervantes Institute) welcome participants in this editathon.

The registration was quite successful, with 114 enrolled participants, of whom approximately 61% were women. This was an outstanding achievement, especially considering the still low participation of women editors in Wikipedia. Ten volunteers, experienced Wikipedians from Wikimedia España, offered guidance to all editors and answered their questions. The meeting took place from 10.00 to 18.00 (local time) and started with a short introduction to effective participation in Wikipedia. Lunch, beverages and cupcakes were served to all participants to keep up the editing enthusiasm.

The meeting was a great success and its main accomplishments can be summarized as follows:

During the editathon.
“Madrid – Editatón Madrid BNE 2014 – 140927 145624” by Barcex, under CC-BY-SA-3.0

All editors received special surprise gifts: books from the Royal Spanish Academy, image products from Cervantes Institute, the National Library of Spain and WMF. On top of that, the National Library invited all attendees (editors and volunteering Wikipedians) to participate in an exclusive guided tour through the National Library museum, including visits to special areas and rooms. Overall, we were quite satisfied with the development of this editathon. We also hope that it can be the first step in a new series of similar initiatives in Spain to engage these and other renowned organizations and institutions on improving access to free knowledge in Wikipedia.

Felipe Ortega, co-organizer and member of Wikimedia España.

by wikimediablog at October 18, 2014 02:29 AM

October 17, 2014

Wikimedia Foundation

How the #wikinobel Nobel Peace Prize collaboration came to be

Bente Erichsen, Executive Director at the Nobel Peace Center, and Astrid Carlsen of Wikimedia Norway, edit after the announcement. “Edit-a-thon Nobel Peace Prize 04” by WMNOastrid, under CC-BY-SA-4.0

In April 2013, the Nobel Peace Center and Wikimedia Norway came together for their first collaboration: an edit-a-thon to enhance the quality of Wikipedia articles on the Nobel Peace Prize, various Peace Prize laureates, and other related articles on war, peace and conflict resolution.

Both groups agreed it was a great experience, and were looking for opportunities to continue working together. Last week, they came together again at the Nobel Peace Center for the announcement of the 2014 Nobel Peace Prize. On Friday, 10th October, a group of Wikipedians from Wikimedia Norway converged at the Peace Center to follow the announcement. There, they made updates to Wikipedia in real time as the winners — girls’ education activist Malala Yousafzai of Pakistan, and children’s rights activist Kailash Satyarthi of India — were made public.

At the same time, 500 km away in the northern Norwegian city of Trondheim, Wikimedian Jon Harald Søby followed remotely, supporting updates to other language versions of Wikipedia by Wikimedians all around the world. Throughout the day we kept in contact via Skype, and Jon Harald was even interviewed about the experience on Norwegian national radio.

Knowledge and education of young and old alike is pivotal to all activities at the Nobel Peace Center, which is visited by 220,000 people every year, one third of whom are children and young people. The Nobel Peace Center works to increase knowledge of the Nobel Peace Prize and its history, its laureates, and topics within the fields of war, peace, and conflict resolution. The Nobel Peace Center and Wikimedia Norway both want this collaboration to contribute even more quality, fact-based knowledge to Wikipedia, and to enhance public conversation on these important issues. We greatly appreciate all the efforts and feedback from community members around the world in connection with the event.

Kirsti Svenning at The Nobel Peace Center sums up: “The way a Wikipedia article is made, the fact that several people co-write it, bringing a joint pool of knowledge and facts together and continuously enhancing the quality of the final output, is very much in keeping with the Nobel Peace Center’s mission: to increase the knowledge and reflection about the Nobel Peace Prize. The collaboration with Wikimedia Norway is much appreciated and there are new events already being planned.”

Wikimedia Norway looks forward to a continued collaboration with the Nobel Peace Center. If there are any community members, Wikimedia chapters, or institutions with ideas or thoughts on an international collaboration, please contact astrid@wikimedia.no.

Astrid Carlsen
Prosjektleder, Wikimedia Norge

by maherwiki at October 17, 2014 06:42 PM