en.planet.wikimedia

July 26, 2014

Wikisorcery

Why I deleted the Qur’an

As we near the end of Ramadan and Eid approaches, it seems somewhat topical to bring up the time I deleted the Holy Qur’an as a copyright violation.

Despite being nearly a millennium and a half old, the Qur’an was still under copyright in the United States, so it had to go. (Strictly speaking, I just nominated it for deletion, rather than actually deleting it myself, but it’s close enough.)

This is not a joke about the United States’ famously long periods of copyright protection.  The problem in this case is something that often seems to be missed in other cases too.  This copy of the Qur’an was an English translation, and a translator receives a brand new copyright on their work in addition to any potential copyright that may or may not apply to the original.  The true Arabic original is very much in the public domain.  I doubt the translator in question, Indian Islamic scholar Abdullah Yusuf Ali (1872–1953), wanted to restrict access to his work, and very much doubt the Prophet (pbuh) would want that either, but we have no proof the translation was ever freely licensed or released, so copyright law must apply.

This was complicated by the fact that Ali published his 1934 version in Lahore, a part of British India that is now Pakistan, although he was born in Bombay, a part that is now India, and died in Surrey, which was and is in the United Kingdom. Quite which body of copyright law to apply was unclear. Unusually, the URAA solved some of this: whichever country was involved, the translation was still under copyright there in 1996, so it became copyrighted in the United States, if it wasn’t already, and remains so under US law.

(For reference: Pakistan is the most generous—from a certain point of view—and uses Life+50 for its copyright terms, so it would have become public domain there in 2004. India uses Life+60, so it actually entered the public domain there at the beginning of this year. The UK uses Life+70, so it is still under copyright there for another decade.)
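To spell out the arithmetic behind those dates: in all three countries the term runs to the end of the calendar year, so the work enters the public domain on 1 January after "life + N" expires. A quick, purely illustrative sketch:

```python
# Illustrative only: "life + N" terms run to the end of the calendar year of
# expiry, so the work enters the public domain on 1 January of the next year.
death_year = 1953  # Abdullah Yusuf Ali

for country, term in [("Pakistan", 50), ("India", 60), ("United Kingdom", 70)]:
    pd_year = death_year + term + 1
    print(f"{country}: public domain from 1 January {pd_year}")

# Pakistan: 2004, India: 2014, United Kingdom: 2024 -- matching the dates above.
```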

You can still find his translation on Wikilivres (Canada uses Life+50 just like Pakistan) and a derivative on Project Gutenberg (I don’t know why).

I’ve done a little work in adding a new, non-copyrighted Qur’an to Wikisource but it is not currently my priority (it’s the most important book of the world’s second largest, and fastest growing, religion; it isn’t hard to get a copy if you want one).


by wikisorcery at July 26, 2014 06:00 AM

July 25, 2014

Wikimedia Foundation

Africa’s first Regional Conference gathers Wikimedians in Johannesburg

Wiki Indaba 2014

Group photo

Conference memorabilia

This past June, Wikimedia South Africa hosted more than 35 Wikimedians in Johannesburg for the first ever Wiki Indaba Regional Conference. All four regions of Africa were represented by at least one country, with West Africa having the lion’s share. For three days we talked about the challenges and possible solutions for initiating Wikipedia editing communities in the continent, in an effort to fulfill our vision of sharing the sum of all human knowledge with the world.

We left the conference with a renewed sense of purpose and a united goal to create Wikipedia editing communities in our respective countries through clear communication channels and co-operation plans, even though we were well reminded that we don’t have a magic wand to accomplish this overnight.

The first day was spent listening to delegates recount community efforts in their home countries, the unique challenges they face and their future plans. We learned how group dynamics and diversity helped Tunisia acquire its status as a newly recognized African user group. From Egypt we heard how universities are responding to Wikipedia. From Côte d’Ivoire and Cameroon we learned of local efforts led by the Wikipedian in Residence (WiR) at the Africa Centre and how they are linking up with local academic and art institutions to expose the public to Wikipedia. We learned of grassroots efforts in Ghana and Nigeria, where volunteers have actively reached out to schools and the general public, and heard how difficult it is to arrange events without the approval of local authorities. From Cameroon we learned how Wiki Loves Monuments improved acceptance of Wikipedia. From Ethiopia we learned about the dangers faced by bloggers, how Wikipedia is often mistaken for WikiLeaks, and how some Wikipedians have actually been incarcerated for blogging. Representatives from Malawi and Tanzania discussed how Wikipedians are fusing their entrepreneurial skills with open knowledge. From Kenya we learned of efforts to regroup and pursue chapter status, and we learned of the efforts of university students to build a community in Botswana. Namibia highlighted its renewed effort to experiment with oral citations as a way to build acceptance of local and indigenous knowledge through Wikipedia. We explored the efforts made in South Africa, which is still the only chapter on the continent. At the end of the day, we reviewed statistics for African-language Wikipedias and gathered as many insights as possible. The day closed with a presentation on the Wikimedia Foundation’s Global South strategy and how it is poised to assist communities throughout the continent.

After a refreshing social event in the heart of Johannesburg, we were back for day 2 and straight onto business. Delegates got a chance to interact with the Wikimedia Foundation Grants team (Anasuya and Asaf) and ask questions about available funding opportunities to enable them to run outreach events in their countries. Many misconceptions were dispelled, and delegates gained confidence in planning future projects with the assurance that funding and support would be available when needed. The Wikipedia Zero presentation (by Adele) was awe-inspiring and instilled a sense of excitement as delegates learned about recent developments in their own countries, along with a deeper understanding of the technology and its potential impact on the continent. The afternoon session was divided into two tracks running concurrently, where insights on exceptional local projects in education and on copyright issues were discussed. We heard how mission-aligned thematic organizations can complement local community efforts in increasing Wikipedia’s reach and understanding. The day was capped by a social event hosted by Creative Commons ZA, where a film on copyright highlighted issues brought to light by entities like The Pirate Bay and the European lawsuit against its peer-to-peer online content sharing.

WMF Grants Team

The third and final day of the conference saw exhausted delegates wrap up with local success stories such as Omaheke, Namibia’s outreach and research sandbox. Wikipedia Primary Education, the Wikipedia Education Program in Egypt, the Siyavula Open Education portal in South Africa and Mesh Sayada, a free community network for open data and free culture in Tunisia, were all showcased. An evaluation of Wiki Loves Monuments’ successes in South Africa was also discussed. The session culminated with the announcement of Wiki Loves Africa, a photographic competition modelled on the Wiki Loves Monuments concept.

At the close of the conference, delegates were asked to write down personal pledges on how they planned to continue and increase their efforts to build editing communities in their countries. These were documented and will be sent back to the delegates as a reminder of their pledges. Delegates also deliberated on the best ways to stay connected as a group: through discussions on Meta-wiki, on social networks and through the creation of an African mailing list.

Wikipedia Zero Presentation

As captured by the program director in his closing statements, there are no expectations that this conference will magically result in super-active editing communities in Africa; however, there is now hope that an organized group of dedicated volunteers will work together to spark the much-needed Wikipedia editing communities on the continent – one step at a time.

(Watch videos from the conference participants on YouTube here. Complete conference documentation is available on Meta-wiki and more pictures of the conference can be seen on Commons. Many thanks to our wonderful hosts at Wikimedia South Africa in Johannesburg for a well-organized event and to all participants for sharing their knowledge and experiences at this conference. We look forward to continuing this conversation in the coming months.)

Dumisani Ndubane, Project lead, Wikimedia ZA

by Dumisani Ndubane at July 25, 2014 06:52 PM

Gerard Meijssen

#Wikidata #statistics - waiting for storage

With the Games being really popular and with the mass addition of statements using AutoList2, you would expect this to show up in the statistics for Wikidata.

It does not. There is a problem: Labs ran out of storage. New hardware was ordered and, with a bit of luck, the three shelves of new storage will provide ample space for the foreseeable future. There is a truism that disks fill up in half the expected time.

Yesterday they were initialising the hard drives; next comes moving and copying the data to its new location. With a bit of luck, the software that generates the statistics will find it and we will have something to ponder again.
Thanks,
     GerardM

by Gerard Meijssen (noreply@blogger.com) at July 25, 2014 03:39 PM

Wikimedia UK

Historic library collections get worldwide exposure

The photo shows the front of the Library seen from across the street on a sunny day

The National Library of Scotland, Edinburgh

This post was written by the team at the National Library of Scotland and was originally published on their website here

Images from the National Library of Scotland’s (NLS) historic collections have been added to one of the world’s most popular websites where they can be seen and shared by people everywhere.

It is the first stage in a partnership between the Library and Wikimedia Commons (WikiCommons), the online repository of free-to-use images, sounds and other media files. It is part of the Library’s commitment to widen access to material in its collections and advance knowledge and understanding about Scotland around the world.

NLS has added photographs of the construction of the Forth Rail Bridge (1882-1889) and the aftermath of the collapse of the Tay Rail Bridge in 1879 to the website which hosts more than 20 million files. Other material will be added over the coming months to increase the Library’s presence on WikiCommons which is one of the top 150 websites in the world.

It follows agreement on a new digital content licensing policy at NLS, based on the principle of making information openly available where there are no legal, contractual, privacy or related restrictions.

Last year the Library appointed Scotland’s first Wikimedian-in-Residence, Ally Crockford, who has been working with NLS staff and the public to add content to the online encyclopedia Wikipedia and its sister projects.

She said WikiCommons offered NLS exciting opportunities to reach out to a new audience. ‘People will be able to find this material and will be encouraged to share it. Hopefully they will then come back to the Library’s website to see what more is there. This is the beginning of a process that will see much more of the Library’s collections made available worldwide.’

The residency and the uploading of NLS digital content are part of the on-going collaboration between the NLS and Wikimedia UK.

by Stevie Benton at July 25, 2014 09:21 AM

Wikimedia Foundation

Wikimedians in Residence: a journey of discovery

GLAM-WIKI 2013 attendees

A bit of background

In April of 2014 I found myself digging deep into analytics in search of possible improvements and insight into what we do as a chapter. What brought me there? One of our most renowned programs, Wikimedians in Residence. A Wikimedian in Residence (WIR) is a person who, as a Wikimedia contributor, accepts a placement within an institution to facilitate open knowledge in a close working relationship between the Wikimedia movement and the institution. They work to facilitate content improvements on Wikimedia projects, but more importantly serve as ambassadors for open knowledge within the host organization.

Wikimedia UK has been involved with WiRs in the UK with varying degrees of support and supervision. Since the creation of the chapter, we have always felt that the program was worth running, seeing it as one of the key ways we can engage with external organizations. However, I never knew for sure whether that was just a feeling. Toward the end of 2013 we decided to explore these notions.

Why and how to evaluate

As I focused on my questions about program impact, I embarked on a review process of the program, which eventually included: a questionnaire for all the key parties, online surveys, meetings, group discussions, the analysis of existing materials (e.g. residents’ reports) and creation of a review document.

In January of this year I planned to survey the Residents and host institutions about their views on the program. Since I wasn’t sure what to ask, I reached out to the Program Evaluation and Design team for help.

Their stringent approach was worth it. We boiled down the issues around what I actually wanted to find out from the survey. Doing that before creating the questions was a revelation to me. The questionnaire went much deeper than I had originally anticipated. This meant that when we worked on creating the survey questions, every point was there for a specific reason and in a sensible order. With their help, I developed three surveys: one for residents, one for residency hosts and another for community member input.

I was impressed with the amount of feedback that was shared. The Residents were clearly committed to the project and keen on telling me what could make the program more successful. At the same time I ran interviews with the host institutions. By that stage I was deeply entrenched in the review process. Discovering more about the program increased my appetite for a deeper analysis. This culminated in an April brainstorming meeting aimed at completing an analysis of the strengths, weaknesses, opportunities and threats (SWOT analysis) of our Wikipedian in Residence program.

With the data collection completed, I then examined all the reports and case studies produced by the residents and summarized them in terms of the impact made to Wikimedia projects. (Click here to read Overview of the residencies.)

Lessons learned

After running the program for a long time, one may assume they know everything about it. I was surprised to hear from many Residents that it often took them a couple of months to fully understand what their role within the host organization was. I had assumed that they would have connected with one another to share the resources they created without my help, but this was not the case.

Before doing this research, I did not appreciate how important it is to have a ‘team’ within the institution working with the residents. Having a line manager and/or senior staff support seemed to be one of the main reasons some residencies were more successful than others.

With the data pulled from the report, the Program Evaluation and Design team helped develop an infographic (see below). This resource seeks to showcase the numbers behind the program: how do the efforts of the Wikimedians in Residence impact the Wikimedia projects? Overall, Wikimedia UK invested only 30% of the total cost of funding in-house residents over the course of their terms. Each residency is unique, with variations from one to the next, but they also have many points in common. Take a look and follow the colors to single out residencies. The graphics are approximate rather than exact, due to gaps in reporting. If you would like to add more data to these graphics, please email eval@wikimedia.org.

Looking Ahead: An improved WiR program

The aim of the review was to assess the program, focusing on feedback about successful models for the residencies and analyzing key obstacles to greater success. Six months later, with some volunteer support, I finished a review report. (Click here to read the summary.) What I appreciated most about this project was being able to analyze an existing program and see how it could run better, rather than stopping it and trying something completely new. Innovation is usually expected to arise from brand-new initiatives, but I found it motivating and useful to find novelty by looking deeply into WiR.

The areas for improvement we have identified are:

  • Duration of residencies – residencies should be longer to ensure impact (e.g. 9-12 months for larger organizations)
  • Project goals – these should be clearer for each residency to make impact easier to assess. They should be reflected in the job description. Better reporting should follow.
  • Sharing of information – set up a forum for the sharing of advice, information and best practice between institutions and between residents.
  • Supporting the program – additional capacity is needed for supporting the residents and the program. This will be considered in the future.

References

  • Watch the video of the Survey Strategies virtual meet-up (a recorded hangout), where I share reflections and commentary on the process and what I learnt from the surveys.

Daria Cybulska, Wikimedia UK

Wikimedians in Residence – Report May 2014

by Daria Cybulska at July 25, 2014 03:41 AM

July 24, 2014

Jeroen De Dauw

Semantic MediaWiki 2.0 RC3

I am happy to announce the third release candidate for Semantic MediaWiki 2.0 is now available.

Semantic MediaWiki 2.0 is the next big release, which brings new features and many enhancements. Most notable is vastly improved SPARQL store support, including a brand-new connector for Jena Fuseki.

The target for the actual 2.0 release is August 3rd. This release candidate is meant to gather feedback and to give you a peek at 2.0 already. If you find any issues, please report them on our issue tracker.

Upgrading instructions

If you are using SMW via Composer, update the version in your composer.json to “~2.0@rc” and run “composer update”. If you were running dev versions of the 1.9 series using “~1.9@dev” or similar, switch to “~2.0@dev”. Note that several extensions to SMW, such as Semantic Maps and Semantic Result Formats, do not yet have a stable release that is installable together with SMW 2.x. If you are also running those, you will likely have to switch them to a development version.
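As a concrete illustration, the relevant composer.json fragment would look roughly like this (the package name shown is the one SMW uses on Packagist; treat it as an assumption if your setup differs), followed by running “composer update” in your MediaWiki installation directory:

```json
{
    "require": {
        "mediawiki/semantic-media-wiki": "~2.0@rc"
    }
}
```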

You can also download the SMW 2.0 RC3 tarball.

More detailed upgrading instructions will be made available for the 2.0 release.

by Jeroen at July 24, 2014 10:33 PM

Gerard Meijssen

#Wikipedia - the death of Eric Garner

Mr Garner died in New York. For whatever reason, he was held in a chokehold by a police officer. Because of the stress, or whatever it was, Mr Garner died of a cardiac arrest. Sadly, there are many such incidents in the United States. It is not that strange given how much violence, in all kinds of forms, is celebrated. It is not strange when many police officers think they are immune to any criticism.

It is a sad story, and a newsworthy story, but it is not a story worthy of an encyclopedia. At best it is worthy of a footnote in an article on police behaviour in 2014. Given that there is a Wikipedia article, there is a Wikidata item. Given that Wikidata has the aspiration to include Wikinews, that may not be so bad, but still.
Thanks,
     GerardM


by Gerard Meijssen (noreply@blogger.com) at July 24, 2014 05:04 PM

Wikimedia UK

Wikimania – Nine working days to go…

The image shows the red and blue Wikimania shard logo

The Wikimania shard logo

This post was written by Jon Davies, Wikimedia UK Chief Executive

Our office white board now says only nine working days to go to Wikimania. In reality it will be a few more as weekends seem to be as busy as Monday to Friday now.

I’d like to share a few thoughts with our community.

Firstly, the next fortnight will disappear in a blur. The big jobs have been done: we have a venue, speakers, food and wifi, so there will be a conference and it will be the best Wikimania ever.

Secondly, the devil will be in the detail, and I am surrounded by people tying down the last bits and pieces: chasing printers, correcting mistakes and making last-minute decisions. So be patient if we are slow in replying to anything. We have to make judgements, and sorting out someone’s visa application on a call to the British Embassy in Delhi can play havoc with our otherwise smooth timetables.

Thirdly, thanks to the volunteers who are making all this possible. Looking round, I can see six people bringing Wikimania to life: designing the programme booklet, editing videos, writing a ‘who’s who’ database for the conference, setting up the AV for the venue and doing a lot of other things I am not even aware of that will contribute to a smooth experience.

If you haven’t registered yet please do so here.

See you at Wikimania!

by Stevie Benton at July 24, 2014 02:38 PM

Gerard Meijssen

#Wikidata - Henri-Guy Caillavet, a French MEP

At #Wikidata, several people are adding information about former and present Members of the European Parliament. In the "Mix-n-Match" tool we link information from the European Parliament to Wikidata and, by inference, to many Wikimedia projects.

It involves identifying people who were also MEPs. Often items can be merged when multiple articles exist for the same person, or Wikidata does not yet know that a person was an MEP.

What is surprising is that the death of Mr Caillavet is not recorded on the website of the European Parliament. Linking data will make it easier for them, and for us, to learn about such things in a more timely manner.
Thanks,
     GerardM

by Gerard Meijssen (noreply@blogger.com) at July 24, 2014 12:28 PM

#Wikidata - let us get rid of fixed descriptions

In Wikidata there are fixed "descriptions" for each item. They may have been a good idea at the time (everybody does them), but what really helps most is a generated description. When there are two Mr Gerokostopoulos to choose from, a generated description helps you pick the right one, and it works even when you do not know English and prefer another language. Even more adventurous: when new information is added, the generated text gets updated automagically, while the fixed texts weigh us down even more.

Really, why have them? What is the added value? What stops Wikidata from getting rid of all that junk? The best argument is that it frees up time to add more statements!
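To make the idea concrete, here is a minimal, hypothetical sketch of a generated description built from an item’s statements via the standard wbgetentities API; the property choices and wording are my assumptions, not how Wikidata actually works:

```python
# Hypothetical sketch of a "generated description" built from statements via the
# public Wikidata API (action=wbgetentities). The chosen properties (P106
# "occupation", P27 "country of citizenship") are illustrative assumptions.
import requests

API = "https://www.wikidata.org/w/api.php"

def get_entity(qid, props="claims", languages="en"):
    params = {"action": "wbgetentities", "ids": qid, "props": props,
              "languages": languages, "format": "json"}
    return requests.get(API, params=params).json()["entities"][qid]

def label(qid, lang="en"):
    entity = get_entity(qid, props="labels", languages=lang)
    return entity.get("labels", {}).get(lang, {}).get("value", qid)

def first_item_value(claims, prop):
    for claim in claims.get(prop, []):
        snak = claim["mainsnak"]
        if snak["snaktype"] == "value":
            return snak["datavalue"]["value"]["id"]
    return None

def generated_description(qid, lang="en"):
    claims = get_entity(qid)["claims"]
    values = (first_item_value(claims, p) for p in ("P106", "P27"))
    parts = [label(v, lang) for v in values if v]
    return ", ".join(parts) or "no statements to describe this item yet"

# Regenerating the text on demand means it follows the statements automatically,
# in whichever language the reader prefers.
print(generated_description("Q80", lang="en"))  # Q80 = Tim Berners-Lee
```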
Thanks,
      GerardM

by Gerard Meijssen (noreply@blogger.com) at July 24, 2014 05:56 AM

July 23, 2014

Wikimedia UK

University Challenge recognises Wikipedia

 

The photo shows Jeremy Paxman, host of University Challenge

Jeremy Paxman, host of University Challenge

In what feels like something of a landmark, the iconic UK quiz show University Challenge, hosted by journalist and former Newsnight presenter Jeremy Paxman, this week featured a bonus round of questions about Wikipedia editors. UK readers can watch the clip on BBC iPlayer here for the next few days. The whole show is good, but the section in question begins around 5m45 into the clip.

For Wikipedia to be featured in such a prestigious television show, with a core audience of people in higher education, is a great acknowledgement of Wikipedia’s growing credibility. But how would you fare against the students of Jesus College, Oxford, who faced the questions?

1. In 2012, the philosophy graduate Justin Knapp became the first person to be credited with 1,000,000 Wikipedia edits. He’s especially noted for his work on the bibliography page of which English novelist born in Bengal in 1903?

2. Secondly, what institutions associated with a beverage, and with for instance the Chinese city of Chengdu, give their name to the help space which is, quote: “A friendly place to help new editors become accustomed to Wikipedia culture”?

3. In an interview in 2014, the Wikipedia co-founder Jimmy Wales stated that he “used to edit a lot of entries” about which specific UK political body often symbolised by a red portcullis?

The answers are below, but if you correctly answered any of the questions then you did better than the students, who got each one wrong. A sign that there is still some way to go, perhaps!

(Answers: 1. George Orwell. 2. Teahouse. 3. The House of Lords.)

by Stevie Benton at July 23, 2014 07:00 PM

Wikimedia Foundation

The First ever Creative Commons event in Telugu: Ten Telugu Books Re-released Under CC

Event flyer, User:రహ్మానుద్దీన్, CC-BY-SA 3.0

Telugu is one of the 22 scheduled languages of the Republic of India (Bhārat Gaṇarājya) and is an official language of the Indian states of Andhra Pradesh and Telangana and of Yanam, a district of the union territory of Puducherry. In India alone, Telugu is spoken by 100 million people, and it is estimated to have 180 million speakers around the world. The government of India declared Telugu a Classical language in 2008.

Telugu Wikipedia has been in existence for more than 10 years and has 57,000 articles. Telugu Wikisource, one of its sister projects, has more than 9,400 pages. Several Telugu books are being typed and proofread using the Proofread Page extension. Since Telugu uses one of the complex Indic scripts, computing in Telugu arrived relatively late, and many books that were published (or are being published) are not in Unicode. Telugu Wikisource has now emerged as the largest searchable online book repository in Telugu. Telugu Wikisourcerers, despite being a small community, have done a great job of digitizing many prominent Telugu literary works, and attempts have been made to convince contemporary writers to re-release their books under the CC BY-SA 3.0 license. Such an effort was made a year ago with a Telugu translation of the Quran. Recently, 10 Telugu books by a single author were re-released under the Creative Commons license (CC BY-SA 3.0) on June 22, 2014 at The Golden Threshold, an off-campus annex of the University of Hyderabad. CIS-A2K played an instrumental role in getting this content donated. This is one of the first instances in an Indian language where a single author has re-released such a large collection of books under a CC license. These books are being uploaded to Telugu Wikisource using Unicode converters.

Audience at the event.

Telugu Wikimedians, in collaboration with CIS-A2K, came together to celebrate this first Creative Commons event in Telugu. The event was attended by about 100 people from various walks of life. The patron of Indu Gnaana Vedika, Sri Sri Sri Prabodhananda Yogeeswarulu, presided as chief guest, and N Rahamthulla, a long-time Wikipedian and senior bureaucrat, was the guest of honor. Prabodhananda emphasized the importance of the availability of knowledge in one’s native tongue and how knowledge should not be confined to books alone. Telugu Wikisource, he said, would not only ensure a wider audience for the books, but also enable the language to survive in the digital era. A video interview with the guest of honor, Rahamthulla, was played, in which he spoke about the creation of new technical terms in one’s native language and how Telugu is being used as an administrative language in his office.

One participant sought clarification on Creative Commons licensing and Wikisource at the event.

Rahamthulla also shared his experience of using Telugu in the office and suggested that the government should enact measures to support wider use of Telugu in official correspondence. This was followed by Veeven’s talk on “End-User’s perspective of free licenses,” in which he spoke about the importance of open content, free software and free licenses. Speaking about the importance of Creative Commons licenses in the context of Indian languages on the Internet, Vishnu Vardhan pointed to the enormous amount of content available in Indian languages that is increasingly inaccessible because most of it is published under copyright and in non-Unicode formats. He noted that many authors writing in Indian languages are keen for their work to reach as many people as possible and are not interested in making profits. In fact, many of these writers incur losses when publishing their works, yet remain passionate about publishing them nonetheless. However, they choose copyright by default and do not realize that they are curtailing the wider circulation of their books. Awareness of Creative Commons in Indian languages is therefore essential, and Vishnu Vardhan went on to state that CIS-A2K is leaving no stone unturned in raising awareness on this topic.

Wikipedians and Wikisourcerers were presented physical copies of the books that are released under CC.

There was a vibrant discussion regarding Creative Commons during the open session. Participants wanted to know the difference between copyright and Creative Commons. Some asked why there is a need to re-release content under a CC license if the books could simply be made available on a website. Some worried that if they released their work under a CC license they would be prevented from publishing their books, since anyone could then use them. All these apprehensions were addressed by long-time Telugu Wikimedian Veeven and the Program Director of CIS-A2K, T. Vishnu Vardhan. A Wikisource demonstration by Rahmanuddin Shaik followed, along with a Q&A session.

The event inspired some of the participants to come forward and donate their books under a CC license. We may soon expect another 50 books to enrich Telugu Wikisource once they are released under an appropriate CC license.

by Rahmanuddin Shaik and Veeven

by Rahimanuddin Shaik at July 23, 2014 06:41 PM

Gerard Meijssen

#Wikidata - a letter from Mr #Modi

I received a letter on behalf of Mr Modi, the Prime Minister of India. It is an invitation to give input to the PM on the transformation of India. The idea is to connect people with their elected representatives effectively.

I am not from India, but I am pleased with this invitation. My suggestion is obvious: I want the people of India and the world to know about all the members of the Lok Sabha, past and present.

In all the Wikipedias and in Wikidata we know about many of them. For some we know their political affiliation; for others we do not. For some we know their gender, and for a few we do not. For most of the representatives we do not know whether they studied and, if so, where.

Mr Modi, having this information in Wikidata makes it easy to learn about the elected representatives of India. To find them, their name as written in any language needs to be provided only once. Mr Modi, India is relatively well served in this respect, but it would be appreciated if more facts were made available.

You will be inundated with ideas that may transform India. Having information in all the official languages of India about politicians past and present will bring them closer to the people they represent. It would be appreciated if you gave our project your blessing.
Thanks,
     GerardM

by Gerard Meijssen (noreply@blogger.com) at July 23, 2014 04:11 PM

Wikimedia UK

Scholarships for UK based attendees of AdaCamp Berlin 2014

The photo shows two young women smiling for the camera

Two of the attendees from AdaCamp 2013

This post was written by Daria Cybulska, Wikimedia UK Programme Manager

AdaCamp is a conference dedicated to increasing women’s participation in open technology and culture. It brings women together to build community, share skills, discuss problems with open tech/culture communities that affect women, and find ways to address them. It has been taking place for several years in the US and Australia, but in 2014 it is coming to Europe for the first time. The Berlin AdaCamp will be held October 11-12, 2014 at the Wikimedia Deutschland offices, and it will focus strongly on the Wikimedia community in particular.

It is a valuable opportunity for UK Wikimedians to attend a small, focused event (of only about 50 attendees) dedicated to learning practical things and planning future projects.

If you have experience in open tech/culture, experience or knowledge of feminism and advocacy and the ability to collaborate with others, you should apply!

We would like to help build a community of UK Wikimedians supporting women in open tech/culture – especially Wikimedia projects – and for that reason we are offering scholarships for UK applicants.

To learn more about the event visit their website.

To read about the application process and find out how to receive a scholarship, please visit this page.

If you have any questions about the UK scholarships, email Daria Cybulska, Wikimedia UK Programme Manager

by Stevie Benton at July 23, 2014 02:17 PM

Gerard Meijssen

#Mediawiki - the #Media viewer

The #Wikimedia Foundation has a problem with people accepting new functionality. The reasons why are often irrational and steeped in conservatism, but that is another story; a blog post does not help much with that.

What may help is how bugs are assessed. Bug 68372 identifies that, in certain browsers, a file name like MilutinDostanić.jpg shows up properly in the URL when seen from Commons but not from within the Media Viewer, which shows it as MilutinDostani%C4%87.jpg.
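For illustration, the difference is ordinary percent-encoding of the non-ASCII characters in the file name. A minimal sketch of what is happening, using Python’s standard library rather than the Media Viewer’s actual code:

```python
# A small sketch of the encoding difference described above; this is not the
# Media Viewer's code, just an illustration with Python's standard library.
from urllib.parse import quote, unquote

title = "MilutinDostanić.jpg"

encoded = quote(title)       # what the Media Viewer URL shows
decoded = unquote(encoded)   # what a reader would rather see

print(encoded)   # MilutinDostani%C4%87.jpg
print(decoded)   # MilutinDostanić.jpg
```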

Technically, there is nothing wrong with that. From a user perspective, it looks like shit. When a bug is closed because technically nothing is wrong, and a difference in behaviour is not considered relevant enough, a user gets pissed off.

When bugs are reported and user acceptance is important, differences between expected and actual behaviour matter, because they are often what prevents acceptance of new functionality.
Thanks,
     GerardM

by Gerard Meijssen (noreply@blogger.com) at July 23, 2014 09:56 AM

User:Wllm

Has Wikipedia Been Running with the Wrong Crowd?

In response to my last post, Captain Obvious came swooping in to point out what had been hiding in plain sight. While I was focussing on the potential dangers of predators lurking in the darkest shadows of the Wikipedia community, I failed to see the very real danger right in front of me: Wikipedia itself.

Could all the free, multilingual, educational content Wikipedia provides, paired with underdeveloped judgement, put children at risk? As a former bored, judgement-impaired, pubescent boy, I knew exactly how to find out. I searched for “Sniffing glue”. And here’s what I found:

 

Kids Huffing Glue

 

WTF? There is absolutely nothing OK about this picture. These are children. I understand that there is a problem with children huffing on the streets in certain parts of the world. It needs to be acknowledged and addressed. But I will thank you, as a father, for not spreading that problem around by showing smiling, relaxed, seemingly “mature” kids sticking their faces in a bag and casually flipping off the photographer. If this article on such a dangerous act was incomplete without a visual aid, couldn’t you at least have looked for a picture of an adult to kick things off? Maybe a picture that reflects the incredibly destructive aspect of huffing would be more appropriate. And, while we’re at it, we should probably dig up something that doesn’t make huffing look cool to kids. Maybe this?

 

Gold Paint Adult Huffer

 

What follows amounts to a manual on huffing all kinds of extremely dangerous gases. If my son looked this up, he wouldn’t just figure out the best technique for sniffing glue; he’d also be turned on to:

The Smorgasbord

Grab a bag and take your pick.

  • Gasoline
  • Kerosene
  • Propane
  • Butane
  • Toluene
  • Xylene
  • Acetone
  • Hydrofluorocarbons
  • Chlorofluorocarbons
  • Trichloroethylene
  • Alkyl Nitrites
  • Nitrous Oxide
  • Diethyl Ether
  • Enflurane
  • Methylene Chloride
  • Carbon Tetrachloride
  • Benzene

 

 

After going over this smorgasbord of inhalants, the article covers a few pro-tips for abusing them:

Inhalant users inhale vapors or aerosol propellant gases using plastic bags held over the mouth or by breathing from an open container of solvents, such as gasoline or paint thinner. Nitrous oxide gases from whipped cream aerosol cans, aerosol hairspray or non-stick frying spray are sprayed into plastic bags. When inhaling non-stick cooking spray or other aerosol products, some users may filter the aerosolized particles out with a rag. Some gases, such as propane and butane gases, are inhaled directly from the canister. Once these solvents or gases are inhaled, the extensive capillary surface of the lungs rapidly absorb the solvent or gas, and blood levels peak rapidly. The intoxication effects occur so quickly that the effects of inhalation can resemble the intensity of effects produced by intravenous injection of other psychoactive drugs.

The article wraps up its description of the whole experience by mentioning that all of these substances are at best dangerous and at worst fatal. Cross your fingers that your child looks below the fold before they stick their face in a bag.

Safety concerns aside, it’s a pretty good article. For a responsible parent, it might provide some desperately needed answers to the problems created by a child’s abuse of inhalants. For an irresponsible child, it could create the problems themselves.

There’s only one thing we can be sure about. Whatever happens, Wikipedians won’t be taking responsibility for it.

I think it’s time we did.

,Wil


Filed under: Child Protection, Wikimedia, Wikipedia Tagged: Wikimedia, Wikipedia

by Wil Sinclair at July 23, 2014 03:24 AM

July 22, 2014

Gerard Meijssen

#Pywikibot - wants you to peek under the hood

For most #Wikimedia projects, Pywikibot has proven itself to be a trusted, hard-working tool. Literally millions and millions of edits were only possible because of the people operating the bot.

Before Wikidata, interlanguage links were maintained by what was then called the “pywikipedia” bot. Now Wikipedia is being harvested for information with this bot to enrich Wikidata.

As time has gone by, the architecture of MediaWiki has changed a lot. Consequently, Pywikibot has had to evolve as well; its architecture changed as a result, and it aims to use the latest API and other MediaWiki functionality.
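For anyone who has never peeked under the hood, the core of a Pywikibot script is only a few lines. A minimal sketch follows; it assumes Pywikibot is installed and a user-config.py is set up, and the page title and edit summary are placeholders, not a suggested edit:

```python
# A minimal sketch of a Pywikibot edit; assumes pywikibot is installed and a
# user-config.py with login details exists. Title and summary are placeholders.
import pywikibot

site = pywikibot.Site("en", "wikipedia")   # English Wikipedia via the MediaWiki API
page = pywikibot.Page(site, "Wikipedia:Sandbox")

page.text += "\n<!-- test edit from a Pywikibot sketch -->"
page.save(summary="Testing a minimal Pywikibot script")
```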

Starting on July 24th, 2014 and ending on Sunday, July 27th, there will be a big online event to work out what more needs to be done for Pywikibot: what bugs need an urgent fix, and what features are missing or incomplete. Obviously, it is also a time to look at the code and check for “bit rot”.

It would be awesome if more people who care about the Wikimedia projects and know their Python got involved and ensured that Pywikibot remains the meanest and most powerful bot platform around.
Thanks,
    GerardM

by Gerard Meijssen (noreply@blogger.com) at July 22, 2014 11:41 PM

Wikimedia Foundation

A look back at Wikimania 2013

File:Wikimania 2013 in Hong Kong.webm

This is a short documentary about Wikimania 2013 in Hong Kong. You can also view it on YouTube.com here and Vimeo.com here. A version without burned-in English language subtitles is available on Wikimedia Commons here.

Wikimania is the annual global gathering of Wikimedians. To me, it feels like the United Nations moved to Github. It feels like the future of civilization. You get to meet people who have become empowered by human-computer symbiosis. Like any convention or meetup, Wikimania is also an ode to serendipity – there’s no accurate way to predict or measure what the dynamics or the outcomes of it will be.

Interviewing at Wikimania in 2013

Last year, I went to Wikimania in Hong Kong to shoot a documentary about the annual convention (film above). A little background – in 2012, my team and I interviewed as many people on camera as we could at Wikimania in Washington D.C., because Wikimania is where you can find the highest diversity and concentration of Wikimedians in the same place at the same time, anywhere in the world. I’d say that maybe half of the people we talked to we hadn’t known before Wikimania. For that project, I decided to remove the convention from the story and focus exclusively on Wikimedians themselves and their personal testimonies. So, when I went to Wikimania in Hong Kong in 2013, I set out to do the opposite and shoot a short documentary that makes the convention itself the main character of the movie. I had a question in my head – if all these people can collaborate online, then why do they need to collaborate face-to-face? I wanted to make something that suggests an answer to that question that would also let you feel like you are an attendee at the conference.

The film above is just under thirty minutes long. I wanted to talk about topics like language and culture, copyright, Wikipedia in education, Wikipedia offline, the Visual Editor, Wikimedia Grants, Wikipedia Zero, demonstrate what a hackathon is, and basically show things that I thought were important to talk about; things you should know that you can learn from being at Wikimania. I couldn’t talk about everything of course, and I had to cut some stuff out. One interview that stands out to me that didn’t make the cut was a conversation with Christoph Zimmermann about the Public Domain Project in Switzerland. Their goal is to make an encyclopedia of music that exists in the public domain. They accept any turntable records that are old enough to be in the public domain in Switzerland and scan the records with a super expensive laser turntable and archive that recording on their wiki for public use.

My thanks to everyone who let me interview them for this film.

Wikimania 2014 in London is just around the corner. It will be the tenth Wikimania. If you’ve never attended Wikimania, and have the opportunity to, you should. It’s always exciting. I’ll be there this August looking for fresh faces to talk to. You can sign up for Wikimania here, and if you have ideas for things to do you can post them here. And if you can’t make it to Wikimania 2014, you can watch the movie above and get a sense of what Wikimania is all about.

Victor Grigas
Storyteller and Video Producer, Wikimedia Foundation

by Victor Grigas at July 22, 2014 10:57 PM

Expanding local history with The Wikipedia Library

Find out more about The Wikipedia Library!

If you are an editor on the English Wikipedia, you might have noticed the recent uptick in announcements for accounts offered by The Wikipedia Library! The Wikipedia Library gives active, experienced Wikipedia editors free access to a wide range of paywalled databases – reliable sources that are vital for their work (see also: “The Wikipedia Library Strives for Open Access“). We have been having a lot of success meeting the goals of our Individual Engagement Grant from the Wikimedia Foundation. Established partnerships, like that with JSTOR, are expanding, getting Wikipedia editors more access to high quality research materials! Moreover, because of those successes, we are having many fruitful discussions with organizations large and small that are interested in helping Wikipedians create public knowledge and link Wikipedia in to the larger network of scholarly source materials.

We surveyed Wikipedia users interested in the Wikipedia Library about which sources would be best for us to get access to, and one from that list, British Newspaper Archive, has been a very active recent success. It started with 50 accounts and has since expanded to 100 because of the enthusiasm in the initial sign-up period. An archive of high-quality scans of newspapers from the collection of the British Library, it provides a great source of reference materials for Wikipedia articles about 18th, 19th, and early 20th century Britain and its global interests. Even though the accounts have only been available for a couple of weeks, Wikipedians have been successfully using them to create new and expand old articles about historical topics, both about local history and topics of national British interest. These range from articles about geographical features (Swithland Reservoir) to sports (1884 FA Cup Final and Jack Kid Berg), coal mines (Pendleton Colliery) to politicians (Sewallis Shirley).

User:Sitush’s experience

As part of our partnership with the British Newspaper Archive, they have offered us an opportunity to talk about improving Wikipedia on their blog, highlighting the success of the account donation. More importantly though, it enables us to communicate to their social media audience – researchers investigating historical topics through old newspapers – how Wikipedians motivated by similar interests are able to use that research to provide knowledge to our vast audience. Here is what one of our Wikipedia editors who got access through this partnership, User:Sitush, shared on their blog about his new account:

I have a degree from Cambridge in History, and Wikipedia has always been a way for me to explore my interest in Indian and local history. When I got BNA access through the Wikipedia Library, I saw it as an opportunity to explore a local history mystery raised by several people who had been apprentices with the engineering firm of Sir James Farmer Norton & Co Ltd at Adelphi Ironworks in Salford. They often speak with some pride and affection of their time there and of the products that the company manufactured. Those products were sold worldwide, many are still being used and resold now, and some were truly innovative, such as a fast printing press.
None of these people, however, could really tell me anything about Sir James Farmer (the Norton bit of the name came later, when another family became involved in the business). They only knew that he was once mayor of Salford. Although the company did produce a celebratory booklet for an anniversary, there really doesn’t seem to have been much effort made by way of tipping the hat to the man who started it all. Yet, because of the impact on my friends and our community, I suspected him to be one of the more notable of the many self-made – often world-changing – engineering men who inhabited Manchester, Salford and the surrounding areas in the 19th century. He needed a Wikipedia article!
Wikipedia’s model for article development supports the “from little acorns …” approach. So, if I could start an article about Farmer then perhaps at some time in the future someone might find more information and add to it. But Wikipedia also has limitations, meaning that I couldn’t use primary source material available at a couple of archives and, really, there wasn’t much else that I could find without some extensive trawling through microfilms. Inaccessible verifiable information usually means no article – it is meant to be an encyclopaedia, after all, and thus there needs to be some type of public and reliably documented conversation to show that it is of interest to the public (we on Wikipedia call this public interest “notability”).
Enter the BNA! Forget spending days, probably weeks, twiddling at a film reader. I could get access to the most important information about Farmer with one simple search. In the space of a couple of hours, most of which was spent being pleasantly distracted by other news articles surrounding the ones about Farmer, I’d gathered enough material to justify an article, to plant that acorn. The man is now recognised on a major educational project that gets millions of viewers and, although it’s not the best thing I’ve ever written for Wikipedia, the hat has been tipped. Hopefully, given time, much more can be said about him and his company.

User:Sitush’s new article based on research done with the British Newspaper Archive is titled “James Farmer (knight)” and can be found on English Wikipedia.

Get Wikipedia Library access!

We would love to see more Wikipedians like Sitush get access to these resources that publishers are donating. If you are interested in getting access to the British Newspaper Archive for improving Wikipedia, sign up at https://en.wikipedia.org/wiki/Wikipedia:BNA . If you would like access to one of our other resources or want to suggest a publisher to reach out to, check out https://en.wikipedia.org/wiki/Wikipedia:TWL/Journals . We hope to continue harnessing the resources of libraries and publishers to strengthen the reference materials on Wikipedia!

Alex Stinson (User:Sadads), Project Manager, The Wikipedia Library

2014-07-23: Edited to add a link to britishnewspaperarchive.co.uk

by Alex Stinson at July 22, 2014 09:08 PM

Victory in Italy: Court rules Wikipedia “a service based on the freedom of the users”

This post is available in 2 languages:
English Italiano

English

Last week, the Wikimedia community obtained a resounding victory in Italian court. For more than four years, the Wikimedia Foundation and Wikimedia Italia [1] had been involved in a lawsuit initiated by Italian politician Antonio Angelucci and his son, Giampaolo. The Angeluccis were seeking €20,000,000 from the Wikimedia Foundation over allegedly defamatory statements appearing on two Italian-language Wikipedia pages.

The Roman Civil Tribunal handed down its ruling [in Italian] on 9 July 2014 with respect to the Wikimedia Foundation, dismissing the lawsuit and declaring that the Foundation is not legally responsible for content that users freely upload onto the Wikimedia projects. The victory, however, runs deeper than the case at hand. The judgment is the first full consideration of Wikimedia’s standing in Italy,[2] and the ruling itself paves the way for more robust free speech protections on the Internet under Italian law.

The Angeluccis argued that the Wikipedia pages for Antonio Angelucci and for the Italian-language newspaper Il Riformista contained false statements that, according to their claims, harmed their reputations. Generally, the European Union’s E-Commerce Directive limits the liability of hosting providers for content that users upload; however, the Angeluccis asserted that the Wikimedia Foundation’s activities were more akin to those of a content provider, so that no exemption from liability under the Directive would apply, or at least that Wikipedia should be deemed an “online journal” and the Foundation therefore held liable under the stricter standards that apply to the Italian press.

The Italian court rejected this argument, stating that while the Directive does not directly apply to the Wikimedia Foundation as a non-EU-based organization, its basic principles do. In line with those principles, Wikimedia must be recognized as a hosting provider, as opposed to a content provider, and thus it can be liable for user-generated content only if it receives explicit notice of illicit content from the competent authority and fails to remove it.

The court stated that Wikipedia “offers a service which is based on the freedom of the users to draft the various pages of the encyclopedia; it is such freedom that excludes any [obligation to guarantee the absence of offensive content on its sites] and which finds its balance in the possibility for anybody to modify contents and ask for their removal.” The court went on to state that the Foundation was very clear in its disclaimers about its neutral role in the creation and maintenance of content, further noting that anyone, even the Angeluccis themselves, could have modified the articles in question.

Lively discussions and even disagreements about content are a natural outgrowth of creating the world’s largest free encyclopedia. However, the vast majority of these editorial debates can be and are resolved every day through processes established and run by dedicated members of the Wikimedia community. We strongly encourage those who have concerns about content on the Wikimedia projects to explore these community procedures rather than resorting to litigation.

Attempts to impose liability upon neutral hosting platforms — our modern day public forums — threaten the very existence of those platforms, and stifle innovation and free speech along the way. When the need arises, the Wikimedia Foundation will not hesitate to defend the world’s largest repository of human knowledge against those who challenge the Wikimedia community’s right to speak, create, and share freely.[3]

Michelle Paulson, Legal Counsel

Geoff Brigham, General Counsel

The Wikimedia Foundation would like to express its immense appreciation towards the incredibly talented attorneys at Hogan Lovells, who represented the Foundation in this matter, particularly Marco Berliri, Marta Staccioli, and Massimiliano Masnada. Special thanks also goes to Joseph Jung (Legal Intern), who assisted with this blog post.

Note: While this decision represents important progress towards protecting hosting providers like the Wikimedia Foundation, it is equally important to remember that every individual is legally responsible for his or her actions both online and off. For your own protection, you should exercise caution and avoid contributing any content to the Wikimedia projects that may result in criminal or civil liability under the laws of the United States or any country that may claim jurisdiction over you. For more information, please see our Terms of Use and Legal Policies.

References

  1. While the court has handed down the judgment with respect to the Wikimedia Foundation, it has not yet done so with respect to Wikimedia Italia. We expect a ruling to be handed down shortly.
  2. In a special proceeding, an Italian court previously declared that Wikimedia is a mere hosting provider that it is not liable for user-generated content. An account of the earlier victory can be found at: https://blog.wikimedia.org/2013/06/26/wikimedia-foundation-legal-victory-italy/.
  3. The Wikimedia Foundation has successfully defended against similar lawsuits in the past. You can read more about some of our previous victories here: https://blog.wikimedia.org/2013/06/26/wikimedia-foundation-legal-victory-italy/, https://blog.wikimedia.org/2013/12/02/legal-victory-german-court-wikimedia-foundation/, and https://blog.wikimedia.org/2012/12/04/two-german-courts-rule-in-favor-of-free-knowledge-movement/.

Italiano

Vittoria in Italia: il tribunale dichiara Wikipedia “un servizio basato sulla libertà degli utenti”

La scorsa settimana, la comunità di Wikimedia ha ottenuto dal tribunale italiano una vittoria fragorosa. Per oltre quattro anni, Wikimedia Foundation e Wikimedia Italia[1] sono state coinvolte in una causa avviata dal politico italiano Antonio Angelucci e suo figlio, Giampaolo. Gli Angelucci chiedevano a Wikimedia Foundation €20.000.000 per affermazioni presumibilmente diffamatorie, che comparivano su due pagine in lingua italiana di Wikipedia.

Il 9 luglio 2014 il Tribunale Civile di Roma ha emesso la sua sentenza in relazione a Wikimedia Foundation, archiviando il caso dichiarando che la Fondazione non è legalmente responsabile per i contenuti che gli utenti caricano liberamente sui progetti Wikimedia. Ad ogni modo, la vittoria, ha delle ripercussioni più profonde del caso in questione. La sentenza costituisce il primo e completo riconoscimento della posizione di Wikimedia in Italia [2]e la sentenza stessa ha spianato la strada a una maggiore tutela della libera comunicazione su Internet nell’ordinamento giuridico italiano. Gli Angelucci sostenevano che le pagine di Wikipedia su Antonio Angelucci e il giornale italiano Il Riformista, contenevano affermazioni false e che presumibilmente, in base alle loro pretese, danneggiavano la loro reputazione. In generale, la Direttiva sull’e-Commerce dell’Unione europea limita la responsabilità dei provider di hosting sui contenuti che gli utenti caricano; ma gli Angelucci asserivano che le attività di Wikimedia Foundation erano più affini a un provider di contenuti e che non erano esonerati da responsabilità come la Direttiva disponeva o perlomeno Wikipedia avrebbe dovuto ritenersi come un “giornale online” e quindi la Fondazione doveva essere soggetta ai rigidi standard applicati alla stampa italiana.

Il tribunale italiano ha respinto tale argomentazione, affermando che, sebbene la Direttiva non si applichi direttamente a Wikimedia Foundation, non essendo un’organizzazione con sede in Europa, si applicano i principi fondamentali della Direttiva. In conformità a tali principi, Wikimedia deve essere riconosciuta come un provider di hosting, in contrapposizione a un provider di contenuti, e può essere responsabile dei contenuti generati dagli utenti solo se riceve una nota esplicita di informazioni illecite da parte dell’autorità competente e quindi non li rimuove.

Il tribunale ha dichiarato che Wikipedia “offre un servizio basato sulla libertà degli utenti di redigere le varie pagine dell’enciclopedia; è questa libertà che esclude qualsiasi [obbligo di garantire l'assenza di contenuti offensivi dei suoi siti] e che trova il suo equilibrio nella possibilità che chiunque può modificarne i contenuti e chiederne la rimozione”. Il tribunale ha continuato dichiarando che la Fondazione era molto chiara nelle sue dichiarazioni di non responsabilità sul proprio ruolo neutrale nella creazione e gestione dei contenuti, da notare inoltre che chiunque, anche gli Angelucci stessi, potevano modificare gli articoli in questione.

La creazione della più grande enciclopedia libera del mondo è il risultato naturale di discussioni animate e addirittura di disaccordi sui contenuti. Comunque, la maggioranza di tali discussioni editoriali può essere e viene risolta ogni giorno, tramite processi stabiliti e gestiti da membri dedicati della comunità di Wikimedia. Consigliamo vivamente coloro che sono in disaccordo con i contenuti dei progetti Wikimedia, di esaminare le procedure della comunità, anzichè ricorrere a una controversia legale.

I tentativi di imporre la responsabilità a piattaforme di hosting neutrali — i forum dei nostri giorni — minacciano l’esistenza stessa di queste piattaforme, e nel percorso soffocano l’innovazione e la libera comunicazione. In caso di necessità, Wikimedia Foundation non esita a difendere la raccolta più grande al mondo della conoscenza umana, contro coloro che sfidano il diritto della comunità di Wikimedia di comunicare, di creare e di condividere liberamente.[3]

Michelle Paulson, Consulente legale

Geoff Brigham, Responsabile area legale

Wikimedia Foundation esprime la sua immensa gratitudine verso i procuratori di incredibile talento presso Hogan Lovells, che hanno rappresentato la Fondazione in questa questione, in particolare Marco Berliri, Marta Staccioli e Massimiliano Masnada. Un ringraziamento speciale va anche a Joseph Jung (Interno legale), che ha fornito assistenza per questo post del blog.

Nota: Sebbene questa decisione rappresenti un progresso importante verso la protezione dei provider di hosting come Wikimedia Foundation, è parimenti importante ricordare che ogni singolo individuo è legalmente responsabile delle proprie azioni sia online che offline. L’utente, per la sua protezione, dovrebbe prestare attenzione ed evitare di contribuire con contenuti, nei progetti Wikimedia, che possano risultare in responsabilità penale o civile sotto la legge degli Stati Uniti o qualsiasi altro Paese che potrebbe reclamare la giurisdizione nei suoi confronti. Per ulteriori informazioni, consulta i nostri Termini di utilizzo e Politiche legali.

Riferimento

  1. Sebbene il tribunale abbia emesso il giudizio nei confronti di Wikimedia Foundation, non l’ha ancora fatto per Wikimedia Italia. Ci aspettiamo a breve che venga emessa una sentenza.
  2. Precedentemente, in un procedimento speciale, un tribunale italiano aveva dichiarato che Wikimedia è un semplice provider di hosting, non responsabile dei contenuti generati dagli utenti. Si può trovare un resoconto della precedente vittoria alla pagina: https://blog.wikimedia.org/2013/06/26/wikimedia-foundation-legal-victory-italy/.
  3. In passato Wikimedia Foundation si è difesa con successo contro cause simili. Alcune delle nostre precedenti vittorie si possono leggere qui: https://blog.wikimedia.org/2013/06/26/wikimedia-foundation-legal-victory-italy/, https://blog.wikimedia.org/2013/12/02/legal-victory-german-court-wikimedia-foundation/, e https://blog.wikimedia.org/2012/12/04/two-german-courts-rule-in-favor-of-free-knowledge-movement/.

by Michelle Paulson at July 22, 2014 06:19 PM

Luis Villa

Slide embedding from Commons

A friend of a friend asked this morning:

[Embedded tweet]

I suggested Wikimedia Commons, but it turns out she wanted something like Slideshare’s embedding. So here’s a test of how that works (timely, since soon Wikimanians will be uploading dozens of slide decks!)

This is what happens when you use the default Commons “Use this file on the web -> HTML/BBCode” option on a slide deck pdf:

Wikimedia Legal overview 2014-03-19

Not the worst outcome – clicking gets you to a clickable deck. No controls inline in the embed, though. And importantly nothing to show that it is clickable :/

Compare with the same deck, uploaded to Slideshare:

<iframe frameborder="0" height="400" marginheight="0" marginwidth="0" scrolling="no" src="http://www.slideshare.net/slideshow/embed_code/37246419" width="476"></iframe>

Some work to be done if we want to encourage people to upload to Commons and share later.
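For anyone who wants to script this, the two pieces the Commons dialog stitches together, the file page URL and a thumbnail, can also be fetched from the MediaWiki API. A minimal sketch, using an assumed file title rather than the actual deck:

import requests

# Fetch the description-page URL and a thumbnail URL for a Commons file,
# roughly the pieces the "Use this file on the web" dialog combines.
API = "https://commons.wikimedia.org/w/api.php"
title = "File:Example slide deck.pdf"   # assumed title, not the real file name

params = {
    "action": "query",
    "titles": title,
    "prop": "imageinfo",
    "iiprop": "url",
    "iiurlwidth": 300,      # ask for a 300px-wide thumbnail
    "format": "json",
}
page = next(iter(requests.get(API, params=params).json()["query"]["pages"].values()))
info = page["imageinfo"][0]

# A bare-bones embed: a thumbnail image linking back to the file page.
print('<a href="%s"><img src="%s"/></a>' % (info["descriptionurl"], info["thumburl"]))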

Update: a commenter points me at viewer.js, which conveniently includes a WordPress plugin! The plugin is slightly busted (I had to move some files around to get it to work in my install) but here’s a demo:

Update2: bugs are fixed upstream and in an upcoming 0.5.2 release of the plugin. Hooray!

<iframe height="380px" mozallowfullscreen="true" src="http://lu.is/blog/wp-content/plugins/viewerjs-wordpress-0.5.2/index.html#/blog/wp-content/uploads/2014/07/Wikimedia_Legal_overview_2014-03-19.pdf" style="border: 1px solid black; border-radius: 5px;" webkitallowfullscreen="true" width="450px"></iframe>

by Luis Villa at July 22, 2014 04:50 PM

Gerard Meijssen

#Wikidata - Awards of Vienna

There is Vienna the city and there is Vienna the state. The Wikipedia article merges them into one, even though they are not the same thing.

A person who was awarded the "Goldenes Verdienstzeichen des Landes Wien" died recently, and because the information was available in a category, it was possible to include all the people who were awarded in this way.

One slight problem: there was no item for the award, because all the "Verdienstzeichen des Landes Wien" are covered in one article. It is easy to create an item, so I did. Doing the same for the state of Vienna is for another day, or for someone else.
Thanks,
       GerardM

by Gerard Meijssen (noreply@blogger.com) at July 22, 2014 10:19 AM

Wikimedia Foundation

Recovering the shared history editing Wikipedia in Argentina, Mexico and Spain

This post is available in 3 languages:
English • Spanish • Catalan

English

The Spanish Republican Exile forced thousands of Spanish citizens to leave their country after the Spanish Civil War and the subsequent persecution under the Francisco Franco dictatorship. Nearly 220,000 supporters of the Second Republic left Spain for other countries such as Argentina and Mexico.

Attendees at the edit-a-thon

To mark the 75th anniversary of the arrival of the Sinaia at the Mexican port of Veracruz, the Wikimedia chapters in Argentina, Spain and Mexico ran the First Spanish Republican Exile Edit-a-thon, editing Wikipedia, Wikimedia Commons and Wikisource on historical facts, biographies and testimonials related to these events.

The event was coordinated through the Iberocoop initiative. In Mexico City it was held at the Espacio X of the Cultural Center of Spain in Mexico. The edit-a-thon was curated by Guiomar López Acevedo, a historian at the Ateneo Español de México, who contributed sources and reviews for the activity. At the opening, Macarena Pérez of the Cultural Center of Spain said that the Spanish exile is a rich theme and that many more working sessions will be needed to recover all the available testimony.

At around 2 pm local time in Mexico, Santiago Navarro Sanz, a member of the board of Wikimedia Spain, joined by videoconference from Vila-real. He greeted the participants and noted that he was happy that a hard episode in Spanish history had become a positive reason to gather Wikipedians in three countries and to grow the information about it on Wikimedia projects.

Attendees at the edit-a-thon

The event in Mexico produced articles about the Administrative Committee of the Funds for the Relief of Spanish Republicans, as well as a letter for Wikisource from former President Lázaro Cárdenas, who facilitated the arrival of thousands in Mexico. Other new articles covered the House of Spain in Mexico, a haven for Spanish researchers and intellectuals that helped them continue their work and eventually became one of the most prestigious academic institutions in the country, El Colegio de México; and the Ermita Building, a famous building in Mexico City that few know was initially created to house Spanish exiles, among them notable figures such as the poet Rafael Alberti.

At the end of the event, Macarena Pérez introduced the Atlas of Exile project, a collaborative map that shows where the Spanish exiles settled after leaving Spain.

In the case of Argentina, the event was held inside the Casal de Catalunya, where Wikipedians and members of Wikimedia Argentina met for the First Spanish Republican Edit-a-thon.

From the beginning, the attendees could see that the edit-a-thon would be an event with particular characteristics: several founders of the Children of the Spanish Civil War in Argentina civil association attended, people who keep alive the memory of events that took place long ago. Their testimonies about how their experiences translated into key political movements of the twentieth century were deeply moving.

The wealth of testimony, the construction of a generational story that can only be told by its protagonists, and the large number of photographs and historical documents gathered called for audio and video recordings, including interviews, in addition to the digitization of documents. This material will be the basis for an audiovisual documentary about the Spanish exile in Argentina and the experiences of the children of the war, and is being collected in a special category for that purpose on Wikimedia Commons.

Iván Martínez, Wikimedia México president; Nicolás Miranda, Wikimedia Argentina head of communications; and Santiago Navarro Sanz, Wikimedia Spain vice president.

Spanish

Recuperando la historia compartida editando en Argentina, México y España

El exilio republicano español forzó a miles de españoles y españolas a abandonar su país luego de la Guerra Civil Española y el posterior periodo de persecución durante la posguerra por la dictadura de Francisco Franco. Cerca de 220 mil personas simpatizantes de la Segunda República abandonaron España hacia otros países como Argentina y México, quienes lo acogieron de distinta manera.

Attendees at the edit-a-thon

Con motivo del 75 aniversario del arribo del buque Sinaia al puerto mexicano de Veracruz, los capítulos Wikimedia de Argentina, España y México, realizaron el Primer Editatón del Exilio Republicano Español, en el que se editó Wikipedia, Wikimedia Commons y Wikisource sobre hechos históricos, personajes y testimonios de este proceso.

La coordinación de este evento, realizado bajo la iniciativa Iberocoop, implicó que el trabajo se realizara en horarios distintos el pasado 16 de junio. Desde temprana hora, editores desde territorio español escribieron artículos en español y catalán, como el del escritor y militante socialista Marcial Badia Colomer o el del periodista Isaac Abeytúa.

El evento en la Ciudad de México se realizó en el Espacio X del Centro Cultural de España en México. El evento reunió a la comunidad de editores de Wikimedia México y motivó la presencia de familiares de exiliados españoles. Este editatón contó con el apoyo de la Lic. Guiomar Acevedo López, del Ateneo Español de México, quién aportó fuentes y opiniones para el mejor desarrollo de la actividad. Al inicio de la actividad Macarena Pérez, del Centro Cultural de España, destacó que el exilio español es un tema prolífico y del que se necesitarán muchas más sesiones de trabajo para recuperar todos los testimonios a su alrededor.

Attendees at the edit-a-thon

Cerca de las dos de la tarde, hora local de México, Santiago Navarro Sanz, miembro de la mesa directiva de Wikimedia España, en videconferencia desde Vila-real, saludó a los presentes y se dijo contento de que un hecho difícil para la historia española sea una razón positiva para reunir a wikipedistas en tres países y crecer la memoria sobre este hecho en los proyectos Wikimedia. En la actividad en México se editaron artículos como el de la Comisión Administradora de los Fondos para el Auxilio de los Republicanos Españoles o las cartas en Wikisource del entonces presidente Lázaro Cárdenas, quien gestionó el refugio de miles desde España en territorio mexicano. Otros artículos creados fueron la Casa de España en México, en donde fueron acogidos investigadores e intelectuales españoles para que continuaran su labor y que a la postre se convertiría en una de las instituciones académicas más prestigiadas del país: El Colegio de México; o bien, el Edificio Ermita, un afamado edificio de la capital mexicana del que pocos saben que su razón de ser inicialmente fue acoger exiliados españoles, algunos muy relevantes como Rafael Alberti.

Al final del evento Macarena Pérez presentó el proyecto Atlas de Exilio, un proyecto en el que de forma colaborativa se elabora un mapa en el que se sitúa dónde se establecieron los españoles exiliados tras la Guerra Civil; proceso que es posible hoy al no existir una persecución en su contra.

En el caso de Argentina, el evento se realizó dentro del edificio Casal de Catalunya, donde miembros de la comunidad de wikipedistas y de Wikimedia Argentina se reunieron en el Editatón del Exilio Español en Argentina junto a sobrevivientes de la experiencia del desarraigo en la posguerra.

Desde el comienzo, los asistentes pudieron comprobar que el Editatón del Exilio Español iba a ser un evento con características particulares: acudieron al Casal a varios miembros fundadores de la Asociación Civil Niños de la Guerra Civil Española de Argentina, personas que recuerdan y mantienen vivo el significado de los hechos de los que fueron víctimas hace tanto tiempo. Pausadamente y de a uno, sus testimonios acerca de la experiencia del exilio siendo muy pequeños y de cómo sus vivencias personales explican movimientos políticos clave en el siglo XX resultaron muy emotivas. Que el evento resultara tan impactante disparó en los presentes gran cantidad de preguntas que expandieron la temática y enriquecieron los relatos.

Por otro lado, el editatón tuvo una modalidad diferente a la usual, en la que se mejoran, expanden y crean artículos nuevos con el material que se esté tratando en ese momento. En este caso, se consideró que la riqueza de los testimonios, la construcción de un relato generacional que sólo puede ser contado por sus protagonistas y el recopilación de gran cantidad de imágenes y documentos históricos demandó un registro de audio y video -además de la digitalización de los documentos- que incluye entrevistas y que será la base para una recopilación documental audiovisual sobre el exilio español en Argentina y las experiencias de los niños de la guerra. Este material está siendo recopilado en una categoría especial a ese fin en Wikimedia Commons.

Iván Martínez, Wikimedia México president; Nicolás Miranda, Wikimedia Argentina head of communications; and Santiago Navarro Sanz, Wikimedia Spain vice president.

Catalan

Attendees at the edit-a-thon

L’exili republicà espanyol va forçar a milers d’espanyols i espanyoles a abandonar el seu país després de la Guerra Civil Espanyola i el període de persecució a la postguerra, durant la dictadura de Francisco Franco. Vora 220 mil persones simpatitzants de la Segona República van abandonar Espanya cap a altres països com ara Argentina o Mèxic, que els van acollir de distinta manera. Amb motiu del 75 aniversari de l’arribada del buc Sinaia al port mexicà de Veracruz, els capítols Wikimedia Argentina, Espanya i Mèxic, van realitzar el primer editató de l’Exili Republicà Espanyol, en el qual es va editar la Viquipèdia, Wikimedia Commons i Viquitexts sobre fets històrics, personatges i testimonis d’aquest procés.

La coordinació d’aquesta activitat, realitzat sota la iniciativa Iberocoop, va implicar que el treball es realitzara en horaris diferents el passat 16 de juny. Des de bon prompte, editors des de territori espanyol van escriure articles en castellà i català, com el de l’escriptor i militant socialista Marcial Badia Colomer o el del periodista Isaac Abeytúa. L’activitat a la Ciutat de Mèxic es va dur a terme a l’Espai X del Centre Cultural d’Espanya a Mèxic. L’acte va reunir a la comunitat d’editors de Wikimedia Mèxic i va motivar la presència de familiars d’exiliats espanyols. Aquest editató va comptar amb el suport de la Llic. Guiomar Acevedo López, de l’Ateneu Espanyol de Mèxic, qui va aportar fonts i la seua opinió per a millorar el desenvolupament de l’activitat. A l’inici de l’activitat, Macarena Pérez, del Centre Cultural d’Espanya, va destacar que l’exili espanyol és un tema prolífic i del que faran falta moltes més sessions de treball per a recuperar tots els testimonis al seu voltant.

Attendees at the edit-a-thon

Al voltant de les dues de la vesprada, hora local de Mèxic, Santiago Navarro Sanz, membre de la junta directiva de Wikimedia Espanya, en videoconferència des de Vila-real, va saludar als presents i es va manifestar content de que un fet difícil per a la història espanyola siga una raó positiva per a reunir a viquipedistes en tres països i fer créixer la memòria sobre aquest fet en els projectes Wikimedia. En l’activitat a Mèxic es van editar articles con el de la Comissió Administradora dels Fons per a l’Auxili dels republicans espanyols o les cartes a Viquitexts del aleshores president Lázaro Cárdenas, qui va gestionar el refugi de milers des d’Espanya en territori mexicà. Altres articles creats varen ser la Casa d’Espanya a Mèxic, on varen ser acollits investigadors i intel·lectuals espanyols per a que continuaren la seua tasca i que a la fi es va convertir en una de les institucions acadèmiques més prestigioses del país: el Col·legi de Mèxic; o bé, l’edifici Ermita, un afamat edifici de la capital mexicana del qual pocs saben que el seu origen va ser inicialment acollir exiliats espanyols, alguns dels quals molt rellevants com Rafael Alberti.

Al final de l’acte, Macarena Pérez va presentar el projecte Atles de l’Exili, un mapa col·laboratiu on es mostra on es van establir els exiliats espanyols després de la Guerra Civil; procés que és possible avui en dia ja que no existeix una persecució contra ells.

En el cas de l’Argentina, l’acte es va dur a terme a l’edifici del Casal de Catalunya, on membres de la comunitat de viquipedistes i de Wikimedia Argentina es van reunir en l’Editató de l’Exili Espanyol a l’Argentina junt a supervivents de l’experiència del desarrelament durant la postguerra.

Des del començament, els assistents van poder comprovar que l’Editató de l’Exili espanyol anava a ser un esdeveniment amb característiques particulars: es va rebre al Casal a diversos membres fundadors de l’Associació Civil Niños de la Guerra Civil Española d’Argentina, persones que recorden i mantenen viu el significat dels fets dels que van ser víctimes fa tant de temps. De forma pausada i d’un en un, els seus testimonis a voltants de l’experiència de l’exili, quan eren molt menuts, i de com les seues vivències personals expliquen moviments polítics clau al segle XX van resultar molt emotives. Que l’esdeveniment resultara tant impactant va disparar entre els presents gran quantitat de preguntes que van expandir la temàtica i van enriquir els relats.

Per una altra banda, l’editató va tindre una modalitat diferent a la usual, en la que es milloren, amplien i creen articles nous amb el material que s’està tractant en eixe moment. En aquest cas, es va considerar que la riquesa dels testimonis, la construcció d’un relat generacional que tan sols pot ser contat pels seus protagonistes i l’eixida a la llum de gran quantitat d’imatges i documents històrics va demandar un registre d’àudio i vídeo -a més de la digitalització dels documents- que inclou entrevistes i que serà la base per a una recopilació documental audiovisual sobre l’exili espanyol en Argentina i les experiències dels xiquets de la guerra. Aquest material està sent recopilat en una categoria especial per a aquesta finalitat a Wikimedia Commons.

Iván Martínez, Wikimedia México president; Nicolás Miranda, Wikimedia Argentina head of communications; and Santiago Navarro Sanz, Wikimedia Spain vice president.

by Carlos Monterrey at July 22, 2014 02:41 AM

Gerard Meijssen

#Wikidata - for some balance; survivors of KZ #Dachau

With some regularity I have referred on my blog to people known for their part in Nazi Germany. It always leaves me with a bad feeling, and as a result I have blogged several times about the victims.

A survivor of KZ Dachau died recently. Not only did the article's text say so; there was also a category indicating that he was once a prisoner in Dachau.

The total number of people who were prisoner at Dachau is significantly higher in Wikidata than in any Wikipedia. This is the consequence of each Wikipedia knowing about a different subset of humans.
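As a rough illustration of how differently each Wikipedia covers the same group of people, one could count the members of the corresponding category on a few wikis through the MediaWiki API. A minimal sketch; the category titles below are assumptions and would need to be checked against each wiki:

import requests

def count_category_members(lang, category):
    # Count article-namespace members of one category on one Wikipedia.
    api = "https://%s.wikipedia.org/w/api.php" % lang
    params = {
        "action": "query", "list": "categorymembers",
        "cmtitle": category, "cmnamespace": 0,
        "cmlimit": "max", "format": "json",
    }
    total, cont = 0, {}
    while True:
        data = requests.get(api, params={**params, **cont}).json()
        total += len(data["query"]["categorymembers"])
        if "continue" not in data:
            return total
        cont = data["continue"]

# Category titles are assumed for illustration and may differ on the live wikis.
for lang, cat in [("en", "Category:Dachau concentration camp prisoners"),
                  ("de", "Kategorie:Häftling im KZ Dachau")]:
    print(lang, count_category_members(lang, cat))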

I am sure that when the references to Wikipedia articles for KZ Dachau are analysed, many more people will be known to have been imprisoned in Dachau.
Thanks,
     GerardM

by Gerard Meijssen (noreply@blogger.com) at July 22, 2014 12:26 AM

July 21, 2014

Sumana Harihareswara

The Art Of Writing In The Dark

Wordsworth tells us that his greatest inspirations had a way of coming to him in the night, and that he had to teach himself to write in the dark that he might not lose them. We, too, had better learn this art of writing in the dark. For it were indeed tragic to bear the pain, yet lose what it was sent to teach us.
-Arthur Gossip in "How Others Gained Their Courage", p. 7 of The Hero In Thy Soul (Scribners, 1936), quoted on p. 172 of The Art of Illustrating Sermons by Dawson C. Bryan (Cokesbury Press, 1938), which was in my father's library. He died in late July 2010.

He had a crowded office full of books, which I described in "Method of Loci", and he was enthusiastic about sharing his knowledge, as I mentioned in my eulogy for him. If you didn't know me four years ago and weren't reading my blog, go take a look; they're worth a read. (Most of Cogito, Ergo Sumana for the second half of 2010 is pretty raw and emotional, a lot of the writing-in-the-dark that Wordsworth described.) I'm a lot like my dad. The first copyediting I ever did was for the prayer ritual guides my father wrote, which, of course, had footnotes. I am so glad he was writing for Usenet and the web at the end of his life, getting to enjoy hypertext and linking. One of the last books he wrote was a set of essays about sparrows in literature and the word "sparrow." I think I grok the joy of that more now than I did in 2010.

And I'll repeat the anecdote I heard from a guy who came to offer his condolences after my dad's death, and who told me something about my dad's scholarship. Dad had been tapped to update a Sanskrit reference text, and the publisher told Dad he only had to check sources for the entries he was adding or updating, the diff from the previous edition. Dad didn't think this was good enough, and meticulously checked or found original sources for every entry in the book. This fairly thankless task will help numberless future scholars. Most won't know. We joke about "citation needed" but my dad stepped up and did something about it. You can tell how proud I am, right?

On my insecure days I am terrified that I am not making a difference. It calms, heartens, and sustains me to see other people move on different vectors because of my influence - billiard balls on new trajectories because I was on the baize too - or even completely new endeavors springing up from seeds I scattered. And the chain of attribution is what grounds me. I honor those whose work I reuse, and I am honored when others credit me. Accurate citations make a constellation connecting the filaments of light we lit to dispel the darkness. Accurate citations are an act of love.

I am a sentimental person and I wear my heart on my sleeve. I think it would clutter up the edit summaries on Wikipedia if I included a "<3" in each one, every time I added a citation. But you should imagine they're there anyway.

July 21, 2014 12:56 PM

Gerard Meijssen

#Wikipedia & Wikidata - set theory and categories

According to many Wikipedias, Mr H. is a German. When this fact was introduced into Wikidata, it was reverted: Mr H. was to be considered a national of "Nazi Germany" instead.

This raises an issue: if Mr H. is not a German, then his victims are not German either. Arguably, even the people who lived in the territory of Nazi Germany and were judged by its laws are not necessarily Dutch, Belgian or French either.

This example is stark. However, the same issue exists in many other contexts as well. Are the people who died before the breakup of the Netherlands Dutch or Belgian? How should we consider the people who lived in colonial times and lived in the colonies? What about the people who are only notable because of their actions in the USSR and now live in Russia, Armenia, Estonia, Ukraine...?

The categories of the Wikipedias are used to provide specific information for Wikidata items. For over 400 categories, queries have been defined showing what Wikidata recognises as their content. All of these queries have several parts: they are all about "humans", and items will only show up when the additional statements are true as well.

When "nationality" is involved, it follows that both the Wikipedia categories and consequently items in Wikidata suffer from the complexities indicated above. When for instance Spanish governors of Cuba are part of the category tree of Cuban people, it is arguably wrong. However the argument also has it in for the people who lived their whole life on the island that is Cuba..
Thanks,
      GerardM

by Gerard Meijssen (noreply@blogger.com) at July 21, 2014 12:21 AM

Tech News

Tech News issue #30, 2014 (July 21, 2014)


July 21, 2014 12:00 AM

July 20, 2014

Gerard Meijssen

#Wikimania - a visa denied

Meeting Amir at a conference or a hackathon is a pleasure. Having him around is a sure way of getting all kinds of problems solved. Given his intimate expertise with pywikibot and our projects, he makes it seem easy to resolve issues. The biggest issue is often that it takes time for his bots to complete their work.

Amir has been to many conferences, but we will have to miss him in London. It took the British embassy several weeks, and all kinds of excuses for the delay, to decide that they are afraid he will not go back.

Really?

The success of Wikimania 2014 will not be as great because one of our best and brightest will not be among us.
Thanks,
       GerardM

by Gerard Meijssen (noreply@blogger.com) at July 20, 2014 11:11 AM

July 19, 2014

Not Confusing (Max Klein)

Häskell und Grepl: Data Hacking Wikimedia Projects Exampled With The Open Access Signalling Project

In what could easily become a recurring annual trip, Matt Senate and I came to Berlin this week to participate in the Open Knowledge Festival. We spoke at csv,conf, a fringe event in its first year, ostensibly about comma-separated values but more so about unusual data hacking. On behalf of the WikiProject Open Access – Signalling OA-ness team, we generalized our experience in data-munging with Wikimedia projects for the new user. We were asked to make the talk more story-oriented than technical, and since we were in Germany, we decided to use that famous narrative of Häskell and Grepl. In broad strokes we went through: how Wikimedia projects work, the history of wiki data-hacking from “Ignore All Rules” to calcification, Wikidata told as Hänsel and Gretel, signalling OA-ness, and how you could do it too.

These are the full slides (although SlideShare does not seem to like our OpenOffice document so much):
<iframe frameborder="0" height="400" marginheight="0" marginwidth="0" scrolling="no" src="http://www.slideshare.net/slideshow/embed_code/37151192" width="476"></iframe>

And a crowdsourced recording of the session:
<iframe allowfullscreen="allowfullscreen" frameborder="0" height="315" src="http://www.youtube.com/embed/w5T6RoEckAQ" width="560"></iframe>

We missed half of lunch, with the queue of questions extending past our session; it was fabulous to see such interest. We found a particular affinity with the Content Mine initiative, which wants to programmatically extract facts from papers. Since we are finding and uploading mineable papers, you could imagine some sort of suggestion system which says to an editor, “you cited [fact x] from this paper, do you also want to cite [extracted facts] in the Wikipedia article too?”. Let’s work to make that system a fact in itself.

by max at July 19, 2014 12:51 PM

Tony Thomas

Using PEAR::mimeDecode to strip email bounce out of its headers

Writing our own email header stripping functions involves tedious work and a lot of regex, since bounce email headers can be encoded and up/down-cased. Instead, we planned to use the mailMimeDecode class – a pretty straightforward approach. Header stripping with it is quite easy, and it can be installed via Composer too. * Prepare your […]
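For comparison only: the post describes PHP's PEAR mimeDecode approach, but the same idea, letting a MIME parser handle encoded and oddly cased bounce headers instead of regex, can be sketched with Python's standard email module (the file and header names here are assumptions, not taken from the original post):

import email
from email.header import decode_header, make_header

# Parse a raw bounce message (the file name is just an example).
with open("bounce.eml") as f:
    msg = email.message_from_file(f)

# Header lookup is case-insensitive, so oddly cased bounce headers
# ("X-FAILED-RECIPIENTS", "x-failed-recipients") need no special handling.
failed = msg.get("X-Failed-Recipients")   # header name is an assumption

# RFC 2047 encoded words ("=?UTF-8?Q?...?=") are decoded rather than picked apart by regex.
subject = str(make_header(decode_header(msg.get("Subject", ""))))

print(failed)
print(subject)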

by Tony Thomas at July 19, 2014 11:11 AM

Wikisorcery

Sherlock Holmes vs. the Martians

It is often assumed that something that is generically old-ish is “obviously” in the public domain.  This is not necessarily true.  Add to that the variation between the copyright laws of different nations and some odd things can happen.

The United States gets a lot of stick for its copyright laws and long copyright terms (although France is to blame for a lot of that; I prefer America’s old registration and renewal system). However, it isn’t the only country to throw unexpected spanners in the public domain’s works, and a combination of different countries’ laws can have odd results.

The Sherlock Holmes stories are entirely in the public domain almost everywhere on Earth—except in the United States. The United Kingdom uses Life+70, or 70 years pma (post mortem auctoris), and Sir Arthur Conan Doyle died in 1930, so his works entered the public domain in their home country in 2001. The Rule of the Shorter Term means that this applies to most other countries as well. In America, however, Doyle’s children renewed the copyright on most of his last collection, The Case-Book of Sherlock Holmes, per the laws of the time and subsequent amendments, granting it 67 extra years of protection, on top of the standard 28 from the date of publication. The majority of the Holmes stories are out of copyright there, as they are everywhere else, but the last of these particular stories will not enter the public domain until 2023. (To add confusion, these stories were initially in the public domain in the UK in 1981 under Life+50 terms, were dragged back into copyright in 1996 by European harmonisation, then returned to the public domain in 2001. In countries like Canada and Australia, which maintain the old Life+50 term, they were public domain in 1981 and stayed there.)

H. G. Wells published The War of the Worlds in 1898 and died in 1946, so, at the time of writing, all of his works are still under copyright in Britain. They will enter the public domain on 1st January 2017. In America, however, copyright terms were measured from the date of first publication and only lasted for either 28, 42 or 56 years; so it entered the public domain in that country in either 1927, 1941 (both during Wells’ lifetime), or 1955—depending on the exact details of the copyright situation, of which I am not aware.
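For what it is worth, the date arithmetic behind both regimes is simple enough to sketch in a few lines; a toy illustration of the terms described above, not legal advice:

def pd_year_life_plus(death_year, term=70):
    # Life + N: the work becomes free on 1 January of the year after the term expires.
    return death_year + term + 1

def pd_year_us_publication(pub_year, term):
    # Old US system: the term runs from the year of first publication.
    return pub_year + term + 1

print(pd_year_life_plus(1930))                    # Conan Doyle in the UK -> 2001
print(pd_year_us_publication(1927, 28 + 67))      # "Shoscombe Old Place" in the US -> 2023
print([pd_year_us_publication(1898, t) for t in (28, 42, 56)])  # Wells in the US -> [1927, 1941, 1955]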

Both are British in origin, from authors who are considered to be Victorian (although both died in the twentieth century), but the copyright situations vary wildly.

“The Adventure of Shoscombe Old Place”, the last of the Holmes canon, first published in 1927, entered the public domain in its home country in 2001, but won’t enter the public domain in America until 22 years later. Conversely, The War of the Worlds, first published much earlier in 1898, entered the public domain in America in 1955 at the latest, but won’t enter the public domain in its home country until 62 years later (possibly even 90 years later if the shortest of the possible terms is correct).

As ever, copyright can be odd and counter-intuitive. Also: generically old things aren’t necessarily free just because they seem old.


by wikisorcery at July 19, 2014 06:00 AM

July 18, 2014

Wikimedia Foundation

Wiki Loves Pride 2014 and Adding Diversity to Wikipedia

Logo for the proposed user group Wikimedia LGBT

Since Wikipedia’s gender gap first came to light in late 2010, Wikipedians have taken the issue to heart, developing projects with a focus on inclusivity in content, editorship and the learning environments relevant to new editors. 

Wiki Loves Pride started from conversations among Wikipedians editing LGBT topics in a variety of fields, including history, popular culture, politics and medicine, and supporters of Wikimedia LGBT - a proposed user group which promotes the development of LGBT-related content on Wikimedia projects in all languages and encourages LGBT organizations to adopt the values of free culture and open access. The group has slowly been building momentum for the past few years, but had not yet executed a major outreach initiative. Wiki Loves Pride helped kickstart the group’s efforts to gather international supporters and expand its language coverage.

Pride Edit-a-Thons and Photo Campaigns Held Internationally

We decided to run a campaign in June (LGBT Pride Month in the United States), culminating with a multi-city edit-a-thon on June 21. We first committed to hosting events in New York City and Portland, Oregon (our cities of residence), hoping others would follow. We also gave individuals the option to contribute remotely, either by improving articles online or by uploading images related to LGBT culture and history. This was of particular importance for users who live in regions of the world less tolerant of LGBT communities, or where it may be dangerous to organize LGBT meetups.

San Francisco Pride (2014)

In addition to New York City and Portland, offline events were held in Philadelphia and Washington, D.C., with online activities in Houston, Seattle, Seoul, South Africa, Vancouver, Vienna and Warsaw. Events will be held in Bangalore and New Delhi later this month as part of the Centre for Internet and Society’s (CIS) Access to Knowledge (A2K) program. Other Wikimedia chapters have expressed interest in hosting LGBT edit-a-thons in the future.

Campaign Results

The campaign’s “Results” page lists 90 LGBT-related articles which were created on English Wikipedia and links to more than 750 images uploaded to Wikimedia Commons. Also listed are new categories, templates and article drafts, along with “Did you know” (DYK) hooks that appeared on the Main Page and policy proposals which may be of interest to the global LGBT community.

Pride parade in Portland, Oregon in 2014

The campaign also attracted participation from Wikimedia projects other than Wikipedia. Wikimedia Commons hosted an LGBT photo challenge, which received more than 50 entries, and an LGBT task force was created at Wikidata. So far the group, which also seeks to improve LGBT-related content, has gathered 10 supporters and has adopted a rainbow-colored variation of the Wikidata logo as its symbol.

Continuing Efforts

Our hope is that the campaign will continue to grow and evolve, galvanizing participation in more locations and in different languages. Wiki Loves Pride organizers will continue to provide logistical support to those interested in hosting events and collaborating with cultural institutions.

Alongside the events of Wiki Loves Pride, Wikimedia LGBT has an open application for user group status with the Wikimedia Affiliations Committee and looks forward to expanding its membership and efforts on all fronts.

Jason Moore, Wikipedian

Dorothy Howard, Wikipedian

by Dorothy Howard at July 18, 2014 06:00 PM

Wikisorcery

Adventures in Copyright

I didn’t mean for this blog to be all about copyright, and the blind fumbling through its labyrinthine corridors that is the lot of Wikimedians, but looking at my little tag cloud gadget, copyright is clearly the most common topic so far. It’s going to get more common in the near future.

As I’ve mentioned before, I don’t really understand copyright law. Like many Wikimedians, I am an amateur enthusiast just trying to keep the project I like running properly. It is definitely an area fraught with problems, however.

For example, I have been accused of practising law without a licence just because I listed the years when some of Robert E. Howard’s works would drop out of copyright and therefore which were still protected. All this involved was adding Wikisource’s “copyright until” template, which shows an end date for an item and converts itself to a wikilink at that time. I thought this would be useful, especially as some of his still-copyrighted works were getting uploaded now and again by good-faith users, and it only involves looking up conditions on a table and adding some numbers together. It shouldn’t be controversial to share basic information like that, but here we are. (Howard is my favourite author but I’ve had some of his works deleted from Wikisource when they turned out not to be in the public domain after all.)

Nevertheless, I’ve picked up a few bits and pieces in my time on Wikisource and it’s interesting to find all the little quirks in what should be a simple binary choice (either still-protected or public-domain).  So here starts a brief series of posts on the random foibles and oddities I’ve encountered so far.


by wikisorcery at July 18, 2014 11:53 AM

Gerard Meijssen

#Wikidata - Joep Lange, HIV researcher

Mr Lange was on his way to a conference in Melbourne about HIV. He and several other HIV researchers died when their airplane was shot down over Ukraine.

Mr Lange was a professor at the University of Amsterdam. He was considered to be a “World’s Top AIDS Researcher”.

I am appalled that Mr Putin considers Ukraine responsible because "it happened over Ukrainian territory". That is a bit too simplistic and obvious an excuse.
Thanks,
     GerardM

by Gerard Meijssen (noreply@blogger.com) at July 18, 2014 09:35 AM

July 17, 2014

Wikimedia Foundation

WikiProject Report: Indigenous Peoples of North America

A Zuni girl with a pottery jar on her head, photographed in 1909. Most Zuni live in Zuni Pueblo in southern New Mexico.

Wikipedia’s community-written newsletter, The Signpost, recently talked to a number of participants in WikiProject Indigenous Peoples of North America. Encompassing more than 7,000 articles, the project currently boasts sixteen featured articles—articles that have gone through a thorough vetting process and are considered some of the best on the encyclopedia—as well as 63 WikiProject good articles, which have been through a similar, though less rigorous, process. The WikiProject aims to improve and maintain overall coverage of the indigenous peoples of North America on Wikipedia.

Members CJLippert, Djembayz, RadioKAOS, Maunus and Montanabw were asked for their thoughts on various aspects of the project. All five have a strong interest in the topic, though not all have direct ties to the indigenous peoples of North America. CJLippert, who works for the Mille Lacs Band of Ojibwe, a federally recognized American Indian tribe in Minnesota, comes pretty close. “Minnesota is a cross-road of where the Indian Removal Policy ended and Reservation Policy began and where the old and small Reserve system and the new and large Reservation system intersect,” he explains.

He adds, “As I work for a Native American tribal government, though not Native but also not ‘White’, I have the privilege of participating as the third party between the two. This also means I get to see both the strengths and weaknesses of both in regards to the relations between the Native Americans and the majority population. As that third party, trying to help to close some gaps in understanding is what led me to participate in Wikipedia and then to join the WikiProject.”

Maunus, a linguist and anthropologist, focuses on Mexican indigenous groups, which he feels is an underrepresented topic area on Wikipedia. “I am one of the only people doing dedicated work on these groups, but I have been focusing on languages and I agree that Mexican indigenous people require improved coverage compared to their Northern neighbors,” he says. “There are some articles on the Spanish Wikipedia of very high quality, mainly because of the work of one editor, but likewise other articles that are of very poor quality, with either romanticizing or discriminatory undertones. They also tend to use very low quality sources.”

An Iñupiat man, photographed in 1906. There are an estimated 13,500 Iñupiat in north and northwest Alaska.

He is not the only contributor keeping his focus precise. RadioKAOS lives in Fairbanks, Alaska, an “intersecting point for a variety of distinct groups of Alaskan Natives” thanks to its position as the second-largest city in the northernmost state. Finding information on these rural communities, however, can be a challenge given the area’s lack of online coverage. “Because a large part of what constitutes sourcing on Wikipedia is web-based and/or corporate media-based, coverage is hamstrung by the lack of any media outlets in scores of small, rural communities throughout Alaska,” he says. “Look at the ‘coverage’ of many of these communities and you’ll see that the articles are little more than a dumping ground for the Census Bureau and other public domain data that provide little or no insight as to what life there is like. Most attempts to provide factual insights of rural Alaska wind up deleted due to lack of [online] reliable sources.”

Montanabw, an editor of over eight years with a catalog of featured and good articles, says systemic bias is a big issue throughout Wikipedia’s coverage of indigenous peoples. “My first concern is use of language and phrasing that treats Native People like they are merely interesting historic figures instead of a living, modern people with current issues and current leaders,” he continues. “My second concern is uninformed, and at times inadvertently insulting, use of terminology in articles. For example, not all Native leaders are called ‘chief,’ yet many biographies labeled certain people this way even though it was not an appropriate title for that person.”

He adds, “Respect for a living culture and living people is not ‘political correctness,’ and it is frustrating to run across that attitude.”

For more info on WikiProject Indigenous Peoples of North America, read the full feature on the Signpost, or go to the WikiProject’s overview page.


Joe Sutherland, communications volunteer for the Wikimedia Foundation

by Joe Sutherland at July 17, 2014 06:18 PM

Wikimedia UK

Wikimedia at the heart of open education

The photo shows Dr Martin Poulter presenting at the conference

Dr Poulter presenting at OER14

This post was written by Dr Martin Poulter

The UK has a flourishing Open Educational Resources (OER) movement. Educators, librarians, support staff and others are working to open up the culture and content of the education system. They are linked by face-to-face working relationships and by more distributed groups such as Open Knowledge’s Education Working Group or the Association for Learning Technology’s OER Special Interest Group.

The main meeting point for the community in the UK is the annual OER conference, which this year was hosted at the University of Newcastle. Simon Knight and I attended this year, with support from Wikimedia UK.

Despite some big successes for OER in the UK, trying to open up academic culture from within can feel very much like a struggle. The OER advocates see themselves as a small minority working to change a massive, well-embedded system.

In Wikimedia, we have a different perspective. Open resources are not only freely available but also, in legal and technical terms, repurposable and adaptable. Our creations – Wikipedia and its sister sites – meet this definition very well indeed.

If Wikipedia is an OER, then the open education movement is not a struggling minority: in fact, we’re winning. It means the world’s fifth most popular web site is an OER; the biggest and most popular Welsh-language web site is an OER; and there are languages in which the only written reference work is an OER.

Over the years, the focus of the OER movement has changed from “open resources” to “open practice”. Rather than just putting educational material online, the discussions are more about how sharing, reuse and remixing can become a natural part of everyday practice. How do we involve learners in creating OERs? How do we take advantage of the explosion of open-access research outputs? How do we get students not just consuming educational materials, but critiquing, reviewing, and improving?

There are also questions about policy: does the open revolution require policy changes? What are the wider changes around the world that open education can bring about? An idea discussed at a past OER conference was a “National Learning Service”, free at the point of use to all citizens. How could this become a reality?

As Wikimedians, we have a lot to say about all of these topics. This has been the third OER conference with a Wikimedia UK presence. This time, thanks to the kindness of the organisers, we were listed as a sponsor organisation, among a diverse bunch from the Open University to Lego.

We delivered a workshop, a paper and a lunchtime free-for-all helpdesk where we were bombarded with friendly questions. We also injected a Wikipedian perspective into other sessions, reminding the OER professionals that Wikimedians are their fellow travellers.

The conference was also a chance to show the diversity of Wikimedia’s work. Everyone has heard of Wikipedia, but sister projects like Wikibooks, Wikiversity or Wikisource are less well-known despite being ideal platforms for some OER activities.

We’ve written up some outcomes and reactions on our wiki. Simon Knight has blogged about the conference with some great insights (and photographs!). My workshop, about partnership with the Wikimedia projects, is written up on-wiki for use in other events.

Within minutes of us arriving, one academic remarked what a surprise it was to meet us: until then, she hadn’t really thought of there being people behind Wikipedia. This is a common response: despite our open way of working, we Wikipedians, and the processes we use, are often invisible to the people who would most benefit from working with us.

If the people behind Wikipedia are invisible, then preconceptions about those people go unchallenged. It is easy to hear about the gender imbalance in Wikipedia contributors (much-discussed at the conference) and assume that Wikipedians don’t see this as a problem. We took the chance to explain what Wikimedia UK is doing about the problem and how central diversity is to our mission.

The invisibility means we are not yet automatically invited in to academic events. Wikimedia sites are seen as outside academia, even though many of the volunteer contributors are credentialed experts. We need to knock and ask to be let in, but when we do we get a very warm welcome. It’s a chance to put Wikipedia in the centre of the open education debate where it belongs and to learn from the innovators who are driving education forward.

by Stevie Benton at July 17, 2014 12:46 PM

Gerard Meijssen

#Wikidata - many more edits

It is so easy to add a lot of information to Wikidata. Mr Wekwerth died, for instance. You read his article and you find a category indicating that he was awarded the Order of Karl Marx. That made for 437 edits for me, because I added the award to all the people in the category using Autolist2.
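The same category-driven workflow can also be scripted. Purely as an illustration, and not the Autolist2 tool used above, a rough pywikibot sketch that would add an "award received" (P166) claim to every item in a category might look like this (the category title and target item are placeholders):

import pywikibot
from pywikibot import pagegenerators

site = pywikibot.Site("de", "wikipedia")
repo = site.data_repository()

# Placeholders: a category of award recipients and the item for the award itself
# (Q4115189 is the Wikidata sandbox item, used here as a stand-in).
category = pywikibot.Category(site, "Kategorie:Example award recipients")
award = pywikibot.ItemPage(repo, "Q4115189")

for page in pagegenerators.CategorizedPageGenerator(category):
    try:
        item = pywikibot.ItemPage.fromPage(page)   # the Wikidata item behind the article
        item.get()
    except pywikibot.NoPage:
        continue
    if "P166" in item.claims:                      # P166 = award received; skip if already present
        continue
    claim = pywikibot.Claim(repo, "P166")
    claim.setTarget(award)
    item.addClaim(claim)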

There are many more categories on the article about Mr Wekwerth, and arguably each one establishes more about who he was and about the people, places and events he was connected to.

Typically I do only one category for one person who died. When that someone was a bishop, I know him to be a priest. That makes for some 777 edits I am adding at the moment. It could equally have been a diplomat, or Wikidata might not even know yet that the person is "human".

Adding an additional 100K edits is not that hard. It does enrich Wikidata, and the results are obvious when you regularly wander around using the Reasonator, or when you add dates of death to those who died, as I do.
Thanks,
    GerardM

by Gerard Meijssen (noreply@blogger.com) at July 17, 2014 10:50 AM

Wikimedia Tech Blog

First Look at the Content Translation tool

The projects in the Wikimedia universe can be accessed and used in a large number of languages from around the world. The Wikimedia websites, their MediaWiki software (both core and extensions) and their growing content benefit from standards-driven internationalization and localization engineering that makes the sites easy to use in every language across diverse platforms, both desktop and mobile.

However, a wide disparity exists in the numbers of articles across language wikis. The article count across Wikipedias in different languages is an often cited example. As the Wikimedia Foundation focuses on the larger mission of enabling editor engagement around the globe, the Wikimedia Language Engineering team has been working on a content translation tool that can greatly facilitate the process of article creation by new editors.

About the Tool


The Content Translation editor displaying a translation of the article for Aeroplane from Spanish to Catalan.

Particularly aimed at users fluent in two or more languages, the Content Translation tool has been in development since the beginning of 2014. It will provide a combination of editing and translation tools that can be used by multilingual users to bootstrap articles in a new language by translating an existing article from another language. The Content Translation tool has been designed to address basic templates, references and links found in Wikipedia articles.

Development of this tool has involved significant research and evaluation by the engineering team to handle elements like sentence segmentation, machine translation, rich-text editing, user interface design and scalable backend architecture. The first milestone for the tool’s rollout this month includes a comprehensive editor, limited capabilities in areas of machine translation, link and reference adaptation and dictionary support.

Why Spanish and Catalan as the first language pair?

Presently deployed at http://es.wikipedia.beta.wmflabs.org/wiki/Especial:ContentTranslation, the tool is open for wider testing and user feedback. Users will have to create an account on this wiki and log in to use the tool. For the current release, machine translation can only be used to translate articles between Spanish and Catalan. This language pair was chosen for the linguistic similarity of the two languages as well as the availability of well-supported language aids like dictionaries and machine translation. Driven by a passionate community of contributors, the Catalan Wikipedia is an ideal medium-sized project for testing and feedback. We also hope to enhance the aided translation capabilities of the tool by generating parallel corpora of text from within the tool.

To view Content Translation in action, please follow the link to this instance and make the following selections:

  • article name – the article you would like to translate
  • source language – the language in which the article you wish to translate exists (restricted to Spanish at this moment)
  • target language – the language in which you would like to translate the article (restricted to Catalan at this moment)

This will lead you to the editing interface where you can provide a title for the page, translate the different sections of the article and then publish the page in your user namespace in the same wiki. This newly created page will have to be copied over to the Wikipedia in the target language that you had earlier selected.

Users in languages other than Spanish and Catalan can also view the functionality of the tool by making a few tweaks.

We care about your feedback

Please provide us your feedback on this page on the Catalan Wikipedia or at this topic on the project’s talk page. We will attempt to respond as soon as possible based on criticality of issues surfaced.

Runa Bhattacharjee, Outreach and QA coordinator, Language Engineering, Wikimedia Foundation

by Runa Bhattacharjee at July 17, 2014 12:29 AM

July 16, 2014

Wikimedia UK

Counting down to Wikimania

The photo shows the whiteboard with plans on it.

The planning whiteboard

This post was written by Jon Davies, Wikimedia UK Chief Executive

We have a big whiteboard in the office where we share calendars and meetings; a few months ago I started a box on it counting down the days to Wikimania. However hard we tried, it seemed a long way off in the distant future, but now, with less than three weeks to go, we know differently!

The programme may be set, the speakers arranged, the food ordered and the wifi tested, and for the first time in thirty years I feel that I actually know my way around the Barbican, but the scale of the event is beginning to make itself felt. All around me are volunteers and staff wrestling with the last-minute details: how many laptops do we need, where will the walkie-talkies be stored, how much cash will we need over the conference days, can you fit a mobility scooter in the lifts? Small details, but if everyone is going to have a great Wikimania it is details like these that we need to get right.

So if we are a little slower than normal answering emails or getting back to you please be patient!

by Stevie Benton at July 16, 2014 02:29 PM

Wikimedia Tech Blog

Coding da Vinci: Results of the first German Culture Hackathon

Mnemosyne, goddess of memory

From the Delaware Art Museum, Samuel and Mary R. Bancroft Memorial. Public domain, via Wikimedia Commons

The weather was almost as hot as it was in Hong Kong one year ago. But whereas on that occasion a time machine had to catapult the audience ten years into the future, at the event held on Sunday, July 6 at the Jewish Museum Berlin, the future had already arrived.

It was not only virtual results that were presented at the award ceremony for the culture hackathon Coding da Vinci in Berlin. Image from Marius Förster © cc-by-sa 3.0

At the final event of the programming competition Coding da Vinci, seventeen projects were presented to both a critical jury and the public audience in a packed room. Five winners emerged, three of which used datasets from Wikimedia projects. This result signals that the predictions put forward by Dirk Franke in Hong Kong have already become a reality: that in the future more and more apps will use the content of Wikimedia projects and that the undiscerning online user will barely notice where the data actually comes from. There is a clear trend towards providing information in a multimedia-based and entertaining way. That’s the meta level, but the source of the knowledge is still clear: Wikipedia.

The aims of Coding da Vinci

The new project format used by Wikimedia Deutschland (WMDE) for the first time this year ended successfully. Coding da Vinci is a culture hackathon organized by WMDE in strategic partnership with the German Digital Library, the Open Knowledge Foundation Germany and the Service Center Digitization Berlin. Unlike at a standard hackathon, the programmers, designers and developers were given ten weeks to turn their ideas into finished apps. Most of the 16 participating cultural institutions had made their digital cultural assets publicly available and reusable under a free license especially for the programming competition. With the public award ceremony on July 6 at the Jewish Museum, we wanted to showcase not just these cultural institutions but also what “hackers” can do with their cultural data. We hope that this will persuade more cultural institutions to freely license their digitized collections. Already this year, 20 cultural data sets have been made available for use in Wikimedia projects.

Exciting til the very end

It was an exciting event for us four organizers, as we waited with bated breath to see what the community of programmers and developers would produce at the end. Of course, not all the projects were winners. One of the projects that did not emerge as a winner, but that I would nevertheless like to give a special mention, was Mnemosyne – an ambitious website that took the goddess of memory as its patron. We are surely all familiar with those wonderful moments of clarity as we link-hop our way through various Wikipedia pages, so who would say no to being guided through the expanse of associative thought by a polymath as they stroll through a museum?

The polymath as a way of life died out at the end of the 19th century, according to Wikipedia – a fact that the Mnemosyne project seeks to address by using a combination of random algorithms to make finding and leafing through complex archive collections a simpler and more pleasurable activity. In spite of some minor blips during the on-stage presentation, the potential of Mnemosyne was plain to see. Hopefully work will continue on this project and the developers will find a museum association that wants to use Mnemosyne to make its complex collections available for visitors to browse.

The five winners

After two hours of presentations and a one-hour lunch break, the winners were selected in the five categories and were awarded their prizes by the jury.

Out of Competition: The zzZwitscherwecker (chirping alarm clock) really impressed both the audience and the jury. It’s a great solution for anyone who finds it difficult to be an early bird in the morning. That’s because you can only stop the alarm if you’re able to correctly match a bird to its birdsong. You’re sure to be wide awake after such a lively brain game.

Funniest Hack: The Atlas beetle is a real Casanova. It inspired IT enthusiast Kati Hyppä and her brother to build not only a dancing Cyberbeetle, but also an accompanying hi-tech insect box. We’ll see if the Museum für Naturkunde (museum for natural sciences) incorporates the project into its entomology exhibition. The jury was enchanted by the dancing beetle and awarded its creators the prize for Funniest Hack.

Best Design: The prize for most impressive design went to Ethnoband. The organ was the inspiration behind this project. The inventors of the organ packed a full orchestra in the pipes of just one instrument. With Ethnoband, Thomas Fett has made it possible to conduct an orchestra with instruments from all over the world using a computer. You can also invite friends from around the world to a jam session.

Screen shot of the Alt Berlin app by Claus Höfele. Winner of the Most Technical category. © cc-by-sa 3.0

Most Useful: In this category, it was important to come up with an idea and strategy that would make the jury wonder why nobody had ever come up with this idea before. Insight – 19xx excelled at this almost impossible task. It is based on a list of names of authors ostracized by the Nazis and linked with additional information, from Wikipedia and other sources. This turns the list of mere names into intriguing biographies that are an engaging introduction to the author’s work. During the project it emerged, among other things, that a total of almost 20,000 books had been put on the prohibition list by the Nazis – a number much greater than previously estimated.

Most Technical: The app Alt-Berlin (old Berlin) impressed the jury on account of its great level of technical sophistication. In the app, the digitized collection of paintings from the Stadtmuseum Berlin, which hosted a Wikipedian in Residence in 2012, illustrates modern OpenStreetMap maps. Anyone wanting to experience time travel can discover historical maps along the streets of today. Even current images from Wikimedia Commons can be laid over old photographs of the streets of Berlin. You will soon be able to easily access the app from your cell phone while out and about.

All applications carry a free license and can accordingly be further developed and reused.

 

Thank you to everyone who took part in Coding da Vinci! Photo: Volker Agueras Gäng, CC-BY 3.0

Looking to 2015

Next year, we would once again like to invite the programming community to participate in our culture hackathon Coding da Vinci. We hope to attract more cultural institutions, programmers and designers, to receive more data and to produce more creative projects; but more than anything we hope to help increase accessibility to the digitized cultural heritage that has already been made available. Our aim is to fully integrate this data into Wikimedia projects so that it can be used directly by all volunteers working on these projects.

Photographs from the event can be accessed from the Wikimedia Commons page. Photos of the award ceremony will be posted soon.

 

Barbara Fischer, curator for cultural cooperations at Wikimedia Deutschland.

German blogpost

by Katja Ullrich at July 16, 2014 08:13 AM

Luis Villa

Designers and Creative Commons: Learning Through Wikipedia Redesigns

tl;dr: Wikipedia redesigns mostly ignore attribution of Wikipedia authors, and none approach the problem creatively. This probably says as much or more about Creative Commons as it does about the designers.

disclaimer-y thing: so far, this is for fun, not work; haven’t discussed it at the office and have no particular plans to. Yes, I have a weird idea of fun.

A mild refresh from interfacesketch.com.

It is no longer surprising when a new day brings a new redesign of Wikipedia. After seeing one this weekend with no licensing information, I started going back through seventeen of them (most of the ones listed on-wiki) to see how (if at all) they dealt with licensing, attribution, and history. Here’s a summary of what I found.

Completely missing

Perhaps not surprisingly, many designers completely remove attribution (i.e., history) and licensing information in their designs. Seven of the seventeen redesigns I surveyed were in this camp. Some of them were in response to a particular, non-licensing-related challenge, so it may not be fair to lump them into this camp, but good designers still deal with real design constraints, and licensing is one of them.

History survives – sometimes

The history link is important, because it is how we honor the people who wrote the article, and comply with our attribution obligations. Five of the seventeen redesigns lacked any licensing information, but at least kept a history link.

Several of this group included some legal information, such as links to the privacy policy, or in one case, to the Wikimedia Foundation trademark page. This suggests that our current licensing information may be presented in a worse way than some of our other legal information, since it seems to be getting cut out even by designers who are tolerant of some of our other legalese?

Same old, same old

Four of the seventeen designs keep the same old legalese, though one fails to comply by making it impossible to get to the attribution (history) page. Nothing wrong with keeping the existing language, but it could reflect a sad conclusion that licensing information isn’t worth the attention of designers; or (more generously) that they don’t understand the meaning/utility of the language, so it just gets cargo-culted around. (Credit to Hamza Erdoglu, who was the only mockup designer who specifically went out of his way to show the page footer in one of his mockups.)

A winner, sort of!

Of the seventeen sites I looked at, exactly one did something different: Wikiwand. It is pretty minimal, but it is something. The one thing: as part of the redesign, it adds a big header/splash image to the page, and then adds a new credit specifically for the author of the header/splash image down at the bottom of the page with the standard licensing information. Arguably it isn’t that creative, just complying with their obligations from adding a new image, but it’s at least a sign that not everyone is asleep at the wheel.

Observations

This is surely not a large or representative sample, so all my observations from this exercise should be taken with a grain of salt. (They’re also speculative since I haven’t talked to the designers.) That said, some thoughts besides the ones above:

  • Virtually all of the designers who wrote about why they did the redesign mentioned our public-edit-nature as one of their motivators. Given that, I expected history to be more frequently/consistently addressed. Not clear whether this should be chalked up to designers not caring about attribution, or the attribution role of history being very unclear to anyone who isn’t an expert. I suspect the latter.
  • It was evident that some of these designers had spent a great deal of time thinking about the site, and yet were unaware of licensing/attribution. This suggests that people who spend less time with the site (i.e., 99.9% of readers) are going to be even more ignorant.
  • None of the designers felt attribution and licensing was even important enough to experiment on or mention in their writeups. As I said above, this is understandable but sort of sad, and I wonder how to change it.

Postscript, added next morning:

I think it’s important to stress that I didn’t link to the individual sites here, because I don’t want to call out particular designers or focus on their failures/oversights. The important (and as I said, sad) thing to me is that designers are, historically, a culture concerned with licensing and attribution. If we can’t interest them in applying their design talents to our problem, in the context of the world’s most famously collaborative project, we (lawyers and other Commoners) need to look hard at what we’re doing, and how we can educate and engage designers to be on our side.

I should also add that the WMF design team has been a real pleasure to work with on this problem, and I look forward to doing more of it. Some stuff still hasn’t made it off the drawing board, but they’re engaged and interested in this challenge. Here is one example.

by Luis Villa at July 16, 2014 06:31 AM

July 15, 2014

Wikimedia DC

We are led by volunteers—here is how you can help

Wikimedia DC volunteers at the National Archives, October 2013

Wikimedia DC works on the ground in Washington, DC, and in the surrounding area to teach others about Wikipedia. We are proud of all that we’ve accomplished in our three years, from our large gatherings like Wikimania 2012 and WikiConference USA, to our regularly held edit-a-thons with cultural and educational organizations throughout DC. We are also excited about the future; we are in the midst of expanding our program offerings so that we can do more to serve DC and to improve the Wikimedia projects.

What you may not know is that Wikimedia DC is led almost entirely by volunteers. With very few exceptions, volunteers do everything: we plan the events, we follow up with the organizations we work with, and even our board members and officers are volunteers. And we always need more volunteers. Whether or not you know how Wikipedia works, there are many ways you can help us. Here are a few:

  • If you’re well versed in the ins and outs of Wikipedia editing, we always need Wikipedia trainers for our edit-a-thons. You will have the opportunity to share your knowledge of Wikipedia with someone eager to learn. If you are interested in this opportunity, email james.hare@wikimediadc.org or just show up to an upcoming event.
  • Have something interesting to share about the Wikimedia projects, free knowledge, open source software, open data, or open government? We are looking for guest bloggers to make occasional contributions to our blog. Your writing will be shared with the broader Wikimedia community here in DC and around the world. Feel free to email recommendations to james.hare@wikimediadc.org.
  • Our organization is aided by the advice of our committees. We have three committees focused on programs: the Content Programs Committee, the Technology Programs Committee, and the Community Programs Committee. We also have committees dedicated to fundraising, governance, public policy, and technical infrastructure. If you are interested in serving on any of these committees, send an email to info@wikimediadc.org.

Thank you very much for your interest. We hope to see you help out at Wikimedia DC!

by James at July 15, 2014 07:35 PM

Wikimedia Foundation

Wikimedia Foundation offers assistance to Wikipedia editors named in U.S. defamation suit

Since posting, we have learned that Mr. Barry’s attorney has requested to withdraw their complaint without prejudice and their request has been granted by the court. Mr. Barry’s attorney has further indicated that Mr. Barry intends to file an amended complaint some unspecified time in the future.

Wikipedia’s content is not the work of one, ten, or even a thousand people. The information on Wikipedia is the combined product of contributions made by hundreds of thousands of people from all over the world. By volunteering their time and knowledge, these people have helped build Wikipedia into a project that provides information to millions every day.

With many different voices come many different perspectives. Resolving them requires open editorial debate and collaboration with and among the volunteer community of editors and writers. Disagreements about content are settled through this approach on a daily basis. On extremely rare occasions, editorial disputes escalate to litigation.

This past month, four users of English Wikipedia were targeted in a defamation lawsuit brought by Canadian-born musician, businessman, and philanthropist Yank Barry. In the complaint, Mr. Barry claims that the editors, along with 50 unnamed users, have acted in conspiracy to harm his reputation by posting false and damaging statements onto Wikipedia concerning many facets of his life, including his business, philanthropy, music career, and legal history.

However, the specific statements Mr. Barry apparently finds objectionable are on the article’s talk page, rather than in the article itself. The editors included in the lawsuit were named because of their involvement in discussions focused on maintaining the quality of the article, specifically addressing whether certain contentious material was well-sourced enough to be included, and whether inclusion of the material would conform with Wikipedia’s policies on biographies of living persons.

A talk page is not an article. It is not immediately available to the readers of the encyclopedia. Its purpose is not to provide information, but a forum for discussion and editorial review. If users are unable to discuss improvements to an article without fear of legal action, they will be discouraged from partaking in discussion at all. While some individuals may find questions about their past disagreeable and even uncomfortable, discussions about these topics are necessary for establishing accurate and up-to-date information. Without discussion, articles will not improve.

In our opinion, this lawsuit is an effort to chill free speech on the Wikimedia projects. Wikipedia editors do not carve out facts based on bias or promotion; this lawsuit is rooted in a deep misinterpretation of the free-form, truth-seeking conversation and analysis that are part of the editorial review process establishing the validity and accuracy of historical and biographical information. As such, we have offered the four named users assistance through our Defense of Contributors policy. Three of the users have accepted our offer and obtained representation through the Cooley law firm. We thank Cooley for its assistance in the vigorous representation of our users. The fourth user is being represented by the California Anti-SLAPP Project and is working closely with the Wikimedia Foundation and Cooley.

Lawsuits against Wikipedia editors are extremely rare — we do not know of any prior cases where a user has been sued for commenting on a talk page. The Wikipedia community has established a number of dispute resolution procedures and venues to discuss content issues that are available for anyone to use. Most content disputes are resolved through these processes. We are unaware of Mr. Barry taking advantage of these processes to work directly with the editors involved in this lawsuit or the greater Wikipedia community to address these issues.

Wikipedia’s mission is to provide the world with the sum of all human information for free and we will always strongly defend its volunteer editors and their right to free speech.

Michelle Paulson, Legal Counsel

by Michelle Paulson at July 15, 2014 04:00 PM

Wikimedia engineering report, June 2014

Major news in June include:

Note: We’re also providing a shorter, simpler and translatable version of this report that does not assume specialized technical knowledge.

Engineering metrics in June:

  • 151 unique committers contributed patchsets of code to MediaWiki.
  • The total number of unresolved commits went from around 1440 to about 1575.
  • About 14 shell requests were processed.

Personnel

Work with us

Are you looking to work for Wikimedia? We have a lot of hiring coming up, and we really love talking to active community members about these roles.

Announcements

  • Elliot Eggleston joined the Wikimedia Foundation as a Features Engineer in the Fundraising-Tech team (announcement).

Technical Operations

New Dallas data center

On-site work has started in our new Dallas (Carrollton) data-center (codfw). Racks have been installed, the equipment we moved from Tampa has been racked and cabling work has been mostly completed over the course of the month. We are now awaiting the installation of connectivity to the rest of our network as well as the arrival of the first newly-ordered server equipment, so server & network configuration can commence.

Puppet 3 migration

In July we migrated from Puppet 2 to Puppet 3 on all production servers. Thanks to the hard work of both volunteers and Operations staff on our Puppet repository in the months leading up to this, this migration went very smoothly.

Labs metrics in June:

  • Number of projects: 173
  • Number of instances: 424
  • Amount of RAM in use (in MBs): 1,741,312
  • Amount of allocated storage (in GBs): 19,045
  • Number of virtual CPUs in use: 855
  • Number of users: 3,356

Wikimedia Labs

Last month we switched the Labs puppetmaster to Puppet 3; this month all instances switched over as well. Some cleanup work was needed in our puppet manifests to handle Trusty and Puppet 3 properly; everything is fairly stable now but a bit of mopping up remains.

Features Engineering

Editor retention: Editing tools

VisualEditor

In June, the VisualEditor team provided a new way to see the context of links and other items when you edit to make this easier, worked on the performance and stability of the editor so that users could more swiftly and reliably make changes to articles, and made some improvements to features focussed on increasing their simplicity and understandability, fixing 94 bugs and tickets. The editor now shows with a highlight where dragging-and-dropping content will put it, and works for any content, not just for images. The citation and reference tools had some minor adjustments to guide the user on how they operate, based on feedback and user testing. A lot of fixes to issues with windows opening and closing, and especially the link editing tool, were made, alongside the save dialog, categories, the language editing tool, table styling, template display and highlights on selected items. The mobile version of VisualEditor, currently available for alpha testers, moved towards release, fixing a number of bugs and improving performance. Work to support languages made some significant gains, and work to support Internet Explorer continued. The new visual interface for writing TemplateData was enabled on the Catalan and Hebrew Wikipedias. The deployed version of the code was updated five times in the regular release cycle (1.24-wmf8, 1.24-wmf9, 1.24-wmf10 and 1.24-wmf11).

Parsoid

In June, the Parsoid team continued with ongoing bug fixes and bi-weekly deployments; the selective serializer, improving our parsing support for some table-handling edge cases, nowiki handling, and parsing performance are some of the areas that saw ongoing work. We began work on supporting language converter markup.

We added CSS styling to the HTML to ensure that Parsoid HTML renders like PHP parser output. We continued to tweak the CSS based on rendering differences we found. We also started work on computing visual diffs based on taking screenshots of rendered output of Parsoid and PHP HTML. This initial proof-of-concept will serve as the basis of larger-scale automated testing and identification of rendering diffs.

The GSoC 2014 LintTrap project saw good progress and a demo LintBridge application was made available on wmflabs with the wikitext issues detected by LintTrap.

We also had our quarterly review this month and contributed to the annual engineering planning process.

Core Features

Flow

Presentation slides on Flow from the metrics meeting for June

In June, the Flow team finished an architectural re-write for the front-end, so Flow will be easier to keep updating in the future. This will be released to mediawiki.org the first week of July, and Wikipedia the following week.

The new feature in this release is the ability to sort topics on a Flow board. There are now two options for the order that topics appear on the board: you can see the most recently created threads at the top (the default), or the most recently updated threads. This new sorting option makes it easier to find the active conversations on the board.

We’ve also made a few changes to make Flow discussions easier to read, including: a font size now consistent with other pages; dropdown menus now easier to read; the use of the new button style, and the WikiGlyphs webfont.

Growth

Growth

In June, the Growth team completed analysis of its first round of A/B testing of signup invitations for anonymous editors on English, French, German, and Italian Wikipedias. Based on these results, the team prepared a second version to be A/B tested. Additionally, the team released a major refactor of the GuidedTour extension’s API, as well as design enhancements like animations, a new CSS-based way of drawing guider elements, updated button styles, and more. The team also launched GuidedTours on three new Wikipedias: Arabic, Norwegian, and Bengali.

Support

Wikipedia Education Program

This month, the Education Program extension again received incremental improvements and bugfixes. Sage Ross of the Wiki Education Foundation submitted two patches: one that adds information to the API for listing students, and another that lets anonymous users compare course versions. Also, a student from Facebook Open Academy fixed a usability issue in the article assignment feature.

Mobile

Wikimedia Apps

The Mobile Apps team released the new Android Wikipedia app and it is now available to be downloaded through the Google Play store on Android devices.

Core features of the app include the ability to save pages for offline reading, a record of your browsing history, and the ability to edit either as a logged-in user or anonymously, making it the first of our mobile platforms to allow anonymous editing! The app also supports Wikipedia Zero for participating mobile carriers.

Additional work done this month includes the start of implementing night mode for the Android app (by popular demand), creating an onboarding experience which is to be refined and deployed in July, and numerous improvements to the edit workflow.

Mobile web projects

This month, the mobile web team finished work on styling the mobile site to provide a better experience for tablet users. We began redirecting users on tablets, who had previously been sent to the desktop version of all Wikimedia projects, to the new tablet-optimized mobile site on June 17. Our early data suggests that this change had a positive impact on new user signup and new editor activation numbers. We also continued work on VisualEditor features (the linking and citation dialogs) in preparation for releasing the option to edit via VisualEditor to tablet users in the next three months.

Wikipedia Zero

During the last month, the team deployed the refactored Wikipedia Zero codebase that replaces one monolithic extension with multiple extensions. The JsonConfig extension, which allows a wiki-driven JSON configuration system with data validation and a tiered configuration management architecture, had significant enhancements to make it more general for other use cases.

Additionally, the team enabled downsampled thumbnails for a live in-house Wikipedia Zero operator configuration, and finished Wikipedia Zero minimum viable product design and logging polish for the Android and iOS Wikipedia apps. The team also supported the Wikipedia apps development with network connection management enhancements in Android and iOS, with Find in page functionality for Android, and response to Wikipedia for Android Google Play reviews.

The team facilitated discussions on proxy and small screen device optimization, and examined the HTML5 app landscape for the upcoming fiscal year’s development roadmap. The team also created documentation for operators for enabling zero-rating with different connection scenarios. Bugfixes were issued for the mobile web Wikipedia Zero and the Wikipedia for Firefox OS app user experience.

Routine pre- and post-launch configuration changes were made to support operator zero-rating, with routine technical assistance provided to operators and the partner management team to help add zero-rating and address anomalies. Finally, the team participated in recruitment for a third Partners engineering teammate.

Wikipedia Zero (partnerships)

We launched Wikipedia Zero with Airtel in Bangladesh, our third partner in Bangladesh, and our 34th launched partner overall. We participated in the Wiki Indaba conference, the first event of its kind to be held in Africa. The event, organized by Wikimedia South Africa, brought together community members from Tunisia, Egypt, Ghana, Kenya, Namibia, Nigeria, Ethiopia, Malawi and South Africa. The attendees shared experiences and the challenges of working in the region and formulated strategies to support and strengthen the movement’s efforts across the continent. While in South Africa, Adele Vrana also met with local operators. Meanwhile, Carolynne Schloeder met with numerous operators and handset manufacturers in India. Carolynne joined Wikimedian RadhaKrishna Arvapally for a presentation at C-DOT, and both participated in a blogger event hosted by our partner Aircel, along with other members of Wikimedia India in Bangalore. Smriti Gupta joined the group as Mobile Partnerships Manager, Asia.

Language Engineering

Language tools

The Translate extension received numerous bug fixes, including fixing workflow states transitions for fundraising banners.

Content translation

The team added support for link adaptation, worked on the infrastructure for machine translation support using Apertium and on hiding templates, images and references that cannot be easily translated. They also prepared for deployment on beta wikis and made multiple bug fixes and design tweaks.

Platform Engineering

MediaWiki Core

HHVM

The team has been running HHVM on a single test machine (“osmium”) for the purpose of testing the job queue in production. The machine is only put into production on a very limited basis: it runs until enough bugs have been found to keep the team busy for a while, and is then disabled again while the team fixes those bugs. We’re planning on having HHVM running on a few job runner machines (continually) in July, then turning our focus toward running HHVM on the main application servers, taking a similar strategy.

Release & QA

The Release and QA Team had their mid-quarter check-in on June 27. Phabricator work is progressing nicely. The latest MediaWiki tarball release (1.23) was made and the second RFP started and is close to completion. We are moving to only WMF-hosted Jenkins for all jobs, and we are working with the MediaWiki Core and the Operations teams on HHVM-related integration (both for deployment and for the Beta Cluster).

Admin tools development

Work on this project is currently being completed along with the SUL finalisation project, including the global rename tool (bug 14862) and cleaning up the CentralAuth database (bug 66535).

Search

CirrusSearch is running as the default search engine on all but the highest traffic wikis at this point. Nik Everett and Chad Horohoe plan to migrate most of the remaining wikis in July, leaving only the German and English Wikipedia to migrate in August.

Auth systems

Continued work on the SOA Authentication RFC and Phabricator OAuth integration. We made OAuth compatible with HHVM and made other minor bug fixes.

SUL finalisation

The MediaWiki Core team has committed to having the following work completed by the end of September 2014:

  • Completing the necessary engineering work to carry out the finalisation.
  • Setting a date on which the finalisation will occur (Note: this date may not be later than September).
  • Have a communications strategy in place, and community liaisons to carry that out, for the time period between the announcement of the date of the finalisation and the finalisation proper.

Security auditing and response

We released MediaWiki 1.23.1 to prevent multiple issues caused by loading external SVG resources. We also performed security reviews of the Wikidata property suggester, Extension:Mantle for mobile/Flow, and Flow’s templating rewrite.

Quality assurance

Quality Assurance

This month saw significant improvements to the MediaWiki-Vagrant development environments from new WMF staff member Dan Duvall. We have completed support for running the full suite of browser tests on a Vagrant instance under the VisualEditor role. In the near future, we will extend that support to the MobileFrontend and Flow Vagrant roles, as well as making general improvements to Vagrant overall. Another great QA project is from Google Summer of Code intern Vikas Yaligar, who is using the browser test framework to automate taking screen captures of aspects of VisualEditor (or any other feature) in many different languages, for the purpose of documentation and translation.

Browser testing

After two years of using a third-party host to run browser test builds in Jenkins, this month we have completed the migration of those builds to Jenkins hosted by the Wikimedia Foundation. Hosting our browser test builds ourselves gives us more control over every aspect of running the browser tests, as well as the potential to run them faster than previously possible. Particular thanks to Antoine Musso, whose work made it possible. Simultaneously, we have also ported all of the remaining tests from the /qa/browsertest repository either to /mediawiki/core or to their relevant extension. This gives us the ability to package browser-based acceptance tests with the release of MediaWiki itself. After more than two years evolving the browser testing framework across WMF, the /qa/browsertests repository is retired, and all of its functions now reside in the repositories of the features being tested.

Multimedia

Multimedia

In June, the multimedia team released Media Viewer v0.2 on all Wikimedia wikis, with over 20 million image views per day on sites we track. Global feedback was generally positive and helped surface a range of issues, many of which were addressed quickly. Based on this feedback, Gilles Dubuc, Mark Holmquist, and Gergő Tisza developed a number of new features, with designs by Pau Giner: view images in full resolution, view images in different sizes, show more image information, edit image file pages, as well as easy disable tools for anonymous users and editors.

This month, we started working on the Structured Data project with the Wikidata team, to implement machine-readable data on Wikimedia Commons. We are now in a planning phase and aim to start development in Fall. We ramped up our work on UploadWizard, reviewed user feedback, collected metrics, fixed bugs and started code refactoring, with the help of contract engineer Neil Kandalgaonkar. We also kept working on technical debt and bug fixes for other multimedia tools, such as image scalers, GWToolset and TimedMediaHandler, with the help of Summer contractor Brian Wolff.

As product manager, Fabrice Florin helped plan our next steps, hosting a planning meeting and other discussions of our development goals, and led an extensive review of user feedback for Media Viewer and UploadWizard with new researcher Abbey Ripstra. Community liaison Keegan Peterzell introduced Media Viewer and responded to user comments throughout the product’s worldwide release. To learn more about our work, we invite you to join our discussions on the multimedia mailing list.

Engineering Community Team

Bug management

Apart from gruntwork (handling new tickets; prioritizing tickets; pinging on older tickets) and Andre’s main focus on Phabricator, Parent5446, Krinkle and Andre created several requested Bugzilla components, plus moved ‘MediaWiki skins’ to a Bugzilla product of their own. In Bugzilla’s codebase, Tony and TTO styled Bugzilla’s Alias field differently, Tony removed the padlock icons for https links in Bugzilla and cleaned up the codebase, and Odder fixed a small glitch in Bugzilla’s Weekly Summary and rendering of custom queries on the Bugzilla frontpage. Numerous older tickets with high priority were triaged on a bugday.

Phabricator migration

Apart from discussions on how to implement certain functionality and settings in Phabricator among team members and stakeholders, Mukunda implemented a MediaWiki OAuth provider in Phabricator (Gerrit changes: 1, 2; related ticket) and Chase created a Puppet module for Phabricator.

Mentorship programs

Google Summer of Code and FOSS Outreach Program for Women interns and mentors evaluated each other as part of the mid-term evaluations. Reports are available for all projects:

Technical communications

In addition to ongoing communications support for the engineering staff, Guillaume Paumier focused on information architecture of Wikimedia engineering activities. This notably involved reorganizing the Wikimedia Engineering portal (now linked from mediawiki.org’s sidebar) and creating a status dashboard that lists the status of all current activities hosted on mediawiki.org. The portal is now also cross-linked with the other main tech spaces (like Tech and Tech News) and team hubs.

Volunteer coordination and outreach

Volunteers and staff are beginning to add or express interest in topics for the 2014 Wikimania Hackathon in London. The WMUK team is working hard to finalize venue logistics so that we can schedule talks and sessions in specific rooms. Everything is on track for a successful (and very large!) Hackathon. Tech Talks held in June: How, What, Why of WikiFont on June 12 and A Few Python Tips on June 19. A new process has been set up for volunteers needing to sign an NDA in order to be granted special permissions in Wikimedia servers. On a similar note, we have started a project to implement a Trusted User Tool in Phabricator, in order to register editors of Wikimedia projects that have been granted special permissions after signing a community agreement.

Architecture and Requests for comment process

Developers had several meetings on IRC about architectural issues or Requests for comment:

Analytics

Wikimetrics

To support Editor Engagement Vital Signs, the team has implemented a new metric: Newly Registered User. There is also a new backup system to preserve users’ reports on cohorts, as well as the ability to tag cohorts. A number of bugs have been fixed, including fixing the first run of a recurrent report and preventing the creation of reports with invalid cohorts.

Data Processing

The team has now integrated Data Processing as part of its Development Process. New Stories/Features have been identified and tasked. Also, experimentation with Cloudera Hadoop 5 is complete and we are ready to upgrade the cluster in July.

Editor Engagement Vital Signs

The ability to run a metric over an entire project (wiki) in Wikimetrics drives us closer to producing data daily for our first Vital Sign. The team has also iterated on the design of the dashboard and navigation. We added a requirement from executives to have a default view when EEVS is loaded. This view would display metrics for the 7 largest Wikipedias.

EventLogging

We fixed a serious bug where cookie data was getting captured in the country column. Saved data was scrubbed of the unwanted information and some old and unused tables were dropped. The team also implemented Throughput Monitoring to help catch potential issues in EventLogging.

Research and Data

This month we refined the Editor Model – a proposal to model the main drivers of monthly active editors – and expanded the documentation of the corresponding metric definitions. We applied this model to teams designing editor engagement features (Growth, Mobile) and supported them in setting targets for the next fiscal year.

We analyzed the early impact of the tablet desktop-to-mobile switchover on traffic, edit volume, unique editors, and new editor activation.

We hosted the June 2014 edition of the research showcase with two presentations on the effect of early socialization strategies and on predictive modeling of editor retention.

We released wikiclass, a library for performing automated quality assessment of Wikipedia articles.

We released longitudinal data on the daily edit volume for all wikis with VisualEditor enabled, since the original rollout.

We continued work on an updated definition for PageViews.

Finally, we held our quarterly review (Q4-2014) and presented our goals for the next quarter (Q1-2015).

Wikidata

The Wikidata project is funded and executed by Wikimedia Deutschland.

The team worked on fixing bugs as well as a number of features. These include data access for Wikiquote, support for redirects and the monolingual text datatype, as well as further work on queries. Interface messages were reworked to make them easier to understand. First mockups of the new interface design have been published for comments. The entity suggester that a team of students worked on over the last months has been deployed. This makes it easier to add new statements by suggesting what kind of statements are missing on an item. Wikidata the Game has been extended by Magnus with two games, to add date of birth and date of death to people as well as to add missing images.

Future

The engineering management team continues to update the Deployments page weekly, providing up-to-date information on the upcoming deployments to Wikimedia sites, as well as the annual goals, listing ongoing and future Wikimedia engineering efforts.

This article was written collaboratively by Wikimedia engineers and managers. See revision history and associated status pages. A wiki version is also available.

by Guillaume Paumier at July 15, 2014 03:37 PM

Wikimedia UK

Wikipedia – is it fit for patient consumption?

John Byrne speaking at an edit-a-thon on the topic of women in science

This post was written by John Byrne, the Wikimedian in Residence at Cancer Research UK. It was first published here under a CC-BY-NC-SA licence.

In our increasingly internet-enabled world, answering a question or checking a fact can be just a few clicks, swipes or touches away.

In many cases these searches are likely to leave you looking at a Wikipedia page. And if that burning question relates to your health, the desire for information can be far more pressing.

In the case of any health concern it’s important to see your GP as a first port of call. But as more people turn to the web for information as well, how can you be sure that the articles you’re reading on Wikipedia, for example, are accurate and up to date?

This question reared its head in May as numerous media outlets covered US research published in The Journal of the American Osteopathic Association. The research set out to examine the reliability and accuracy of Wikipedia’s coverage of the “Top 10 most costly conditions in terms of public and private expenditure in the United States”.

These include cancer, and Wikipedia’s page on lung cancer came under scrutiny from the researchers.

While it’s hard to disagree with the overall take-home message of the stories – that people shouldn’t be relying solely on sources like Wikipedia to diagnose themselves (something Wikipedia itself is completely clear on) – the study leaves little room for suggestions on how Wikipedia could be improved for patients and the general public – something Cancer Research UK is actively involved in, as we’ll discuss below.

It also led to headlines claiming that “90% of Wikipedia’s medical entries are inaccurate”. Is this a fair representation of the research, and of Wikipedia?

Errors or ‘discordances’?

As the lead researcher on the new study, Professor Robert Hasty, from Campbell University in North Carolina, US, explained in an interview, the study was prompted by seeing young doctors looking things up on Wikipedia.

The use of the site by medical professionals has been the subject of a fair amount of research (e.g. see the summary on p12/13 of this PDF), though mostly looking to answer questions about how much they do it (answer: a lot) and whether they should do it (answer: not as a primary source), rather than why they do it.

In the new study, each of the “10 most costly conditions” the researchers looked at was matched to a relevant Wikipedia article, which was sent out to two randomly assigned junior doctors tasked with assessing the reliability of the content.

They were asked “to identify every assertion (ie, implication or statement of fact) in the Wikipedia article and to fact-check each assertion against a peer-reviewed source that was published or updated within the past 5 years.”

They found many “discordances” in the content, which they later referred to as “errors” in the conclusions of the research (so, unsurprisingly, this word became the focus of the media coverage).

This led them to conclude that “Health care professionals, trainees, and patients should use caution when using Wikipedia to answer questions regarding patient care” and “physicians and medical students who currently use Wikipedia as a medical reference should be discouraged from doing so”.

This was translated into the headlines we mentioned earlier, flagging the “90% inaccurate” figure.

But the design of the study has come in for unusually heavy criticism at the WikiProject Medicine talk page – where Wikipedia’s regular medical editors talk things over.

For example – to pick one out of many points the editors have discussed – the study doesn’t say where these “errors” are, meaning it’s very hard to check or change the articles.

For a general readership

To quote the physicist Freeman Dyson FRS: “Among my friends and acquaintances, everybody distrusts Wikipedia and everybody uses it…. The information that it contains is totally unreliable and surprisingly accurate.” – a useful distinction when looking at Wikipedia.

Wikipedia – and its volunteer editors – have always made it clear that  it does not offer medical advice, let alone represent a substitute for professional advice, nor is it a medical textbook.

The internal style manual for medical articles is emphatic that Wikipedia’s medical content is aimed at a general readership, and cautions against writing directed at either patients or medical professionals, as well as banning the inclusion of information such as pharmaceutical dosages.

In practice, however, many of the articles do contain technical terms, and this can make some of them difficult for the average member of the public to understand. Clearly, there’s room for improvement.

Wikipedian in residence

In 2011, Cancer Research UK approached Wikipedia to see if the two organisations could work together to improve the cancer-related content on the site. This led ultimately to my appointment as the charity’s Wikipedian in Residence. The role will run until mid-December 2014, and is funded by the Wellcome Trust.

Part of my role here will be to work with the existing medical editors on Wikipedia to improve our articles on cancer topics, in particular those on four harder-to-treat cancers where there has been little improvement in survival rates in recent decades. These are cancers of the lung, pancreas, brain and oesophagus, which Cancer Research UK is giving particular focus to as part of its new research strategy.

But I will also be addressing other cancer-related content, for example for the Medical Translation Project that translates articles between the over-200 different language versions of Wikipedia.

Cancer Research UK has access, through its own staff and its access to other researchers and clinicians, to tremendous amounts of expertise, both in terms of science and the communication of science, where they have teams trained and experienced in communicating with a wide range of distinct audiences – including through its flagship patient information content.

I’ll be exploring a number of approaches to bringing all this expertise to bear on Wikipedia’s cancer content. The very large annual nerd-fest conference Wikimania 2014 is in the Barbican in London this year, about a mile from the charity’s HQ. This gives a great opportunity to bring Cancer Research UK and many medical Wikipedians together face to face.

Another aspect of the role is that we are planning to conduct research into the experiences of a range of different types of consumers of Wikipedia’s cancer content. There has been very little formal qualitative research into the experiences of Wikipedia’s readers – we hope this project will begin to address this gap, as well as encouraging others to carry out similar projects.

It’s important that we all work hard to reduce unreliability and make the accuracy less of a surprise in Wikipedia’s cancer articles. If you are curious, or interested in helping in any way, please do get in touch below or on my Wikipedia Talk Page. It would be sad if today’s media reporting put medical professionals off engaging with Wikipedia – the site, and the public, need your help.

John

Reference

  • Hasty R.T., Garbalosa R.C., Barbato V.A., Valdes P.J., Powers D.W., Hernandez E., John J.S., Suciu G., Qureshi F. & Popa-Radu M. Wikipedia vs Peer-Reviewed Medical Literature for Information About the 10 Most Costly Medical Conditions. The Journal of the American Osteopathic Association, PMID:

by Richard Nevell at July 15, 2014 11:00 AM

Gerard Meijssen

#Wikidata - one million edits

Thanks to Magnus's tools, making a million edits is feasible. It has helped me that I have a plan. The plan is to gain functionality from the data that is included. Functionality that is available today, functionality that is not a mirage of what the future may hold.

The most important tool is Reasonator. It shows best what information exists for an item, and it includes up to 500 items that refer to it. It is so important because it provides me with instant gratification; you see things grow as they happen. The automagic is great; maps, timelines and the higher "classes" pop up when they become applicable.

Very important are Autolist [1] and Autolist2 [2]. They are tools that add loads and loads of statements, one at a time, for me. It is important to restrict updates to the subclass or instance they should operate on. For instance, when adding "female", the item must be "human".
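For the curious, the same safeguard can be sketched against the Wikibase API directly. The snippet below is only an illustration of the idea, not how Autolist actually works: it checks that an item is an instance of (P31) human (Q5) before a "sex or gender" (P21) statement would be added, using the wbgetclaims module. The item ID is hypothetical and edit-token handling is left out.

```php
<?php
// Illustrative sketch only: check "instance of (P31) = human (Q5)" before
// adding a gender statement. Not Autolist's actual code; error handling,
// edit tokens and rate limiting are omitted.

function isInstanceOfHuman( $itemId ) {
	$url = 'https://www.wikidata.org/w/api.php?action=wbgetclaims&format=json'
		. '&entity=' . urlencode( $itemId ) . '&property=P31';
	$data = json_decode( file_get_contents( $url ), true );

	if ( !isset( $data['claims']['P31'] ) ) {
		return false;
	}

	foreach ( $data['claims']['P31'] as $claim ) {
		if ( isset( $claim['mainsnak']['datavalue']['value']['numeric-id'] )
			&& (int)$claim['mainsnak']['datavalue']['value']['numeric-id'] === 5
		) {
			return true; // the item is an instance of human (Q5)
		}
	}

	return false;
}

$itemId = 'Q12345'; // hypothetical item harvested from a category

if ( isInstanceOfHuman( $itemId ) ) {
	// A real script would now POST action=wbcreateclaim with property=P21
	// and value {"entity-type":"item","numeric-id":6581072} (female),
	// together with a valid edit token.
	echo "$itemId is a human, so it is safe to add the statement.\n";
}
```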

One aim of Wikidata is to be able to have information available for info-boxes of a Wikipedia. To make this possible it is a requirement that for each article there is an item. Creator is the tool that can create these missing items.

Obviously, all the harvesting needs to be done again for all those new items.  Toolscript makes this possible. I just have to figure out how to do this.

To put things in perspective, with one million edits only the surface has been scratched. There are some 15,259,555 items to operate on. However, the effect is noticeable in the auto descriptions and in the Wikidata search results. There are fewer items that have nothing to describe them.
Thanks,
     GerardM

[1] Categories with human that show what Wikidata thinks should be in them. Currently 337 of them
[2] Females on the Chinese Wikipedia that are not known as such. There were 1873 of them...


by Gerard Meijssen (noreply@blogger.com) at July 15, 2014 09:46 AM

Joseph Reagle

FOMO's etymology

As a new notion takes root in the zeitgeist, one can find competing definitions circulating in popular culture and scholarly literature. This is especially so for what linguist Donna Gibbs (2006lc, p. 30) called cyberlanguage, with "its own brand of quirky logic, which evolves with unprecedented speed and variety and is heavily dependent on ingenuity and humor." One can see this evolution play out at Urban Dictionary (UD), a Web repository for (over eight million) definitions of contemporary popular culture, slang, and Internet memes. Submissions, which include a definition and optional examples of usage, can be made by anyone providing an email address; other contributors then vote upon whether a definition ought to be accepted. (One word can have multiple definitions; the entry for "Urban Dictionary" itself has hundreds (Lucy2005ud).) UD's earliest definition of FOMO as a type of fear is from 2005 and it simply expands the acronym and provides an example phrase "Jonny got the rep for being a fomo, but jake's a bigger one" (Justinas2005fmo). This example phrase is odd in that it describes something one is rather than something one feels; in this, it is much closer to an older meaning of FOMO as a "fake homosexual." In any case, a better definition (and the most popular one) was posted in 2006: "The fear that if you miss a party or event you will miss out on something great" (Beaqon2006fmo). This definition and point in time mark the ascendancy of FOMO in popular culture: many more definitions would appear at UD and elsewhere in the following years.

Beyond penning definitions, lexicographers also attempt to find the origins and early exemplars of a term. For instance, the august Oxford Dictionary (2014fmo) locates FOMO's origins in the "early 21st century." While there's no evidence of a single point of origin, I think we can be more precise than that. FOMO's usage, unsurprisingly, coincides with the launching of Facebook in 2004 and Twitter in 2006. For instance, Kathy Sierra, a popular tech blogger, wrote how Twitter fueled the fear in the year after its launch.

Ironically, services like Twitter are simultaneously leaving some people with a feeling of not being connected, by feeding the fear of not being in the loop. By elevating the importance of being "constantly updated," it amplifies the feeling of missing something if you're not checking Twitter (or Twittering) with enough frequency. (Sierra2007cpu)

(Apparently, at this point "tweeting" had not yet eclipsed "twittering.") In the same year, Lucy Jo Palladino (2007fyf) dedicated a section of her book on how to "defeat distraction and overload" to FOMO, though she focused on examples beyond social media, such as parents' anxiety that their children are falling behind. The earliest mention of the term on Usenet (the pre-Web fora of the Internet, which still muddles along) appears to be from 2008 (Bewdley2008hyg). By 2010, FOMO was being used and spoken of broadly and unambiguously tied to social media usage. By 2011, the phenomenon was something that others recognized they could take advantage of. A marketing report from JWT Intelligence ("converting cultural shifts into opportunities") recommended that "brands can focus on easing it, escalating it, making light of it or even turning it into a positive" (Miranda2011fm, p. 5, 17). Capping its seven year ascent, the notion was recognized by the Oxford dictionaries (OxfordWords2012bwa) as the "Anxiety that an exciting or interesting event may currently be happening elsewhere, often aroused by posts seen on a social media website."

In 2013, FOMO received its first scholarly attention from social psychologist Andrew Przybylski (2013meb, p. 841) and his colleagues. They defined it as "a pervasive apprehension that others might be having rewarding experiences from which one is absent, FOMO is characterized by the desire to stay continually connected with what others are doing." In this definition we see a recognition of an emotion (i.e., anxiety) and a characteristic behavior. Similarly, contemporary discussion of FOMO invokes multiple, often tangled, references to varied emotions (e.g., anxiety) and behaviors (e.g., compulsive checking). Hence, it is worthwhile to further understand what it is that people are speaking of when they lament a fear of missing out.

by Joseph Reagle at July 15, 2014 04:00 AM

Wikimedia DC

Test

Wikipedia Summer of Monuments logo

This post is a test of our new platform management system. This will be deleted shortly.

by James at July 15, 2014 03:59 AM

July 14, 2014

Jeroen De Dauw

Some fun with iterators

Sometimes you need to loop over a big pile of stuff and execute an action for each item. In the Wikibase software, this for instance occurs when we want to rebuild or refresh a part of our secondary persistence, or when we want to create an export of sorts.

Historically we’ve created CLI scripts that built and executed calls to the database to get the data. As we’ve been paying more attention to separating concerns, such scripts have evolved into fetching the data via some persistence service. That avoids binding to a specific implementation to some extent. However, it still leads to the script knowing about the particular persistence service. In other words, it might not know the database layout, or that MySQL is used, but it still thinks in terms of the signature of the interface. And it’s entirely possible you want to use the code with a different source of the thing being iterated over, one that is in a format the persistence interface is not suitable for.

All that the code doing the iteration and invocation of the task needs to know is that there is a collection of a particular type it can loop over. This is what the Iterator interface is for. If you have the iteration code use an Iterator, you can implement and test most of your code without having the fetching part in place. You can simply feed in an ArrayIterator. This also demonstrates that the script no longer knows whether the data is already there or whether (part of it) still needs to be retrieved.
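To make that concrete, here is a minimal sketch (with made-up names, not actual Wikibase code) of a task that only depends on the Iterator interface, so a plain ArrayIterator can stand in for the real, persistence-backed source in tests:

```php
<?php
// Minimal sketch: the task only knows it gets an Iterator, not where the
// data comes from. Names are illustrative, not real Wikibase classes.

function rebuildSecondaryData( Iterator $entityIds, callable $rebuildOne ) {
	foreach ( $entityIds as $entityId ) {
		$rebuildOne( $entityId );
	}
}

// In a test (or a quick one-off run) an ArrayIterator is all you need:
rebuildSecondaryData(
	new ArrayIterator( array( 'Q1', 'Q42', 'Q64' ) ),
	function ( $entityId ) {
		echo "Rebuilding data for $entityId\n";
	}
);

// In production the same function could be handed an iterator that lazily
// fetches ids from the persistence service, without changing the task at all.
```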


When iterating over a big or expensive to retrieve set of data, one often wants to apply batching. Having to create an iterator every time is annoying though, and putting the iteration logic together with the actual retrieval code is not very nice. After having done this a few times, I realized that part could be abstracted out, and created a tiny new library: BatchingIterator. You can read how to use it.
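As a rough illustration of the pattern the library abstracts away (this is not BatchingIterator's actual API, just the general shape), a batching iterator can implement PHP's Iterator interface and pull a new batch from a callback whenever the current one runs out:

```php
<?php
// Sketch of the batching pattern, not the real BatchingIterator API.
// $fetchBatch is assumed to take an offset and a limit and to return an
// array of items, or an empty array once the source is exhausted.

class NaiveBatchingIterator implements Iterator {
	private $fetchBatch;
	private $batchSize;
	private $batch = array();
	private $offset = 0;
	private $position = 0;

	public function __construct( callable $fetchBatch, $batchSize ) {
		$this->fetchBatch = $fetchBatch;
		$this->batchSize = $batchSize;
	}

	public function rewind() {
		$this->offset = 0;
		$this->position = 0;
		$this->loadNextBatch();
	}

	public function valid() {
		return !empty( $this->batch );
	}

	public function current() {
		return $this->batch[0];
	}

	public function key() {
		return $this->position;
	}

	public function next() {
		array_shift( $this->batch );
		$this->position++;

		// Fetch the next batch lazily, only once the current one is empty.
		if ( empty( $this->batch ) ) {
			$this->loadNextBatch();
		}
	}

	private function loadNextBatch() {
		$this->batch = call_user_func( $this->fetchBatch, $this->offset, $this->batchSize );
		$this->offset += count( $this->batch );
	}
}
```

Consumers then simply foreach over it, exactly as they would over the ArrayIterator in the earlier sketch; the batching becomes an implementation detail of the iterator.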

by Jeroen at July 14, 2014 11:06 PM

Wikimedia Foundation

Creating Safe Spaces

This morning I read an article entitled Ride like a girl. In it, the author describes how being a cyclist in a city is like being a woman: Welcome to being vulnerable to the people around you. Welcome to being the exception, not the rule. Welcome to not being in charge. The analogy may not be a perfect fit, but reading these words made me think of a tweet I favorited several weeks ago when #YesAllWomen was trending. A user who goes by the handle @Saradujour wrote: “If you don’t understand why safe spaces are important, the world is probably one big safe space to you.” As I continue interviewing women who edit Wikipedia and as I read through the latest threads on the Gendergap mailing list, I keep asking myself, “How can a community that values transparency create safe spaces? How can we talk about Wikipedia’s gender gap without alienating dissenting voices and potential allies?”

Ride like a girl?

Wikipedia’s gender gap has been widely publicized and documented both on and off Wiki (and on this blog since 1 February 2011). One of the reasons I was drawn to working on the gender gap as a research project was that, despite the generation of a great deal of conversation, there seem to be very few solutions. It is what Rittel and Webber would call a “wicked problem.” Even in the midst of the ongoing work of volunteers who spearhead and contribute to endeavors like WikiProject Women scientists, WikiWomen’s History Month, WikiProject Women’s sport and Meetup/ArtandFeminism (to name only a few), the gender gap is a wicked problem a lot of community members–even those dedicated to the topic–seem tired of discussing.

The Women and Wikipedia IEG project is designed to collect and then provide the Wikimedia community with aggregate qualitative and quantitative data that can be used to assess existing efforts to address the gender gap. This data may also be used to guide the design of future interventions or technology enhancements that seek to address the gap. The data may include but not be limited to:

  • Stories of active editors who self-identify as women;
  • Interviews with Wikipedians (including those who represent non-English communities) who have been planning and hosting editing events to address the gender gap;
  • Small focus groups with different genders who participate in events such as meet-ups, edit-a-thons, Wikimania, etc.;
  • Observations of co-located editing and mentoring events designed to address the gender gap–both those sponsored by Wikipedia and those not–such as meet-ups, workshops and edit-a-thons;
  • Participation in and observations of non co-located (e.g., online, virtual) editing and mentoring events designed to address the gender gap;
  • An online survey designed specifically with the gender gap in mind;
  • Longitudinal measures of the success of co-located and non co-located events (e.g., the ability to attract and retain new editors who self-identify as women; lasting content created by new editors who self-identify as women; user contribution tracking);
  • Content analysis of internal documents (e.g., project pages, talk pages, gender gap mailing list archives, etc.) regarding the gender gap and efforts to address it.

How can a community that values transparency create safe spaces?

This past month I’ve been watching, reading and thinking. I’ve also been revisiting my goals. Now, the first goal I’d like to accomplish is to help reinvigorate the gender gap discussion by creating a central place where the international Wikipedia community can document all of the terrific ideas that have been shared, conversations that have taken place and work that has been done to address the gap. Currently, the conversations are, at times, disparate and dispersed. And, sometimes, they aren’t safe. Often the stakeholders–like cyclists and motorists–have such different goals and values that conflict is inevitable. However, as studies[1] have shown, conflict can be productive and collaborative when differing voices are respected, when policies are thoughtfully constructed and when power is shared.

In the next few weeks, I’ll be updating the Wikimedia Gender gap page with sources I’ve gathered during my literature review and with links to existing projects and conversations. I’ll also continue to recruit participants for interviews and focus groups. If you’d like to participate in any of this work, please let me know. Creating safe spaces is a truly collaborative effort.

Amanda Menking, 2014 Individual Engagement Grantee

  1. Travis Kriplean, Ivan Beschastnikh, David W. McDonald, and Scott A. Golder. 2007. Community, consensus, coercion, control: cs*w or how policy mediates mass participation. In Proceedings of the 2007 international ACM conference on Supporting group work (GROUP ’07). ACM, New York, NY, USA, 167-176. DOI=10.1145/1316624.1316648 http://doi.acm.org/10.1145/1316624.1316648

by Amanda Menking at July 14, 2014 06:19 PM

Brion Vibber

Testing ogv.js in MediaWiki

After many weekends of development on the ogv.js JavaScript Ogg Vorbis/Theora player I’ve started work on embedding it as a player into MediaWiki’s TimedMediaHandler extension.

The JavaScript version is functional in Safari and IE 10/11, though there’s some work yet to be done… See a live wiki at ogvjs-testing.wmflabs.org or the in-progress patch set.

Meanwhile, stay tuned during the week for some demos of the soon-to-be-majorly-updated Wikipedia iOS app!

by brion at July 14, 2014 03:55 AM

Tech News

Tech News issue #29, 2014 (July 14, 2014)

2014, week 29 (Monday 14 July 2014)

July 14, 2014 12:00 AM

July 13, 2014

Tony Thomas

A simple test to detect a permanent bounce

There are many reasons an email can bounce back after being rejected, and discussing them all is out of scope for this post; broadly, bounces are categorized into permanent (hard) and temporary (soft) ones. Jotting down some reasons for a temporary bounce: a server is unavailable or down, a network failure, the server […]
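
The excerpt is cut off here, so the author’s actual test is not shown. As a hedged illustration of one common approach (not the code from the post), the status code in a delivery status notification already distinguishes the two cases: enhanced status codes starting with 5 (and plain 5xx replies) indicate permanent failures, while those starting with 4 (or 4xx replies) are temporary. A minimal Python sketch:

```python
import re

def classify_bounce(dsn_text: str) -> str:
    """Classify a bounce as permanent (hard) or temporary (soft).

    Hypothetical helper: looks for an RFC 3463 enhanced status code
    (e.g. 5.1.1) first, then falls back to a plain SMTP reply code.
    """
    enhanced = re.search(r"\b([45])\.\d{1,3}\.\d{1,3}\b", dsn_text)
    if enhanced:
        return "permanent" if enhanced.group(1) == "5" else "temporary"
    reply = re.search(r"\b([45])\d{2}\b", dsn_text)
    if reply:
        return "permanent" if reply.group(1) == "5" else "temporary"
    return "unknown"

print(classify_bounce("Status: 5.1.1 (user unknown)"))    # permanent
print(classify_bounce("451 4.4.1 connection timed out"))  # temporary
```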

by Tony Thomas at July 13, 2014 06:48 PM

Writing a Job queue to deal with load when POST-ing from exim to MediaWiki API

Recently, Tim Landscheidt from Wikimedia commented on my earlier post that I should use a job queue to handle the load on the bounce-handling API. I talked with Legoktm about this, and he said it was a great idea, as there is a chance of multiple email bounces reaching the API simultaneously. I […]
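
The excerpt ends here. As a generic sketch of the job-queue pattern being described (not the extension’s actual code, which lives inside MediaWiki), the idea is that the API handler only enqueues the bounce and returns immediately, while a worker processes the queued bounces at its own pace, so a burst of simultaneous bounces cannot overload the handler:

```python
import queue
import threading

# Hypothetical names; this is the general producer/worker pattern,
# not the MediaWiki extension's code.
bounce_jobs: "queue.Queue[dict]" = queue.Queue()

def api_receive_bounce(raw_email: str) -> None:
    """Called for every POST from the mail server; cheap and fast."""
    bounce_jobs.put({"raw": raw_email})

def process_bounce(raw_email: str) -> None:
    # The expensive part: parse the bounce, look up the failing address, etc.
    print("processing bounce:", raw_email[:40])

def bounce_worker() -> None:
    while True:
        job = bounce_jobs.get()      # blocks until a job is available
        process_bounce(job["raw"])
        bounce_jobs.task_done()

threading.Thread(target=bounce_worker, daemon=True).start()
api_receive_bounce("554 5.7.1 <someone@example.org>: recipient rejected")
bounce_jobs.join()                   # wait for the demo job to finish
```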

by Tony Thomas at July 13, 2014 03:09 PM

Gerard Meijssen

#Wikidata guided tours

#Wikidata has guided tours. They are nice. They help newbies understand what it is all about, and yet…

I find that knowing too much does not help. There are all those details that I want people to know about… the #Babel effect on the number of languages with labels shown, for instance.

In the guided tour, descriptions are explained… I HATE descriptions; they are vastly inferior to auto descriptions. Check out the screenshot: no description in sight, but these auto descriptions do translate into all other languages.
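
For readers unfamiliar with the term: an auto description is generated from an item’s statements rather than written by hand, which is why it works in every language that has labels. Purely as an illustration (nothing like the Reasonator’s real, much richer logic), here is a small Python sketch that builds a crude “instance of …” description from the Wikidata API:

```python
import requests

API = "https://www.wikidata.org/w/api.php"

def get_entities(ids, props, lang="en"):
    """Fetch entity data via the Wikidata API (action=wbgetentities)."""
    r = requests.get(API, params={
        "action": "wbgetentities",
        "ids": "|".join(ids),
        "props": props,
        "languages": lang,
        "format": "json",
    })
    r.raise_for_status()
    return r.json()["entities"]

def auto_description(item_id, lang="en"):
    """A crude auto description: 'instance of <label of the P31 value>'."""
    item = get_entities([item_id], "claims", lang)[item_id]
    p31 = item.get("claims", {}).get("P31")
    if not p31:
        return None
    target = p31[0]["mainsnak"]["datavalue"]["value"]["id"]
    label = get_entities([target], "labels", lang)[target]["labels"][lang]["value"]
    return "instance of " + label

print(auto_description("Q42"))  # e.g. "instance of human"
```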

Really, it pains me to find fault with these much-needed guided tours. The truth is that I would not do it differently.
Thanks,
    GerardM




by Gerard Meijssen (noreply@blogger.com) at July 13, 2014 11:12 AM

#Wikimetrics - What is in it for me?

When you are into #statistics, particularly Wikimedia project statistics, Wikimetrics is a big thing. It is an open environment where you can pore over the collected data to your heart’s content.

To make it even better, there will be training in three sessions introducing the tools and necessary skills.  Sweet.

However.. Is all the data in there?

There is a long-standing request for information that shows where Wikipedia fails to deliver: what are our readers looking for that they cannot find? When such information is collected, it will be easy enough to use Wikimetrics for it as well. After many years, the people who could know, the WMF statisticians, have not been able to say one way or another.

The official statistics for Wikidata do not include page reads at all. The motivation given at the time was that nobody is using Wikidata. Maybe… However, projects have started using Wikidata in templates; there are even categories for such templates. Tools external to Wikidata, like the Reasonator, have their own statistics, so it may be interesting to know how often which tools access Wikidata. For the Wikidata crowd it is nice to know what impact their work has.

The best thing about Wikimetrics is that it exists. It is wonderful that it gets support, and even though more data could be added, it is great to see how the Wikimedia Foundation is opening up its data for further perusal by all comers.
Thanks,
     GerardM

by Gerard Meijssen (noreply@blogger.com) at July 13, 2014 10:14 AM

July 12, 2014

Jeroen De Dauw

Component design

This week I gave a presentation titled “Component design: Getting cohesion and coupling right” at Wikimedia Deutschland.

Components are a level of organization, in between classes and layers. They are an important mechanism in avoiding monolithic designs and big balls of mud. While everyone can recognize a component when they see one, many people are unaware of the basic principles that guide good component design.

This presentation is aimed at developers. It is suitable both for people new to the field and those with many years of experience.

The topics covered include:

* What is a component?
* Which things go together?
* How do components relate to each other?
* How are common problems avoided?

You can view the slides, or look at the source.
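
Purely as an illustration of the cohesion/coupling theme (this is not taken from the slides, and all names are invented), a tiny Python sketch: a component exposes one narrow interface and keeps its implementation details to itself, so other components depend only on that boundary.

```python
from abc import ABC, abstractmethod

class TweetRepository(ABC):
    """The component's public boundary: the only thing outsiders depend on."""
    @abstractmethod
    def latest(self, limit: int) -> list:
        ...

class MySqlTweetRepository(TweetRepository):
    """Internal to the component; other components never name this class."""
    def latest(self, limit: int) -> list:
        return ["tweets would really come from the database"][:limit]

class InMemoryTweetRepository(TweetRepository):
    """Handy for tests: swap implementations without touching callers."""
    def __init__(self, tweets: list) -> None:
        self._tweets = tweets

    def latest(self, limit: int) -> list:
        return self._tweets[:limit]

def newest_tweet(repo: TweetRepository) -> str:
    # Depends only on the interface, so coupling between components stays low.
    return repo.latest(1)[0]

print(newest_tweet(InMemoryTweetRepository(["hello", "world"])))
```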

by Jeroen at July 12, 2014 10:45 PM

Wikisorcery

Fafiation (n.)

I’ve been forced away from it all recently, with little editing on any wiki, missing a few wikimeets and no blogging. There was no one cause, just lots of little things that started to take up more time than usual, leading up to the most random of all: my chair breaking (it seems trivial but it’s very hard to type, or even comfortably use a computer, without it). I’ll have a new chair soon, so perhaps I’ll be able to dive back into things shortly.

One thing I did find, however, was that I had time during lunch breaks at work to make small edits on Wiktionary. I’ve defined a word or two in the past, mostly after checking unusual words on Wikisource, but this ironically turned out to be my biggest effort on the project.

It can be quite quick and easy to do, although I fear it’s developing into yet another personal project (or several). Spinning out of my interest in pulp magazines, early fandom and related media, I’ve been adding fanspeak terms of the era. For example:

 

fafiation (plural fafiations)

1. (dated, fandom slang) The act of fafiating; exiting involvement in fandom due to other obligations.

 

I own a dictionary of science fiction and SF fandom words, Brave New Words by Jeff Prucher (2007, Oxford University Press, ISBN 978-0-19-538706-3, FYI), which makes this both a touch easier and a touch more verifiable. Not to mention the other sources I’ve found over time on the internet, like a digital transcript of the 1944 Fancyclopedia that arguably started all of this, and many transcribed fanzines of yesteryear.

I expect I’ll find more citations as I work on transcribing more pulp magazines. I think I’ll continue adding to Wiktionary even as I’m getting back on top of everything else.


by wikisorcery at July 12, 2014 01:48 PM

Gerard Meijssen

#Wikidata - Ahmed Sheikh Jama, Minister of Information of #Puntland

Mr Jama died on July 9th in #Garoowe. According to his article, Garoowe is in Somalia; according to the article about Garoowe, it is in Puntland.

Puntland is a breakaway region in the north of Somalia, and Mr Jama was its Minister of Information.

Puntland has all the trappings of a country, but it is not recognised as one. It is one of those subjects that could do with a lot of TLC so that people may know about it.
Thanks,
      GerardM

by Gerard Meijssen (noreply@blogger.com) at July 12, 2014 10:22 AM

July 11, 2014

Wikimedia Tech Blog

Digging for Data: How to Research Beyond Wikimetrics

The next virtual meet-up will introduce research tools. Join!

For Learning & Evaluation, Wikimetrics is a powerful tool for pulling data about wiki project user cohorts, such as edit counts, pages created and bytes added or removed. However, you may still have a variety of other questions, for instance:

How many members of WikiProject Medicine have edited a medicine-related article in the past three months?
How many new editors have played The Wikipedia Adventure?
What are the most-viewed and most-edited articles about Women Scientists?

Questions like these and many others regarding the content of Wikimedia projects and the activities of editors and readers can be answered using tools developed by Wikimedians all over the world. These gadgets, based on publicly available data, rely on databases and Application Programming Interfaces (APIs). They are maintained by volunteers and staff within our movement.

On July 16, Jonathan Morgan, research strategist for the Learning and Evaluation team and wiki-research veteran, will begin a three-part series exploring some of the different routes to accessing Wikimedia data. Building on several recent workshops, including the Wiki Research Hackathon and a series of Community Data Science Workshops developed at the University of Washington, Beyond Wikimetrics will guide participants through expanding their wiki-research capabilities by accessing data directly through these tools.

Over the course of three virtual meet-ups, participants will:

  • Learn the basics of SQL, the language used to pull data from the Wikimedia MySQL databases.
  • Create a Wikimedia Labs account.
  • Learn about features and limitations of community data resources and tools.
  • Access data resources directly through Wikimedia APIs (a minimal example follows this list).
  • Gather data from various Wikimedia APIs.
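
As a small taste of what the sessions cover (an illustration only, not material from the webinar), the sketch referenced in the list above pulls total edit counts for a handful of users straight from the public MediaWiki API:

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def edit_counts(usernames):
    """Fetch total edit counts for a small cohort via action=query&list=users."""
    r = requests.get(API, params={
        "action": "query",
        "list": "users",
        "ususers": "|".join(usernames),
        "usprop": "editcount",
        "format": "json",
    })
    r.raise_for_status()
    return {u["name"]: u.get("editcount", 0) for u in r.json()["query"]["users"]}

# Hypothetical cohort; replace with the usernames you actually care about.
print(edit_counts(["Jimbo Wales", "Koavf"]))
```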

Whether you recently received an Individual Engagement Grant, coordinated programs for a chapter or user group, or just launched a new WikiProject, finding out how your initiative evolves might give you key information for success. If you are a Wikimedia program or project leader who wants to evaluate efforts and impact, a Wikimedian researcher with little or no previous programming experience who needs to work with data, or are simply curious about how to explore data from Wikimedia pages, then these webinars are for you!

Understand your way through data.

These online sessions are a great way to gain technical skills, both in quantitative research and programming basics. The sessions are set to allow participants to follow along with the host, so if you have some specific data questions you want to explore, be sure to have those ready for your personal exploration. No programming skills are needed!

Has this blog post raised new questions? Do you have topics in mind you would like to discuss? Share them on the event page! Join us for the first Beyond Wikimetrics meet-up on Wednesday, July 16, at 3 pm UTC. Sign up through the PE&D Google+ page and stay tuned to our News page for links to event recordings and dates for sessions in August and September!

For more info please visit:

María Cruz, Community Coordinator of Program Evaluation & Design

by María Cruz at July 11, 2014 06:12 PM