We have recently received several questions about the Wikimedia Foundation's and Wikipedia's affiliation with WT:Social. The newly launched WT:Social is related to WikiTribune, a venture independently started by Wikipedia founder Jimmy Wales.

Wikipedia and the Wikimedia Foundation are separate and independent from WT:Social. We have no connection to the social networking site.

The Wikimedia Foundation hosts and runs 11 free knowledge projects for anyone to learn from and contribute to, including Wikipedia; Wikimedia Commons, a collection of freely licensed images, videos, and audio files; Wikidata, an open, structured knowledge database; and more.

Jimmy Wales is the founder of Wikipedia and continues to serve on the Wikimedia Foundation’s Board of Trustees. He has started several other businesses and projects since founding Wikipedia, including WikiTribune, a for-profit collaborative news platform. Most recently, he relaunched WikiTribune as WT:Social, a paid social networking site based on the collaborative WikiTribune model. You can read more about WT:Social on the WikiTribune website. These projects are not overseen by, or affiliated with, Wikipedia or the Wikimedia Foundation, which runs strictly nonprofit free knowledge projects.

The word “wiki” refers to a website built using collaborative editing software. Hundreds of organizations and projects with no affiliation with Wikipedia or the Wikimedia Foundation also use the term, including wikiHow, Wikileaks, and WikiEducator.

Read more about how the Wikimedia Foundation supports Wikipedia and other Wikimedia projects.

Your own Wikidata Query Service, with no limits (part 1)

16:57, Tuesday, 19 November 2019 UTC

The Wikidata Query Service allows anyone to use SPARQL to query the continuously evolving data in the Wikidata project, which currently stands at nearly 65 million data items (concepts) and over 7,000 properties, translating to roughly 8.4 billion triples.
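As a rough sanity check on those numbers (a back-of-the-envelope sketch, not official statistics):

```python
# Back-of-the-envelope check of the figures quoted above: ~65 million
# items and ~8.4 billion triples implies on the order of 130 triples
# per item (labels, descriptions, sitelinks and statements all count).
ITEMS = 65_000_000           # Wikidata items (concepts)
TRIPLES = 8_400_000_000      # total triples in the query service

avg_triples_per_item = TRIPLES / ITEMS
print(f"average triples per item: {avg_triples_per_item:.0f}")  # ≈ 129
```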

Screenshot of the Wikidata Query Service home page including the example query which returns all Cats on Wikidata.

You can find a great write-up introducing SPARQL, Wikidata, the query service and what it can do here, but this post assumes that you already know all of that.

Guide

Here we will focus on creating a copy of the query service using data from one of the regular TTL data dumps and the query service Docker image provided by the wikibase-docker git repo, which is maintained by WMDE.

Somewhere to work

Firstly we need a machine to hold the data and do the necessary processing. This blog post will use an “n1-highmem-16” (16 vCPUs, 104 GB memory) virtual machine on the Google Cloud Platform with 3 local SSDs combined in a RAID 0 array.

This should provide enough fast storage for the raw TTL data, the munged TTL files (where extra triples are added), and the journal (JNL) file that the Blazegraph query service uses to store its data.

This entire guide will work on any instance with more than ~4 GB of memory and adequate disk space of any speed, but for me this setup seemed to make the entire process run fastest.

If you are using the cloud console UI to create an instance then you can use the following options:

  • Select Machine type n1-highmem-16 (16 vCPU, 104 GB memory)
  • Boot disk: Google Drawfork Debian GNU/Linux 9
  • Firewall (optional): Allow HTTP and HTTPS traffic if you want to access your query service copy externally
  • Add disk: Local SSD scratch disk, NVMe (select 3 of them)

If you are creating the instance via the command line your command will look something like the one below. All other defaults should be fine as long as the user you are using can create instances, and you are in the correct project and region.

gcloud compute instances create wikidata-query-1 --machine-type=n1-highmem-16  --local-ssd=interface=NVME --local-ssd=interface=NVME --local-ssd=interface=NVME

Initial setup

Setting up my SSDs in RAID

As explained above, I will be using some SSDs that need to be set up in a RAID array, so I’ll start with that, following the docs provided by GCE. If you have no need for RAID then skip this step.

sudo apt-get update
sudo apt-get install mdadm --no-install-recommends
sudo mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3
sudo mkfs.ext4 -F /dev/md0
sudo mkdir -p /mnt/disks/ssddata
sudo mount /dev/md0 /mnt/disks/ssddata
sudo chmod a+w /mnt/disks/ssddata

Packages & User

Docker is needed to run the wdqs Docker image, so follow the instructions and install it.

This process involves some long-running scripts, so it is best to install tmux so that we don’t lose them if the session disconnects.

sudo apt-get install tmux

Now let’s create a user in the docker group to run the remaining commands as. We will also use this user’s home directory for file storage.

sudo adduser sparql
sudo usermod -aG docker sparql
sudo su sparql

Then start a tmux session with the following command

tmux

And download a copy of the wdqs docker image from docker hub.

docker pull wikibase/wdqs:0.3.6

Here we use version 0.3.6, but this guide should work just the same for future versions (though maybe not for 0.4.x+ when that comes out).

Download the TTL dump

The main location to find the Wikidata dumps is https://dumps.wikimedia.org, partially buried in the wikidatawiki/entities/ directory. Unfortunately, downloads from there are speed-restricted, so you are better off using one of the mirrors.

I found that the your.org mirror was the fastest from the us-east4-c GCE region. To download, simply run the following and wait. (At the end of 2018 this was a 47 GB download.)

Firstly make sure we are in the directory which is backed by our local SSD storage which was mounted above.

cd /mnt/disks/ssddata

And download the latest TTL dump file which can take around 30 minutes to an hour.

wget http://dumps.wikimedia.your.org/wikidatawiki/entities/latest-all.ttl.gz

Munge the data

Munging the data involves running a script which is packaged with the query service over the TTL dump that we have just downloaded. We will use Docker and the wdqs docker image to do this, starting a new container running bash.

docker run --entrypoint=/bin/bash -it --rm -v /mnt/disks/ssddata:/stuff wikibase/wdqs:0.3.6

Then run the munge script within the container’s bash shell. This will take around 20 hours and create many files.

./munge.sh -c 50000 -f /stuff/latest-all.ttl.gz -d /stuff/mungeOut

The -c option was introduced for use in this blog post and allows the chunk size to be selected. If the chunk size is too big you may run into import issues. 50000 is half of the default.
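Once the script finishes you may want to confirm that the chunks all arrived. A minimal sketch of such a sanity check follows; note the *.ttl.gz pattern is an assumption, so check your mungeOut directory for the actual file names your wdqs version produces.

```python
# Sanity-check the munge output by counting chunk files and their total
# size. The "*.ttl.gz" glob pattern is an assumption about how the
# munge script names its gzipped output chunks.
from pathlib import Path

def summarize_chunks(munge_dir):
    """Return (file_count, total_bytes) for gzipped TTL chunks in munge_dir."""
    files = sorted(Path(munge_dir).glob("*.ttl.gz"))
    total_bytes = sum(f.stat().st_size for f in files)
    return len(files), total_bytes

# Example: summarize_chunks("/mnt/disks/ssddata/mungeOut")
```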

Once the munge has completed you can leave the container that we started and return to the virtual machine.

exit

Run the service

In order to populate the service we first need to run it using the command below. This mounts the directory containing the munged data, as well as the directory where the service stores its JNL file. It also exposes the service on port 9999, which will be writable, so if you don’t want other people to access it, check your firewall rules. Don’t worry if you don’t have a dockerData directory; it will be created when you run this command.

docker run --entrypoint=/runBlazegraph.sh -d \
-v /mnt/disks/ssddata/dockerData:/wdqs/data \
-v /mnt/disks/ssddata:/mnt/disks/ssddata \
-e HEAP_SIZE="128g" \
-p 9999:9999 wikibase/wdqs:0.3.6

You can verify that the service is running with curl.

curl localhost:9999/bigdata/namespace/wdq/sparql
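The same check can be scripted as a readiness poll; a minimal sketch using only the standard library (host, port and path are taken from the docker run command above, so adjust them if you changed the mapping):

```python
# Poll the SPARQL endpoint until it answers, or give up after a timeout.
# The URL path matches the curl check above; host and port come from the
# docker run command earlier in this guide.
import time
import urllib.error
import urllib.request

def wait_for_service(url, timeout=60, interval=2):
    """Return True once `url` answers with an HTTP status < 500, else False."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval) as resp:
                if resp.status < 500:
                    return True
        except urllib.error.HTTPError as err:
            if err.code < 500:  # e.g. 400 still means the service is up
                return True
        except OSError:
            pass  # connection refused / timed out: not up yet
        time.sleep(interval)
    return False

# Example: wait_for_service("http://localhost:9999/bigdata/namespace/wdq/sparql")
```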

Load the data

Using the name or ID of the container that is now running, which you can find by running “docker ps”, start a new bash shell alongside the service. (Your container will have a different auto-generated name than the “friendly_joliot” used below.)

docker exec -it friendly_joliot bash

Now that we are in the container running our query service we can load the data using the loadData script and the previously munged files.

/wdqs/loadData.sh -n wdq -d /mnt/disks/ssddata/mungeOut

Once the data appears to be loaded, you can try out a shorter version of the cats query from the service examples.

curl localhost:9999/bigdata/namespace/wdq/sparql?query=%23Cats%0ASELECT%20%3Fitem%20%3FitemLabel%20%0AWHERE%20%0A%7B%0A%20%20%3Fitem%20wdt%3AP31%20wd%3AQ146.%0A%20%20SERVICE%20wikibase%3Alabel%20%7B%20bd%3AserviceParam%20wikibase%3Alanguage%20%22en%2Cen%22.%20%7D%0A%7D
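The long blob in that URL is just the cats query percent-encoded. As a sketch of how to build such a request URL from readable SPARQL with the standard library (the endpoint is the local service started above, and the label-language parameter is simplified here):

```python
# Build a GET URL for the query service from readable SPARQL, instead of
# hand-encoding it. The endpoint is the local service started above.
from urllib.parse import urlencode

ENDPOINT = "http://localhost:9999/bigdata/namespace/wdq/sparql"

CATS_QUERY = """#Cats
SELECT ?item ?itemLabel
WHERE
{
  ?item wdt:P31 wd:Q146.
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}"""

def build_query_url(endpoint, sparql):
    """Percent-encode a SPARQL query as the `query` GET parameter."""
    return endpoint + "?" + urlencode({"query": sparql})

print(build_query_url(ENDPOINT, CATS_QUERY))
```

Passing the printed URL to curl (or opening it in a browser) sends the same kind of request as the encoded example above.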

Timings

Loading data from a totally fresh TTL dump into a blank query service is currently not a quick task. In production (wikidata.org) it takes roughly a week, and I had a similar experience while trying to streamline the process as best I could on GCE.

For a dump taken at the end of 2018 the timings for each stage were as follows:

  • Data dump download: 2 hours
  • Data Munge: 20 hours
  • Data load: 4.5 days
  • Total time: ~5.5 days

Various parts of the process led me to believe that this could be done faster, as CPU usage was fairly low throughout and not all memory was utilized. The loading of the data into Blazegraph was by far the slowest step, but digging into this would require someone more familiar with the Blazegraph internals.

Blog post series

This blog post, and future blog posts on the topic of running a Wikidata query service on Google cloud, are supported by Google and written by WMDE.

Part 2 will use the docker images and a pre generated Blazegraph journal file to deploy a fully loaded Wikidata query service in a matter of minutes.

The post Your own Wikidata Query Service, with no limits (part 1) appeared first on Addshore.

Tech News issue #47, 2019 (November 18, 2019)

00:00, Monday, 18 November 2019 UTC
This document has a planned publication deadline (link leads to timeanddate.com).

weeklyOSM 486

21:47, Sunday, 17 November 2019 UTC

05/11/2019-11/11/2019

lead picture

Editing rate of over 100 million edits per month 1 | © Simon Poole – Map data © OpenStreetMap contributors

Mapping

  • Steve Coast suggests the creation of a SIG (Special Interest Group) on Addressing in OSM. He points to a prototype address QA tool. In the replies various people point out that OSM already has a rich range of tools to assist with address mapping.
  • Jan Michel suggests we should consider access rules and specific amenity values for small electric vehicles like bicycles and scooters as they are becoming increasingly common. He has created a proposal and asked for comments and suggestions.
  • In a tagging discussion about extremely large flood control features, John Willis shows an impressive example of Japanese flood management.
  • Neena2309 blogs about the MapRoulette challenge that aims to improve the OSM coverage of health facilities in India.
  • Kreuzschnabel luckily spotted (de) (automatic translation) a mass displacement of London roads, and quickly reverted the change. See various comments in the Q&A forum. Among other suggestions, Richard Fairhurst suspected it might be an OSM file being opened which has the same OSM IDs as the nodes in London and Oslo.

Community

  • The iD developers temporarily deactivated commenting on issues and pull requests, and the creation of new issues for everyone except collaborators. A few hours before, users complained again about iD transferring user data, without consent, to Facebook by loading brand logos. It now appears possible for non-collaborators to raise issues again.
  • Frederik Ramm has had it with the behaviour of the iD development team. He thinks the relationship with OSM, which he calls “abusive”, should come to an end. According to his post, the iD team has shown that decisions to implement features are not driven by what the community wants or needs but by the editor developers alone. Therefore, he recommends replacing the default osm.org editor iD with a fork using the version from Frédéric Rodrigo. A heated discussion ensued.
  • Simon Poole noted a key milestone for OSM: a sustained editing rate of over 100 million edits per month. This is the highlight of his regular update of OSM editing statistics on the wiki.
  • TZorn reports (de) (automatic translation) about his user experience with JOSM on a Microsoft Surface Pro in tablet mode.
  • The recently launched OpenStreetMap Calendar simplifies publishing of OSM-related events, allows downloading events to your own calendar and gives people the possibility to indicate their participation without having to use the wiki. In addition, RSS feeds can be used to stay updated about events in your area.
  • Valeriy Trubin continues a series of interviews with Russian mappers. Sergei Sinitsyn (ru) (automatic translation) told how mapping in OSM is connected with discovering interesting life stories and Artem Svetlov (ru) (automatic translation) talks about shooting Mapillary panoramas and the use of OSM data.

Imports

  • Guillaume Rischard announced the import of administrative boundaries and addresses in Kosovo, which local mappers have received from authorities there.

OpenStreetMap Foundation

  • Frederik Ramm, Treasurer of the OSM Foundation, announced that the board will evaluate ways of working with Dorothea full-time, as her help on administrative tasks as well as on the State of the Map conference organisation is extremely helpful. Dorothea is the only paid person working for the OpenStreetMap Foundation.
  • The next OSMF board meeting will take place on 20 November 2019 at 19:00 London time. Everyone is invited to join the public meeting in Mumble. Some interesting items have been put on the agenda.
  • Frederik Ramm recommends that we consider “Newbs” (people who haven’t served on the board before) when voting in the upcoming OpenStreetMap Foundation board election.
  • The State of the Map Working Group is looking for volunteer supporters. The “job description” has a long list of ways that people can help which should include something for everyone.

Events

  • Ilya Zverev invites everyone to FOSDEM 2020. It is one of the major events for free and open source software, and it is on 1 and 2 February 2020 in Brussels, Belgium.
  • Pedja announced the first OSM Meetup in Serbia and hopes to attract a decent number of participants. The meeting will take place on 7 December 2019 in Belgrade.

Humanitarian OSM

  • Imperial College London reported on the help its students have provided to vulnerable countries by participating in a mapathon hosted by the Friends of Médecins Sans Frontières. The area of focus was Bangassou in the Central African Republic, which is a blank area on Google Maps…

Education

  • A call is being made for articles for a special issue of the ISPRS International Journal of Geo-Information. The special issue is primarily aimed at collecting papers that extend the research works presented in the Academic Track at SotM 2019. However, other original submissions aligned with the area of research are also highly welcome. The deadline for submissions is 31 March 2020.

Maps

  • Frank Schmirler created “Lights of the Sea” (“mapa de balizas” in Spanish), adding the option to select German, English or Spanish as the menu language. These formidable maps are really worth a look; feel encouraged to map beacons to illuminate the coasts of the world that still appear dark.

switch2OSM

  • The report of a Tesla owner about the improved route guidance of his car’s Smart Summon application after adding parking lanes on OSM caused further discussions at Hacker News with mixed responses.

Open Data

  • The High Court in London has passed judgement on an important case with implications for geodata released under open licences, including INSPIRE. The start-up 77m tried to use open data from various sources to build “Matrix”, their own product of address polygons, but were then sued by the British Ordnance Survey (OSGB). 77m lost the case, but neither OSGB nor the cadastral agency (Land Registry) escaped censure by the judge. The full implications are not yet clear, but Jeni Tennison (CEO of the Open Data Institute) has tweeted some first thoughts, and open data commentator Owen Boswarva has a more detailed blog post. Useful OSM background on some of the data involved can be read on the blogs of Chris Hill and SK53.

Software

  • Simon Poole, maintainer of Vespucci, informed the community about potential crashes of the mobile editor following a change in the data format provided by the Osmose API. This was resolved at the Osmose end (they undid their breaking change).

Programming

  • Gravitystorm went to great lengths to figure out if it would be possible to run openstreetmap.org on Heroku (a cloud computing service). This may be interesting for people who want to experiment with the setup, as it can be accomplished almost for free.
  • akashihi wants OSRM to penalise routes with embedded_rails=* for cyclists. The merits of the suggestion are discussed in the pull request.

Releases

  • We have overlooked recent developments of OpenLayers. Recent releases contain some interesting features such as the ability to compose layers with different renderer types, a number of vector tile rendering improvements, a lower memory footprint overall and many more. We encourage users of versions below the recently released OpenLayers v6.1.1 to check the release history.

Did you know …

  • Digital Egypt, which was founded in 2011 and specializes in GIS data and map production, will start a project to develop OSM Maps of Egypt and verify all data entered? (For example, Badr City.) The aim of the project is to increase the accuracy of the maps and add missing road names and geographical codes. OSM standards and the Egyptian Wiki Page will be strictly adhered to and they will work closely with the OSM community.

Other “geo” things

  • New York has a new attraction. The fifth highest viewing platform called “Edge” was built at a height of 335 m. A glass floor offers a special thrill. The F4 Map offers an approximate impression of the view over New York. The OSM mapping of the skyscraper “30 Hudson Yards” is not yet complete.
  • Morten Lind tweeted about Christiansø, the last populated place in Denmark to be assigned addresses. Christiansø was originally a military settlement on the island of Ertholmene, which explains the anomaly.
  • Geospatial World reported Google’s new initiative to offer free licences for Google Earth Engine worth US$ 3 million to support organisations and initiatives that make use of Google’s geospatial data.
  • The German Federal Armed Forces University has achieved something that was previously only possible in mobile phone manufacturers’ advertisements. They were able to reach 1–2 cm accuracy with a commercial Android smartphone (placed on a choke ring platform to mitigate multipath effects) by using raw position data from the Android API, reports Inside GNSS. The details of the setup can be found at the University’s website.
  • The US Transportation Safety Board has, once again, issued a plea for online map providers in the USA to incorporate information on railway level crossings. Deaths at level crossings are still high (270 in 2018), and lack of driver awareness is still a contributory factor. Many map apps do not notify drivers of their presence. As an aside, the OSM Weekly team wondered how many were mapped on OSM: about 100,000, of which 5,000 lack railway=level_crossing.
  • The New York Times reported about a research project which says that many cities (such as Ho Chi Minh City, Bangkok, parts of Shanghai, Mumbai and Alexandria) could be flooded by 2050. 110 million people already live below sea level and by 2050 an additional 150 million people may be affected, requiring large investments in protective measures like seawalls and other barriers.

Upcoming Events

Where What When Country
Niš Missing Maps Mapathon Niš #1 2019-11-16 serbia
Istanbul Yer Çizenler 2019-11-16 turkey
Cologne Bonn Airport Bonner Stammtisch 2019-11-19 germany
Reading Reading Missing Maps Mapathon 2019-11-19 united kingdom
Lüneburg Lüneburger Mappertreffen 2019-11-19 germany
Derby Derby pub meetup 2019-11-19 united kingdom
Lansing Meetup at Kelly’s 2019-11-19 united states
Prešov Missing Maps Mapathon Prešov #4 2019-11-21 slovakia
Grand-Bassam State of the Map Africa 2019 2019-11-22-2019-11-24 ivory coast
Izmir Yer Çizenler Mapathon with HKMO-Izmir 2019-11-23 turkey
Izmir Yer Çizenler 2019-11-24 turkey
Bremen Bremer Mappertreffen 2019-11-25 germany
Salt Lake City OSM Utah Mapping Night 2019-11-26 united states
Rémire-Montjoly Réunion mensuelle OSM Guyane 2019-11-26 france
Zurich Missing Maps Mapathon Zürich 2019-11-27 switzerland
Düsseldorf Stammtisch 2019-11-27 germany
Singen Stammtisch Bodensee 2019-11-27 germany
Kilkenny Kilkenny Mapping Event 2019-11-30 ireland
Nantes Participation à « Nantes en sciences » 2019-11-30 france
Ivrea Incontro Mensile 2019-11-30 italy
London Missing Maps London 2019-12-03 united kingdom
Stuttgart Stuttgarter Stammtisch 2019-12-04 germany
Bochum Mappertreffen 2019-12-05 germany
San José Civic Hack & Map Night 2019-12-05 united states
Ulmer Alb Stammtisch Ulmer Alb 2019-12-05 germany
Dortmund Mappertreffen 2019-12-06 germany
Belgrade OSM Serbia Meetup 2019-12-07 serbia
AoA and other changes Voting on OSMF board elections 2019-12-07-2019-12-14 world
Cape Town State of the Map 2020 2020-07-03-2020-07-05 south africa

Note: If you would like to see your event here, please put it into the calendar. Only data which is entered there will appear in weeklyOSM. Please check your event in our public calendar preview and correct it where appropriate.

This weeklyOSM was produced by Elizabete, Nakaner, NunoMASAzevedo, Polyglot, Rogehm, SK53, Silka123, SomeoneElse, SunCobalt, TheSwavu, YoViajo, derFred.

Semantic MediaWiki 3.1.1 released

09:37, Sunday, 17 November 2019 UTC

November 17, 2019

Semantic MediaWiki 3.1.1 (SMW 3.1.1) has been released today as a new version of Semantic MediaWiki.

It is a release providing several bug fixes. Please refer to the help pages on installing or upgrading Semantic MediaWiki to get detailed instructions on how to do this.

Winners of Wiki Loves Monuments 2019 in Iran

14:30, Saturday, 16 November 2019 UTC

Between September 5 (۱۵ شهریور) and October 7 (۱۵ مهر) of 2019, 203 people participated in Wiki Loves Monuments in Iran. The local contest, now in its fifth year, has been a force helping to enrich Wikipedia with photos of the more than 26,000 nationally registered monuments of Iran. Today we share with you the top 10 nominations from Iran to compete at the international level, as well as some statistics about the contest.

Nominations

The top 10 winners of Iran are announced thanks to Iran’s jury (Diego Delso, Sam Javanrouh, Mohammad Majidi, Ali Parsa, Kimia Rahgozar) and the volunteers who reviewed more than 2,400 photos submitted to the national contest. These photos are now nominated to compete with nominations from up to 48 other campaigns at the international Wiki Loves Monuments level.

As always, before going to the winners: although Wiki Loves Monuments is a photo competition, the stories of the people behind the photos are as beautiful and important as the photos themselves. These are the people who have decided to share their knowledge and documentation of the world’s built cultural heritage with the rest of the world through Wikipedia and other Wikimedia projects. As a result, where we know more about the photographer and the moment the photo was captured, or where the jury shared comments about the photo, we have shared that information with you.

First place. Ali Qapu Palace. A Safavid-era building in Naqsh-e Jahan Square, a UNESCO world heritage site, in Isfahan. (Amirpashaei, CC BY-SA 4.0)

Amir Pashaei, 35 years old, has been an amateur photographer for 15 years and a professional one for 3. For this year’s Wiki Loves Monuments he embarked on a 3-day trip to Isfahan to take more photos after one of his earlier photos became a featured picture on Wikimedia Commons very quickly after upload. For this panoramic HDR, Amir took 25 photos and combined them to recreate the majesty and beauty of Ali Qapu in the final minutes of the day. The jury describes the photo as one “with perfect framing and colours, mixed with long exposure to recreate mirror like reflection”, and “a pleasant and accurate depiction of the palace”.

Second place. The interior of Ali Qapu Palace; the staircases that take you to the second floor of the palace. (Alexander Popokov, CC BY-SA 4.0)

The jury describes the photo as one with a “unique angle and beautiful light” which “make for an invitation to venture down the spiral stairs”.

Third place. A lookout of Ayyoub Cave located at the elevation of more than 2,800 meters in Ayyoub Mountain in Kerman Province. The jury particularly appreciated the “great use of natural framing to demonstrate location”. (Morteza salehi70, CC BY-SA 4.0)

Fourth place. One of the prominent bridges of Isfahan, Khaju Bridge, viewed from above. A Safavid era monument that, according to Pedram Forouzanfar, the professional photographer behind this photo, is known by locals to have the shape of a flying eagle when viewed from the top. Pedram spent many hours and days finding the perfect lighting for this photo, and captured the bridge at dawn on a cold fall day. What makes the photo particularly special is that it was taken on a rare occasion when, after many years, Zayanderud, the river that the bridge spans, had water again. (Pedram forouzanfar, CC BY-SA 4.0)

The jury appreciates the photo as a novel look at the bridge in Isfahan, thanks to drone photography, which gives photographers a new tool.

Fifth place. A frame-in-frame photo of Vakil Mosque in Shiraz by Ali Khavanin, a 29-year-old amateur photographer who lives in Shiraz and is deeply familiar with the monument. (Yare zaman2000, CC BY-SA 4.0)

The jury describes the photo as one with “Unique framing and beautiful layered lights” that create “depth and intrigue”.

Sixth place. The ceiling of Qom Bazaar’s courtyard (timche), a Qajar era building in Qom captured 45 minutes before sunset, when the light outside of the building is gentler and the combination of inside and outside light allows for picturing the beauty of the ceiling. (Amirpashaei, CC BY-SA 4.0)

Amir further describes himself as one who likes to go on photography walks alone, to be able to focus and ideate on capturing the beauty of what surrounds him. This photo was the first Amir uploaded as part of this year’s contest, and the fact that it became a featured picture very quickly encouraged him to take more photos.

Sixth place. The second sixth-place photo goes to the detailed and colorful ceilings inside Sheikh Lotfollah Mosque in Isfahan. (Amirpashaei, CC BY-SA 4.0)

Sixth place. The vault of Shah Mosque in Isfahan. (Amirpashaei, CC BY-SA 4.0)

The jury finds the framing of the photo “pleasing” and calls out the “good mix of organic cloud textures with the geometric patterns”, which together make a beautiful photograph.

Ninth place. An aerial view of Ganjali Khan Complex, a Safavid era complex located in the old part of the city of Kerman. The complex includes a school, a mosque, a bazaar, a bathhouse and more. (Morteza salehi70, CC BY-SA 4.0)

The jury appreciates the “good composition” and “colors” of the photo, calling it “almost like an abstract painting”. The “novel look” at the monument is also called out.

Tenth place. A view of the tomb of Hafez, the celebrated Persian poet, in Shiraz. (Mshayati, CC BY-SA 4.0)

Mohammad Sadegh Hayati, a 31-year-old amateur photographer, describes the night when he took this photo as one that he and Hafez spent pondering the beauties surrounding them. Mohammad’s attention was on the turquoise colors of the monument, and on the stars and their movements and how they can capture the passage of time.

The jury describes the photo as one with “Long exposure and careful framing” in a way that has helped the photographer “create a new view for this often photographed monument”.

Interested to see nominations by other Wiki Loves Monuments 2019 campaigns? Check them out!

Statistics

203 participants joined this year’s contest by uploading at least one photo of one of the 26,000 nationally registered monuments in Iran. Of these, 73% of the participants are newcomers to Wikimedia projects. These participants uploaded 2,488 (and counting) photos to Wikimedia Commons, the media repository for Wikipedia. 10% of these photos are currently being used in one or more of the Wikimedia projects.

In terms of the number of participants and uploads, Iran’s campaign did not do as well as last year, when we had more than 1,200 participants uploading more than 13,600 photos. We will be looking into the reasons behind this significant change, though we speculate it is due to the fact that the contest was not advertised through a sitenotice on Persian Wikipedia. (Learn more)

The number of uploads by the 2019 participants puts Iran in 19th place at the international level. (Learn more about detailed statistics by country.)

@wikidata - I don't scale, help me scale

11:14, Thursday, 14 November 2019 UTC
At Wikidata there is always more to do and as a volunteer you make the biggest impact when you concentrate on specific subjects. I do not scale enough to do everything I would like to do.

There are a few areas where I aim to make a difference; of particular concern is where we do not represent a body of knowledge or information in Wikidata. At this time my focus is on scientists, particularly women, young scientists and scientists from Africa.

To make my work scale, I tweet and blog. I latch on to the great work done by Dr Jess Wade. She writes articles on well-deserving scientists, and I aim to add value for those scientists on Wikidata. Typically I add professions, alma maters, employers and awards. In addition I add “authorities” like ORCiD, Google Scholar and VIAF. This is important because it enables the linking of scholarly papers already in Wikidata or known at ORCiD. I can more or less keep up with Jess, and I happily add information for any and all scientists I come across on Twitter.

While doing this I learned of the Global Young Academy and, as a side project, started adding scientists who are members of the GYA or one of its affiliated organisations to Wikidata. I am so pleased I got into contact with Robert Lepenies. Robert is happy with the opportunity that a Scholia provides for an organisation like the GYA, for him and for all the young scientists involved. We collaborated on completing the lists on many Wikipedias, and Robert added many scientists to Wikidata and is now battling to keep the pictures of these young scientists on Commons...

What is crucially important to me is that Robert advocates an open ORCiD profile to scientists worldwide so that they may have their Scholia. Both Robert and I do not scale, and what would help us most is an easy and obvious way for any scientist to start a process that will include all their papers from ORCiD, update the known co-authors, and instruct them in what they can do to enrich their Wikidata / ORCiD / Scholia profile even more.

I am now working on African scientists and yes, I would appreciate some help.
Thanks,
     GerardM

PS my wife would like this scale to be enough for me

Wednesday evening

22:25, Wednesday, 13 November 2019 UTC

Atlanta, Georgia, USA

Now is the time for a whisky, but instead I shall hack a bit and then head down to the bowling and beer. (There's bowling and beer?!)

As the protesters in Hong Kong continue to make their voices heard, society becomes increasingly aware of how important it is to educate ourselves on the changes and developments outside of our own countries. A protest in a country such as China or an unincorporated territory such as Puerto Rico has a ripple effect that can impact countries on the other side of the world, or ones close by. This past spring students in Dr. Jennifer Chun’s class at UCLA chose to edit articles on the history of protest in Korea and how this has led to social changes, or raised awareness that change needs to occur.

On the first day of 1995, people began to gather at the Myeong-dong Cathedral. They came with the knowledge that they would be there for many days, as many as it would take to reach their goals. So began the 1995 Myeong-Dong migrant labor protest, which lasted a total of nine days and opposed the Industrial Trainee System (ITS), which they stated systematically produced a population of vulnerable, bottom-tier migrant workers in the labor market. Thirteen Nepalese migrant workers, who were previously contracted under the ITS, arrived in South Korea in hopes of escaping the poverty in their own country. However, their hopes were dashed when employers withheld wages for over six months and then beat and abused them when the workers demanded to receive their wages directly. During the protest the demonstrators shackled their necks with iron chains, exposing their struggles as migrant laborers and drawing a parallel to slavery. They were soon joined by others, especially grassroots religious organizations, who protested in solidarity. In response to the protest the state acknowledged the systematic issues with the ITS and changed the Labor Standards Law to include migrant workers and industrial trainees contracted by the ITS in legislation regarding industrial accidents, medical insurance, and minimum wage arrangements. However, it should be noted that this still did not address the issues of toxic and inhumane working conditions and the production cycle of unauthorized workers. This realization eventually led to the creation of the Migrant Workers’ Support Movement (MWSM) and the Joint Committee for Migrant Workers in Korea (JCMK).

Along with migrant workers, women are also at risk of being exploited for labor, something not limited to any one particular country. Women have been organizing to address workplace issues such as unequal pay and workplace violence since as early as the 1880s. In 2006, several women gathered together to join their male coworkers in the South Korean KTX Train Attendant Union Strike, which protested the hiring practices of irregular workers. The women also protested against sexual harassment they had experienced in their workplace. The majority of the men from the KRWU (the union for the KTX workers) stopped protesting after four days; however, the women continued their strike. Over the course of 12 years, many workers dropped out of the strike; however, 180 continued until 2018, when the Railway Workers’ Union and Korea Railroad Corporation came to an agreement in which these 180 crew members were reinstated.

The following years also included protests, as demonstrators gathered for both the Hyehwa Station Protest in 2016 as well as the Yellow Ribbon Campaign and Sewol Ferry Protest Movement in 2014. The Hyehwa Station Protest was formed to protest against discrimination against women and crimes involving spy cameras, also known as molka. Many of these spy camera cases go unreported or undetected, and those that are reported typically do not lead to prison sentences. The Yellow Ribbon Campaign and Sewol Ferry Protest Movement occurred after the Sewol Ferry sinking, in which about 63% of the people on board the ferry died after the ship capsized and several crew members abandoned it and its passengers. Many of these deaths occurred as a result of the crew ordering passengers to remain in their cabins and not alerting them to the evacuation of the ship. In the days following the sinking it was also discovered that the ship was in poor shape and was carrying over twice its maximum limit of cargo, which was also not secured properly. The regular ferry captain had warned the ship’s owners, Chonghaejin Marine, of this but was met with hostility and threats of losing his job. The yellow ribbon became a prevalent symbol in South Korea. Its significance evolved during the course of the protest, as people began to realize that many did not survive, and the focus of the gatherings turned from mourning and hopes of return to activism and democratization. In 2017, three years after the Sewol Ferry sinking, the former president of South Korea, Park Geun-hye, was removed from office. During the months leading up to this event, the yellow symbols of Sewol commemoration were always present on political slogans and at impeachment demonstrations.

Wikipedia has a wealth of knowledge; however, the site cannot grow without users contributing and correcting information. A Wikipedia writing assignment is a wonderful way to teach your students about technical writing, collaboration, and sourcing in a unique learning environment. If you are interested in using Wikipedia with your next class, visit teach.wikiedu.org to find out how you can gain access to tools, online trainings, and printed materials.


Header/thumbnail image by Republic of Korea Government, CC BY-SA 2.0 via Wikimedia Commons.

Who’s interested in learning to use Wikidata?

19:30, Tuesday, 12 November 2019 UTC

We’re excited to announce the start of two new Wikidata courses, currently in progress. There is one intermediate course and one beginner course. We couldn’t be more pleased with the course participants. They have ambition, passion, and plenty of curiosity that will make for some exciting conversations. Learn more about them and their hopes for incorporating Wikidata into their professional work.

Our intermediate Wikidata course, Elevate Your Collections

  • Kelli Babcock is the Digital Initiatives Librarian at the University of Toronto. She is interested in creating a data model for digital objects. Her current project involves the illustrations of Canadian artist Thoreau MacDonald.
  • David Heilbrun is a Metadata Librarian at George Mason University. They are taking this course to better understand how to contribute to Wikidata and use it as a linked data source to enhance bibliographic metadata.
  • Jake Kubrin is a Metadata Librarian at the Stanford Law Library. He is interested in learning about bulk updates to Wikidata records, visualizations, schema creation, and pushing Wikidata entries to multiple Wikipedia pages.
  • Miranda Marraccini is the Digital Pedagogy Librarian at the University of Michigan. In taking this course she wants to gain more experience with Wikidata as a resource to teach people how to edit and make use of the data.
  • Illya Moskvin is a Web Developer at the Art Institute of Chicago. He is interested in learning to enhance the Art Institute’s collection data with Wikidata. He’s also looking to explore the possibility of building channels to push the Art Institute’s data to Wikidata.
  • Caitlin Pollock is the Digital Scholarship Specialist at the University of Michigan. She is eager to increase Wikidata engagement and programming opportunities.
  • Kristen Reid is the Collections Management and Information Specialist at the Barack Obama Foundation. She is looking forward to getting more familiar with Wikidata in order to better understand additional tools for sharing collections data.
  • Elizabeth Roke is the Digital Archivist and Metadata Specialist at Emory University. She is interested in learning all about Wikidata items related to Emory’s collections. Specifically, she wants to better understand how to model people and archival collections.
  • Seth Schimmel is the Research and Data Support Specialist at the CUNY Graduate Center. He works with the Open Society Foundation as a data specialist and is interested in learning about Wikidata frameworks for public policy issues, (inter)governmental organizations, and possibly also for different fields of scientific research.
  • Jennifer Sutcliffe is an Educational Analyst at Emory University. She has experience with Wikipedia and is eager to learn how her institution can get more involved with Wikidata.

Beginner Course, Join the Open Data Movement

  • Lisa Barrier is a Digital Collections Associate at Carnegie Hall. She is interested in learning Wikidata’s query system to build SPARQL queries with Wikidata sources.
  • Megan Forbes is the Program Manager at LYRASIS. Coming from a program with a strong sharing ethos, she wants to make sure that she understands Wikidata/linked data well enough to support her projects and members.
  • Caroline Frick is the Director at the Texas Archive of the Moving Image. She is looking forward to learning more about linking Wikidata information to online streaming content.
  • Christina Gibbs manages the art database at the Detroit Institute of Art, specializing in systems integration, data integrations, data publishing and open content. She is looking forward to gaining practical knowledge of how Wikidata works and associated tools in preparation for the museum to publish collections to Wikidata under a grant funded project.
  • Kathryn Gronsbell is a Digital Collections Manager, Archives, at Carnegie Hall. She is excited for the opportunity to learn more about Wikidata structure and community practices and apply that knowledge to help expand and improve Carnegie Hall performance history data.
  • James Hanks is an archivist at the Detroit Institute of Art who is interested in using linked data for archival primary source materials.
  • Carolyn Jackson is the Agriculture & Life Sciences Librarian at Texas A&M University. She is eager to learn about the development of data within and linked to Agricultural Sciences.
  • Clara de Pablo is an American Women’s History Initiative Fellow at the National Museum of American History. She is interested in learning how to use Wikidata to process and use data that she collects.
  • Sarah Potvin is the Digital Scholarship Librarian at Texas A&M University. She is seeking to build an understanding of linked data and explore the potential for Wikipedia and Wikidata to extend libraries’ authority work and public data.
  • Thomas Raich is the Director of Technology at the Yale University Art Gallery. He is interested in working with museum collection data, contributing to Wikidata and extracting Wikidata authority IDs to feed them back to Yale’s Content Management System.
  • Jackie Sheih works on Descriptive Data Management at the Smithsonian Libraries. She is looking forward to learning about entities that are unique to the Smithsonian and the relationships these entities may have with other sources.
  • Shenee Simon is an independent researcher who works with the S.H.E. Collective. She is interested in data collection, data synthesis, and data collection with technology.
  • Peter Sleeth is a lecturer at Victoria University. He is interested in learning how Wikidata engages with military history from the late 19th century to the present, and with the history of military medicine.
  • Juniper Starr is a Cataloging Specialist at the Johnson City Public Library. She views linked data as central to cataloging and is looking forward to applying this to her work.
  • Emma Thompson is Project Coordinator at Schoenberg Database of Manuscripts at UPenn. She is interested in learning more about using linked data for authority file management to better support the Schoenberg Database of Manuscripts project.
  • Yer Vang-Cohen is the Data and Database Administrator at the Yale University Art Gallery. She is looking forward to learning how to contribute their art collection to Wikidata and about ways to connect their entities to authority entities on Wikidata, such as artist/maker/publisher authority IDs.
  • Thomas Whittaker is the Head of Media Cataloging at IU-Bloomington. He is eager to learn more about authority control on Wikidata and has participated in meetings with the LD4 initiative.
  • Jing Zhang is a Web Developer at the Center for Research Libraries. He is interested in learning how to pull data from Wikidata for archiving materials and workflows around those.

Interested in taking a virtual course about Wikidata? Visit our beginner or intermediate course pages to sign up to receive updates and course announcements. Or contact data@wikiedu.org with questions.

Instant gratification at @Wikidata

11:48, Tuesday, 12 November 2019 UTC
As I write this it is 11:46; at 09:26 I added papers to Prof. Hafida Merzouk. The edits are picked up by Reasonator but not by Scholia. In a similar way, edits are not picked up by Listeria.

Instant gratification is now a thing of the past; work done at Wikidata may eventually be picked up in a Scholia or a Listeria list, but it is no fun. Can I tweet about the things I find or have done when these tools no longer reflect the relevant changes?

This may sound trivial, but it does mean that there is no longer a timely way to look back at my work.

Instant gratification motivates and it is a factor in maintaining quality. We are losing it.
Thanks,
      GerardM

This Month in GLAM: October 2019

09:05, Tuesday, 12 November 2019 UTC

Tech News issue #46, 2019 (November 11, 2019)

00:00, Monday, 11 November 2019 UTC
← previous | 2019, week 46 (Monday 11 November 2019) | next →

weeklyOSM 485

14:43, Sunday, 10 November 2019 UTC

29/10/2019-04/11/2019

lead picture

Tesla’s navigation system and OSM 😉
1 | © Teslamotorsclub – Map data © OpenStreetMap contributors 🙂

Mapping

  • SelfishSeahorse proposes the tagging of lanes on roads which are explicitly marked for pedestrians.
  • StreetCred is a company founded by Randy Meech after MapZen folded. They now provide their POI data with an MVT-Tileserver, which can be used in iD as an extra custom layer.
  • Hanikatu asked (de) (automatic translation) how to tag objects that have more than one property. An example provided was multiple items attached to a single street light. Suggestions ranged from using multiple nodes through to using a type=node relation.
  • Amazon Logistics traces unearthed a short way, part of a major inner-city one-way system, which has had incorrect access tagging for nearly 11 years.
  • Opensnowmap.org now provides a more complete view of daily and weekly changes to ski pistes and winter sports-related elements in OpenStreetMap.
  • Voting concluded for the new feature proposal leisure=sunbathing_area. There were 18 votes in total and 2 non-voting comments. “Yes” got 13 votes, or 72%, which is just short of the 75% required for approval.
  • The tagging proposal contact:phone or phone missed the required three-quarters majority with 46 “yes” and 60 “no” votes. The proposal wanted to declare contact:phone=* obsolete in favour of the older and much more common phone=*. The topic has been controversial for a decade.
  • OpenStreetMap India pointed, on Twitter, to the success of OSM Kerala (India) in convincing the local government to support crowdsourced mapping by launching the Mapathon Keralam. The Hindu, India’s third-largest English newspaper, carries the story on their website.
  • Luzandro has created (automatic translation) a map of possible new or missing roads in Austria based on BEV’s register of addresses.
  • On Ilya Zverev’s suggestion (automatic translation) Russian border crossings are now being mapped. The mapping is being carried out with the help of a MapRoulette challenge.
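The approval percentages in the two voting items above follow the usual convention of counting "yes" votes against the total cast; a quick, illustrative check of the arithmetic:

```python
# Verify the approval figures quoted above. The 75% threshold is the
# approval requirement mentioned in both proposal items.

def approval(yes: int, total: int) -> float:
    """Fraction of 'yes' votes among all votes cast."""
    return yes / total

# leisure=sunbathing_area: 13 "yes" out of 18 votes in total
print(round(approval(13, 18) * 100, 1))       # 72.2 — just short of 75%

# contact:phone vs phone: 46 "yes" and 60 "no" votes
print(round(approval(46, 46 + 60) * 100, 1))  # 43.4 — well short of 75%
```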

Community

  • Tahira reported that Heidelberg’s first climathon event took place at EMBL with a 24-hour hackathon from 26 to 27 October as part of the “global Climathon”. Participants worked in teams on five challenges, all with an aspect of addressing climate change. First place was awarded to the BikeBuddy and Spring Up teams. Both winning teams comprised members of the GIScience research group at Heidelberg University. The BikeBuddy team worked on a challenge, presented by HeiGIT.org, to identify attractive cycling routes so as to encourage people to travel by bike rather than car. This challenge was inspired by the research on pleasant routing at GIScience/HeiGIT. The main factor that the team focused on was the safety of cycling routes (street lights, etc.). The data was extracted from OpenStreetMap and integrated into a prototype using OpenRouteService.org.
  • Branko Kokanović published a blog post (sr) (automatic translation) about his progress using open data about apartments in Serbia.
  • OpenStreetMap US published Issue #1 of a brand new newsletter to share OpenStreetMap US activities and news from around the US and the world.

OpenStreetMap Foundation

  • Dorothea Kazazi informed the community about the procedure and the details of the upcoming OpenStreetMap Foundation board elections during the 13th Annual General Meeting. The deadline for putting yourself forward for election was 10 November 2019. If you want to vote, you have to become a member of the OSMF by 14 November 2019. At the time of writing, eight candidates have been nominated. The seats of Kate Chapman, Heather Leson, Mikel Maron and Frederik Ramm are available in this election, with only Mikel Maron running for re-election. We encourage everyone to become a member of the OSMF and participate. If you can’t afford the membership fee or you cannot transfer funds abroad, we’d like to point you to the OSMF Fee Waiver Program.
  • Frederik Ramm informed the members of the OpenStreetMap Foundation that the board suggests a couple of changes to the Articles of Association, mainly related to the mass signup of employees of one company ahead of last year’s board election. Besides the AoA changes, which require approval by 75 percent of OSMF’s Normal Members during the upcoming Annual General Meeting, it is also suggested that the current “financial hardship” fee waiver for OSMF membership be replaced by a general fee waiver for community members with a “sizeable contribution”. The OSMF Board provided a PDF file with details of the changes.
  • Christoph Hormann (imagico) provides some advice for candidates for the upcoming OSMF board elections. The advice is particularly directed at the formulation of position statements and answering the general slate of questions posed to candidates.
  • Joost Schouppe wrote about his first year as an OSMF board member. Topics covered include: what worries him about OSM in general, how heavy the burden weighs on just seven volunteers, and missing procedures. An interesting read and an interesting discussion as a follow-up.

Humanitarian OSM

Education

  • The IDEAL-VGI project aims to focus on the challenges of volunteered geographic information for land use classification and big earth observation data. The project is a collaboration between Heidelberg University’s GIScience Research Group and the Technical University of Berlin.

Maps

  • Infos-Réseaux.com tweeted (fr) (automatic translation) that the milestone of 1,000 mapped circuits of the French electrical network has been reached.
  • The 30 day Map Challenge, hashtag #30daymapchallenge, is running throughout November. Many participants are using OpenStreetMap data. Spanholz has linked to an imgur album of selected images. The initiator of the challenge is Topi Tjukanov.

switch2OSM

  • The Strava app, which was specially developed for runners and cyclists, uses OSM as map material. Strava has recently introduced a new map rendering that emphasises the details that runners and cyclists most want to see.
  • [1] The Tesla driver Army_1 noticed that the route guidance of his car’s Smart Summon application, which allows the Tesla to drive autonomously to the owner’s location in parking lots, changes when he edits parking lanes on OSM.

Open Data

  • Théophile Merlière, of the Etalab team, reports (fr) on the improvement and correction of errors in an official geolocated address database based on three walks of a few hours each and its publication on the BAN (Base Adresse Nationale) of France.
  • Dr Adrian Mallory has written an opinion piece calling for collective action to make big data a force for wider social good, not fuel for the hyper-wealthy, for secretive forms of manipulation, and for further social inequality.

Releases

  • JOSM 19.10 has been released. Major enhancements in this version are allowing zoom levels up to 24 in TMS layers, dropping support for https remote control, and adding GeoJSON importing.
  • Please watch the thread “Update der OSM Software List” in the German forum to stay informed about release changes for all OSM software. Wambacher tries to update this excerpt from the OSM Software Watchlist every weekend. For example, release 2.0 of the app Windy Maps is listed, which displays OSM-based online and offline maps with many POIs, hiking and cycling paths, plus wiki, photo, and excursion tips, and now offers improved routing with voice control and tracking.

OSM in the media

  • TV5Monde reports about the public transport mapping project in Bamako, Mali. Read more about the project on the wiki page (fr) (automatic translation).

Other “geo” things

  • Archie Archambault is a craft mapper who believes that maps should be made by surveying and “being within the parameters of the space”. Archie is working on a project to create artistic maps that give you the “gist” of cities around the world.
  • Victor shows (es) (automatic translation) how to load free maps onto your Garmin Fenix or EDGE.
  • Interested in attending a Wizards Unite event? Well, you’re too late for this one, but of interest to mappers is that the locations used were based on libraries mapped in OSM.
  • In response to the third “Tokyo Public Transportation Open Data Challenge” Akihiko Kusanagi has built a real-time 3D digital map of Tokyo’s public transport system.

Upcoming Events

Where What When Country
Kameoka 京都!街歩き!マッピングパーティ:第14回 鍬山神社 2019-11-10 japan
Budapest OSM Hungary Meetup reboot 2019-11-11 hungary
Taipei OSM x Wikidata #10 2019-11-11 taiwan
Zurich Stammtisch Zürich 2019-11-11 switzerland
Lyon Rencontre mensuelle pour tous 2019-11-12 france
Salt Lake City SLC Mappy Hour 2019-11-12 united states
Nitra Missing Maps Mapathon Nitra #4 2019-11-12 slovakia
Hamburg Hamburger Mappertreffen 2019-11-12 germany
Wellington FOSS4G SotM Oceania 2019 2019-11-12-2019-11-15 new zealand
Digne-les-Bains HÉRuDi : l’Histoire Étonnante des Rues de Digne 2019-11-12 france
Munich Münchner Stammtisch 2019-11-13 germany
Wuppertal OSM-Treffen Wuppertaler Stammtisch im Hutmacher 18 Uhr 2019-11-13 germany
Berlin 137. Berlin-Brandenburg Stammtisch 2019-11-14 germany
Nantes Réunion mensuelle 2019-11-14 france
Encarnación State of the Map Latam 2019 2019-11-14 paraguay
Niš Missing Maps Mapathon Niš #1 2019-11-16 serbia
Istanbul Yer Çizenler 2019-11-16 turkey
Cologne Bonn Airport Bonner Stammtisch 2019-11-19 germany
Reading Reading Missing Maps Mapathon 2019-11-19 united kingdom
Lüneburg Lüneburger Mappertreffen 2019-11-19 germany
Prešov Missing Maps Mapathon Prešov #4 2019-11-21 slovakia
Grand-Bassam State of the Map Africa 2019 2019-11-22-2019-11-24 ivory coast
Izmir Yer Çizenler Mapathon with HKMO-Izmir 2019-11-23 turkey
Bremen Bremer Mappertreffen 2019-11-25 germany
Salt Lake City OSM Utah Mapping Night 2019-11-26 united states
Düsseldorf Stammtisch 2019-11-27 germany
Singen Stammtisch Bodensee 2019-11-27 germany
Kilkenny Kilkenny Mapping Event 2019-11-30 ireland
Nantes Participation à « Nantes en sciences » 2019-11-30 france
Cape Town State of the Map 2020 2020-07-03-2020-07-05 south africa

Note: If you would like to see your event here, please add it to the calendar. Only data which is in the calendar will appear in weeklyOSM. Please check your event in our public calendar preview and correct it where appropriate.

This weeklyOSM was produced by NunoMASAzevedo, Polyglot, Rogehm, SK53, SunCobalt, TheSwavu, YoViajo, derFred, doktorpixel14, jinalfoflia, anonymus.

Put (modern) #science of #Africa on the map

09:39, Saturday, 09 November 2019 UTC
A young African scholar commented that the info on the websites of African scholarly organisations was all about their past. There is a point to recognizing those who did good work and, consequently, to making it obvious that the science of today is rooted in the past.

African scientists, like any other scientists, have a place in Wikidata with their affiliations, papers, co-authors, and also their scholarly advisors. My proposal is for all scholars to check whether they are on Wikidata and whether their doctoral thesis is on Wikidata. Then add their doctoral advisor to their item and, reciprocally, add themselves as that advisor's doctoral student.

Do not forget to include where you studied and for which university you work(ed). Check whether your ORCiD profile includes trusted organisations like CrossRef that will update your profile when appropriate. When many of you do this at Wikidata, you will be surprised what the impact will be.
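The check described above can also be scripted. The sketch below builds a SPARQL query for the Wikidata Query Service that looks up an item by ORCiD iD and reports any doctoral advisor already recorded. The property IDs are real Wikidata properties (P496 = ORCiD iD, P184 = doctoral advisor), but `advisor_check_query` itself is a hypothetical helper, not an existing tool:

```python
# Build a Wikidata Query Service SPARQL query that finds the item with a
# given ORCiD iD and, optionally, its recorded doctoral advisor (P184).
# The resulting string can be sent to https://query.wikidata.org/sparql.

def advisor_check_query(orcid: str) -> str:
    return f"""
SELECT ?scholar ?scholarLabel ?advisorLabel WHERE {{
  ?scholar wdt:P496 "{orcid}" .                 # item with this ORCiD iD
  OPTIONAL {{ ?scholar wdt:P184 ?advisor . }}   # doctoral advisor, if any
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
}}
"""

# Example with ORCiD's own documentation sample iD:
query = advisor_check_query("0000-0002-1825-0097")
print("wdt:P184" in query)  # True
```

An empty `?advisorLabel` in the result would be the cue to add the missing statement, exactly as proposed above.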
Thanks,
      GerardM

From the very beginning, we’ve had our champions — students, faculty, and institutions alike — who have recognized what a Wikipedia writing assignment can achieve for student learning and for the world. Thanks to word-of-mouth, as well as the great work these individuals have accomplished using our resources (work that speaks for itself), more and more faculty are getting involved in this open educational practice. Not only that, but institutions are now awarding their faculty for incorporating Wikipedia into their pedagogy.

It’s been a busy year for Wikipedia in pedagogy. And we couldn’t be more thrilled to hear about your successes. If you’ve been recognized for incorporating Wikipedia into your undergraduate or graduate curriculum, please let us know by mailing Wikipedia Student Program Manager Helaine Blumenthal at helaine@wikiedu.org.


If you want to incorporate a Wikipedia writing assignment into an upcoming course, visit teach.wikiedu.org for information about our free resources and systems of support.


Thumbnail image sources: MIT Open Learning; Mshemberger, CC BY-SA 4.0 via Wikimedia Commons.

Production Excellence: October 2019

09:04, Friday, 08 November 2019 UTC

How’d we do in our strive for operational excellence last month? Read on to find out!

📊 Month in numbers
  • 3 documented incidents. [1]
  • 33 new Wikimedia-prod-error reports. [2]
  • 30 Wikimedia-prod-error reports closed. [3]
  • 207 currently open Wikimedia-prod-error reports in total. [4]

There were three recorded incidents last month, which is slightly below our median of the past two years (Explore this data). To read more about these incidents, their investigations, and pending actionables, check Incident documentation § 2019.


*️⃣ To Log or not To Log

MediaWiki uses the PSR-3 compliant Monolog library to send messages to Logstash (via rsyslog and Kafka). These messages are used to automatically detect (by quantity) when the production cluster is in an unstable state. For example, due to an increase in application errors when deploying code, or if a backend system is failing. Two distinct issues hampered the storing of these messages this month, and both affected us simultaneously.

Elasticsearch mapping limit

The Elasticsearch storage behind Logstash optimises responses to Logstash queries with an index. This index has an upper limit to how many distinct fields (or columns) it can have. When reached, messages with fields not yet in the index are discarded. Our Logstash indexes are sharded by date and source (one for “mediawiki”, one for “syslog”, and one for everything else).

This meant that an error message was only stored if all of its fields had already been used by other errors stored that day, which in turn would only succeed if that day’s columns weren’t already fully taken. A seemingly random subset of error messages was thus rejected for a full day. Each day that subset got a new chance at reserving its columns, so long as the specific kind of error was triggered early enough.

To unblock deployment automation and monitoring of MediaWiki, an interim solution was devised. The subset of messages from “mediawiki” that deal with application errors now have their own index shard. These error reports follow a consistent structure, and contain no free-form context fields. As such, this index (hopefully) can’t reach its mapping limit or suffer message loss.

The general index mapping limit was also raised from 1000 to 2000. For now that means we’re not dropping any non-critical/debug messages. More information about the incident at T234564. The general issue with accommodating debug messages in Logstash long-term, is tracked at T180051. Thanks @matmarex, @hashar, and @herron.
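The failure mode is easier to see with a toy model. The sketch below is an illustrative simulation of a field-mapping cap, not Elasticsearch's actual mapping logic: an index accepts messages until a message would push the number of distinct field names past the limit, at which point that whole message is silently dropped.

```python
# Toy model of the mapping-limit failure mode: an index that caps the
# number of distinct field names and silently drops any message whose
# new fields would exceed the cap (illustrative only).

class CappedIndex:
    def __init__(self, field_limit):
        self.field_limit = field_limit
        self.known_fields = set()
        self.stored = []
        self.dropped = []

    def ingest(self, message):
        new_fields = set(message) - self.known_fields
        if len(self.known_fields) + len(new_fields) > self.field_limit:
            # Fields not yet in the mapping would exceed the limit:
            # the whole message is rejected, not just the new fields.
            self.dropped.append(message)
        else:
            self.known_fields |= new_fields
            self.stored.append(message)

index = CappedIndex(field_limit=3)
index.ingest({"channel": "mediawiki", "msg": "ok"})          # stored (2 fields)
index.ingest({"channel": "mediawiki", "exception": "Boom"})  # stored (3 fields)
index.ingest({"channel": "mediawiki", "free_form_ctx": 1})   # dropped (would be 4)

print(len(index.stored), len(index.dropped))  # 2 1
```

Which messages survive depends entirely on which field names happened to claim the day's columns first, matching the "seemingly random subset" behaviour described above.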

Crash handling

Wikimedia’s PHP configuration has a “crash handler” that kicks in if everything else fails. For example, when the memory limit or execution timeout is reached, or if some crucial part of MediaWiki fails very early on. In that case our crash handler renders a Wikimedia-branded system error page (separate from MediaWiki and its skins). It also increments a counter metric for monitoring purposes, and sends a detailed report to Logstash. In migrating the crash handler from HHVM to PHP7, one part of the puzzle was forgotten. Namely the Logstash configuration that forwards these reports from php-fpm’s syslog channel to the one for mediawiki.

As such, our deployment automation and several Logstash dashboards were blind to a subset of potential fatal errors for a few days. Regressions during that week were instead found by manually digging through the raw feed of the php-fpm channel. As a temporary measure, Scap was updated to also consider the php-fpm channel in its automation that decides whether a deployment is “green”.

We’ve created new Logstash configurations that forward PHP7 crashes in a similar way as we did for HHVM in the past. Bookmarked MW dashboards/queries you have for Logstash now provide a complete picture once again. Thanks @jijiki and @colewhite! – T234283


📉 Outstanding reports

Take a look at the workboard and look for tasks that might need your help. The workboard lists error reports, grouped by the month in which they were first observed.

https://phabricator.wikimedia.org/tag/wikimedia-production-error/

Or help someone that’s already started with their patch:
Open prod-error tasks with a Patch-For-Review

Breakdown of recent months (past two weeks not included):

  • March: 1 report fixed. (3 of 10 reports left).
  • April: 8 of 14 reports left (unchanged). ⚠️
  • May: (All clear!)
  • June: 9 of 11 reports left (unchanged). ⚠️
  • July: 13 of 18 reports left (unchanged).
  • August: 2 reports were fixed! (6 of 14 reports left).
  • September: 2 reports were fixed! (10 of 12 new reports left).
  • October: 12 new reports survived the month of October.

🎉 Thanks!

Thank you to everyone else who helped by reporting, investigating, or resolving problems in Wikimedia production.

Until next time,

– Timo Tijhof


🌴 “Gotta love crab. In time, too. I couldn't take much more of those coconuts. Coconut milk is a natural laxative. That's something Gilligan never told us.”

Footnotes:

[1] Incidents. –
wikitech.wikimedia.org/wiki/Special:PrefixIndex?prefix=Incident…

[2] Tasks created. –
phabricator.wikimedia.org/maniphest/query…

[3] Tasks closed. –
phabricator.wikimedia.org/maniphest/query…

[4] Open tasks. –
phabricator.wikimedia.org/maniphest/query…

Copyright for Wikimedia photographers in the UK

17:07, Thursday, 7 November 2019 UTC
A functional item: a 1971 Smalley 5 Mk II mini digger in waterway recovery group markings – image by Geni CC BY-SA 4.0

By UK Wikimedian and copyright enthusiast Geni

Spoiler: It’s a terrible, broken system which is part of the reason Wikipedia went copyleft in the first place. Please note that this is not legal advice. If you want that, ask a lawyer or, given the mess some bits of the law are in, five Supreme Court judges.

To be honest, this would be better titled “what you can upload photographs of without Commons rightfully complaining”.

The first thing you have to consider is: is the subject of the photo 3D or 2D? 2D and 3D subjects are dealt with very differently, so see the relevant section for the subject you are interested in (objects with more than three dimensions are probably treated the same as 3D objects, but there is no caselaw).

3D subjects

The good news is that the UK is one of the most liberal countries when it comes to photographing 3D objects (the formal term is freedom of panorama). If it’s 3D and on permanent display in a public place then you are free to photograph it. This covers buildings, statues, dolls and basically anything else that’s 3D (although the exact reasons why may vary).

On permanent display: the Three Rings sculpture by Jane Ackroyd in Ocean Village – image by Geni CC BY-SA 4.0

To break it down a bit, permanent display means no specific plans to remove the subject at some point. Public place means places the general public regularly has access to. So things like market squares, museums and the more open type of church would be fine, but a private house or factory would not be (and yes, this can result in the weird situation where you can’t upload a photo of a figurine taken in your home but could upload a picture of an identical one on display in a museum).

One exception to all of this is that it assumes the item is under copyright in the first place. If the author has been dead for 70 years or the subject is in the public domain for some other reason, then the location of the object and its permanence doesn’t matter. A second exception is that it only applies to items that qualify for copyright at all. Under UK law, functional items (tools, machinery, clothing, etc.) that don’t rise to the level of artistic craftsmanship don’t qualify for copyright. Unfortunately the term “artistic craftsmanship” is only defined through caselaw, and even then pretty poorly. Of the nominal standards, the easiest to work to involves judging author intent. If it appears that the author was trying to create an artwork, then the subject is more likely to qualify for copyright than if it did not. For example, a highly decorative lampshade might well qualify for copyright but a standard fluorescent strip light mounting would not.

2D subjects

While the UK may be liberal with 3D objects, it’s very much the reverse with 2D. Photos of paintings, murals, graffiti and any other 2D work are copyright infringements if the original work is under copyright (which, unless the author has been dead for 70 years, it usually will be).

Incidental inclusion – Vault in High Street in Bristol – image by Geni CC BY-SA 4.0

Incidental inclusion

The UK has an explicit allowance for incidental inclusion. Wikimedia Commons users tend to interpret this as meaning that if something is not the subject of the photo, you don’t need to worry about it. Graffiti on a building (where the photo is of the building) or a temporary sculpture in the corner of a village square are not an issue. Photos where those were the focus of the image would be a problem. Full details can be found at Commons:De minimis (the rough US equivalent).

The unreasonably difficult photo contest

If all this copyright stuff is boring or depressing then I can offer you the unreasonably difficult photo contest where none of the subjects present a copyright issue.

@Wikipedia talks about @Wikidata

08:05, Thursday, 7 November 2019 UTC
"WD is unreliable. WP:V and WP:RS are completely ignored (from any editors). International NPOV is a problem too." It is so SMART that the best I can do is ignore it. Then again, it is an open invitation to talk about Wikipedia. There is no single Wikipedia; there are over 300 Wikipedia language editions, so even the acronyms are lost on me, as there is no one Wikipedia to rule them all.

So forget about acronyms and let's talk Wikidata, and by inference raise issues, particularly for the English Wikipedia, where appropriate. First, Wikidata includes more items than there are subjects raised in any and all Wikipedias. Its quality can be considered in many ways, and verifiability is largely ensured because of the association with other "authorities" about a subject. Thanks to the increased use of open data, it is possible to verify that specific statements are shared, increasing the likelihood that they are correct. For some information, like for scientists who are members of the AAS Affiliates Programme, we have or may have references to the authoritative source. Such references may be on a project or on an item level; this makes verifiability easy and obvious.

Wikidata has all kinds of gaps in its coverage. For many African countries no universities are known, and there are hardly any scholars associated with them. Thanks to Listeria functionality we can monitor if and when data is added. Many a Wikipedia does not have such tools because of the aversion to Wikidata felt by some. At the same time, projects like Women in Red rely on Listeria lists, and by inference on Wikidata, to know what to work on.

With tools like Reasonator and Listeria, lists are generated and, when you compare them with Wikipedia lists, the quality is measurably better. I have published frequently in the past about the Polk award. In its lists, Wikipedia has a likely error rate of six percent. Where Wikipedia fudges the record by not linking at all, the quality of a Wikidata list is even better, because Wikidata is much better at linking items than Wikipedia is at linking red links. There is a solution; it just requires a willingness by Wikipedians to cooperate.

I understand what is meant by "international NPOV", and it is where Wikidata is by definition better than an individual Wikipedia: by definition, because Wikidata represents data from ALL Wikipedias. Thanks to the people of DBpedia, there is a potential to highlight where Wikipedias differ, and it is more likely that the fruit of their labour will enrich Wikidata than the Wikipedias.

So a Wikidatan walks into a bar..
Thanks,
       GerardM

An introduction to WBStack

21:15, Wednesday, 6 November 2019 UTC

WBStack is a project that I have been working on for a couple of years, and it finally saw the light of day at Wikidatacon 2019. It has gone through several different names along the way: MWaas, WBaas, WikWiki, OpenCura and finally WBStack.

The idea behind the project is to provide Wikibase and surrounding services, such as a Blazegraph query service, the query service UI, QuickStatements and others, on a shared platform where installs, upgrades and maintenance are handled centrally.

The initial release is very much an MVP (minimum viable product), aimed to get people talking about the idea and to have a few keen users trying out the project. Registration is hidden behind an invite code system (if you want one then get in touch). Currently there are roughly 20 users on the project with 30 Wikibase installs, and some early bugs are being ironed out.

You can watch the demo of WBStack in its initial version in the video below, at roughly 19:07. The video is also available here.

I’d like to thank Rhizome for supporting the project starting from this month by covering the initial infrastructure / hosting costs. You can read more about them on Wikipedia or by visiting their website. There is also a great blog post on the Wikimedia Foundation blog talking about their use of Wikibase.

I’m going to continue working on this project in my spare time, hopefully working first on the features that matter most to the people already using it, alongside any needed technical work.

Keep an eye out for more blog posts relating to WBStack. I’ll try to keep this blog updated.

The post An introduction to WBStack appeared first on Addshore.

Changing the concept URI of an existing Wikibase with data

19:03, Wednesday, 6 November 2019 UTC

Many users of Wikibase find themselves in a position where they need to change the concept URI of an existing Wikibase for one or more reasons, such as a domain name update or the desire to have HTTPS concept URIs instead of HTTP ones.

Below I walk through a minimal example of how this can be done using a small amount of data and the Wikibase Docker images. If you are not using the Docker images the steps should still work, but you do not need to worry about copying files into and out of containers or running commands inside containers.

Creating some test data

Firstly I need some test data, and for that data to exist in Wikibase and the Query service. I’ll go ahead with 1 property and 1 item, with some labels, descriptions and a statement.

# Create a string property with english label
curl -i -X POST \
   -H "Content-Type:application/x-www-form-urlencoded" \
   -d "data={\"type\":\"property\",\"datatype\":\"string\",\"labels\":{\"en\":{\"language\":\"en\",\"value\":\"String, property label EN\"}},\"descriptions\":{},\"aliases\":{},\"claims\":{}}" \
   -d "token=+\\" \
 'http://localhost:8181/w/api.php?action=wbeditentity&new=property&format=json'

# Create an item with label, description and statement using above property
curl -i -X POST \
   -H "Content-Type:application/x-www-form-urlencoded" \
   -d "data={\"type\":\"item\",\"labels\":{\"en\":{\"language\":\"en\",\"value\":\"Some item\"}},\"descriptions\":{\"en\":{\"language\":\"en\",\"value\":\"Some item description\"}},\"aliases\":{},\"claims\":{\"P1\":[{\"mainsnak\":{\"snaktype\":\"value\",\"property\":\"P1\",\"datavalue\":{\"value\":\"Statement string value\",\"type\":\"string\"},\"datatype\":\"string\"},\"type\":\"statement\",\"references\":[]}]},\"sitelinks\":{}}" \
   -d "token=+\\" \
 'http://localhost:8181/w/api.php?action=wbeditentity&new=item&format=json'

Once the updater has run (by default it sleeps for 10 seconds before checking for changes) the triples can be seen in blazegraph using the following SPARQL query.

SELECT * WHERE {?a ?b ?c}
Query results showing the concept URI as http://wikibase.svc/entity

The concept URI is clearly visible in the triples as the default ‘wikibase.svc’ provided by the docker-compose example for wikibase.
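If you hit the SPARQL endpoint programmatically rather than through the UI, the concept URI base is easy to pull out of the standard SPARQL JSON results. A minimal sketch; the entity_uris helper and the sample payload are illustrative, not part of the query service API (the variable names a/b/c match the query above):

```python
def entity_uris(sparql_json):
    """Collect the distinct URI bases seen in subject position of a
    SPARQL 1.1 JSON result set."""
    bases = set()
    for row in sparql_json['results']['bindings']:
        subj = row.get('a')
        if subj and subj['type'] == 'uri':
            # http://wikibase.svc/entity/Q1 -> http://wikibase.svc/entity
            bases.add(subj['value'].rsplit('/', 1)[0])
    return bases

# illustrative result row, mirroring the output of SELECT * WHERE {?a ?b ?c}
sample = {'results': {'bindings': [
    {'a': {'type': 'uri', 'value': 'http://wikibase.svc/entity/Q1'},
     'b': {'type': 'uri', 'value': 'http://schema.org/version'},
     'c': {'type': 'literal', 'value': '3'}}]}}
print(entity_uris(sample))  # {'http://wikibase.svc/entity'}
```

Running something like this before and after the migration is a quick sanity check that the concept URI actually changed.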

Running a new query service

You could choose to load the triples with a new concept URI into the same query service and namespace. However, to simplify things (specifically the cleanup of old triples), starting with a clean, empty query service is also a good choice.

In my docker-compose file, I will specify a new wdqs service with a new name and altered set of environment variables, with the WIKIBASE_HOST environment variable changed to the new URI.

wdqs-new:
    image: wikibase/wdqs:0.3.2
    volumes:
      - query-service-data-new:/wdqs/data
    command: /runBlazegraph.sh
    networks:
      default:
        aliases:
         - wdqs-new.svc
    environment:
      - WIKIBASE_HOST=somFancyNewLocation.foo
      - WDQS_HOST=wdqs-new.svc
      - WDQS_PORT=9999
    expose:
      - 9999

This query service makes use of a new docker volume that I also need to define in my docker-compose.

volumes:
  < ....... <other volumes here> ....... >
  query-service-data-new:

As this URI is actually fake, and in order to keep my updater requests within the local network, I also need to add a new network alias to the existing wikibase service. After doing so my wikibase network section will look like this.

networks:
      default:
        aliases:
         - wikibase.svc
         - somFancyNewLocation.foo

To apply the changes I’ll restart the wikibase service and start the new updater service using the following commands.

$ docker-compose up -d --force-recreate --no-deps wikibase
Recreating wdqsconceptblog2019_wikibase_1      ... done
$ docker-compose up -d --force-recreate --no-deps wdqs-new
Creating wdqsconceptblog2019_wdqs-new_1 ... done

Now 2 blazegraph query services will be running, both controlled by docker-compose.

The published endpoint, via the wdqs-proxy, is still pointing at the old wdqs service, as is the updater that is currently running.

Dumping RDF from Wikibase

The dumpRdf.php maintenance script in Wikibase repo allows the dumping of all Items and properties as RDF for use in external services, such as the query service.

The default concept URI for Wikibase is determined from the wgServer MediaWiki global setting [code]. Before MediaWiki 1.34 wgServer was auto-detected [docs] in PHP.

Thus when running a maintenance script, wgServer is unknown, and will default to the hostname the wikibase container can see, for example, “b3a2e9156cc1”.

In order to avoid dumping data with this garbage concept URI, one of the following must be done:

  • The Wikibase repo conceptBaseUri setting must be set (to the new concept URI)
  • The MediaWiki wgServer setting must be set (to the new concept URI)
  • --server <newConceptUriServerBase> must be provided to the dumpRdf.php script

So, to generate a new RDF dump with the new concept URI and store the RDF in a file, run the following command.

$ docker-compose exec wikibase php ./extensions/Wikibase/repo/maintenance/dumpRdf.php --server http://somFancyNewLocation.foo --output /tmp/rdfOutput
Dumping entities of type item, property
Dumping shard 0/1
Processed 2 entities.

The generated file can then be copied from the wikibase container to the local filesystem using the docker cp command and the name of the wikibase container for your setup, which you can find using docker ps.

docker cp wdqsconceptblog2019_wikibase_1:/tmp/rdfOutput ./rdfOutput

Munging the dump

In order to munge the dump, first I’ll copy it into the new wdqs service with the following command.

docker cp ./rdfOutput wdqsconceptblog2019_wdqs-new_1:/tmp/rdfOutput

And then run the munge script over the dump, specifying the concept URI.

$ docker-compose exec wdqs-new ./munge.sh -f /tmp/rdfOutput -d /tmp/mungeOut -- --conceptUri http://somFancyNewLocation.foo
#logback.classic pattern: %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n
16:21:31.082 [main] INFO  org.wikidata.query.rdf.tool.Munge - Switching to /tmp/mungeOut/wikidump-000000001.ttl.gz

The munge step will batch the data into a set of chunks based on a configured size. It also alters some of the triples along the way. The changes are documented here. If you have more data you may end up with more chunks.

Loading the new query service

Using the munged data and the loadData.sh script, the data can now be loaded directly into the query service.

$ docker-compose exec wdqs-new ./loadData.sh -n wdq -d /tmp/mungeOut
Processing wikidump-000000001.ttl.gz
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"><html><head><meta http-equiv="Content-Type" content="text/html;charset=UTF-8"><title>blazegraph™ by SYSTAP</title
></head
><body<p>totalElapsed=193ms, elapsed=75ms, connFlush=0ms, batchResolve=0, whereClause=0ms, deleteClause=0ms, insertClause=0ms</p
><hr><p>COMMIT: totalElapsed=323ms, commitTime=1572971213838, mutationCount=43</p
></html
>File wikidump-000000002.ttl.gz not found, terminating

A second updater

Currently I have two query services running: the old one, which is public and still being updated by an updater, and the new one, which is freshly loaded and slowly becoming out of date.

To create a second updater that will run alongside the old updater I define the following new service in my docker-compose file, which points to the new wikibase hostname and query service backend.

wdqs-updater-new:
    image: wikibase/wdqs:0.3.2
    command: /runUpdate.sh
    depends_on:
    - wdqs-new
    - wikibase
    networks:
      default:
        aliases:
         - wdqs-updater-new.svc
    environment:
      - WIKIBASE_HOST=somFancyNewLocation.foo
      - WDQS_HOST=wdqs-new.svc
      - WDQS_PORT=9999

I start it with a command I have used a few times already in this blog post.

$ docker-compose up -d --force-recreate --no-deps wdqs-updater-new
Creating wdqsconceptblog2019_wdqs-updater-new_1 ... done

I can confirm using ‘docker ps’ and also by looking at the container logs that the new updater is running.

docker-compose ps | grep wdqs-new
wdqsconceptblog2019_wdqs-new_1           /entrypoint.sh /runBlazegr ...   Up      9999/tcp

Using the new query service

You might want to check your query service before switching live traffic to it to make sure everything is OK, but I will skip that step.

In order to direct traffic to the newly loaded and now-updating query service, all that is needed is to point the wdqs-proxy at the new backend host. Using the wdqs-proxy docker image, this can be done with PROXY_PASS_HOST.

wdqs-proxy:
    image: wikibase/wdqs-proxy
    environment:
      - PROXY_PASS_HOST=wdqs-new.svc:9999
    ports:
     - "8989:80"
    depends_on:
    - wdqs-new
    networks:
      default:
        aliases:
         - wdqs-proxy.svc

And the service can be restarted with that same old command.

$ docker-compose up -d --force-recreate --no-deps wdqs-proxy
Recreating wdqsconceptblog2019_wdqs-proxy_1 ... done

Running the same query in the UI will now return results with the new concept URIs.

And if I make a new item (Q2) I can also see this appear in the new query service with the correct concept URI.

curl -i -X POST \
   -H "Content-Type:application/x-www-form-urlencoded" \
   -d "data={\"type\":\"item\",\"labels\":{\"en\":{\"language\":\"en\",\"value\":\"Some item created after migration\"}},\"descriptions\":{\"en\":{\"language\":\"en\",\"value\":\"Some item description\"}},\"aliases\":{},\"claims\":{\"P1\":[{\"mainsnak\":{\"snaktype\":\"value\",\"property\":\"P1\",\"datavalue\":{\"value\":\"Statement string value\",\"type\":\"string\"},\"datatype\":\"string\"},\"type\":\"statement\",\"references\":[]}]},\"sitelinks\":{}}" \
   -d "token=+\\" \
 'http://localhost:8181/w/api.php?action=wbeditentity&new=item&format=json'
Result of: SELECT * WHERE { <http://somFancyNewLocation.foo/entity/Q2> ?b ?c. }

Cleanup

I left some things lying around that are no longer needed and should be cleaned up: Docker containers, Docker volumes and files.

First the containers.

$ docker-compose stop wdqs-updater wdqs
Stopping wdqsconceptblog2019_wdqs-updater_1 ... done
Stopping wdqsconceptblog2019_wdqs_1         ... done
$ docker-compose rm wdqs-updater wdqs
Going to remove wdqsconceptblog2019_wdqs-updater_1, wdqsconceptblog2019_wdqs_1
Are you sure? [yN] y
Removing wdqsconceptblog2019_wdqs-updater_1 ... done
Removing wdqsconceptblog2019_wdqs_1         ... done

Then the volume. Note, this is a permanent removal of any data stored in the volume.

$ docker volume ls | grep query-service-data
local               wdqsconceptblog2019_query-service-data
local               wdqsconceptblog2019_query-service-data-new
$ docker volume rm wdqsconceptblog2019_query-service-data
wdqsconceptblog2019_query-service-data

And other files, also being permanently removed.

$ rm rdfOutput
$ docker-compose exec wikibase rm /tmp/rdfOutput
$ docker-compose exec wdqs-new rm /tmp/rdfOutput
$ docker-compose exec wdqs-new rm -rf /tmp/mungeOut

I then also removed these services and volumes from the docker-compose yml file.

Things to consider

  • This process will take longer on larger wikibases.
  • If not using docker, you will have to run each query service on a different port.
  • This post was using wdqs 0.3.2. Future versions will likely work in the same way, but past versions may not.

Reading

The post Changing the concept URI of an existing Wikibase with data appeared first on Addshore.

Now that we know all about what MediaInfo content looks like and how to request it from the api, let’s see how to add MediaInfo content to an image. If you don’t remember all that, have a look at the previous blog post for a refresher.

Adding captions

Captions are called ‘labels’ in Wikibase lingo, and so we’ll want to put together a string that represents the json text defining one or more labels. Each label can have a string defined for one or more languages. This then gets passed to the MediaWiki api to do the heavy lifting.

Here’s an example of the ‘data’ parameter to the api before url encoding:

data={"labels":{"en":{"language":"en","value":"Category:Mak Grgić"},
"sl":{"language":"sl","value":"Mak Grgic"}}}
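If you build this string in code, it’s safer to let json.dumps do the quoting and escaping (values with quotes or non-ASCII characters are easy to get wrong by hand); a minimal sketch producing the same payload:

```python
import json

# assemble the labels structure, then serialize it once
labels = {}
for lang, value in [('en', 'Category:Mak Grgić'), ('sl', 'Mak Grgic')]:
    labels[lang] = {'language': lang, 'value': value}

data = json.dumps({'labels': labels}, ensure_ascii=False)
print(data)
```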

You’ll need to pass a csrf token which you can get after logging in to MediaWiki, and the standard parameters to wbeditentity, namely:

action=wbeditentity
id=<Mxxx, the MediaInfo id associated with your image>
summary=<comment summarizing the edit>
token=<csrf token you got from logging in>

Since I spend most of my coding life in Python land, I love the requests module. Here’s the relevant code for that:

        params = {'action': 'wbeditentity',
                  'format': 'json',
                  'id': minfo_id,
                  'data': '{"labels":{"en":{"language":"en","value":"' + caption + '"}}}',
                  'summary': comment,
                  'token': self.args['creds']['commons']['token']}
        response = requests.post(self.args['wiki_api_url'], data=params,
                                 cookies=self.args['creds']['commons']['cookies'],
                                 headers={'User-Agent': self.args['agent']})

where variables like minfo_id, comment and so on should be self-explanatory.

You’ll get json back and if the request fails within MediaWiki, there will be an entry named ‘error’ in the response with some string describing the error.
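Rather than eyeballing the response, a tiny helper can fail loudly; check_api_result below is a hypothetical name of mine, and the ‘info’ field is where the MediaWiki api puts its human-readable message:

```python
def check_api_result(resp_json):
    """Raise if a MediaWiki API response carries an 'error' entry.
    Hypothetical helper, not part of the requests module or the API."""
    if 'error' in resp_json:
        info = resp_json['error'].get('info', resp_json['error'].get('code', 'unknown'))
        raise RuntimeError('API error: ' + info)
    return resp_json

# a successful wbeditentity edit simply passes through
ok = check_api_result({'entity': {'id': 'M123'}, 'success': 1})

# a failure raises
try:
    check_api_result({'error': {'code': 'badtoken', 'info': 'Invalid CSRF token.'}})
except RuntimeError as exc:
    print(exc)  # API error: Invalid CSRF token.
```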

You can have a look at the add_captions() method in https://github.com/apergos/misc-wmf-crap/blob/master/glyph-image-generator/generate_glyph_pngs.py for any missing details.

Since that’s all pretty straightforward, let’s move on to…

Adding Depicts Statements

A ‘depicts’ statement is a Wikibase statement (or ‘claim’) that the image associated with the specified MediaInfo id depicts a certain subject. We specify this by using the Wikidata property id associated with ‘depicts’. For www.wikidata.org that is https://www.wikidata.org/wiki/Property:P180, and for the test version of Wikidata I work with at https://wikidata.beta.wmflabs.org it is https://wikidata.beta.wmflabs.org/wiki/Property:P245962, so you’ll need to tailor your script to the Wikibase source you’re using.

When we set a depicts statement via the api, existing statements are not touched, so it’s good to check that we don’t already have a depicts statement that refers to our subject. We can retrieve the existing MediaInfo content (see the previous blog post for instructions) and check that there is no such depicts statement in the content before continuing.
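A check along these lines can be run over the retrieved entity before we submit anything. This sketch assumes the ‘statements’ shape described in the previous post; the helper name and sample fragment are mine:

```python
def has_depicts(minfo, prop_id, item_id):
    """True if the MediaInfo entity already has a statement for prop_id
    whose value is the item item_id."""
    for statement in minfo.get('statements', {}).get(prop_id, []):
        datavalue = statement.get('mainsnak', {}).get('datavalue', {})
        if datavalue.get('value', {}).get('id') == item_id:
            return True
    return False

# illustrative MediaInfo fragment with one existing depicts (P180) statement
minfo = {'statements': {'P180': [
    {'mainsnak': {'snaktype': 'value', 'property': 'P180',
                  'datavalue': {'value': {'entity-type': 'item', 'id': 'Q42'},
                                'type': 'wikibase-entityid'}}}]}}
print(has_depicts(minfo, 'P180', 'Q42'))  # True
print(has_depicts(minfo, 'P180', 'Q99'))  # False
```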

When we add a depicts or other statement, existing labels aren’t disturbed, so you can batch caption some images and then go on to batch add depicts statements without any worries.

The MediaInfo depicts statement, like any other Wikibase claim, has a ‘mainsnak‘, and a ‘snaktype‘ (see previous blog post for more info). Crucially, the value for the depicts property must be an existing item in the Wikidata repository used by your image repo; it cannot be a text string but must be an item id (Qnnn).

Here is an example of the ‘data’ parameter to the api before url encoding:

data={"claims":[{"mainsnak":{"snaktype":"value","property":"P180",
"datavalue":{"value":{"entity-type":"item","id":"Qnnn"},
"type":"wikibase-entityid"}},"type":"statement","rank":"normal"}]}

For the requests module, you’ll have something like this:

        depicts = ('{"claims":[{"mainsnak":{"snaktype":"value","property":"' +
                   self.args['depicts'] +
                   '","datavalue":{"value":{"entity-type":"item","id":"' +
                   depicts_id + '"},' +
                   '"type":"wikibase-entityid"}},"type":"statement","rank":"normal"}]}')
        comment = 'add depicts statement'
        params = {'action': 'wbeditentity',
                  'format': 'json',
                  'id': minfo_id,
                  'data': depicts,
                  'summary': comment,
                  'token': self.args['creds']['commons']['token']}
        response = requests.post(self.args['wiki_api_url'], data=params,
                                 cookies=self.args['creds']['commons']['cookies'],
                                 headers={'User-Agent': self.args['agent']})

Note that while these entries are called ‘statements’ when we retrieve MediaInfo content from the api, they are called ‘claims’ when we submit them. Other than that, make sure that you have the right property id for ‘depicts’ and you should be good to go.
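The same json.dumps approach works for the claims payload; a sketch (the helper name is mine, and the structure mirrors the ‘depicts’ string built in the code above):

```python
import json

def depicts_payload(prop_id, item_id):
    """Build the wbeditentity 'data' parameter adding one depicts claim.
    Retrieved entities call these 'statements'; on submission the key
    must be 'claims'."""
    claim = {'mainsnak': {'snaktype': 'value',
                          'property': prop_id,
                          'datavalue': {'value': {'entity-type': 'item',
                                                  'id': item_id},
                                        'type': 'wikibase-entityid'}},
             'type': 'statement',
             'rank': 'normal'}
    return json.dumps({'claims': [claim]})

print(depicts_payload('P180', 'Q42'))
```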

There are some details, like the "rank":"normal" bit, that you can learn about here: https://commons.wikimedia.org/wiki/Commons:Depicts#Prominence (TL;DR: if you use "rank":"normal" for now you won’t hurt anything.)

Again, the variables ought to be pretty self-explanatory. For more details you can look at the add_depicts method in the generate_glyph_pngs.py script.

You’ll get an ‘error’ item in the json response, if there’s a problem.

More about the sample script

The script is a quick tool I used to populate our testbed with a bunch of structured data; it’s not meant for production use! It doesn’t check for or respect maxlag (the database replication lag status), it doesn’t handle dropped connections, it does no retries, it doesn’t use clientlogin for non-bot scripts, etc. But it does illustrate how to retrieve MediaInfo content, add captions, add items to Wikidata, and set depicts statements.

Much more is possible; this is just the beginning. Join the #wikimedia-commons-sd IRC channel on freenode.net for more!

Wiki Loves Monuments UK 2019 winners announced

12:06, Monday, 4 November 2019 UTC
Perch Rock Lighthouse continues its run of winning entries in WLM UK competitions with this commended image by Mark Warren

Wiki Loves Monuments UK, part of the world’s biggest photographic contest, has announced the winners of this year’s competition. The UK competition is organised and voted on by members of the Wikimedia community in the UK, and seeks to encourage photographers to upload their images to Wikimedia Commons, the media-sharing sister site of Wikipedia, where content is shared on Creative Commons Open Licenses and is freely available to use by anybody.

Competition organiser Michael Maggs announced the winners on the Wiki Loves Monuments UK site over the weekend and explained the judges’ decisions to award the main prizes.

First prize

Kilchurn Castle at sunrise by MHoser – image CC BY-SA 4.0

Michael Maggs: “The judges appreciated the wonderful colour-palette that the photographer has captured with the early-morning light, and the real skill and care that is evident in the composition.”

Second prize

Bass Rock with Lighthouse by Ellievking – image CC BY-SA 4.0

Michael Maggs: “Although Bass Rock is a well-photographed subject, the judges picked this image out for its unusual and varied lighting which brings out the details of the upper rock surface, the clouds of birds in flight, and the photographic angle which allows the lighthouse to stand out clearly.”

Third prize

Sun Setting on Commando Memorial by Jock in Northumberland – image CC BY-SA 4.0

Michael Maggs: “The judges liked the use of a low camera angle and late afternoon sunshine to enhance the presence of this powerful monument. They also appreciated the photographer choosing a lesser-known site.”

Commended

Arnol Blackhouse

Arnol Blackhouse by Castlehunter (David C. Weinczok) – image CC BY-SA 4.0

Clifton Suspension Bridge and the Observatory in Bristol, England

Clifton Suspension Bridge and the Observatory in Bristol England by Chris Lathom-Sharp – image CC BY-SA 4.0

High tide at Newport transporter Bridge

High tide at Newport Transporter Bridge by Andy Perkins – image CC BY-SA 4.0

Leasowe Lighthouse Frozen Fields

Leasowe Lighthouse with Frozen Fields by Mark Warren 1973 – image CC BY-SA 4.0

Perch Rock Lighthouse Gold

Perch Rock Lighthouse by Mark Warren 1973 – image CC BY-SA 4.0

This year the judges have awarded only five commendations, as they did not feel there were sufficient images of prize-winning quality to award the usual seven. We are accordingly submitting a total of eight images this time.

If you want to see all 10,438 images submitted to Wiki Loves Monuments this year, you can find them in this category on Wikimedia Commons.

Congratulations to the winners of the top prizes, and especially to MHoser, whose winning entry receives a prize of £250. Wiki Loves Monuments will return in September 2020, and we strongly encourage photographers to consider taking photos of monuments throughout the year which they can submit next September.

 

Tech News issue #45, 2019 (November 4, 2019)

00:00, Monday, 4 November 2019 UTC

weeklyOSM 484

12:24, Sunday, 3 November 2019 UTC

22/10/2019-28/10/2019

lead picture

OSM-data + Blender + QGIS + … + creativity by Dolly Andriatsiferana
1 | © Dolly Andriatsiferana (@privatemajory) – Map data © OpenStreetMap contributors

Mapping

  • Jean-Louis Zimmerman poses an ontological challenge for OSM tagging. How do you describe a tourism direction sign (fingerpost) with dynamic digital panels which rotate?
  • The tag highway=mini_roundabout has been used nearly 52,000 times and was documented 11 years ago. Florian Lohoff came across a mini_roundabout in OSM and suggests deprecating the tag, one which he hadn’t used before. This started a lengthy discussion thread on the tagging mailing list, where you can learn the background and a lot of country-specific regulations.
  • The tagging of markers for utility services, such as gas and water pipes, service valves, power lines, hydrants and many more, has been approved. As usual the documentation moved from the proposal site to Key:marker in the OSM wiki.
  • Mateusz Konieczny filed an issue on GitHub for the iD editor to reverse the disputed decision of the iD maintainers to let the iD validator recommend replacing crossing=zebra with crossing=marked. This is a particular issue in the UK, where the tag originated, and where it has a precise legal meaning. The GitHub issue was closed and further comments were ignored.
  • Andrew Wiseman of Apple announces on several mailing lists that they had refreshed MapRoulette challenges with new data, and added more countries.
  • Nuno Caldeira writes about how Portugal is validating parks with a MapRoulette mission to check for fake parks added by Pokémon GO players. It turns out that mis-translation and differences between Portuguese spoken in Brazil and Portugal have also contributed to mistagging of parks.
  • Fanfouer’s “Line management” proposal aims to add information on how lines such as power lines are arranged on a support, such as a pole.
  • Blog.dedj has given a visual explanation of how to use the Tasking Manager (fr) (automatic translation) to manage a project and break down group work.
  • Access tags, the use of the value “yes” vs “designated” and the “last mile” mapping of several logistic companies are causing discussions around the world. SomeoneElse explains his personal view on these topics in his user diary, which is — as he points out — very “England and Wales” centric.

Community

  • Kathmandu Living Labs reported the findings of their Digital Internship and Leadership (DIAL) program in an article in the Journal of Open Geospatial Data, Software and Standards. The program engaged undergraduate students and recent graduates from Nepal in a remote internship program to map rural Nepal.
  • The responsive style “Air3” created by user Negreheb has been set as the default style by the administrator of the OSM forum, so visitors can also view the forum on their smartphones. Feedback is highly appreciated. Registered users can select the Air3-style at Profile->Display->Styles.
  • The Malian website WoManager interviewed (fr) (automatic translation) Nathalie Sidibé from OpenStreetMap Mali. In the interview Nathalie shares challenges she came across in her young life as a Malian woman, and her future ambitions.
  • Youthmappers have published their third quarterly newsletter of 2019. They welcome nine new chapters from universities in Bangladesh, Tanzania, Ethiopia, USA and Uganda, and share news of several mapping projects done by Youthmappers chapters all over the world.

OpenStreetMap Foundation

  • A draft working document on the OSMF Microgrant scheme has been made available. Michael Reichert has already made some detailed comments on this draft.
  • OSMF’s Membership Working Group explains the options for those who cannot afford the OSMF membership fee, or who have no way to send it. The post points to available information on the Fee Waiver Program and also to ways you can help people from your community become members of the Foundation.

Events

  • If you have not yet attended a State of the Map this year, you still have the chance to catch up with a trip to New Zealand. The FOSS4G SotM Oceania will take place on 12 to 15 November 2019 in Wellington.
  • On his WhatOSM blog, Ilya Zveryev reflects (automatic translation) on aspects of his talk at State of the Map Southeast Europe, “OpenStreetMapS”, on the consequences of regional variability in tagging approaches.
  • OpenStreetMap DRC and Mapbox held training (fr) (automatic translation) on Mapbox Atlas on 17 and 18 October. Mapbox Atlas is an offline mapping system for certain Mapbox mapping tools. Local OpenStreetMap members Claire Halleux and David Kapay participated, as did two employees of the Ministry of Health who will use the tools in the fight against epidemics.
  • The State of the Map Africa organising committee shares details about the conference in their blog. State of the Map Africa 2019 will take place in Abidjan and Grand-Bassam, Ivory Coast from the 22 to 24 November.
  • State of the Map LatAm 2019 is also approaching. Latin America’s main OSM event will take place from 14 to 16 November 2019 in Encarnación, Paraguay. Further details can be found in the OSM wiki (es) (automatic translation).

Humanitarian OSM

  • Russell Deffner announced this year’s GeoWeek, taking place during the week of 11 to 16 November 2019. With partners such as the Climate Centre, a reference centre of the International Federation of Red Cross and Red Crescent Societies (IFRC), the events will focus on mapping and validation tasks related to climate change and extreme weather events.
  • Ramani Huria, a community-based mapping project in Dar es Salaam, Tanzania that creates highly accurate maps of the most flood-prone areas of the city, features in Nesta’s Collective Intelligence Design Playbook as an example of using collective intelligence to take action on climate change and its impacts.
  • Nicholas Marchio highlights in an article on phys.org that the creation of the Million Neighborhoods map “is a significant step toward locating where critical urban services are needed most”.

Education

  • At DINAcon, the conference for digital sustainability, the DINAcon Awards honoured five outstanding projects on Friday, 18 October in Berne. OpenSchoolMaps won in the category “Best Education Project”; the project’s feedback mechanism was evidently a decisive point for the jury. OpenSchoolMaps is a small project to promote open maps and map data, including OpenStreetMap. It was founded by Prof. Stefan Keller, with help from computer science students of the university of applied sciences in Rapperswil.

Maps

  • [1] Dolly Andriatsiferana presented a self-made map of their hometown, Fianarantsoa, on Twitter.
  • The Transport & Development Policy Institute (ITDP) and the Brazilian Cyclists Union (UCB), two civil society organisations, have jointly created the CicloMapa platform to view cycling maps of Brazilian cities, with data mapped in OSM.

Open Data

Licences

  • Élisée Reclus tweets (de): “Deutsche Post DHL uses #OpenStreetMap in its location search, but attributes #HERE”. He could tell because of a trap street he had accidentally created himself. OSMF and FOSSGIS have now been called upon to point out our licence conditions to the company.

Programming

  • Tomas Kasparek wants to create a map of the Czech Republic with old OSM data. While he has no issues with data back to October 2007, he has problems dealing with data from before that date and asks for help.

Releases

  • The iD editor has been updated to v2.16.0. The release notes highlight support for objects detected in Mapillary images and the ability to track changes while editing. You can now use this version on the main map.
  • Joseph Eisenberg announced the release of v4.24.0 of the OpenStreetMap Carto stylesheet. Waterways, as well as river and canal areas, are now a bit darker. The new version also fixes the rendering of water body labels on nodes and deprecates the rendering of waterway=wadi. Further changes are mentioned in his blog post and can be found on GitHub.
  • Version 3.10 of QGIS is now available for download. This version shows the 3D length for an identified 3D linestring, brings several labelling, symbology and rendering improvements and adds new 3D features, including a 3D On-Screen Navigation. The full list of improvements can be found in the changelog below the list of sponsors.

Did you know …

  • … that Datawrapper offers an Enriched Map service using OSM in the background? This map, inserted in a news item by Radio-Canada, shows OSM attribution. We also reported on Datawrapper’s Map Locator service in weeklyOSM 430.
  • … the project resiliencymaps.org, which harnesses the richness of OpenStreetMap for earthquake risk modelling?
  • … Taginfo, the site which lets you explore which keys and tags are in use on OSM? A number of local versions exist too.

Other “geo” things

  • The 118-year-old listed Danish lighthouse Rubjerg Knude Fyr was threatening to slide into the sea. In an elaborate operation, the lighthouse was moved to safety on rails, attracting great public interest. The move was, of course, recorded in OSM.
  • OpenStreetMap Cameroun calls on Twitter for applications for the GeOsm project. The selected applicants will lead the deployment of national spatial data infrastructures based on OpenStreetMap, as Geocameroun does in Cameroon. The project is looking for representatives from all five subregions of the African continent.
  • Unexpected mobile roaming charges hit Russian scientists studying bird movements. SMS messages from a tracking device attached to an eagle were queued and dispatched en masse when the bird crossed the borders into Iran and Kazakhstan.
  • It’s not just OSMers who nit-pick about the difference between bars and pubs. El País’ English edition carries an interview with Amadeo Lázaro, owner of the Casa Amadeo “Los Caracoles” pub in Madrid. In his view, what separates a bar from a pub in Spain is that bars have chairs, while pubs only have stools.
  • Climbing Uluru, a monolith in the desert south of Alice Springs, Australia, is no longer allowed. The climb violated a site sacred to indigenous people, and breaching the new ban is illegal.

Upcoming Events

Where What When Country
Dhaka State of the Map Asia 2019 2019-11-01-2019-11-02 bangladesh
Brno State of the Map CZ+SK 2019 2019-11-02-2019-11-03 czech republic
Brno Brněnský listopadový Missing maps mapathon na konferenci OpenAlt 2019-11-02 czech republic
Toronto Toronto Mappy Hour 2019-11-04 canada
Grenoble Atelier Contribuer avec Mapillary 2019-11-04 france
London London Missing Maps Mapathon 2019-11-05 united kingdom
Stuttgart Stuttgarter Stammtisch 2019-11-06 germany
Helsinki Missing Maps Mapathon at Finnish Red Cross – Nov 2019 2019-11-07 finland
Bochum Mappertreffen 2019-11-07 germany
San José Civic Hack & Map Night 2019-11-07 united states
Montrouge Rencontre mensuelle des contributeurs de Montrouge et alentours 2019-11-07 france
Ulmer Alb Stammtisch Ulmer Alb 2019-11-07 germany
Dortmund Mappertreffen 2019-11-08 germany
Kameoka 京都!街歩き!マッピングパーティ:第14回 鍬山神社 2019-11-10 japan
Budapest OSM Hungary Meetup reboot 2019-11-11 hungary
Taipei OSM x Wikidata #10 2019-11-11 taiwan
Lyon Rencontre mensuelle pour tous 2019-11-12 france
Salt Lake City SLC Mappy Hour 2019-11-12 united states
Nitra Missing Maps Mapathon Nitra #4 2019-11-12 slovakia
Wellington FOSS4G SotM Oceania 2019 2019-11-12-2019-11-15 new zealand
Hamburg Hamburger Mappertreffen 2019-11-12 germany
Munich Münchner Stammtisch 2019-11-13 germany
Berlin 137. Berlin-Brandenburg Stammtisch 2019-11-14 germany
Nantes Réunion mensuelle 2019-11-14 france
Encarnación State of the Map Latam 2019 2019-11-14 paraguay
Niš Missing Maps Mapathon Niš #1 2019-11-16 serbia
Hanover Stammtisch 2019-11-16 germany
Cologne Bonn Airport Bonner Stammtisch 2019-11-19 germany
Reading Reading Missing Maps Mapathon 2019-11-19 united kingdom
Lüneburg Lüneburger Mappertreffen 2019-11-19 germany
Prešov Missing Maps Mapathon Prešov #4 2019-11-21 slovakia
Grand-Bassam State of the Map Africa 2019 2019-11-22-2019-11-24 ivory coast
Cape Town State of the Map 2020 2020-07-03-2020-07-05 south africa

Note: If you would like to see your event here, please add it to the calendar. Only data which is in the calendar will appear in weeklyOSM. Please check your event in our public calendar preview and correct it where appropriate.

This weeklyOSM was produced by Elizabete, Jorieke V, Nakaner, NunoMASAzevedo, PierZen, Polyglot, Rogehm, SK53, Softgrow, SunCobalt, TheSwavu, YoViajo, derFred.

Production Excellence: September 2019

14:04, Tuesday, 29 October 2019 UTC

How did we do in our pursuit of operational excellence last month? Read on to find out!

📊 Month in numbers
  • 5 documented incidents. [1]
  • 22 new errors reported. [2]
  • 31 error reports closed. [3]
  • 213 currently open Wikimedia-prod-error reports in total. [4]

There were five recorded incidents last month, equal to the median for this and last year. – Explore this data.

To read more about these incidents, their investigations, and pending actionables, check Incident documentation § 2019.


*️⃣ A Tale of Three Great Upgrades

This month saw three major upgrades across the MediaWiki stack.

Migrate from HHVM to PHP 7.2

The client-side switch to toggle between HHVM and PHP 7.2 saw its final push — from the 50% it was at previously, to 100% of page view sessions on 17 September. The switch further solidified on 24 September when static MediaWiki traffic followed suit (e.g. API and ResourceLoader). Thanks @jijiki and @Joe for the final push. – More details at T219150 and T176370.

Drop support for IE6 and IE7

The RFC to discontinue basic compatibility for the IE6 and IE7 browsers entered Last Call on 18 September. It was approved on 2 Oct (T232563). Thanks to @Volker_E for leading the sprint to optimise our CSS payloads by removing now-redundant style rules for IE6-7 compat. – More at T234582.

Transition from PHPUnit 4/6 to PHPUnit 8

With HHVM behind us, our Composer configuration no longer needs to be compatible with a “PHP 5.6 like” run-time. Support for the real PHP 5.6 was dropped over 2 years ago, and the HHVM engine supports PHP 7 features. But, the HHVM engine identifies as “PHP 5.6.999-hhvm”. As such, Composer refused to install PHPUnit 6 (which requires PHP 7.0+). Instead, Composer could only install PHPUnit 4 under HHVM (as for PHP 5.6). Our unit tests have had to remain compatible with both PHPUnit 4 and PHPUnit 6 simultaneously.

Now that we’re fully on PHP 7.2+, our Composer configuration effectively drops PHP 5.6, 7.0 and 7.1 all at once. This means we no longer need to run PHPUnit tests against multiple PHPUnit versions; PHPUnit 6 alone suffices. It also unlocks the upgrade to PHPUnit 8 (which requires PHP 7.2+). Thanks @MaxSem, @Jdforrester-WMF and @Daimona for leading this transition. – T192167
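To illustrate the constraint mechanics described above, here is a minimal, hypothetical composer.json fragment (a sketch, not the actual MediaWiki configuration) showing how a platform pin makes a modern PHPUnit requirement resolvable regardless of what the running engine reports about itself:

```json
{
    "require-dev": {
        "phpunit/phpunit": "^8.5"
    },
    "config": {
        "platform": {
            "php": "7.2.0"
        }
    }
}
```

With config.platform.php set, Composer resolves dependencies as if PHP 7.2.0 were the runtime. Under HHVM, which identified itself as “PHP 5.6.999-hhvm”, the same PHPUnit constraint would otherwise have been rejected, which is exactly why the codebase was stuck on PHPUnit 4 for so long.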


📉 Outstanding reports

Take a look at the workboard and look for tasks that might need your help. The workboard lists error reports, grouped by the month in which they were first observed.

https://phabricator.wikimedia.org/tag/wikimedia-production-error/

Or help someone that’s already started with their patch:
Open prod-error tasks with a Patch-For-Review

Breakdown of recent months (past two weeks not included):

  • February: 1 report was closed. (1 / 5 reports left).
  • March: 4 / 10 reports left (unchanged).
  • April: 8 / 14 reports left (unchanged). ⚠️
  • May: The last 4 reports were resolved. Done! ❇️
  • June: 9 of 11 reports left (unchanged). ⚠️
  • July: 4 reports were fixed! (13 / 18 reports left).
  • August: 6 reports were fixed! (8 / 14 reports left).
  • September: 12 new reports survived the month of September.

🎉 Thanks!

Thank you to everyone else who helped by reporting, investigating, or resolving problems in Wikimedia production.

Until next time,

– Timo Tijhof


📖 “I'm not crazy about reality, but it's still the only place to get a decent meal.” – Groucho Marx

Footnotes:

[1] Incidents. –
wikitech.wikimedia.org/wiki/Special:PrefixIndex?prefix=Incident…

[2] Tasks created. –
phabricator.wikimedia.org/maniphest/query…

[3] Tasks closed. –
phabricator.wikimedia.org/maniphest/query…

[4] Open tasks. –
phabricator.wikimedia.org/maniphest/query…

New MediaWiki blog

05:22, Tuesday, 29 October 2019 UTC

I’m happy to announce a brand new blog dedicated to MediaWiki and all things enterprise wiki related.

Half a year ago I launched Professional Wiki together with Karsten Hoffmeyer. Professional Wiki is, as the name suggests, a company providing professional wiki services. We help companies create and manage wikis, we provide training and support and we offer fully managed wiki hosting.

Today we published our new blog, featuring a first post on installing MediaWiki extensions with Composer. The blog will contain both wiki news and longer-lived articles, such as the one about Composer.
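As a sketch of what a Composer-based extension install typically looks like (the extension name and version below are illustrative examples, not taken from the post), MediaWiki merges a composer.local.json file from its root directory via the composer-merge-plugin:

```json
{
    "require": {
        "mediawiki/semantic-media-wiki": "~3.1"
    }
}
```

Running composer update --no-dev in the MediaWiki installation directory then downloads the extension and its dependencies; extensions installed this way may still need to be enabled in LocalSettings.php.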

In recent years I was hesitant to post MediaWiki-specific content on this blog (Entropy Wins) because it had evolved to focus on software design. The new Professional Wiki blog solves this problem, so you can expect more MediaWiki-related posts from me again.

The post New MediaWiki blog appeared first on Entropy Wins.

Tech News issue #44, 2019 (October 28, 2019)

00:00, Monday, 28 October 2019 UTC
2019, week 44 (Monday 28 October 2019)

weeklyOSM 483

20:43, Sunday, 27 October 2019 UTC

15/10/2019-21/10/2019

Mapping

  • On regio-osm.de there is now a house number evaluator for Kosovo.
  • Florian Lohoff wrote in his blog about the issues arising when access=private is used on service roads. Usually this choice means no routers will use these highways; more nuanced access options should be chosen instead.
  • Adam Franco created two videos about working with multipolygons in JOSM. The first one is about creating multipolygons for adjoining areas with shared ways; please note that this type of mapping is discouraged in many parts of the world. In his second video he explains how to identify and fix broken multipolygons.
  • Valor Naram’s proposal to deprecate the tag contact:phone= in favour of phone= is up for voting until 5 November 2019.
  • Voting also started for Vɑdɪm’s proposal to tag outdoor locations designated for sunbathing. The poll ends on 5 November 2019.
  • OpenStreetMap Croatia started their own instance of Tasking Manager (automatic translation) to better coordinate mapping and imagery recently made available from state sources. The first project is naming all the bays on the coast.

Community

  • [1] Pascal Neis added some “leaderboards” to his OSMstats. Exactly 19 users contributed to OSM every single day over the last year, as reported under “Activity”. It is also worthwhile to have a look at the other reports: Map changes, Discussions, Traces and Notes.
  • Heather Leson points to a survey concerning people in FLO (Free/Libre/Open) communities.
  • RebeccaF provides, in an OSM diary entry, a long list of potential action points for increasing diversity within OSM, HOT and State of the Map. These points came out of the session at SotM Heidelberg.
  • OSM plays a key role in a dynamic earthquake risk model that processes updates on buildings every 60 seconds.

Imports

  • Data from the Estonian cadastre has been imported into OSM since 2013. OSM user SviMik has now developed a convenient and simple tool to maintain updates. He also created a validator that tells you where to add new and missing roads in Estonia. An overview of statistics and all tools is available on his personal site (ru); there is still scope for help from the community.

Events

  • Betaslb wrote (automatic translation) in her user diary about her experience at the SotM 2019, including some inner thoughts from someone who is not an expert in OSM, but a beginner.
  • Registration for SotM Asia 2019 is open. SotM Asia takes place in Dhaka, Bangladesh, on 1 and 2 November 2019.

Humanitarian OSM

  • Riley Champine, graphics editor at National Geographic, tweeted a link to slides of his presentation at NAICS 2019 on Mapping Refugees with Open Data. A bonus is the link to a Google Spreadsheet of a range of Overpass queries he used.

Education

  • Russian user Pavel Gavrilov wrote (ru) a tutorial on OSM which can be useful for beginners. He talks about the project basis.

Maps

  • Russian user Nikolay Petrov launched an online project OpenRecycleMap (ru). This is a map which helps in finding recycling collection points. The service also allows users to add new waste collection points. Data is taken from OSM and then added to it.
  • maptiler.com published a blog post about the new offering of OSM in WGS84, French Lambert and Dutch Rijksdriehoekstelsel map projections. The article also provides a brief overview of map projections and links to Tobias Jung’s interesting comparison of selected map projections.
  • The Ministry of Industry and Trade of the Russian Federation has developed the “Geoinformation system of industrial parks, technoparks and clusters of the Russian Federation”, which uses OSM as a basemap.

switch2OSM

  • The Russian portal E1 reports (automatic translation) that new smart traffic lights will start operating in Yekaterinburg in November 2019. They will give priority to public transport. On one of the photographs you can see that the traffic control system uses OSM as a map.

Open Data

Licences

  • The discussion about Facebook and missing attribution is still going on. On the one hand Christoph emphasises that the discussion and development of a new community guideline does not happen in the open. On the other hand, Nuno Caldeira details the licence infringement and supplies some detailed examples.

Software

  • Jody Garnett asks via Twitter whether members of fossgis_eV would be willing to support GeoServer with OGC recertification measures.

Programming

  • Shay Strong of EagleView wrote, on the KD Nuggets site, about an approach to using machine learning to identify unmapped buildings on OpenStreetMap. Update: Her model is bootstrapped with known buildings on OSM.
  • Jochen Topf reported about his experiences with the Hetzner cloud, into which he has moved data.openstreetmap.de (formerly openstreetmapdata.com).
  • Andy Allan wrote on his blog about progress being made in refactoring the core code of OpenStreetMap to allow supporting multiple API versions at the same time.

Releases

  • Version 3.0.0 of PostGIS has been released.
  • Version 2.3.0 of the “Sight Safari” mobile application has been released (ru). Now you can share routes with other users right from the app and also create intermediate points on the route.

Did you know …

  • … of the website of Russian user AMDmi3? You can find various OSM renders there.

Other “geo” things

  • The Guardian has an amusing article where a journalist confronts teenagers with outmoded technologies of the 1980s, including rotary dial telephones, a Sony Walkman, and a transistor radio. One of the examples is a printed street atlas of London (the “A-Z”).
  • We reported last week on the demise of paper maps from Geoscience Australia. Now The Guardian has an in-depth reflection on this announcement.
  • Jens Jackowski reported (sv) (automatic translation) from Sweden and says: “interesting, now the Swedish land surveying office is already calling on citizens to improve the official Swedish maps and, for example, to report missing hiking trails.” We say: “Then we’d better work with OpenStreetMap right now.”
  • James Macfarlane believes that the increased use of navigation apps like Waze, Apple Maps, and Google Maps is multiplying chaos and making it harder to manage traffic.
  • An article in futurezone.at shows (de) (automatic translation) that map errors do not always have to be seen negatively. Due to an error in Google Maps a drug gang was caught.
  • El Pais noted (es) (automatic translation) that military maps of Spain, compiled during World War II by the Americans and British, have finally been de-classified.
  • Google announced that in the future it will be possible to report traffic incidents such as accidents and breakdowns on Google Maps. Google bought Waze six years ago and adding these features to Google Maps raises questions about Waze’s future.
  • A paper in Ecography demonstrates how textual analysis of scientific papers can be used to create maps of where studies of various insect pollinators (typically bees) have been carried out.

Upcoming Events

Where What When Country
Prizren State of the Map Southeast Europe 2019-10-25-2019-10-27 kosovo
Rapperswil 11. Micro Mapping Party Rapperswil (OpenStreetMap Mapathon) 2019-10-25 switzerland
Yosano-chō 京都!街歩き!マッピングパーティ:第13回 ちりめん街道 2019-10-27 japan
Bremen Bremer Mappertreffen 2019-10-28 germany
Zurich Missing Maps Mapathon Zürich 2019-10-30 switzerland
Düsseldorf Stammtisch 2019-10-30 germany
Ulmer Alb Stammtisch Ulmer Alb 2019-10-31 germany
Dhaka State of the Map Asia 2019 2019-11-01-2019-11-02 bangladesh
Brno State of the Map CZ+SK 2019 2019-11-02-2019-11-03 czech republic
London London Missing Maps Mapathon 2019-11-05 united kingdom
Stuttgart Stuttgarter Stammtisch 2019-11-06 germany
Bochum Mappertreffen 2019-11-07 germany
San José Civic Hack & Map Night 2019-11-07 united states
Montrouge Rencontre mensuelle des contributeurs de Montrouge et alentours 2019-11-07 france
Dortmund Mappertreffen 2019-11-08 germany
Budapest OSM Hungary Meetup reboot 2019-11-11 hungary
Taipei OSM x Wikidata #10 2019-11-11 taiwan
Lyon Rencontre mensuelle pour tous 2019-11-12 france
Salt Lake City SLC Mappy Hour 2019-11-12 united states
Nitra Missing Maps Mapathon Nitra #4 2019-11-12 slovakia
Wellington FOSS4G SotM Oceania 2019 2019-11-12-2019-11-15 new zealand
Hamburg Hamburger Mappertreffen 2019-11-12 germany
Munich Münchner Stammtisch 2019-11-13 germany
Prešov Missing Maps Mapathon Prešov #4 2019-11-13 slovakia
Berlin 137. Berlin-Brandenburg Stammtisch 2019-11-14 germany
Nantes Réunion mensuelle 2019-11-14 france
Encarnación State of the Map Latam 2019 2019-11-14 paraguay
Grand-Bassam State of the Map Africa 2019 2019-11-22-2019-11-24 ivory coast
Cape Town State of the Map 2020 2020-07-03-2020-07-05 south africa

Note: If you would like to see your event here, please add it to the calendar. Only data which is in the calendar will appear in weeklyOSM. Please check your event in our public calendar preview and correct it where appropriate.

This weeklyOSM was produced by Elizabete, Nakaner, Polyglot, Rogehm, SK53, Silka123, SunCobalt, TheSwavu, YoViajo, derFred, geologist, jinalfoflia.

Older blog entries