Wikimedia Indonesia offers a unique internship program for students who are passionate about free knowledge and open collaboration. Interns can apply to various thematic areas such as Galleries, Libraries, Archives, and Museums (GLAM), Wikidata Indonesia, Education, and Communication. Among these, the Wikidata Indonesia internship program focuses on structured data and linked knowledge in Wikidata, and how people can use it. Interns in this program dive deep into the Wikimedia ecosystem, learning how to contribute across different Wiki projects while using Wikidata as the central pivot. Their journey culminates in a final written project that synthesizes knowledge from Wikidata and related platforms. Along the way, they gain hands-on experience collaborating with Wikimedia communities across Indonesia, including participating in communal events, organizing outreach programs, and conducting research together. In this article, we will explore the experiences of two interns in the Wikidata Indonesia program: Ulya, a final-year Information Technology undergraduate student at Universitas Andalas in Padang, and Nita, a final-year Mathematics undergraduate student at the Institut Teknologi Bandung (ITB) in Bandung.

Why Wikimedia Indonesia?

For both Nita and Ulya, choosing Wikimedia Indonesia as their internship destination was a deliberate decision shaped by curiosity and alignment with their academic interests. Wikimedia Indonesia’s commitment to open knowledge and structured data presented an ideal learning ground for aspiring data professionals. Ulya, a student of Information Technology, was drawn to the organization by her desire to understand how large-scale platforms like Wikipedia manage and present knowledge to the public. With a strong interest in data and technology, she saw the internship not only as a chance to gain work experience but also as an opportunity to broaden her understanding of knowledge infrastructure and the tools behind it.

Meanwhile, Nita, a Mathematics student, was particularly excited by the Wikidata project due to its close connection with data science and analysis—fields she’s passionate about. The use of open data at scale and exposure to tools like SPARQL and Python appealed to her technical curiosity. She also felt inspired by Wikimedia’s mission to provide free and equitable access to knowledge. For Nita, the chance to contribute to such a mission while sharpening her skills in handling structured, open data made Wikimedia Indonesia a compelling and meaningful choice for her internship.

Expectations and first impressions

Ulya presenting her final project, CC BY-SA 4.0

Before beginning their internships, both Nita and Ulya held high hopes for what they would learn and experience at Wikimedia Indonesia. Nita anticipated being introduced to the broader Wikimedia ecosystem, not just from a technical standpoint, but also in terms of its community-driven culture. She hoped to deepen her understanding of how volunteers and staff collaborate across various Wiki projects. Additionally, she looked forward to enhancing her technical abilities—particularly in writing SPARQL queries and working with structured data—skills that are essential for a future career in data science. While she did find the internship intellectually fulfilling, Nita later realized that the workload distribution was uneven: the early phase felt too light, while the final weeks became unexpectedly intense, leaving her feeling a bit overwhelmed.

Ulya, on the other hand, expected an open and collaborative environment where learning and experimentation were encouraged. She was eager to not only build on her existing skills in data analysis and communication but also to understand how Wikimedia runs its open knowledge projects behind the scenes. What stood out to her was the prospect of working in a professional yet inclusive setting—where interns weren’t just assigned tasks, but were also welcomed as contributors whose input and curiosity were valued. Though enthusiastic, she initially underestimated the complexity of Wikidata’s structure and struggled to grasp the principles of linked data and SPARQL queries. Still, these early challenges became stepping stones in her learning process and ultimately aligned well with her expectations of growth.

Day-to-day activities and key projects

Throughout their internships, both Nita and Ulya immersed themselves in a range of activities centered around Wikidata and the broader Wikimedia projects. One of the recurring tasks they took part in was WikiLatih Wikidata, Wikimedia Indonesia’s public workshop session designed to introduce new contributors to Wikidata. Nita was responsible for leading one of these sessions, where she taught participants how to contribute data and use Wikidata tools. Ulya, meanwhile, assisted in several WikiLatih events, supporting both mentors and participants. These sessions helped them strengthen their understanding of Wikidata’s structure while also building confidence in public communication and knowledge-sharing.

Beyond public events, both interns worked on individual tasks that involved data entry, query writing with SPARQL, and supporting the planning of events such as Datathons. Ulya took on a longer-term project mapping and visualizing programming languages in Wikidata—an effort that required her to classify languages, analyze trends, and present insights visually. Nita focused more on understanding data referencing, editing entity entries, and ultimately building a project that explored the differences between “actors or actresses” and “celebrities” in Indonesia. These assignments not only challenged their technical capabilities but also offered real-world contexts to apply data science principles in an open knowledge environment.

Challenges and how they overcame them

Like most meaningful learning experiences, Nita and Ulya’s internships were not without challenges—particularly when dealing with the technical and conceptual demands of Wikidata. For Ulya, one of the main difficulties was understanding linked data structures and SPARQL, the query language used to extract and manipulate data in Wikidata. Coming from a background in general data processing, she found the logic of SPARQL and the RDF-based structure of Wikidata initially confusing. However, she approached this challenge methodically: reading official documentation, experimenting with simple queries, and asking her mentor for guidance. Gradually, she was able to build complex queries and even use them to visualize programming language data—a milestone that marked her growth in both skill and confidence.

Nita, too, faced hurdles—particularly in distinguishing valid and appropriate references when adding or editing data. The openness of Wikidata meant that contributors had to be discerning and meticulous about the sources they used, which was not always easy for newcomers. She often second-guessed herself during the early stages of her contribution, unsure whether her edits met community standards. Fortunately, through the materials and discussions provided in the WikiLatih Daring (online) sessions, she became more comfortable navigating Wikidata’s reference guidelines. Another initial challenge was overcoming her hesitation to ask questions, a mindset she eventually outgrew as she realized that seeking clarification accelerated her learning. Both interns also had to adapt to the dynamics of remote collaboration—learning how to communicate progress clearly, provide updates, and coordinate effectively in a distributed team setting.

Highlights and Takeaways

For both Nita and Ulya, their time at Wikimedia Indonesia left a lasting impression—not only in terms of technical growth, but also in personal and professional development. One of the most meaningful takeaways for Ulya was her newfound confidence in using SPARQL. What began as an intimidating language eventually became a powerful tool in her hands, enabling her to extract and visualize meaningful patterns from structured data. She also gained a deeper understanding of the principles behind linked data, something she had only encountered in theory before. Beyond technical skills, participating as a facilitator in WikiLatih sessions helped her develop the ability to communicate technical concepts clearly, especially to newcomers—an experience that fostered both empathy and leadership.

Nita’s most memorable lessons were rooted in adapting to a professional work rhythm and learning how to thrive in a collaborative setting. She recounted how working five-hour days in the office challenged her to stay disciplined, especially during long stretches of quiet focus. This routine helped her build stamina, time management, and a sense of responsibility. She also learned the importance of professional etiquette, such as notifying the team about absences—small but significant habits that shape workplace culture. Perhaps most importantly, Nita discovered the value of asking questions early and often, a mindset shift that made her more effective and confident. For both interns, the internship was more than a checklist of tasks—it was a space to experiment, reflect, and grow.

The final project: A culmination of learning

Functional Programming Languages in Wikidata, by Ulya, CC BY-SA 4.0

At the end of their internships, both Nita and Ulya undertook final projects that showcased the knowledge and skills they had developed throughout their time with Wikimedia Indonesia. Ulya focused on mapping and visualizing programming languages data stored in Wikidata, exploring how these languages are categorized (e.g., general-purpose vs. special-purpose), their release years, and their use cases. She used SPARQL to query the data, created charts to illustrate trends and examined inconsistencies in the dataset. The project not only honed her technical skills but also deepened her appreciation for the complexities of open data. From navigating incomplete or inconsistent entries to crafting precise queries, she learned the importance of critical thinking and contextual understanding in data analysis.
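Queries of the kind Ulya describes can be run against the Wikidata Query Service. The following is an illustrative sketch only, not her actual query: it lists items that are instances of (a subclass of) “programming language” together with their inception year, using Wikidata’s standard prefixes and the real identifiers Q9143 (programming language), P31 (instance of), P279 (subclass of), and P571 (inception).

```sparql
# Illustrative sketch of a query like those described above (an assumption,
# not the intern's actual query). Q9143 = programming language,
# P31 = instance of, P279 = subclass of, P571 = inception.
SELECT ?lang ?langLabel ?inception WHERE {
  ?lang wdt:P31/wdt:P279* wd:Q9143 .          # any (subclass of) programming language
  OPTIONAL { ?lang wdt:P571 ?inception . }    # release/inception date, when recorded
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en,id". }
}
ORDER BY ?inception
LIMIT 100
```

A query like this can be pasted into query.wikidata.org, and its results exported for charting, which is the kind of visualization work the project involved.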

Understanding Indonesian Celebrities, by Nita, CC BY-SA 4.0.

Nita’s final assignment took a more narrative and cultural direction. She explored the distinction between the career paths of actors or actresses and of celebrities in Indonesia, drawing from Wikidata and various references to craft both a written essay and a visual poster. Her project began by analyzing definitions, comparing career trajectories, and categorizing individuals based on occupation and age of first acting contract. She even attempted to forecast the future number of actors in Indonesia by examining trends in media and entertainment. One of her biggest challenges was selecting a topic that was both personally engaging and socially relevant, a process that involved lengthy brainstorming and a timely suggestion from her mentor. Through this project, Nita not only refined her writing and data interpretation skills, but also learned how to present structured information in a way that speaks to broader audiences.

Reflections and recommendations

Looking back on their internship experiences, both Nita and Ulya expressed a deep sense of gratitude and fulfillment. The internship not only enriched their technical skills but also gave them a clearer picture of how open knowledge platforms like Wikidata function in real-world contexts. Ulya appreciated how the program fostered a supportive and inclusive learning environment, encouraging interns to explore unfamiliar tools and contribute meaningfully. For her, the chance to work closely with linked data and the Wikimedia community was both challenging and empowering. Nita, too, valued the balance between autonomy and guidance, where interns were trusted with impactful work yet supported at every step. She especially highlighted how the internship helped her transition from a student mindset to a more professional rhythm, learning soft skills such as time management, workplace etiquette, and team communication.

Still, their experiences also revealed a few subtle mismatches between expectations and reality—offering insights for future improvement. Nita found the early phase of her internship too light and the final stretch unexpectedly intense, suggesting that a more evenly distributed workload could help interns maintain a steady learning pace. Ulya, meanwhile, realized that she had underestimated the complexity of Wikidata’s structure and SPARQL, which initially slowed her progress. These moments of friction, however, became valuable learning points—emphasizing the importance of adaptability and proactive communication. For future interns, both Nita and Ulya recommend staying curious, asking questions early, and fully embracing the collaborative spirit of Wikimedia. For Wikimedia Indonesia, they suggest ensuring clearer onboarding around workload expectations and possibly incorporating cross-divisional intern projects to build more interaction and peer learning. Overall, their journeys reflect the power of internships not just as training grounds, but as spaces of transformation.

Final note

As Wikimedia Indonesia continues to foster a culture of open knowledge and collaborative learning, its internship program remains a vital entry point for young talents eager to make an impact. Through their experiences, Nita and Ulya exemplify how immersive, hands-on engagement with data and community can shape not only professional capabilities but also a deeper appreciation for knowledge equity. Their stories highlight the importance of intentional mentorship, reflective practice, and a willingness to grow through challenge. For future cohorts, their journeys serve as both inspiration and guide—a reminder that internships are not just about completing tasks, but about discovering one’s place in a broader movement for free and open knowledge.

For future references or inquiries about internship opportunities in the Wikidata Indonesia program, please contact the Data and Technology division at Wikimedia Indonesia via email at datateknologi@wikimedia.or.id.

This Month in GLAM: June 2025

Saturday, 12 July 2025 15:40 UTC

Wikipedia:Administrators' newsletter/2025/8

Saturday, 12 July 2025 14:04 UTC

News and updates for administrators from the past month (July 2025).


CheckUser changes

removed ST47




PEN America is a non-profit organization that stands at the intersection of literature and human rights to protect free expression in the United States and worldwide. Through its digital safety programming, PEN America equips communities who require a public presence online to do their work—including journalists, writers, and researchers—with tools and strategies to protect themselves and one another.

Multiple studies have shown that when ill-intentioned actors are able to take advantage of digital vulnerabilities to intimidate and harass users, critical voices are forced off of platforms and out of public discourse. Online abuse disproportionately affects individuals from marginalized backgrounds and individuals working on potentially controversial topics, including Wikimedians. As the digital landscape has evolved, online privacy and security have become increasingly vital safeguards that enable users like Wikimedians to continue to participate in digital spaces.

We respect and appreciate the countless Wikimedia volunteers around the world who give their time to produce and share reliable information in hundreds of languages. While Wikimedians share a commitment to the principles of openness and transparency online, it’s also important to understand the risks of being active online and steps you can take to strengthen your digital safety.

Here are six tips for bolstering your security and privacy online as a Wikimedian: 

  1. Make a private list of your top ten most sensitive accounts, which may include your email accounts, social media accounts, cloud storage, communication platforms like WhatsApp and Signal, and/or bank accounts.
  2. For each of these accounts, set up a long, unique password of more than 16 characters. Consider using a password manager, such as 1Password, Bitwarden, or Dashlane, to help you generate strong passwords and keep track of them. Think of your password as the main lock on your front door.
  3. For each of these accounts, also set up two-factor authentication (i.e., 2FA). You can think of 2FA as the secondary lock or deadbolt on your door: it provides a fail-safe if your password is compromised. Set up 2FA using an authentication app (such as Authy, Duo, or Google Authenticator) or a security key (such as a YubiKey), rather than using your cell phone number to receive confirmation codes, in order to avoid SIM-jacking.
  4. Consider how your Wikipedia username fits into your broader online presence. Online spaces are increasingly interconnected, and using the same or similar usernames across platforms (such as Instagram, X, Signal, or email) can make it easier for someone to piece together your identity or follow you across the web. Think about whether your Wikimedia username contains personal details, like your real name or birthdate, or overlaps with usernames you use elsewhere. Compartmentalizing your online identities is an important digital safety practice. Learn more in this blog.
  5. Take some time to tighten the safety and privacy settings on all of the social media platforms you use. This could include making your accounts private, narrowing down who can message you or interact with your content, and restricting what data platforms can collect about you and your online habits. For platform-by-platform guidance, check out PEN America’s Digital Safety Checklist and the New York Times’ Self-Doxxing Guide.

For more information on preparing for, navigating, and coping with online abuse and bolstering your digital safety, check out our resources and those of our trusted partners below: 

And be sure to also check out related blogs from the Wikimedia Foundation: 


In early June, I was privileged to be offered, among fifteen other fellows, the African Wikipedian Alliance (AWA) Inclusion and Climate Justice Fellowship 2025 under Code for Africa (CfA), in partnership with the Norwegian Embassy. The fellowship is focused on closing content gaps and amplifying African voices on Wikipedia, Wikidata, and Wikimedia Commons by creating or improving topics relating to gender equity, climate awareness, and other sustainability efforts in Namibia, Mozambique, and Zimbabwe.



The fellowship began with three days of onboarding sessions, from June 18 to June 20, 2025. The sessions were all interactive and provided useful resources for a good start and a strong finish, with emphasis on our duties and expected monthly deliverables as fellows. The official virtual launch was held on June 25, 2025, with over 100 participants in attendance.

I proceeded to create an event page, my work plan, and a dashboard to track all contributions during the project period. I recruited 13 participants who would work closely with me to achieve the project goals. Conducting a needs assessment to ascertain the editing history, strengths, and possible needs of my participants helped inform my decisions in organizing training sessions, edit-a-thons, and office hours. A number of the recruited participants were experienced Wikimedia editors, with just a few new editors. So far, I have hosted three training sessions between June 27 and June 30, covering the basics of Wikipedia and Wikidata, core policies and guidelines, and how to contribute effectively. The monthly edit-a-thon is ongoing, and the contributions from editors so far have been encouraging.

One month down, five more to go! I’m excited to see how much we will achieve together in shaping the narrative around climate justice, gender equity, and LGBTQ+ rights in Southern Africa.

The EduWiki Conference took place from May 28 to June 1, 2025, in Bogotá, Colombia, South America. The city’s weather was quite comfortable and the human atmosphere was warm. Participants came from different parts of the world: Africa, Asia, North America, Europe, and of course Latin America. About thirty people came from different parts of Colombia, the country that hosted the event.

The experiences at the event explored the intersection between the Wikimedia movement and education at all levels – from formal education to community and indigenous cases, as well as informal and activist perspectives. Several connections between AI, education and Wikimedia projects were presented.

I would like to highlight the conversation about low-connectivity areas in Colombia and what they represent for teachers and Wikimedia education activists promoting the Wikimedia environment in those areas. People from Tanzania, Nigeria and Ghana shared similar challenges, and we created an opportunity to learn from each other about possible tools for addressing these issues.

My favorite workshop was the one where we had the opportunity to build with LEGO blocks in different teams. The activity made us think about the challenges we are facing in the AI era, Generation Alpha, the Wikimedia environment, and the connections between participants. We met in groups of five to six participants, all from Colombia. First, each of us built an individual structure exploring the opportunities or risks of using artificial intelligence and Wikipedia with students or communities. After the individual pieces were made, we merged them all together, creating a narrative convincing enough to share with participants in a final panel. In the LEGO blocks session, we built “The Gates of Hell” (Les Portes de l’Enfer), after the artistic piece by Rodin. It represented the challenges we face with Wikipedia, education, and artificial intelligence.

Later, in further meetings, we realized that these “gates of hell” could be faced if we all work together as a large network. The magic happened when we discovered that our colleagues in Africa face the same educational problems as we do in Latin America: access, representation, redistribution, and more. Building with blocks, facing similar challenges, and dancing as part of knowledge sharing were all wonderful opportunities to learn together.

We will continue these discussions at Wikimania 2025 to connect our communities with opportunities through the tools and solutions we can build together. Thus, the EduWiki Conference allowed us to connect, even with the disconnected.

Thanks to Wikimedia Colombia, the EduWiki User Group, and the Wikimedia Foundation for the conference.

CC BY-SA 4.0 by Wikihacedor


Image collage for the May 2025 issue of ‘Don’t Blink.’ Image by the Wikimedia Foundation, CC BY-SA 4.0, via Wikimedia Commons.

Welcome to “Don’t Blink”! Every month we share developments from around the world that shape people’s ability to participate in the free knowledge movement. In case you blinked last month, here are the most important public policy advocacy topics that have kept the Wikimedia Foundation busy.

The Global Advocacy team works to advocate laws and government policies that protect the volunteer community-led Wikimedia model, Wikimedia’s people, and the Wikimedia movement’s core values. To learn more about us and the work we do with the rest of the Foundation, visit our Meta-Wiki webpage, follow us on LinkedIn or on X (formerly Twitter), and sign up to our quarterly newsletter or Wikimedia public policy mailing list.

________

Discussing why freedom of expression is vital for a healthy information ecosystem
[Watch our panel discussion during the UN World Press Freedom Day celebration]

World Press Freedom Day marks the worldwide importance of freedom of the press and of free expression as fundamental rights enshrined in the Universal Declaration of Human Rights. During the UN Educational, Scientific and Cultural Organization’s (UNESCO) celebration for World Press Freedom Day, members of the Global Advocacy team joined several conversations to discuss the role that Wikimedia projects play in a technological landscape rapidly changing with the development of AI.

Rebecca MacKinnon (Vice President of Global Advocacy) spoke at a high-level kick-off session on “Information as a Public Good in the Age of AI.” During that conversation, Rebecca explained that we can only ensure information integrity if public policies protect freedom of expression and privacy—online as well as offline. Even in times of new technologies, fundamental rights must be promoted and protected. She also talked about the impact of AI on Wikipedia, from overwhelming scraping bot traffic to the rise of AI-generated disinformation. Rebecca made the point that to ensure information integrity, it is more important now than ever to protect the rights of people who produce trustworthy information: journalists, researchers, and members of community-led open-source platforms like Wikipedia.  

Speaking at a side event at World Press Freedom Day about Strategic Lawsuits Against Public Participation (SLAPPs), Rebecca shared more about the threats platforms like Wikipedia face, and explained how powerful people often use expensive lawsuits to force smaller entities and platforms to suppress factual information. Using Wikipedia as an example, she highlighted recent SLAPP efforts against the Foundation, including a case in Portugal where the person bringing the lawsuit sought to force the Foundation to turn over personal data about Wikimedians who worked on the article in dispute. To combat abuses like this, she suggested UN Member States adopt the privacy protections for anonymous public participation in the Council of Europe’s Recommendation on countering SLAPPs.

Watch the panel discussion where Rebecca participated on UNESCO’s YouTube channel. You can read more about the Foundation’s litigation efforts, including SLAPP lawsuits, on Diff. 

Collaborating with UNESCO to promote public interest AI
[Read our blog post on Medium to find out about the AI for Good Summit 2025, and learn more about the Foundation’s AI Strategy on our website and on Meta-Wiki]

Amalia Toledo (Lead Policy Specialist for Latin America and the Caribbean) spoke about AI and information integrity at another World Press Freedom Day celebration side event, which was organized by UNESCO and the Organisation for Economic Co-operation and Development (OECD). Amalia discussed why the Foundation’s approach to AI can serve as an example of how this technology can be developed to support the public good. One crucial aspect of this approach, she highlighted, is a focus on keeping people at the center of technological development. The Foundation does so by creating tools that help the people who volunteer their time to curate and expand the Wikimedia projects do their work more efficiently. Amalia also shared how our commitments to transparency, community governance, multilingualism, and equity ensure we remain dedicated to the public interest while adapting any new technologies.

Later in the month, UNESCO held a dialogue on the topic of AI governance with representatives from the governments of Malaysia and Indonesia. The dialogue, co-hosted by EngageMedia, also included government officials, think tanks, journalists, digital rights groups, and legal aid groups from the region. During this discussion on the responsible use of AI and how AI governance can support its positive uses in the public interest, Rachel Judhistari (Lead Public Policy Specialist for Asia) presented the Foundation’s new AI Strategy. Wikimedia’s approach was lauded as a leading example of how open, community-driven approaches can shape AI for the common good. Discussions like these, alongside UNESCO tools like the Recommendation on the Ethics of Artificial Intelligence and the UNESCO AI Readiness Assessment, are critical to the future of equitable AI development and governance in Southeast Asia and around the globe.

Members of the Global Advocacy team continue to show up in these important conversations about AI, including during several events being held throughout June and July. We published a blog post highlighting where we are headed, which includes the AI for Good Global Summit 2025, in order to discuss how AI technologies can support global goals and solve global challenges.

Read our blog post on Medium to find out about the AI for Good Summit 2025, and learn more about the Foundation’s AI Strategy on our website and on Meta-Wiki.

Celebrating the launch of the Coalition on Digital Impact (CODI)
[Watch Rebecca’s fireside chat on YouTube and learn more about the Coalition on Digital Impact]

Members of the Global Advocacy team recently attended the launch of the Coalition on Digital Impact, a group empowering communities to access and navigate the internet in their native languages. The launch, which coincided with Universal Acceptance Day 2025, focused on how to break barriers to digital access through more multilingualism online. Rebecca MacKinnon spoke at a fireside chat and discussed how the Wikimedia projects advance online multilingualism as a digital public good. Rebecca gave examples of how Wikimedia empowers communities to share their knowledge online in their own language. She highlighted a recently announced collaboration between Rising Voices and five Wikimedia affiliates, which will host a series of Language Digital Activism workshops to help members from five language communities experiment with new digital tools and strategies. The launch of the Coalition on Digital Impact is an important moment to recognize how multilingualism online can contribute to a thriving and truly global internet that supports everyone’s ability to access information online.

Watch Rebecca’s fireside chat on YouTube and learn more about the Coalition on Digital Impact.

Challenging the United Kingdom’s Online Safety Act Categorization requirements
[Read about our legal challenge on Diff and on Medium]

In May 2025, the Foundation announced that it is filing a legal challenge to the lawfulness of a new element of the UK’s Online Safety Act (OSA) that determines what duties a website has under the law. This comes after years of the Foundation sharing our concerns with UK policymakers—concerns that remain unaddressed.

 This element of the law would designate certain services as “Category 1” services and would impose the most burdensome requirements on platforms that receive that designation. These requirements, aimed at holding the riskiest commercial and social media platforms accountable for harmful or abusive content, could interfere with how the Wikimedia projects work, and potentially even compromise the safety of Wikimedia volunteer editors. This includes a requirement that the Foundation offer to verify users’ identities, and block all unverified users from fixing or removing content that UK users post on Wikipedia. This would break Wikipedia’s collaborative editing model and threaten the privacy of everyone who contributes to the projects globally. 

For these reasons, we have acted to legally challenge the OSA’s overly broad criteria for deciding how a service is sorted into Category 1, standing up for Wikimedians and public interest projects everywhere. Since our announcement, the High Court has agreed to expedite the challenge and set a date for a two-day trial this upcoming 22–23 July. A long-term UK-based Wikimedian, User:Zzuuzz, has also been added as a joint claimant in the case and will play a pivotal role in articulating its human rights implications, including the rights to privacy, safety, free speech, and association. It is extremely rare—if not unheard of—for a website user to join the website’s host in bringing a legal challenge, and we thank User:Zzuuzz for volunteering to take this extraordinary step.
Read about our legal challenge on Diff and on Medium

Spotlighting the Wikimedia community’s advocacy across the globe
[Check out Wikimedia Brasil’s copyright campaign and joint statement about local internet governance, as well as Wikimedia Europe’s submission on the European Democracy Shield]

Wikimedians are often in the best position to identify local public policies that may have an impact on how people in their country or region access and contribute to the Wikimedia projects. Many Wikimedia affiliates and user groups track important laws related to digital rights, access to knowledge, and the internet in general, and advocate policies that protect public interest platforms like the projects. Wikimedians from across the world were busy last month, sharing lessons from their work and advocating a better internet for everyone, everywhere.

Wikimedia Brasil published a summary of their #ConhecimentoÉDireito (#KnowledgeIsARight in English) campaign, which aims to modernize Brazilian copyright law so as to better protect freedom of expression. The campaign was launched last February by the Coalizão Direitos na Rede, a group that brings together academic and civil society organizations to defend digital rights. Wikimedia Brasil explained the work that went into various stages of the campaign, from the initial fact-finding to creating the materials that helped get the word out through print media and podcasts. For example, an ebook promoted free culture and the use of Creative Commons, and detailed a collaboration between the Wikimedia chapter, InternetLab, and the Secretariat of Culture of the State of Espírito Santo in Brazil that led to 4,000 images being added to Wikimedia Commons. This comprehensive review of the work that has gone into influencing copyright law and practices in Brazil is an excellent example of how Wikimedia affiliates can share and learn from each other’s successes.

Read the full Diff blog post (in Portuguese) for more insights.

On the topic of internet governance more generally, Wikimedia Brasil and the Foundation also released a statement supporting the local multistakeholder internet governance model, which is guided by the Brazilian Internet Steering Committee (CGI.br). This model, internationally recognized for its achievements, is threatened by changes in two new bills that could significantly alter the CGI.br’s oversight powers. Our statement calls for a robust, comprehensive, and participatory discussion in the Brazilian National Congress to ensure that important perspectives, such as those of Wikimedians and other online communities, are represented in determining the future of internet governance in the country.

Read the full statement (in Portuguese) on Diff.

Wikimedians from across the East and Southeast Asia and the Pacific region came together in the Philippines for the annual ESEAP Strategy Summit. The Summit covered many topics, among them the advocacy work of regional affiliates and user groups. Rachel Judhistari led two sessions about public policy advocacy. The first explored the landscape of advocacy in ESEAP and beyond: Rachel, alongside Wikimedia Australia, Wikimedia Indonesia, the Wikimedia Community User Group Malaysia, and Shared Knowledge Asia Pacific (SKAP), presented global and regional policy trends, like those around child safety regulations. Rachel also led a session to help develop action plans for advocacy in the region, with a focus on explaining the Wikimedia model and sharing tactics and resources for advocacy.

Learn more about the ESEAP Strategy Summit on Meta-Wiki

Finally, Wikimedia Europe (WMEU) drafted a submission to the European Commission’s call for evidence about an initiative called the European Democracy Shield. This initiative would, in the Commission’s own words, “address the most severe risks to democracy in the EU.” In their submission, WMEU highlighted how protecting the Wikimedia projects and volunteer communities would help this effort to strengthen democracy in the EU, and offered suggestions for high-impact actions that could be taken under the initiative. These include: recognising and promoting the Wikimedia projects’ role in safeguarding information integrity and strengthening digital and media literacy skills; helping to counter false information by increasing access to public broadcasting materials; and protecting anonymity online by crafting safeguards for user identity and data in strategic lawsuits meant to silence people sharing truthful information online (i.e., SLAPPs).

Find the submission on Wikimedia Commons.

________

Follow us on LinkedIn or on X (formerly Twitter), visit our Meta-Wiki webpage, sign up for our quarterly newsletter to receive updates, and join our Wikipedia policy mailing list. We hope to see you there!

In June 2025, I had the honour of being selected as a Fellow of the AWA Digitalise Youth Project. It is a fellowship organised by Code for Africa, the European Partnership for Democracy (EPD), AfricTivistes, CFI Media Development, the World Scout Bureau Africa Regional Office, and the Kofi Annan Foundation (KAF), with contributions from the Netherlands Institute for Multiparty Democracy (NIMD), which offered stipend-based fellowships to six (6) African Wikipedians-in-Residence under the African Wikipedian Alliance (AWA) Digitalise Youth initiative.

Overview

This report summarizes my activities, achievements, and lessons learned for June 2025 as an African Wikipedian Alliance Fellow. This month, I took part in the onboarding of new Fellows, recruited contributors (including two new female editors), and expanded content on topics such as the African Union Convention on Cybersecurity and Personal Data Protection, e-democracy, digital mobilisation in social movements, SDG16, freedom of expression, and human rights, as well as the roles of governments, NGOs, and grassroots organisations in addressing censorship, surveillance, and online disinformation in West Africa, particularly Benin and Guinea. I did this through article translation and Wikidata contributions, and strengthened community participation along the way.

Key Activities

a. Fellowship Onboarding & Community Engagement

  • Participated actively in onboarding sessions for the new Fellows cohort.
  • Engaged new participants by providing guidance on account creation, editing basics, and community norms.
  • Successfully supported two new female editors, who made their first contributions to Wikipedia.

b. Content Creation & Translation

  • Translated articles into Ghanaian Pidgin, Fante, and Hausa, broadening access to local knowledge in multiple African languages.
  • Created and improved contents across Wikipedia and Wikidata by adding new content, fixing errors, and enriching existing pages.

c. Wikidata Contributions

  • Improved data for African-related topics on Wikidata, contributing to the overall count of articles and items edited.
  • Enhanced structured data with reliable references and updated statements.

d. Capacity Building & Support

  • Mentored new editors during and after onboarding sessions.
  • Shared resources, provided technical support, and encouraged continued participation.
  • Registered 32 editors on the program dashboard.

Achievements & Impact

  • Expanded local-language content with translated articles.
  • Improved articles/items on Wikipedia and Wikidata
  • Added new content and references, ensuring verifiability and quality.
  • Achieved 80 total page views as of July 1, 2025 — a good early indicator of growing reach.
  • Onboarded 32 new editors, including two women who made their first edits.

Challenges

  • Unstable internet connectivity during onboarding sessions affected participation for some.
  • Retention of first-time editors remains an ongoing challenge and priority.

Lessons Learned

  • Translation is an effective strategy to bridge language gaps and engage local communities.
  • Combining onboarding with hands-on mentoring increases new editors’ confidence.
  • Small, diverse groups with focused follow-up are key to sustaining editor retention.
  • Quality references and reliable sources build trust in African content.

Plans for July

  • Continue mentoring new editors and providing follow-up support.
  • Organize an edit-a-thon focused on the theme of the fellowship to create, improve and translate content.
  • Increase the number of improved Wikidata items and translated articles.
  • Strengthen partnerships with local organizations to reach more diverse audiences.
  • Monitor page views and retention to measure impact.

What’s it like to learn to edit Wikipedia? Librarian Kelly Omodt at the University of Idaho Library reflects on her path from newcomer to contributor as a Wiki Scholars course participant.

My introduction to Wikipedia, like most school kids, was a warning to never use the site for serious research. Imagine my surprise while reading an email invitation from my university’s provost office to join the Idaho OPAL Fellows: Wiki Scholars 2024 cohort. At the time, I was serving on the Open Strategies Team for my university library and seeing the words “open pedagogy and advocacy” caught my eye before the phrase “Wiki Scholar”. I saw this as an opportunity to learn more about open pedagogy in practice and found myself thinking back to those teachers who warned against Wikipedia, feeling kind of like a rebel. 

Looking for something to edit in Wikipedia was overwhelming, and more than a bit intimidating. I didn’t know which subject area to search through to find an article to work on. During the first classes of the Wiki Scholars course, we were warned not to choose a topic too dear to our hearts, in case unconscious bias began inserting itself into our edits. Choosing a fandom I knew little about felt safe, as I could read the content and look for minor edits: commas, run-on sentences, etc. A great deal of time was spent skimming through recommended articles on my editor homepage. Unfortunately, I kept clicking on the blue links within each Wikipedia article and found myself diving deeper into a fandom’s lore rather than making edits. I found my first edit by chance, slipped among the countless titles of my university’s library stacks.

While I was on the reference desk, a student asked for the English translation of Haruki Murakami’s Norwegian Wood. While shelf-reading for the book, I came across an epistolary title, Postcards to Donald Evans. The book’s cover had soft colors and a creative design that looked like it could be a prop in the film Under the Tuscan Sun, and with its high-quality page paper the book felt brand new, barely a crease in its softback cover. I am a fan of books written in letter format, and so on a whim I plucked it from the shelf and checked it out.

It was compiled over a decade ago, but the ‘postcards’ were written by the author in the 1980s, to an already deceased artist, as part of a poet-in-residence program in the Midwest. I had no connection to the book’s author, but I loved the style of his book and eventually saw that Takashi Hiraide had a minimal Wikipedia article in English (the German Wikipedia article is much more in depth).

And so I thought to myself, “Why not? A lot of the things related to this individual already have Wikipedia articles with substantial content, so I should be able to link to other related articles.” It was a purely chance moment, and I ended up adding a significant amount of content to Takashi Hiraide’s Wikipedia page. Having no relationship to the article content except for this random book helped me keep flowery language and bias out of my writing. I felt extra pressure because Takashi Hiraide is still alive. I found myself diving into the weeds of Takashi Hiraide’s life and accomplishments, feeling like an old-time private investigator. Though the article is still listed as a “Stub Class” article in Wikipedia, I am pleased to have been able to add so much information.

Having substantially edited this first article I was ready to write my own. Again, I came across the topic by chance, with help from one of my patrons. A student looking for information on Bigfoot led me to think of the cryptid, supposedly located in my hometown. To my delight, I discovered that it had yet to have its own Wikipedia article. I began researching with gusto. Once our Wiki Scholars instructor, Will, had looked it over, I published what I had written and it’s now a “C-Class” article!

Pend Oreille Paddler, a cryptid in Lake Pend Oreille in North Idaho. Image by Jay Mock, photographer, via Wikimedia Commons

I’m rather proud of myself for writing it, since at the beginning of the course I was hesitant even to delete or add a comma to someone else’s article. Learning about the ins and outs of Wikipedia and its accompanying sites (Wikidata, Wikimedia Commons, etc.) was not something I had planned for in my career as an academic librarian, but I am grateful that the opportunity arose. I find myself using Wikipedia more often for general searches, and I definitely mention it more in my information literacy classes. I hope to continue editing, as I believe more topics will come to me by chance, just as this one did with Takashi Hiraide’s Postcards to Donald Evans.

My greatest takeaway as an OPAL Fellow was marveling at how accessible it was to become an editor and learning how seriously editors review content. The process of learning how to become an editor, adding content, or creating new articles could be utilized by fledgling researchers at any academic institution.

Since the very first Wiki Education session, I have seen the potential of Wikipedia in a variety of college coursework. I tell the students who pass through my information literacy classes about Wikipedia as a great launching point and crowdsourcing tool for their research papers. Outside of the OPAL Fellow courses, I have used Wikipedia more in the last year than in my entire life, and I’ve been inspired by my fellow Wiki Scholars and our course instructor to continue contributing to this open resource.

“Good evening, kak1, we are currently planning on an outreaching program to Lombok to develop Wikipedia basa Sasak project. Are you interested in joining the program?”

I got this message from the Education Team at Wikimedia Indonesia in early May 2025. Previously, they had contacted me about their interest in developing local communities in the central and eastern parts of Indonesia, including Lombok. Without hesitation, I accepted the offer, as I had always wanted some kind of local Wikimedia community in my hometown, developing Wikimedia projects in my local language.

The program is a collaboration between Wikimedia Indonesia and Komunitas Wikimedia Denpasar, a local Wikimedia community in Denpasar, Bali. It is an outreach program that happened to be exactly in line with my dream: wiki training and a chance to build a local Wikimedia community in my hometown.

We then prepared for the program. We decided that it would run for three days: a day for final preparation, a day for WikiLatih (a wiki training program), and a day for Kopi Darat (a gathering program, usually used to edit a specific Wikimedia project). We also decided to partner with bachelor students from Universitas Mataram, specifically students of the Indonesian Language and Literature Education Study Program.

WikiLatih

The trainers and participants of WikiLatih Wikipedia at Bastrindo Unram. The participants included students and lecturers of the Indonesian Language and Literature Education Study Program of Universitas Mataram.

On 30 May 2025, our first event was WikiLatih Wikipedia, a Wikipedia training for students and lecturers of the Indonesian Language and Literature Education Study Program (Bastrindo). We were helped by a Wikipedia volunteer who happened to be staying in Lombok at the time. From the first time I met the participants, I instantly felt their enthusiasm. We started with the basics: building their user pages, editing a page, then creating an article on Indonesian Wikipedia. Both the students and the lecturers enthusiastically followed the whole training. We held a short, light quiz and a question-and-answer session, and at the end we took a picture together.

The warmth, the spirit, and the excitement I felt as the trainer were really strengthening. I wished I could stay and edit longer with them. Oh, I actually did! The next day, some of the participants joined the Kopi Darat event to introduce and edit the Incubator Wikipedia Sasak.

Kopi Darat

The participants of the Kopi Darat program to introduce and edit the Incubator Wikipedia Sasak. The participants included some students from the previous WikiLatih, general language enthusiasts, and a journalist.

Kopi Darat is an event for us local Wikimedians to gather and edit a Wikimedia project together. On this day, instead of being exclusive to Bastrindo lecturers and students, the participants also included generally interested parties: language enthusiasts, volunteers, and a journalist. We introduced the Incubator Wikipedia Sasak to the participants and encouraged them to try and improve the project. We also talked about the opportunity to build a local Wikimedia community in Mataram to build and improve Wikimedia projects in Sasak, including Wikipedia, Wiktionary, and recently Wikibooks. It was a fun experience to finally talk about the Wikimedia projects and mission with my people, the people who speak my language.

What’s Next?

As of now, we are preparing the necessary things for the community, such as the logo, social media accounts, and administrative matters, and figuring out the standards we want to use for our projects. We have also talked to several established local Wikimedia communities for advice on developing our community, including Wikimedia Indonesia, Komunitas Wikimedia Denpasar, Komunitas Wikimedia Jakarta, and Komunitas Wiktionary Indonesia. We would also like to request support from everyone, including the global Wikimedia community, for our small community. I personally hope this small community can develop, grow, and continue preserving the Sasak language and culture. Tampi asih!2

Footnotes

  1. A friendly greeting in Indonesian; literally means older brother or sister. ↩︎
  2. Thank you in Sasak. ↩︎

The Pilipinas Panorama Community (PPC), a thematic Wikimedia organization based in Manila, hosted a meet-up on June 28, 2025. This is already the 33rd meet-up of Wikimedians here in Metro Manila. Ever since the foundation of PPC in 2022, the metropolitan area of Manila has witnessed numerous Wikimedia meet-ups and events which brought vibrancy to the GLAM and cultural heritage resources of the Philippines.

Pilipinas Panorama Community (PPC) members with CCPI President Jose Luis U. Yulo Jr. (Ralff Nestor Nacor, CC BY-SA 4.0, via Wikimedia Commons)

We visited the Chamber of Commerce of the Philippine Islands (CCPI), located along Magallanes Drive in Intramuros, Manila. We were honored to be welcomed by the President of CCPI, Jose Luis U. Yulo Jr. He gave us a heritage tour of the Chamber Building, the landmark of the country’s oldest business institution, and discussed the chamber’s contributions to Philippine history and economy. Among the topics discussed was Juan B. Alegre, a former President of CCPI who served as the chamber’s head from 1920 to 1921. He was also the grandfather of Sir Johnny Alegre, a veteran Wikimedian and our organization’s founding head.

The Chamber Building of the Chamber of Commerce of the Philippine Islands – a cultural heritage site and official headquarters of the PPC (Ralff Nestor Nacor, CC BY-SA 4.0, via Wikimedia Commons)

We are delighted to see that PPC has now established partnerships with multiple GLAM institutions like the CCPI. Also known as La Cámara de Comercio de las Islas Filipinas, the CCPI is considered the oldest business institution in the Philippines. It was founded in 1886 through a royal decree by the Queen Regent of Spain, Maria Cristina. The National Historical Commission of the Philippines (NHCP) recognized the institution’s contribution to the Filipino economy and history through historical markers outside the Chamber Building in three (3) languages: Spanish, English, and Filipino.

Pilipinas Panorama Community (PPC) members and CCPI staff with the three (3) historical markers of CCPI in Spanish, English, and Filipino (Ralff Nestor Nacor, CC BY-SA 4.0, via Wikimedia Commons)

The Chamber Building of the Chamber of Commerce of the Philippine Islands, a cultural heritage site, will now be the official headquarters of the PPC in its own Legacy Heritage Room. After the heritage tour in the morning, we held an internal meeting in which we discussed organizational matters, learnings from the ESEAP Strategy Summit 2025, future projects, and goals of each member. We also conducted a Basic Wiki Editing Workshop for PPC members to further learn editing on Wikipedia, contributing photographs to Wikimedia Commons, creating project pages in Meta-Wiki, and creating translations. We are excited to have the Chamber Building host future meetings, conferences, summits, and edit-a-thons for Wikimedians here in Metro Manila.

About the authors

RALFF NESTOR S. NACOR, a.k.a. User:Ralffralff, is the current President of the Pilipinas Panorama Community (PPC) since 2024. As a Wikimedia contributor, he leads the cultural heritage initiative Philippine Panorama Project and is also an administrator/sysop of Bikol Wikipedia. Outside Wikimedia, he is a licensed Chemical Engineer and R&D Scientist in the Philippines.

ERNEST MALSIN, a.k.a. User:PhiliptheNumber1, is currently a regular member of the Pilipinas Panorama Community (PPC). As a Wikimedia contributor, he contributes photographs around Metro Manila and creates translations in Meta-Wiki. He also became the youngest participant at the ESEAP Strategy Summit 2025 held in Manila as an event volunteer. He is currently based in Quezon City, Philippines.

I am excited to share that I have been selected as one of six Wikipedians-in-Residence under the African Wikipedian Alliance (AWA) Digitalize Youth Project 2025, a continent-wide initiative supporting youth participation in digital knowledge creation and civic engagement. This project is led by Code for Africa in partnership with the European Partnership for Democracy (EPD), AfricTivistes, CFI Media Development, the World Scout Bureau Africa Regional Office, and the Kofi Annan Foundation (KAF), with contributions from the Netherlands Institute for Multiparty Democracy (NIMD). Together, these organizations are supporting Wikipedians-in-Residence across Africa to advance open knowledge and youth engagement through Wikimedia platforms.

As a fellow, I will be coordinating activities focused on Niger and South Sudan, two key countries within the project’s regional scope that includes the Sahel, West Africa, and the Horn of Africa. The main goal is to improve and create content on Wikipedia and Wikidata related to civic engagement, digital rights, and governance, with special attention to themes such as:

  • SDG 16: Peace, justice, and strong institutions
  • E-democracy and digital mobilisation
  • Freedom of expression and human rights
  • The African Union Convention on Cybersecurity and Personal Data Protection
  • The role of governments and grassroots organisations in combating censorship and online disinformation

Planned Activities

Over the course of the project, I will host up to 14 virtual sessions to train and support both new and existing contributors. Activities include:

  1. Virtual capacity-building sessions on creating and improving Wikipedia articles and Wikidata items about:
    • Freedom of expression
    • E-democracy
    • Human rights
    • Digital governance in the Sahel, West Africa, and the Horn of Africa
  2. Edit-a-thons: Monthly collaborative editing sessions focused on topics related to youth engagement, digital rights, and governance. Participants who meet monthly deliverables will receive a reimbursement for internet/data costs.
  3. Data-a-thons: Wikidata-focused sessions contributing structured data about relevant laws, organizations, and policies in the region.
  4. Content Translation training: Training participants on using the Wikipedia Content Translation tool to bridge the gap between English content and local African languages.

June 2025 Highlights

Here are my key deliverables for June to set the groundwork for the project:

  • Created the Meta page to document all project activities, topic lists, dashboard, reports, and criteria for participants’ contributions.
  • Attended the three onboarding sessions for all fellows on June 18, 19, and 20, and participated in the virtual lunch on June 25.
  • Onboarded 10 editors, created a dedicated WhatsApp group, and set up the project’s Outreach Dashboard to track participants’ contributions.
  • Shared a Needs Assessment form with participants to evaluate their current knowledge and identify training needs.
  • Scheduled a three-day virtual training for experienced editors on June 27, 28, and 29 covering project virtual onboarding, Wikipedia essentials, and Wikidata best practices. New editors will have their separate sessions in July.

June Sessions

The June sessions were specifically designed for experienced editors. This approach was informed by the findings of the needs assessment I conducted, which indicated that experienced editors required minimal in-depth training to begin contributing. What they primarily needed was a clear understanding of the project scope and guidance on how to contribute to Wikipedia and Wikidata within the framework of the project. As a result, I organized three targeted sessions for them in June, with plans to hold specialized sessions for new editors in July.

Virtual Onboarding for Experienced Editors: On 27 June 2025, six experienced editors were onboarded via a virtual session on Google Meet to participate in the AWA Digitalize Youth Project 2025. Their contributions will focus on topics related to Niger and South Sudan. As the project fellow coordinating activities in both countries, I facilitated the session and walked participants through the project’s objectives, contribution guidelines, timelines, and monthly deliverables. The one-hour session concluded with a virtual photo session.

Wikipedia Essentials for Experienced Editors: This session provided participants with essential information on how to contribute to Wikipedia as part of the project. It covered topics such as why Wikipedia matters in the context of the project, participants’ roles on the platform, key focus areas, types of articles to contribute, monthly Wikipedia-related deliverables, strategies for finding topics, and available tools and resources. The session was held on 28 June.

Wikidata Best Practices for Experienced Editors: During this session, participants learned how to effectively contribute to Wikidata as part of the project. The session covered key topics including understanding the project scope on Wikidata, best practices for contributing, monthly contribution expectations, and an overview of relevant tools and resources. Held on 29 June, it marked the conclusion of both the onboarding process for experienced editors and the series of sessions held in June.

Looking Ahead

Starting in July, participants will begin contributing content with support from me. They are expected to attend at least three sessions monthly and submit their monthly contributions using a reporting form. Monthly internet support of $10 will be provided to those who meet participation requirements. I am currently working to onboard new participants, particularly from Niger and South Sudan, to ensure broader representation from our focus countries. Anyone with a skill that could support the goals of the project is welcome to propose a training session for the group.

Join Us

Want to follow our progress or get involved? Here are a few quick steps:

  1. Fill out the Needs Assessment Form to let us know what skills or support you need.
  2. Register on the Event Meta page.
  3. Join the Outreach Dashboard to track your contributions.
  4. You can also reach out to me via Meta: User:Ridzaina

Together, we are building a stronger ecosystem of open knowledge that supports civic engagement and youth empowerment in Africa.

The 72 Hours Virtual Marathon Edit-a-thon 2.0 in the Igbo Wikimedians User Group was a continuation of the pilot held in April 2025, which recorded huge success through collaborative efforts to improve existing articles and pages on the Igbo Wikipedia, as well as language localization.

In this second edition, participants again edited within a 72-hour window, from 28 May 2025 to 31 May 2025, fixing existing Igbo Wikipedia articles with reference errors, fixing pages with broken files, and further strengthening localization efforts on translatewiki. At the opening session of the marathon, Hilary Ogali facilitated a session on “File licensing and usage in Wikimedia projects.” This session focused on exploring the various file licenses available and their use cases on Wikimedia projects. Participants were also encouraged to upload only high-quality files and to add categories for proper documentation.

At the end of the marathon, 590 articles with reference errors were corrected, and 190 pages with broken files were fixed. Remarkable progress was also made on translatewiki. We also had a few enthusiastic editors who joined the team and contributed to the ongoing efforts.

These contributions, no matter how small, are helping to shape the future of open knowledge and the Igbo indigenous language on Wikipedia and other sister projects. Many thanks to all the participants and, of course, my co-organizers, Lucy Iwuala, Mark Lapang, and Hilary Ogali, for being supportive and dedicated. I’m optimistic and excited to see how much more we will achieve together in future editions!

As we mark the publication of 100,000 articles on Swahili Wikipedia since its inception, we also pause to honor the memory of one of its most influential contributors, the late Ingo Koll (User:Kipala), who passed away two years ago.

Kipala was a pillar of strength who upheld the Swahili platform for decades. His legacy will never be forgotten. As a small community, we continue to grieve the loss of our elder. May he rest in peace.

Though his passing remains a sorrowful chapter, the achievement on 23 June 2025—when Swahili Wikipedia reached 100,000 articles—comes as uplifting news. We embrace this milestone as a symbolic tear-wiper for our grief.

Young contributors worked tirelessly, day and night, to make this happen. The momentum since January has been impressive. Let’s take a quick look at the timeline:

📅 Date        ✅ Article Count
6 January      91,000
13 January     92,000
22 January     93,000
31 January     94,000
6 February     95,000
18 February    96,000
25 February    97,000
9 March        98,000
4 June         99,000
23 June        100,000

We extend heartfelt congratulations to everyone who contributed toward reaching the 100K mark—especially through various editathons that sparked this success.

Muddyb

11th July, 2025
Chanika, Dar es Salaam,
Tanzania.

Tracking memory issue in a Java application

Thursday, 10 July 2025 22:51 UTC

One of the critical pieces of our infrastructure is Gerrit. It hosts most of our git repositories and is the primary code review interface. Gerrit is written in the Java programming language, which runs in the Java Virtual Machine (JVM). For a couple of years we had been struggling with memory issues which eventually led to an unresponsive service and unattended restarts. The symptoms were the usual ones: application responses slowing and degrading until server-side errors rendered the service unusable. Eventually the JVM terminates with:

java.lang.OutOfMemoryError: Java heap space

This post is my journey toward identifying the root cause and having it fixed by the upstream developers. Given that I barely knew anything about Java, and even less about its ecosystem and tooling, I learned more than a few things along the way and felt it was worth sharing.

Prior work

The first meaningful task was in June 2019 (T225166), which over several months led us to:

  • replace the aging underlying hardware
  • tune the memory garbage collector and switch to the G1 garbage collector
  • raise the amount of memory allocated to the JVM (the heap)
  • upgrade the Debian operating system by two major releases (Jessie → Stretch → Buster)
  • conduct a major upgrade of Gerrit (June 2020, Gerrit 2.15 → 3.2)
  • move bots crawling the repositories to a replica
  • fix a lack of caching in a MediaWiki extension that queried Gerrit more than it should have

All of those were sane operations that are part of any application life-cycle, and some were meant to address other issues. Raising the maximum heap size (20G to 32G) definitely reduced the frequency of crashes.

Still, memory kept filling up over and over. The graph below shows the memory usage from September 2019 to September 2020. The increase in maximum heap usage in October 2020 is the JVM heap being raised from 20G to 32G. Each of the "little green hills" corresponds to memory filling up until we either restarted Gerrit or the JVM crashed unattended:

Zooming in on a week, it is clear that memory was almost entirely filled until we had to restart:

This had to stop: complaints about Gerrit being unresponsive, SRE having to respond to java.lang.OutOfMemoryError: Java heap space, or us having to "proactively" restart before a weekend. None of these were good practices. Back and fresh from vacation, I filed a new task (T263008) in September 2020 and started to tackle the problem in my spare time. Would I be able to find my way in an ecosystem totally unknown to me?

Challenge accepted!

Stuff learned

  • Routine maintenance is definitely needed
  • Don't expect things to magically solve themselves; commit to thoroughly identifying the root cause instead of hoping.

Looking at memory

Since the JVM runs out of memory, let's look at memory allocation. The JDK provides several utilities to interact with a running JVM, be it to attach a debugger, write a copy of the whole heap, or send admin commands to the JVM.

jmap lets one take a full capture of the memory used by a Java virtual machine. It has to run as the same user as the application (we use the Unix username gerrit2), and when multiple JDKs are installed, one has to make sure to invoke the jmap provided by the Java version running the targeted JVM.
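One way to locate the matching jmap is to resolve the java binary of the target process via /proc and invoke the jmap sitting next to it. This is a sketch, not the exact procedure we used; the PID and dump path are illustrative:

```shell
# Resolve the JDK of the running JVM via /proc, then use its sibling jmap
# (sketch; pid and dump path are illustrative)
pid=12345
java_bin=$(readlink -f "/proc/$pid/exe")   # e.g. /usr/lib/jvm/java-8-openjdk-amd64/bin/java
jmap_bin="$(dirname "$java_bin")/jmap"
sudo -u gerrit2 "$jmap_bin" \
  -dump:live,format=b,file=/var/lib/gerrit-dump.hprof "$pid"
```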

Dumping the memory is then a single command:

sudo -u gerrit2 /usr/lib/jvm/java-8-openjdk-amd64/bin/jmap \
  -dump:live,format=b,file=/var/lib/gerrit-202009170755.hprof <pid of java process here>

It takes a few minutes, depending on the number of objects. The resulting .hprof file is in a binary format which can be interpreted by various tools.

jhat, a Java heap analyzer, is provided by the JDK alongside jmap. I ran it disabling the tracking of object allocations (-stack false) as well as references to objects (-refs false), since even with 64G of RAM and 32 cores it took a few hours and eventually crashed. That is due to the insane number of live objects. On the server I thus ran:

/usr/lib/jvm/java-8-openjdk-amd64/bin/jhat -stack false -refs false gerrit-202009170755.hprof

It spawns a web service, which I can reach from my machine over ssh using port redirection and then open in a web browser:

ssh  -C -L 8080:ip6-localhost:7000 gerrit1001.wikimedia.org &
xdg-open http://ip6-localhost:8080/

Instance Counts for All Classes (excluding native types)

2237744 instances of class org.eclipse.jgit.lib.ObjectId
2128766 instances of class org.eclipse.jgit.lib.ObjectIdRef$PeeledNonTag
735294 instances of class org.eclipse.jetty.util.thread.Locker
735294 instances of class org.eclipse.jetty.util.thread.Locker$Lock
735283 instances of class org.eclipse.jetty.server.session.Session
...

Another view shows 3.5G of byte arrays.

I was pointed to https://heaphero.io/; however, the file is too large to upload, and it contains sensitive information (credentials, users' personal information) which we cannot share with a third party.

Nothing was really conclusive at this point; the heap dump had been taken shortly after a restart, while Gerrit was not yet in trouble.

Eventually I found that JavaMelody has a view providing the exact same information, without all the trouble of figuring out the proper set of parameters for jmap, jhat and ssh. Just browse to the monitoring page:

Stuff learned

  • jmap issues commands to the JVM, including taking a heap dump
  • jhat runs the analysis, with some options required to make it workable
  • Use JavaMelody instead

JVM handling of out of memory error

An idea was to take a heap dump whenever the JVM encounters an out-of-memory error. That can be turned on by passing the extended option HeapDumpOnOutOfMemoryError to the JVM and specifying where the dump will be written with HeapDumpPath:

java \
  -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/srv/gerrit \
  -jar gerrit.war ...

And sure enough, the next time it ran out of memory:

Nov 07 13:43:35 gerrit2001 java[30197]: java.lang.OutOfMemoryError: Java heap space
Nov 07 13:43:35 gerrit2001 java[30197]: Dumping heap to /srv/gerrit/java_pid30197.hprof ...
Nov 07 13:47:02 gerrit2001 java[30197]: Heap dump file created [35616147146 bytes in 206.962 secs]

This resulted in a 34GB dump file, which was not convenient for a full analysis. Even with 16G of heap for the analysis and a couple of hours of CPU churning, it was not helpful.

And at this point the JVM is still around: the java process is still there, so systemd does not restart the service for us, even though we have instructed it to do so:

/lib/systemd/system/gerrit.service
[Service]
ExecStart=java -jar gerrit.war
Restart=always
RestartSec=2s

That led to our Gerrit replica being down for a whole weekend with no alarm whatsoever (T267517). I imagine the reason for the JVM not exiting on an OutOfMemoryError is to let one investigate the cause. Just like the heap dump, this behavior can be configured via the ExitOnOutOfMemoryError extended option:

java -XX:+ExitOnOutOfMemoryError

Next time, the JVM will exit, systemd will notice the service went away, and it will happily restart it.
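Putting the pieces together, a unit file combining the restart policy with both extended options would look roughly like this (a sketch based on the snippets above; paths and values are illustrative, not our exact production unit):

```ini
# /lib/systemd/system/gerrit.service (sketch)
[Service]
ExecStart=java \
    -XX:+HeapDumpOnOutOfMemoryError \
    -XX:HeapDumpPath=/srv/gerrit \
    -XX:+ExitOnOutOfMemoryError \
    -jar gerrit.war
Restart=always
RestartSec=2s
```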

Stuff learned

  • Automatic heap dumping by the JVM for future analysis
  • Be sure to have the JVM exit when running out of memory so systemd will restart the service
  • A process can be up while still not serving its purpose

Side track to jgit cache

When I filed the task, I suspected that enabling git protocol version 2 (J199) on CI might have been the root cause. That eventually led me to look at how Gerrit caches git operations. Being a Java application, it does not use the regular git command but a pure-Java implementation, jgit, a project started by the same author as Gerrit (Shawn Pearce).

To speed up operations, jgit keeps git objects in memory, with various tuning settings. You can read more about it at T263008#6601490, but in the end it was of no use for this problem. @thcipriani would later point out that the jgit cache does not grow past its limit:

The investigation was not a good lead, but it certainly prompted us to get a better view of what is going on in the jgit cache. To do so, we would need to expose historical metrics of the cache's status.

Stuff learned

  • Jgit has in-memory caches that hold frequently accessed repositories and objects in JVM memory, speeding up access to them.

Metrics collection

We always had trouble determining whether our jgit cache was properly sized, and tuned it randomly with little information. Eventually I found out that Gerrit has a wide range of metrics available, described at https://gerrit.wikimedia.org/r/Documentation/metrics.html. I had always wondered how we could access them without having to write a plugin.

The first step was to add the metrics-reporter-jmx plugin. It registers all the metrics with JMX, a Java system to manage resources. These are then exposed by JavaMelody, which at least lets us browse the metrics:

I had long had a task to get those metrics exposed (T184086) but never a strong enough incentive to work on it. The idea was to expose the metrics to the Prometheus monitoring system, which would scrape them and make them available in Grafana. They can be exposed using the metrics-reporter-prometheus plugin. Some configuration is required to create an authentication token that lets Prometheus scrape the metrics, and then it is all set and collected.
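On the Prometheus side, the scrape job then needs that token; a minimal sketch, assuming the plugin's default metrics endpoint (the hostname, token, and path here are placeholders, not our production configuration):

```yaml
# prometheus.yml (sketch)
scrape_configs:
  - job_name: gerrit
    scheme: https
    metrics_path: /plugins/metrics-reporter-prometheus/metrics
    bearer_token: REPLACE_WITH_TOKEN
    static_configs:
      - targets: ['gerrit.example.org:443']
```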

In Grafana, discovering which metrics are of interest can be daunting. For the jgit cache, there are only a few metrics we care about, and crafting a basic dashboard for them is simple enough. But since we now collect all those metrics, surely we should have dashboards for anything else that could be of interest to us.

While browsing the Gerrit upstream repositories, I found an unadvertised repository: gerrit/gerrit-monitoring. The project aims at deploying to Kubernetes a monitoring stack for Gerrit composed of Grafana, Loki, Prometheus and Promtail. While browsing the code, I found out they already had a Grafana template, which I could import into our Grafana instance with a few small modifications.

During the Gerrit Virtual Summit I raised it as a potentially interesting project for the whole community, and sure enough, a few days later:

In the end we have a few useful Grafana dashboards, the ones imported from the gerrit-monitoring repo are suffixed with (upstream): https://grafana.wikimedia.org/dashboards/f/5AnaHr2Mk/gerrit

And I crafted one dedicated to jgit cache: https://grafana.wikimedia.org/d/8YPId9hGz/jgit-block-cache

Stuff learned

  • The Prometheus scraping system, with an auth token
  • Querying Prometheus metrics in Grafana and its vector-selection mechanism
  • Other Gerrit administrators had already created visualizations
  • Raising our reuse prompted upstream to further advertise their solution, which hopefully has led to more adoption.

Despair

After a couple of months, there was no good lead. The issue had been around for a while, in a programming language I didn't know, with tooling completely alien to me. I even found jcmd to issue commands to the JVM, such as dumping a class histogram, the same view provided by JavaMelody:

$ sudo -u gerrit2 jcmd 2347 GC.class_histogram
num     #instances         #bytes  class name
----------------------------------------------
         5:      10042773     1205132760  org.eclipse.jetty.server.session.SessionData
         8:      10042773      883764024  org.eclipse.jetty.server.session.Session
        11:      10042773      482053104  org.eclipse.jetty.server.session.Session$SessionInactivityTimer$1
        13:      10042779      321368928  org.eclipse.jetty.util.thread.Locker
        14:      10042773      321368736  org.eclipse.jetty.server.session.Session$SessionInactivityTimer
        17:      10042779      241026696  org.eclipse.jetty.util.thread.Locker$Lock

That is quite handy when already in a terminal; it saves a few clicks to switch to a browser, head to JavaMelody and find the link.

But it is the last week of work of the year.

Christmas is in two days.

Kids are messing up all around the home office since we are under lockdown.

Despair.

Out of rage I just stalled the task, shamelessly hoping for the Java 11 and Gerrit 3.3 upgrades to solve this. Much like we had hoped before that the system would be fixed by upgrading.

Wait..

1 million?

ONE MILLION ??

TEN TO THE POWER OF SIX ???

WHY IS THERE A MILLION HTTP SESSIONS HELD IN GERRIT !!!!!!?11??!!??

10042773  org.eclipse.jetty.server.session.SessionData

There. Right there. It had been there since the start. In plain sight. And sure enough, 19 hours later Gerrit had created 500k sessions for 56 MBytes of memory. It was slowly but surely leaking memory.

Stuff learned

  • Everything clears up once one has found the root cause

When upstream saves you

At this point it was just an intuition, albeit a strong one. I knew very little about Java or Gerrit internals, so I turned to the upstream developers for further assistance. But first, I had to reproduce the issue and investigate a bit more, to give as many details as possible when filing a bug report.

Reproduction

I copied a small heap dump I had taken just a few minutes after Gerrit was restarted; it had a manageable size, making it easier to investigate. Since I am not that familiar with the Java debugging tools, I went with what I call a clickodrome interface, a UI that lets you interact solely with mouse clicks: https://visualvm.github.io/

Once the heap dump was loaded, I could easily inspect objects. Notably, the org.eclipse.jetty.server.session.Session objects had a property expiry=0, often an indication of no expiry at all. Expired sessions are cleared by Jetty via a HouseKeeper thread, which inspects sessions and deletes expired ones. I confirmed it does run every 600 seconds, but since the sessions are set never to expire, they pile up, leading to the memory leak.

On December 24th, a day before Christmas, I filed a private security issue to upstream (now public): https://bugs.chromium.org/p/gerrit/issues/detail?id=13858

After the Christmas and weekend break, upstream acknowledged the report and I investigated further to pinpoint the source of the issue. The sessions are created by a SessionHandler, and debug logs show dftMaxIdleSec=-1, or "Default maximum idle seconds set to -1", which means that by default the sessions are created without any expiry. The Jetty debug log then gave a bit more insight:

DEBUG org.eclipse.jetty.server.session : Session xxxx is immortal && no inactivity eviction

It is immortal and is thus never picked up by the session cleaner:

DEBUG org.eclipse.jetty.server.session : org.eclipse.jetty.server.session.SessionHandler
==dftMaxIdleSec=-1 scavenging session ids []
                                          ^^^ --- empty array

Our Gerrit instance has several plugins, and the leak could potentially come from one of them. I then booted a dummy Gerrit on my machine (java -jar gerrit-3.3.war), cloned the built-in All-Projects.git repository repeatedly, and observed the objects with VisualVM. Jetty sessions with no expiry were created, which rules out the plugins and points at Gerrit itself. Upstream developer Luca Milanesio pointed out that Gerrit creates a Jetty session which is intended for plugins. I also narrowed down the leak to being triggered only by git operations made over HTTP. Eventually, by commenting out a single line of Gerrit code, I eliminated the memory leak, and upstream pointed at a change released a few versions ago that may have been the cause.

Upstream then went on to reproduce the leak on their side, took some measurements before and after commenting the line out, and confirmed it (750 bytes for each git request made over HTTP). Given the amount of traffic we receive from humans, systems and bots, it is not surprising we ended up hitting the JVM memory limit rather quickly.

Eventually the fix landed and new Gerrit versions were released. We upgraded to the new release and haven't had to restart Gerrit since. Problem solved!

Stuff learned

  • Even with no knowledge of a programming language, if you can build and run it, you can still debug using print statements or the universal optimization operator: //.
  • Quickly acknowledge upstream hints, ideas and recommendations, even if it is to dismiss one of their leads.
  • Write a report, like this blog post.

Thank you upstream developers Luca Milanesio and David Ostrovsky for fixing the issue!

Thank you @dancy for the added clarifications as well as typos and grammar fixes.

References

Wikipedia is one of the most widely used digital public goods, the only nonprofit website among the world’s top ten most visited ones, and a critical source of training data for artificial intelligence (AI) systems. The Foundation’s experience and perspective are essential to ensuring that global copyright policy reflects the public interest.

Our goal remains unchanged: to contribute to WIPO’s discussions with 25 years of practical experience in hosting free knowledge for everyone, everywhere, and fostering open and flexible copyright policies.

As a recognized observer at the UN Economic and Social Council (ECOSOC) since 2022, Wikimedia has contributed to shaping the Global Digital Compact, entrenching Wikipedia’s role as a valuable public interest platform in global policymaking. This commitment was reinforced this year when Wikipedia was officially recognized as a digital public good by the UN-endorsed Digital Public Goods Alliance in acknowledgment of its openness, support for the Sustainable Development Goals (SDGs), and public value.  In addition, Wikimedia Commons — another Wikimedia project — continues to establish itself as the world’s largest repository of free and open digital media online. 

China misrepresented Wikipedia’s volunteer-driven policies and practices, all of which are rooted in accuracy and neutrality and help effectively counter misinformation and disinformation online.

“Blocking Wikimedia’s participation at WIPO means overlooking how knowledge is actually created, shared, and reused at a global scale. This is particularly important as copyright rules evolve to meet the demands of AI,” said Stephen LaPorte, General Counsel at the Wikimedia Foundation. “In a digital world driven by for-profit interests, Wikimedia brings a much-needed public voice to help ensure knowledge remains human and copyright supports access to knowledge. We regret this decision and remain committed to seeking a constructive path forward.” 

Wikimedia’s global volunteer community brings extensive, hands-on experience in moderating large-scale user-generated content. This expertise enables them to skillfully balance creators’ rights with public access to information. This community, along with the Wikimedia Foundation, offers valuable insights for shaping IP frameworks that promote both innovation and equitable access to information. China, as the only country to oppose the Wikimedia Foundation’s request for observer status, can still reconsider its position in the interest of global knowledge sharing and concerted efforts to realize the SDGs.

We urge WIPO Member States and leadership to help resolve this political impasse and reiterate the Foundation’s commitment to engaging in WIPO’s work. What is at stake are the global policies that can ensure that the future of the internet is a positive one: where everyone, everywhere, can freely and openly participate in the sum of all human knowledge. Wikimedia will continue to pursue accreditation and represent the people and communities who make free and open knowledge possible.


Stay informed on digital policy, Wikipedia, and the future of the internet: Subscribe to our quarterly Global Advocacy newsletter.

For media inquiries, please contact press@wikimedia.org

About the Wikimedia Foundation

The Wikimedia Foundation is the nonprofit organization that operates Wikipedia and other Wikimedia free knowledge projects. Our vision is a world in which every single human can freely share in the sum of all knowledge. We believe that everyone has the potential to contribute something to our shared knowledge and that everyone should be able to access that knowledge freely. We host Wikipedia and the Wikimedia projects; build software experiences for reading, contributing, and sharing Wikimedia content; support the volunteer communities and partners who make Wikimedia possible. The Wikimedia Foundation is a United States 501(c)(3) tax-exempt organization with offices in San Francisco, California, USA.

The post For fifth time, China blocks Wikimedia Foundation as permanent observer to the World Intellectual Property Organization (WIPO) appeared first on Wikimedia Foundation.

A Wiki minute

Wednesday, 9 July 2025 12:00 UTC


Get the facts about Wikipedia, Wikimedia, and more, in only a minute, on this series from the Wikimedia Foundation.

The Wikimedia Foundation has introduced 'Wiki Minute', a series of short explainer videos designed to make the world of Wikimedia more accessible to everyone. These videos aim to connect a diverse range of communities worldwide while shedding light on the broader Wikimedia movement, which encompasses much more than just Wikipedia.

Currently, there are sixteen videos tackling important questions like "How does Wikipedia work?" and "Can you really trust what's on Wikipedia?" With straightforward and engaging explanations, these videos are designed to help viewers move from a basic understanding of Wikipedia to a more comprehensive grasp of the entire Wikimedia ecosystem.

The response to these videos has been very positive! You can watch and share the 'Wiki Minute' videos on social media platforms such as YouTube; they are downloadable from Wikimedia Commons and available in multiple languages, including English, Arabic, German, and French.

This initiative seeks to inspire people worldwide not only to use Wikipedia but also to contribute to it and promote it within their communities. Please reach out to Wikimedia Australia if you would like to organise more in-depth training on Wikipedia or other sister projects, are interested in partnering with us or have any questions!

Related links:

Images: Wikimedia Foundation, CC BY-SA 4.0, via Wikimedia Commons

Wikipedia:Administrators' newsletter/2025/7

Monday, 7 July 2025 17:43 UTC

News and updates for administrators from the past month (June 2025).

Administrator changes

removed NuclearWarfare

Interface administrator changes

added L235

Guideline and policy news

Miscellaneous

  • The 2025 Developing Countries WikiContest will run from 1 July to 30 September. Sign up now!
  • Administrator elections will take place this month. Administrator elections are an alternative to RFA that is a gentler process for candidates due to secret voting and multiple people running together. The call for candidates is July 9–15, the discussion phase is July 18–22, and the voting phase is July 23–29. Get ready to submit your candidacy, or (with their consent) to nominate a talented candidate!


When I ask people to give me feedback, I’d like them to work with whatever format or app is most convenient for them. I write everything in markdown, often in Sublime Text and sometimes in Obsidian. Many people prefer reviewing in Word or Google Docs. Using pandoc, I can create almost any file format, but getting others’ annotations back into my markdown source files has never been easy. This task is now easier with AI.

  1. I use pandoc to generate a Word docx version, emailed or placed on Google Drive.

  2. The reviewer annotates using Word, Google Docs, OpenOffice, etc.

  3. I use the docx2md_add_comment.lua filter to convert the annotated docx file back to markdown.

  4. I ask Claude Opus 4 Thinking to port the comments from the feedback file to my source file, which takes 5–10 minutes, with this prompt:

    Someone added html/markdown comments to the file 06-ai-advice-feedback-smith.md I need you to find Smith’s comments and port them to the original markdown file 06-ai-advice.md (keeping them as markdown comments).

To make sure I don’t miss any comments, it’s easy to count the number of comments or to diff the files.
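That check can be sketched with grep and diff, assuming the comments are HTML-style markdown comments and using the filenames from the prompt above:

```shell
# Count comment markers in the reviewed copy and in the ported source;
# the two counts should match if no comment was dropped.
grep -c '<!--' 06-ai-advice-feedback-smith.md
grep -c '<!--' 06-ai-advice.md

# Then eyeball any remaining differences between the two files.
diff 06-ai-advice-feedback-smith.md 06-ai-advice.md
```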

Round-tripping a reviewer’s granular edits would be more difficult, especially if I sent them a version of the document with formatting, citations, and footnotes rendered. It wouldn’t be as bad if I sent them my source markdown plunked into a docx file, but I don’t think AI is up to the task of tracking small diffs between the source and rendered versions of a file. For annotations, however, the above workflow works well!

weeklyOSM 780

Sunday, 6 July 2025 11:01 UTC

26/06/2025-02/07/2025

lead picture

[1] GitHub – OSM project map | © anvaka

Mapping

  • A request for comments has been made for social:*=*; aiming to separate contact methods from social media in OSM.
  • A request for comments has been made for the proposed tag developer=*. This tag aims to complement the existing engineer=* and architect=* tags and to identify the company or organisation that developed a building.
  • Voting for power=circuit routing is open until Thursday 10 July. Power routing aims to document what actual paths power can follow over a physical network, mainly between actual substations and along power lines.
  • Voting on the proposed road marking revision is open until Thursday 10 July.
  • The tagging scheme for windmills and watermills can be voted on till Saturday 19 July.

Community

  • Koreller proposed making OpenStreetMap diaries more discoverable and engaging by adding features like keyword categorisation, upvotes, and search filters, aiming to highlight valuable content and foster community interaction.
  • Krittin reflected on approaching 10,000 changesets as a deeply personal journey of mapping Bangkok, blending nostalgia, identity, and a passionate belief in open geographic data as a human right.
  • adreamy is conducting two surveys for the OpenStreetMap community. The first aims to assess levels of active participation and opinion-sharing within local communities. The second focuses on how actively OSM members engage with the global community, what barriers they face, and how these can be addressed.
  • Negreheb updated the OpenStreetMap community on their improved 360° image capture in Salzburg, highlighting a switch to bike-mounted cameras for better safety and image consistency while continuing to enhance OSM data through Mapillary and Panoramax.
  • UrbanRoaming shared how Strava’s heatmap can enhance OpenStreetMap accuracy by revealing misaligned, missing, or obscured paths, while cautioning against blindly mapping informal, private, or temporary tracks without further on-ground verification.

Local chapter news

  • FOSSGIS e.V. reported on its participation in KonGeoS 2025, a biannual geodesy student conference held at the University of Bonn.

Education

  • Pablo Sanxiao has written Fina and the Maps, a book aimed at inspiring young readers to explore collaborative cartography in the hope that some will participate in the OpenStreetMap project. The story follows Fina, a tech-savvy girl who bikes to visit her grandmother for stories and cookies. When a power outage leads her to an old atlas, she learns how maps were once drawn by hand. Inspired, Fina discovers OpenStreetMap, begins mapping locations with her grandmother, and becomes a digital cartographer.

OSM research

  • Professor Stefan Keller’s team at OST has developed ‘Vampire-Routing’, a footpath navigation mode on routing.osm.ch that suggests shadow-rich walking routes in Zürich to help users avoid the heat, using OpenStreetMap data and 3D shading models from swisstopo.

OSM in action

  • Smoggy3D has developed Map2Model, a web-based application that converts OpenStreetMap and elevation data into 3D-printable meshes right in the browser. Users can select an area on the map, adjust the settings to their preferences, and export ready-to-print files in STL or 3MF format.
  • The basemap.world Web Vector dataset outside Germany was updated with OSM data as of 1 July and has switched to international place names in Latin script.

Software

  • Following the discontinuation of Bing Imagery services, Vespucci announced that Microsoft has updated its access key, allowing continued use of the service until sometime next year.
  • CoMaps, the community-driven fork of Organic Maps, has just announced its first official release (we reported earlier) on the Google Play Store, Apple App Store, and F-Droid.
  • In a recent interview, Robin Cole spoke with Shahab Jozdani about Chat2Geo, a web-based application that streamlines remote sensing and geospatial analysis through an intuitive, chatbot-style interface.
  • The OSM-based climbing app OpenClimbing.org has officially launched out of beta. Designed as an open platform for climbing guides and maps, the service stores information and photos on collaborative projects such as OpenStreetMap and Wikipedia, enabling users worldwide to contribute and edit content.
  • The OpenStreetMap website now displays element version navigation using a pagination system.

Programming

  • Mateus de Souza Junior has extracted data related to urban parks and squares in a region of Florianópolis (Brazil), using the OSM API and GeoPandas. The code is available on GitHub.
  • Kaxtillo demonstrated how to create walking distance isochrones with the Python packages NetworkX, a tool for graph analysis, and OSMnx, which integrates OpenStreetMap data with NetworkX.
  • There is an initiative to offer thematic layers of OSM data in Parquet and flatgeobuf formats (we reported earlier). You can contribute on GitHub.
  • Valentin announced that Fedikarte.de, a website that displayed a map allowing Fediverse users to pin their locations, is now seeking at least two new maintainers to keep the project running.

Releases

  • The June 2025 MapLibre newsletter highlighted: the new features in MapLibre Native and Web, Compose Playground’s 1.0 release, GitHub Codespaces support for workshops, and FOSS4G Europe event details and community meeting times.
  • OsmAnd version 5.1 has been released on both the iOS and Android platforms.
  • Every Door version 6.0 has been released, featuring a new plugin system that enables custom imagery, presets, fields, and workflows.
  • OsmAPP version 1.7.0 has been released, introducing a new relation editor, support for indoor maps, multi-destination routing, and a preview feature for Wikimedia Commons categories.

Did you know that …

  • [1] … the Map of GitHub, developed by Andrei Kashcha (aka anvaka), of course has a territory with projects related to OpenStreetMap?
  • … you can follow the Everest trail from a drone? Or via the North Col if you prefer.
  • … you can now select your preferred language directly on the OpenStreetMap website?

OSM in the media

  • OpenCage interviewed Jochen Topf, the developer behind some key OpenStreetMap tools including Osmium, osm2pgsql, and taginfo.

Other “geo” things

  • Richard Harries has built a History of Cycling Maps website, which delves into the evolution of maps designed primarily for cyclists in the British Isles between around 1870 and 1970. The site features over 130 restored extracts, detailed commentary on publishers, and resources for dating maps, structured as an informative, non-interactive digital reference book.
  • Here & There detailed how iOverlander’s transition from a free, community-driven app to a paid subscription model triggered a backlash due to user expectations and unresolved UX issues, highlighting the complex tension between sustainability and user trust in grassroots tech platforms.
  • The US National Oceanic and Atmospheric Administration announced that access to key data, used in hurricane forecasting, will be cut by the end of July, as the US Department of Defense will stop providing data from the Defense Meteorological Satellite Program.

Upcoming Events

Country Where Venue What Online When
Pforzheim VPE Verkehrsverbund Pforzheim-Enzkreis, 3. OSMS meets VPE 2025-07-05
Ghaziabad Vaishali OSM India monthly mapping party (online) 2025-07-06
Hamburg Expected: "Variable", Karolinenstraße 23 Hamburger Mappertreffen 2025-07-08
München Echardinger Einkehr Münchner OSM-Treffen 2025-07-08
San Jose Online South Bay Map Night 2025-07-09
UN Mappers Community Discussion: Ambassadors Pilot Initiative Plan 2025-07-10
Online OpenStreetMap Midwest Meetup 2025-07-11
OSMF Engineering Working Group meeting 2025-07-11
Berlin Parzelle III/23b, Kleingartenkolonie Johannisberg, OSM-Stammtisch Berlin-Brandenburg 2025-07-11
Gaborone Online OSM Africa July 2025 Mapathon 2025-07-12
København Cafe Mellemrummet OSMmapperCPH 2025-07-13
Ghaziabad Vaishali 18th OpenStreetMap Delhi Mapping Party 2025-07-13
MZ Centar II FOSS4G Europe 2025 2025-07-14 – 2025-07-20
臺北市 MozSpace Taipei OpenStreetMap x Wikidata Taipei #78 2025-07-14
Missing Maps London: (Online) Mid-Month Mapathon [eng] 2025-07-15
City of Edinburgh Guildford Arms OSM Edinburgh Social Meet-up 2025-07-15
Online Lüneburger Mappertreffen 2025-07-15
Salt Lake City Woodbine Food Hall OSM Utah Monthly Map Night 2025-07-16
Karlsruhe Chiang Mai Stammtisch Karlsruhe 2025-07-16
Heidelberg Berliner Str. 45, 69120 Heidelberg Missing Maps Mapathon in Heidelberg MSF 2025-07-17

Note:
If you would like to see your event here, please add it to the OSM calendar. Only events entered there will appear in weeklyOSM.

This weeklyOSM was produced by 115c7a5fac, MatthiasMatthias, PierZen, Raquel Dezidério Souto, Strubbl, Andrew Davidson, TrickyFoxy, barefootstache, derFred, mcliquid.
We welcome link suggestions for the next issue via this form and look forward to your contributions.

Rumour has it that I might be a bit of a train nerd. At least I want to collect various nerdy data about my travels. Historically that data has lived in manual form in several places, [1] but over the past year and a half I've been working on a toy project to collect most of that information into a custom tool.

That toy project [2] uses various sources to get information about trains to fill up its database: for example, in Finland Fintraffic, the organization responsible for railway traffic management, publishes very comprehensive open data about almost everything that's moving on the Finnish railway network. Unfortunately, I cannot be on all of the trains. [3] Thus I need to tell the system details about my journeys.

The obvious solution is to make a form that lets me save that data. Which I did, but I very quickly got bored of filling out that form, and as regular readers of this blog know, there is no reason to settle for a simple but boring solution when the alternative is to make something ridiculously overengineered.

Parsing data out of my train tickets

Finnish long-distance trains generally require train-specific seat reservations, which means VR (the train company) knows which trains I am on. We just need to find a way to extract that information in some machine-readable format. So my plan for the ridiculously overengineered solution was to parse the booking emails to get the details I need.

Now, VR ticket emails include the data I want in a few different formats: as text in the HTML email body, in the embedded calendar invite, as text in the included PDF ticket, and encoded in the Aztec code on the included PDF ticket. I chose to parse the last option, in the hope of building something that could be ported to parse other operators' tickets with relative ease.

Example Aztec code

After a bit of digging (thank you to the KDE Itinerary people for documenting this!) I stumbled upon a European Union Agency for Railways PDF titled ELECTRONIC SEAT/BERTH RESERVATION AND ELECTRONIC PRODUCTION OF TRANSPORT DOCUMENTS - TRANSPORT DOCUMENTS (RCT2 STANDARD) which, in its Appendix C.1, describes how the information is encoded in the code. [4] (As a side note, various sources call these codes SSB version 1 codes, although that term isn't used in this specification. So maybe there are more specifications about the format that I haven't discovered yet!)

I then wrote a parser in Go for the binary data embedded in these codes. So far it works, although I wouldn't be surprised if there are some edge cases it doesn't handle. In particular, the spec defines a custom lookup table for converting between text and binary data, and that table only supports the characters 0-9 and A-Z. But Finnish railway station codes can also use Ä and Ö… maybe I need to buy a ticket to a station with one of those.
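To make that limitation concrete, here is a rough Python sketch of this kind of 6-bit text decoding. The real parser is in Go, and the actual bit layout and lookup table are defined in Appendix C.1 of the spec; the alphabet below (0-9 followed by A-Z) is an illustrative assumption, not a verified copy of the table.

```python
# Hypothetical sketch of decoding 6-bit packed text from an SSB-style
# binary payload. The 6-bit alphabet covers only 0-9 and A-Z, which is
# exactly why characters like Ä and Ö cannot be represented.
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def read_bits(data: bytes, offset: int, count: int) -> int:
    """Read `count` bits from `data` starting at bit `offset` (MSB first)."""
    value = 0
    for i in range(count):
        bit_index = offset + i
        byte = data[bit_index // 8]
        bit = (byte >> (7 - bit_index % 8)) & 1
        value = (value << 1) | bit
    return value

def decode_text(data: bytes, offset: int, chars: int) -> str:
    """Decode `chars` consecutive 6-bit characters starting at bit `offset`."""
    out = []
    for i in range(chars):
        code = read_bits(data, offset + 6 * i, 6)
        out.append(ALPHABET[code] if code < len(ALPHABET) else "?")
    return "".join(out)
```

With this layout, a three-letter station code occupies 18 bits, so fields routinely straddle byte boundaries, which is where most parser edge cases tend to hide.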

Extracting barcodes out of emails

A parser just for the binary format isn't enough here if the intended source input is the emails that VR sends upon making a booking. Time to write a single-purpose email server! In short, the logic in the server, again written in Go and with the help of go-smtp and go-message, is:

  • Accept any mail with a reasonable body size
  • Process through all body parts
  • For all PDF parts, extract all images
  • For all images, run them through ZXing
  • For all decoded barcodes, try to parse them with my new ticket parsing library I mentioned earlier
  • If any tickets are found, send the data from them and any metadata to the main backend, which will save them to a database

The custom mail server exposes an LMTP interface over TCP for my internet-facing mail servers to forward to. I chose LMTP for this because it seemed like a better fit in theory than normal (E)SMTP. I've since discovered that curl doesn't support LMTP, which makes development harder, and in practice there's no benefit to LMTP here, as all mails are sent to the backend in a single request regardless of the number of recipients, so maybe I'll migrate it to regular SMTP at some point.

Side quest time

The last missing part is automatically forwarding the ticket mails to the new service. I've routed a dedicated subdomain to the new service, and the backend is configured to allocate addresses like i2v44g2pygkcth64stjgyuqz@somedomain.example for each user. That would be enough if we wanted to manually forward mails to the service, but we can go one step further. I created a dedicated email alias in my mail server config that routes both to my regular mailbox and to the service address. That way I can update my VR account to use the alias and have mails automatically processed while still receiving backup copies of the tickets (and any other important mail that VR might send me).
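In a Postfix-style setup, such an alias could be a single virtual-alias entry. This is a hypothetical sketch: the post names neither the mail server nor the alias address, so both `tickets@` and the mailbox address here are made up; only the generated service address comes from the text above.

```
# /etc/postfix/virtual (hypothetical): fan the alias out to both the
# regular mailbox and the ticket service's generated address
tickets@somedomain.example  me@somedomain.example, i2v44g2pygkcth64stjgyuqz@somedomain.example
```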

Unfortunately that last part turned out to be easier said than done. Logging in on the website, I was greeted by text stating I needed to contact customer service by phone to change the address associated with my account. [5] After a bit of digging, I noticed that the mobile app suggests filling out a feedback form in order to change the address. So I filled it out, and after a day or two I got a "confirm you want to change your email" mail. Success!


  1. Including (but not limited to): a page of this website, the notes app on my phone, and an uMap map. ↩︎

  2. Which I'm not directly naming here because I still think it needs a lot more work before being presentable, but if you're really interested it's not that hard to find out. ↩︎

  3. Someone should invent human cloning so that we can fix this. ↩︎

  4. People who know much more about railway ticketing than I do were surprised when I told them this format is still in use somewhere. So, uh, sorry if you were expecting a nice universal worldwide standard! ↩︎

  5. In case you have not guessed yet, I do not like making phone calls. ↩︎

InputBox parameters and form submission

Friday, 4 July 2025 05:34 UTC

Fremantle

· Wikimedia · MediaWiki · searching · InputBox ·

Thanks to a recent wish I've been poking a bit at the InputBox extension lately, to make it work better with MediaSearch and CirrusSearch. This involves making it honour the user preference for Special:Search or Special:MediaSearch (if the extension for the latter is installed), and fixing up the way in which it passes its searchfilter parameter to the search page.

The fix for the first issue was to set the initial form action (which ends up in the parser cache and so can't be user-specific) to the site's default, and then have a front-end switch that dynamically changes it to whatever the current user has as their preference. Slightly clunkily, this involves sending both possible URLs to the front end and then choosing between them, because otherwise they wouldn't be localised.

The second issue came about because InputBox submits the search and searchfilter values as separate GET parameters, and on loading the special page it would change the internal request object to hold a unified value (i.e. the two values concatenated with a space between them). The trouble was that you'd end up at a URL like Special:Search?search=foo&searchfilter=insource:Bar, and so anything that accessed the search value directly would get it wrong. The fix was to unify the values and then redirect to a new URL without searchfilter, and also to skip that redirect by doing the same sort of replacement in the front end. So most people will not get the redirect, but we always aim to have a no-JS fallback. I did wonder about switching the input names around so that there's no visible change to the text input, which might be confusing to people who see it change; but it only changes after they've clicked submit, so there's little time to notice what it's doing.
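The unification step amounts to something like the following Python sketch. This is only an illustration of the logic; the actual implementation is PHP inside MediaWiki, and the function name here is invented.

```python
# Merge the separate `search` and `searchfilter` GET parameters into a
# single search string (concatenated with a space) and rebuild the URL
# without `searchfilter` -- the target of the redirect described above.
from urllib.parse import parse_qs, urlencode, urlsplit, urlunsplit

def unify_search_url(url: str) -> str:
    parts = urlsplit(url)
    params = parse_qs(parts.query)
    search = params.get("search", [""])[0]
    searchfilter = params.pop("searchfilter", [None])[0]
    if searchfilter:
        # the special page joins the two values with a space internally
        params["search"] = [f"{search} {searchfilter}".strip()]
    query = urlencode({k: v[0] for k, v in params.items()})
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query, parts.fragment))
```

Doing the same replacement client-side before submission means most users never see the intermediate URL at all, which is the no-redirect fast path mentioned above.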

I think there are similar improvements that could be made to other parts of InputBox, such as type=move with a prefix, but no one's complained about that not working so I don't think I'll bother digging any further for now.


Highlights from MediaWiki 1.44

Friday, 4 July 2025 00:00 UTC

The latest release of MediaWiki, 1.44, published in early July 2025 and supported through June 2026, brings a range of enhancements to your wiki.

Version 1.44 improves wiki features you use every day, from blocking accounts and review workflows to redirect handling, file protection, and editing extensions. They help you spend less time on maintenance and more on content. Whether you are running a public wiki or an in-house knowledge base, these improvements cut overhead and increase security. Explore the sections below to see exactly what is new.

The New Manage Blocks Interface and Multiblocks

With the new Manage Blocks interface for advanced user blocking, you get a cleaner and more powerful workflow. At the heart of this release is the Multiblocks capability, which empowers administrators to impose multiple, distinct restrictions on the same user or IP address concurrently.

  • Layered Restrictions
    Apply overlapping blocks that target specific actions, including both standard and newly supported ones. You can block traditional actions like editing pages, account creation, or sending email, and take advantage of new options for moving pages and files, creating new pages, and uploading files. Each restriction has its own customizable expiration time.
  • Partial screenshot of the “Block” special page showing how to set a partial block with multiple action restrictions and a custom duration.
  • Transparent Status Display
    Users can view exactly which restrictions are in place and their expiration times, so they know where they stand. Administrators will see active and past blocks, with all details at their fingertips, simplifying audits and follow‑ups.

By enabling targeted, situation-specific restrictions instead of all-or-nothing bans, the new Manage Blocks interface ensures you apply only the necessary limits, and only when needed. With customizable sub-blocks and clear status visibility, you can now enforce rules more appropriately to the given situation, reduce collateral impact, and maintain community trust without overblocking.

Note that this feature is not enabled by default; you need to enable it via the configuration parameters ($wgEnableMultiBlocks and $wgUseCodexSpecialBlock).
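In LocalSettings.php that would look like the following (assuming the usual boolean on/off form for these settings; the two parameter names come from the release notes, the values are the obvious opt-in):

```php
// LocalSettings.php – opt in to the new block UI and Multiblocks
$wgUseCodexSpecialBlock = true;  // new Codex-based Special:Block interface
$wgEnableMultiBlocks = true;     // allow multiple concurrent blocks per target
```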

Improved Patrolling

To improve quality-assurance workflows, MediaWiki’s Patrolling feature has been enhanced for better oversight:

  • Recreated pages tagged
    New pages using a previously deleted title receive an automatic “Recreated” tag, which you can see on special pages listing new pages or recent changes. This way, patrollers can quickly spot and review them.
  • Rollbacks marked “manually patrolled”
    Rolling back an unpatrolled edit now flags it as manually patrolled (not autopatrolled), accurately reflecting human review.

Together, these two enhancements give patrollers a clearer, more accurate view of new and reverted content, helping communities keep their wikis clean and trustworthy.

Smarter Redirect Handling

Several enhancements came to MediaWiki’s Redirects feature:

  • Redirects to non-existent pages now trigger a warning.
  • Warnings appear when you point a redirect at another redirect.
Partial screenshot of a page showing a warning triggered when a redirect points to another redirect.

These checks prevent errors at their source, reducing the need for tedious follow-up later.

Additionally, editors will now find it easier to manage redirects: action tabs on redirect pages link back to the redirect itself, letting you edit without jumping to the target page.

Together, these improvements make redirects more manageable and mistakes easier to avoid.

File Protection Security Pitfalls Addressed

Several file protection issues were resolved by applying cascading protection to both file uploads and their description pages, while also enforcing reupload permissions for version reverts.

When you embed a file with [[File:Example.png]] on a cascade-protected page, only the upload itself is locked down; anyone can still edit the file’s description page. However, if you transclude the description page directly with {{:File:Example.png}}, the protection carries over, and you cannot change the metadata or licensing details either.

That means embedded files leave their documentation vulnerable, while transcluded files stay fully locked. For sensitive assets like legal logos, policy diagrams, or any image where the caption matters, transclusion is the safer choice.

Additionally, reverting files now requires the same reupload permissions and respects cascading protection, preventing rollbacks without the proper rights.

New Parser Functions for Cross-Wiki Links

Linking across wikis or languages gets messy when your local namespace clashes with an interwiki or language code. Two new parser functions are now available to solve precisely that problem:

  1. #interwikilink – Forces an interwiki link, i.e., a link in your text, even if the prefix matches a local namespace.
  2. #interlanguagelink – Adds an interlanguage link, i.e., a link in the sidebar, when the prefix overlaps a namespace.

The problem is conflicting prefixes. On your internal wiki, you may have an “EU:” namespace collecting your internal knowledge and data about the European Union, yet “eu:” is also the Basque language code. Writing [[eu:PageName]] will resolve to https://example.org/wiki/EU:PageName instead of, e.g., https://eu.wikipedia.org/wiki/PageName, since the local namespace takes precedence.

Now you can use {{#interwikilink:eu|PageName|PageTitle}} or {{#interlanguagelink:en|PageName}} with “eu” as your prefix to link to the external wiki or add a sidebar language link, depending on which kind of link you need.

With these new parser functions at hand, you will never accidentally link to the wrong “EU:” page again. Enjoy conflict-free cross-wiki linking!

Spotlight on Bundled Extensions

Several bundled MediaWiki extensions have received usability enhancements that make editing, maintenance, and customization faster and more powerful. Here is our pick of the key improvements in WikiEditor, Nuke, TemplateStyles, VisualEditor, and CodeEditor, though other bundled extensions have received enhancements as well.

WikiEditor

The WikiEditor, also known as “2010 Wikitext Editor,” offers the following improvements:

  • Symbol tracking: Keeps your 32 most-recently used special characters front and center across sessions.
  • List handling: Select one or more lines and click indent/outdent to build properly nested sub-lists instantly.
  • Code button: Wrap selected text in <code> tags with a single click for quick inline code formatting.
  • New keyboard shortcuts: “Ctrl+B” for bold, “Ctrl+I” for italics, and four more to speed up your editing.

Nuke

The Nuke extension gained streamlined bulk-deletion features with advanced filtering and preview options, and more:

  • Namespace multi-select: Pick any combination of namespaces when fetching pages for bulk deletion.
  • “Nuke” tagging: Mass deletions are now automatically labeled with the “Nuke” tag for easier review.
  • 90-day history: When targeting a specific user or IP, you can delete pages they created in the last 90 days (up from 30).
  • Size filter: Limit deletions to pages within a specified byte-size range.
  • Include talk pages & redirects: Bulk delete associated talk pages and redirects alongside the main pages.
  • Preview for all: Even non-admins can now see which pages would be deletable under Special:Nuke.
  • Post-queue links: After queuing deletions, get one-click links to the user’s page and any pages you did not select.
Partial screenshot of the “Nuke” special page showing how to configure multiple mass deletion options.

TemplateStyles

Template editors working with the TemplateStyles extension can now tailor their output for users with specific accessibility needs using media queries like prefers-reduced-motion, prefers-reduced-transparency, prefers-contrast, and forced-colors.
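For example, a template stylesheet could now switch off a decorative animation for users who request reduced motion. The media queries are the ones named above; `.scroll-banner` is a made-up class name for illustration.

```css
/* In a TemplateStyles sheet: honour the user's reduced-motion setting.
   .scroll-banner is a hypothetical template class. */
@media (prefers-reduced-motion: reduce) {
  .scroll-banner {
    animation: none;
    transition: none;
  }
}
```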

VisualEditor

Like WikiEditor, VisualEditor now tracks symbols, keeping your 32 most recently used special characters readily accessible across sessions.

CodeEditor

The CodeEditor extension now supports live code autocompletion across JavaScript, CSS, JSON, and Lua code pages.

Interwiki

The functionality of the Interwiki extension is now part of MediaWiki core. A separate install is no longer needed. Everyone can view the interwiki table, and users granted the “interwiki” permission can edit it.

Impact on Technical Teams

MediaWiki 1.44 brings quite a few under-the-hood tweaks. The biggest one: if you are running DNS-based spam block lists, do not forget to add your extra spam-DB URLs. There are a handful of other minor changes as well, so give the RELEASE NOTES’ configuration changes a quick review to avoid any surprises.

This release also brings notable enhancements for developers and integrators. Extension authors and API consumers will love the expanded Action API capabilities, including authenticated cross-origin requests, alongside refreshed front-end tooling. The updated Codex library brings minor visual changes, alongside breaking changes affecting core MediaWiki and extensions. A range of new hooks, services, and a new system of events and listeners further help create and maintain extensions. For a complete overview of every technical change, be sure to consult the “New developer features,” “Action API changes,” “Breaking changes,” and “Deprecations” sections of the 1.44 RELEASE NOTES.

Upgrading: Planning, Compatibility, and Considerations

Before upgrading to MediaWiki 1.44, check your upgrade path, system requirements, and compatibility:

System Requirements and Compatibility

PHP Requirements
MediaWiki 1.44 continues to require PHP 8.1 or later as the minimum supported version. PHP 8.2 to 8.4 are also supported, though support for PHP 8.4 is somewhat less mature.

Database Requirements
MediaWiki 1.44 maintains the minimum database requirement of MariaDB 10.3 or later, or MySQL 5.7 or later.

Essential Information for MediaWiki Upgrades

MediaWiki 1.35 is the oldest version from which a direct upgrade to MediaWiki 1.44 is possible. However, we advise upgrading via MediaWiki 1.39 and 1.43 as intermediate steps.

If you want to minimize upgrade hassle, consider sticking with MediaWiki 1.43, which remains a rock-solid choice even though 1.44 is now available: 1.43 will continue receiving long-term support through December 2027.

If you are using MediaWiki 1.39, 1.40, 1.41, or 1.42 (or older), consider moving to either 1.43 (recommended) or the new 1.44 release soon.

For comprehensive assistance on handling MediaWiki, check out our upgrade guide. It contains detailed instructions for installation and configuration.

Conclusion

MediaWiki 1.44 makes life easier for editors, admins, and programmers alike. A redesigned user blocking interface and cleaner review flows streamline processes. Better redirect handling and file protection mean less cleanup and wiki gardening afterward. Bundled extensions, such as WikiEditor and Nuke, bring useful new features. All of this delivers a quicker, safer, and smoother wiki experience from the very start for everybody.

MediaWiki Hosting

Create your wiki instantly via ProWiki. Never worry about upgrades again. Get started immediately via the free trial, or contact us to migrate your existing wiki.

Increased error rates on all wikis

Wednesday, 2 July 2025 15:12 UTC

Jul 2, 15:12 UTC
Resolved - Incident resolved, editing at normal levels.

Jul 2, 14:58 UTC
Update - A fix has been implemented and we are monitoring the results.

Jul 2, 14:57 UTC
Monitoring - A fix has been implemented, and reads and edits are recovering.

Jul 2, 14:36 UTC
Update - Commons API is unavailable

Jul 2, 14:21 UTC
Identified - The issue has been identified and a fix is being implemented.

Jul 2, 14:00 UTC
Investigating - We are investigating an issue affecting reading and editing on all wikis

SLAPPs are “strategic lawsuits against public participation”: legal cases brought to the courts in order to threaten and/or silence journalists, activists, and academics, including Wikipedia volunteer contributors. This legal phenomenon risks causing a chilling effect, which may be particularly acute in the case of those people who contribute voluntarily to Wikipedia and the other Wikimedia projects.

Last year, the European Union (EU) decided to take a major step to address this serious issue, and adopted the Anti-SLAPP Directive. It is now the turn of national governments to play their role and adopt strong national transposition laws to fulfill their responsibilities towards civil society and the public interest.

It is crucial that EU governments put in place strong and tailored safeguards, which can protect the fundamental rights to freedom of expression as well as privacy of those who make reliable information available to European society, including Wikipedia volunteer contributors. We call upon them to recognize the specificities of the SLAPPs targeting anonymous public participation, exactly as the Council of Europe did in its Recommendation.

Episode 185: Marijn van Wezel

Tuesday, 1 July 2025 17:19 UTC

🕑 55 minutes

Marijn van Wezel is a part-time developer at the MediaWiki consulting company Wikibase Solutions. He is also working on a master's degree in computer science at Radboud University Nijmegen.

Links for some of the topics discussed:

For some students, choosing a Wikipedia article to improve can be a challenge, but for recent Rhode Island College graduate Samantha Cote, picking an article for her Wikipedia assignment was easy – and then the real work began!

“I entered the project with the understanding that the article I chose was likely to be a mishmash of various and, to a degree, biased, viewpoints that I would have to analyze and make sense of, due to the nature of the topic,” said Cote, who transformed the Association Shams Wikipedia article as part of her coursework. “Association Shams is a nonprofit that focuses on LGBTQ rights and sexual health advocacy, and it exists in a country that criminalizes homosexuality.”

First, the political science student needed to untangle what felt like a jumble of biased, conflicting viewpoints she encountered in the existing text. Next, she had to find high-quality sources to enhance the accuracy, neutrality, and representation of the topic.

Samantha Cote
Samantha Cote. Image courtesy Samantha Cote, all rights reserved.

Already familiar with the Tunisian organization through her research for a previous course, Cote began her work by trying to identify issues, biases, and missing or redundant information in the article.

“I wanted to improve the quality of the article and clarify the viewpoint to a more neutral and informational one, rather than a fight between different editors to paint the organization one way or another,” explained Cote. “I was very insistent on adding academic sources, because all of the information in the [existing] Wikipedia article was sourced from news articles, which focused on sensationalism and controversy.”

Cote not only incorporated new content and sources into the Association Shams article, but she also transformed its structure, creating a more balanced and representative distribution of sections. Before Cote’s Wikipedia assignment, the online encyclopedia’s coverage of the nonprofit organization was dominated by information related to legal battles and controversies.

“I edited the controversy sections so that it was clearer to distinguish who or what was being critiqued specifically, and to sequester the information into the relevant sections, rather than spread out throughout the entire article,”  said Cote. “I wanted to bring to the forefront how deeply connected the organization is to the Arab Spring movement and the increasing push towards secularism and democracy throughout the Arab world. The first academic source I found helped me add information about how the fight for democracy was the catalyst that inspired numerous people in Tunisia to fight for relevance and power, including the LGBTQ community.”

Though her confidence was a bit shaky as a new Wikipedia editor, Cote underscored her appreciation for the assignment and the new skills she developed throughout the process.

“Despite the learning curve, this was one of the most interesting and engaging projects that I have ever done throughout my college years,” said Cote. “It was challenging, yet not overwhelming. Most of all, it taught me a lesson I’ll use going forward, more often than drafting an academic paper of any sort.”

One of Cote’s key takeaways? Information can be constructed in ways that present biases – and all of us have the power to contribute to a more accurate, representative, and complete information landscape.

“For a lot of young people, using Wikipedia to find a starting point is particularly important to the way that we explore and understand the world around us,” reflected Cote. “The abundance of articles and the ubiquity of Wikipedia contributes to an atmosphere of understanding, trust and consistency that feels less and less common these days. And now I have a Wikipedia account, experience, and the knowledge and confidence to make useful and worthwhile contributions.”


Interested in incorporating a Wikipedia assignment into your course? Visit teach.wikiedu.org to learn more about the free resources, digital tools, and staff support that Wiki Education offers to postsecondary instructors in the United States and Canada.


Regional outreach in South Australia
By Belinda Spry.


As part of Wikimedia Australia's regional outreach, a Wikipedia workshop was held in Aldinga, South Australia (45 kilometres south of Adelaide), focusing on enhancing Wikipedia articles about the local people and history of the area.

The event grew out of the determination of a local historian, who was frustrated by the absence of Wikipedia articles about significant women artists from South Australia. Motivated to take action, he drove a return trip of nearly 2,500 kilometres to Canberra to attend a Know My Name edit-a-thon at the National Gallery of Australia, to learn how to contribute to Wikipedia. On returning to Adelaide, he continued editing and has since liaised with WMAU staff to facilitate the Aldinga workshop aimed at documenting the cultural heritage of regional South Australia.

Willunga in South Australia

History writing for Wikipedia

Participants learned editing skills and worked together to improve articles. Using the Aldinga Library’s local history resources, the group collaboratively reviewed articles with errors, poor citations, and unclear writing. Through discussion and hands-on editing, participants improved several entries, ensuring more accurate, better-sourced, and locally informed content on Wikipedia. There is great interest in another workshop in the near future to learn how to upload images to Wikimedia Commons.

This event is part of ongoing efforts by Wikimedia Australia to grow Wikipedia’s coverage of regional Australia while building digital literacy and research skills in local communities.

Willunga streetscape in 2023


weeklyOSM 779

Sunday, 29 June 2025 10:12 UTC

19/06/2025-25/06/2025

lead picture

[1] | © jaz-michaelking | Map tiles by CartoDB – Powered by uMap | Map data © OpenStreetMap Contributors.

Community

  • Pascal Neis recently conducted a GNSS accuracy comparison involving drones, an action camera, and a smartwatch. In the experiment, both the DJI Avata and the NEO drones successfully recorded GNSS tracks, but commonly deviated several metres from the reference route. The GoPro action camera yielded similar or slightly worse accuracy, with GNSS tracks occasionally straying even farther than the drones. In contrast, the Garmin smartwatch delivered the most consistent and accurate results.
  • Martijn van Exel published his slides and cleaned-up speaker notes from his talk about MapRoulette and his lightning talk about the Meet Your Mappers tool given at State of the Map US in Boston.
  • Graham Park provided an explanation on how to utilise Strava traces for mapping paths in OpenStreetMap, noting that contributors are permitted to use Strava heatmap data for tracing, a usage right reconfirmed in November 2019.
  • Christoph Hormann commented on the recent controversy over the OpenStreetMap Foundation’s largest externally funded project to date, warning that it risks creating conflict between those driven by substantial extrinsic motivations, such as financial incentives or vague promises of money, and a community of intrinsically motivated volunteers. He emphasised that this issue stems from the broader problem of generational turnover within the OSM project, and encouraged the community to facilitate this transition, either by developing their own approaches or by pressuring the OSMF to begin taking responsibility.

OpenStreetMap Foundation

  • On Wednesday 25 June at 21:43 UTC the OpenStreetMap Grafana dashboard indicated an issue with the minutely replication diffs not updating. This disruption led to several downstream problems; most notably, newly uploaded changesets could neither be queried on the map nor rendered. The issue was first reported by community member queenofthenightosphere at 23:37 UTC. The OSM services that rely on this diff stream to stay in sync with the live database, including the Overpass API, tile rendering, Nominatim, and others, stopped updating. While the changesets were safely stored, they could not be accessed through these services due to the sync failure.

    • To resolve the issue, the OpenStreetMap Operations Team temporarily took the main site offline for maintenance. This was done to bring the replication diffs back into alignment with the database. The maintenance was successfully completed around 13:27 UTC on Thursday 26 June.
    • As of 14:06 UTC, core services such as tile rendering (tile.osm.org) and the Overpass API had caught up with current OSM edits via the diffs, while Nominatim was still in the process of catching up. Other services, especially third-party instances, may take additional time to fully resynchronise. The root cause appears to stem from an unexpected issue involving PostgreSQL and how osmdbt pulls replication diffs from the database.
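The replication stream these services consume follows a simple, documented layout on planet.osm.org: a state.txt file advertises the latest sequence number, and each minutely diff lives at a path derived from that number. A minimal sketch of how a consumer would locate a diff (the helper names here are illustrative, not part of any official client library):

```python
# Sketch of locating OSM minutely replication diffs, assuming the
# standard planet.osm.org layout. Helper names are hypothetical.

def seq_to_path(seq: int) -> str:
    """Map a replication sequence number to its diff file path.

    Sequence numbers are zero-padded to nine digits and split into
    three groups of three, e.g. 6543210 -> 006/543/210.osc.gz.
    """
    s = f"{seq:09d}"
    return f"{s[0:3]}/{s[3:6]}/{s[6:9]}.osc.gz"

BASE = "https://planet.openstreetmap.org/replication/minute/"

def diff_url(seq: int) -> str:
    """Full URL of the minutely diff for a given sequence number."""
    return BASE + seq_to_path(seq)

# A consumer first fetches BASE + "state.txt", which contains a line
# like "sequenceNumber=6543210", then downloads the matching diff:
print(diff_url(6543210))
```

When the state file stops advancing, as happened here, every downstream consumer simply sees no new sequence number and stalls, which is why the stored changesets remained safe but invisible to Overpass, the tile renderers, and Nominatim.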

Events

  • State of the Map Europe, happening in the UK this November, has opened its call for proposals. The organisers hope you will propose talks, panels, and other sessions across the schedule's eight tracks.

OSM research

  • HeiGIT reported that a new paper was published by Huber et al., introducing the isocalor approach to assess how solar exposure and heat stress affect pedestrian access to essential services in Heidelberg, Germany. The study used OpenStreetMap data and a customised openrouteservice routing engine, and was conducted within the framework of the HEAL project.
  • Daniel Schep, Overpass Ultra’s developer, requests your input on how users engage with Ultra.
  • Ibra Lebbe Mohamed Zahir and colleagues have assessed the impact of the Booster Grants programme on climate activism and education among YouthMappers in Sri Lanka. Funded by HOT/Open Mapping Hub Asia Pacific, the programme supported grassroots mapping initiatives. Eastern University, Sri Lanka, was among its local implementing partners.

Maps

  • Lostmonkey tooted about ’17 Walls – Murals in Aarhus, Denmark’, a uMap showing the locations of the murals from the ’17 Walls’ project, along with many other murals in Aarhus (Denmark) that have been mapped in OpenStreetMap. You can also visit the project’s website.

Open Data

  • At the State of the Map US conference, Jacob Whall raised concerns, in a direct conversation with members of the Overture Maps team, regarding the organisation’s apparent failure to fully comply with the OSM project’s Open Database License. In response, Overture representatives acknowledged and apologised for certain oversights, and gave a verbal commitment to address the issues and implement necessary corrections.

Software

  • BeardMD was disappointed with the apps used for hiking the St James Way (Camino de Santiago). These apps ask users for updates to opening hours or bed counts, but never contribute the information gained back to OSM. To fix that, he has written the Camino App, which does contribute back, starting with fountain potability information.
  • Ralph Straumann reported that Cadence Maps is currently developing a new cloud-native data service for distributing OSM data in the Germany–Austria–Switzerland–Liechtenstein region. The service delivers data in the GeoParquet format using a unified schema modelled after the Overture Maps framework.
  • The first public test build of CoMaps is now available for iOS!
  • HeiGIT reported that their online dashboard ohsomeNow Stats has been updated. The dashboard now provides near real-time access to all OpenStreetMap contributions since 2005 without the need for hashtags, enabling more flexible and detailed analysis of global mapping activities.
  • novaTopFlex tooted about the novaMapTerm, an application written in Go, with the Fyne toolkit, combining the Linux terminal with OpenStreetMap data. Watch a demonstration video.
  • VersaTiles tooted about a website that compares vector tiles for OpenStreetMap. It was developed with planetiler-shortbread, a Docker image containing planetiler and VersaTiles vector tiles. The code is available on GitHub and the demo is accessible as well.
  • mondstern tooted about the PublicArtExplorer, an application developed by Thomas Zell that allows you to discover and explore public art around you in just two clicks. The app uses your location to instantly reveal nearby sculptures, murals, and installations, complete with detailed information about each piece and its artist. The PublicArtExplorer’s data is based on OpenStreetMap and Wikipedia and is available for Android.
  • What’s new on the OpenStreetMap website since mid-May? It’s now easier to get directions, link to your social media profiles, set your preferred language, and much more.

Programming

  • Paul Norman showed how he can identify when coastlines have changed, which was required for the OSMF Shortbread vector tiles. He stated that every few days about 1000 coastline tiles need re-rendering.

Releases

  • TrickyFoxy has published better-osm-org 1.0.0, which includes: rendering of type=restriction relations, displaying information about deleted users, filtering notes on the map, a ruler for measuring distances on the map, and the ability to move node POIs. Previous updates had also added links to 3D renderings of buildings, the ability to partially revert changesets via osm-revert, and filtering changesets in the user’s heatmap. The source code is available on GitHub.
  • A completely new release of Tracesmap has dropped. Its main focus has pivoted to being a GPX file viewer.
  • Version 1.15.0 of Baba, the Panoramax Android app, has been released and now supports the Taiwanese Panoramax instance.

Did you know that …

  • [1] … jaz-michaelking has developed ‘Fediverse Near Me’? It is an interactive web map showcasing a curated list of well-moderated Mastodon servers open for registration and tailored to specific countries, regions, or languages.

Other “geo” things

  • AllTrails has launched an AI-powered feature that allows users to create custom hiking routes from scratch, modify existing trails, or utilise a new routing system to make hikes shorter, less steep, or more scenic. While the feature is designed to enhance user experience, search and rescue officials have warned that it could worsen a long-standing problem: hiking apps spreading inaccurate or misleading trail information. Environmental advocates have also raised concerns that the feature could encourage users to create unofficial trails wherever they please, emphasising that trails exist for a reason, as staying on designated paths is not only important for human safety but also for protecting entire ecosystems, which often rely on carefully planned trail designs to allow limited, sustainable access.
  • A research team, led by Ed Hawkins and the University of Reading, maintains an interactive map, #ShowYourStripes, which shows the rise in average temperatures around the world. Stripes turned from mainly blue to mainly red in more recent years for virtually every country, region, and city. The application uses OpenStreetMap as base layer.
  • The National Remote Sensing Centre of the Indian Space Research Organisation is hosting a free webinar on Bhuvan, the country’s web-based geospatial platform. The webinar is from 9 to 11 July and registration closes on Wednesday 1 July.

Upcoming Events

Country Where Venue What Online When
flag Saint-Étienne Zoomacom Rencontre Saint-Étienne et sud Loire 2025-06-30
flag Žilina FRI UNIZA Missing Maps mapathon Žilina #18 2025-07-01
flag Derby The Brunswick, Railway Terrace, Derby East Midlands Pub Meet-up 2025-07-01
Missing Maps London: (Online) Mapathon [eng] 2025-07-01
flag Stuttgart Stuttgart Stuttgarter OpenStreetMap-Treffen 2025-07-02
flag Wageningen Campus, Omnia FOSS4GNL 2025 2025-07-02 – 2025-07-03
iD Community Chat 2025-07-02
UN Mappers #ValidationFriday Mappy Hour: Waterways 2025-07-04
flag Bogotá Remote Mapping Party Semanal LATAM Weekly LATAM Mapping Party 2025-07-04
flag Pforzheim VPE Verkehrsverbund Pforzheim-Enzkreis, 3. OSMS meets VPE 2025-07-05
flag Ghaziabad Vaishali OSM India monthly mapping party (online) 2025-07-06
flag San Jose Online South Bay Map Night 2025-07-09
flag Hamburg Expected venue: "Variable", Karolinenstraße 23 Hamburger Mappertreffen 2025-07-08
flag Online OpenStreetMap Midwest Meetup 2025-07-11
OSMF Engineering Working Group meeting 2025-07-11
flag Berlin Parzelle III/23b, Kleingartenkolonie Johannisberg, OSM-Stammtisch Berlin-Brandenburg 2025-07-11
flag København Cafe Mellemrummet OSMmapperCPH 2025-07-13
flag Ghaziabad Vaishali 18th OpenStreetMap Delhi Mapping Party 2025-07-13
flag MZ Centar II FOSS4G Europe 2025 2025-07-14 – 2025-07-20
flag 臺北市 MozSpace Taipei OpenStreetMap x Wikidata Taipei #78 2025-07-14

Note:
If you would like to see your event here, please add it to the OSM calendar. Only events listed there will appear in weeklyOSM.

This weeklyOSM was produced by HeiGIT, MatthiasMatthias, Raquel Dezidério Souto, Strubbl, Andrew Davidson, barefootstache, derFred.
We welcome link suggestions for the next issue via this form and look forward to your contributions.

The Wikipedia Test

Friday, 27 June 2025 23:00 UTC

We get it; we really do. Lawmakers across the world are rightly focused on regulating powerful, for-profit platforms to mitigate the harms ascribed to social media and other threats online. When developing such legislation, however, some draft laws can inadvertently place public interest projects like Wikipedia at risk. At the Wikimedia Foundation, the nonprofit organization that hosts Wikipedia and other Wikimedia platforms, we have found that when a proposed law harms Wikipedia, in many cases it likely harms other community-led websites, open resources, or digital infrastructure.

That is why we have created the Wikipedia Test: a public policy tool and a call to action to help ensure regulators consider how new laws can negatively affect online communities and platforms that provide services and information in the public interest.

The Wikipedia Test offers a straightforward idea as its central premise:

Before passing regulations, legislators should ask themselves whether their proposed laws would make it easier or harder for people to read, contribute to, and/or trust a project like Wikipedia.

When we say “Wikipedia” in the context of the test, we mean it as a model for the best parts of the internet. Wikipedia can act as a stand-in for those other online spaces that are open, privacy-respecting, and enable people around the world to share knowledge that can advance education, development, and civic participation.

This includes things like: Project Gutenberg, which makes educational and cultural resources freely available; FixMyStreet and its public reporting forums so citizens can direct their representatives to their concerns; Global Voices and its citizen journalism platforms, which amplify stories left untold by larger news media; and, a multitude of data-sharing and code repositories and digital public goods that help researchers advance our understanding and actions regarding public health, climate change, and Sustainable Development Goals.

In a nutshell, the Wikipedia Test is a reminder: When regulation fails to account for the various kinds of platforms and services that exist online, the result can be laws that unintentionally harm the very spaces that offer an alternative to the commercial web.

The Wikipedia Test is more than just a safeguard: it is a way to promote a positive vision for the internet. We envision a digital ecosystem where “everyone has easy access to a multilingual digital commons supported by a thriving public domain and freely licensed content.” To get there, policymakers must support online spaces where diverse communities can build and govern knowledge-sharing platforms in their own languages and cultural contexts. The Wikipedia Test helps identify whether a proposal aligns with this future — or undermines it.

As you will see below, the tool itself is a short, easy-to-use rubric designed to help lawmakers, regulators, and public interest advocates ask the right questions. These are the kinds of considerations that define whether a law or regulation protects the knowledge and information that belongs to everyone online — i.e., the digital commons — or, worryingly, threatens it.

Like everything in the Wikimedia ecosystem, the Wikipedia Test is free to access and share. Policy advocates both inside and outside the Wikimedia movement can use the Wikipedia Test to spark better conversations with lawmakers. Regulators can use it to spot potential red flags early in the drafting process. And best of all, it is not a pass-fail assessment: it is an invitation to think more critically, to ask better questions, and to reach out to others that are also concerned about making sure that the internet is the best that it can be.

When in doubt, contact the Global Advocacy team at the Wikimedia Foundation. We are here to help assess the impacts of proposed rules and laws, and to work together toward ensuring better outcomes for everyone.

Last but not least, we would love your feedback. Are you a policymaker looking for a clearer path through complex digital questions? Are you an advocate who wants to integrate the Wikipedia Test into your own work? Let us know at globaladvocacy@wikimedia.org.

By working together, we can ensure that the internet remains a space where knowledge can be built and shared by everyone, in every language, everywhere in the world.

The Wikipedia Test

[Free expression] Could the policy increase the legal risks or costs of hosting community-led public interest projects like Wikipedia?

Community-led moderation on platforms like Wikipedia, Reddit, or OpenStreetMap relies on intermediary liability protections, which shield websites and users from legal responsibility for user-generated content (UGC). The best-known example is Section 230 of the US Communications Decency Act (CDA). Weakening these protections could force centralized moderation, undermining crowdsourced models. Proposals to change Section 230 are frequent — we even published a three-part blog series on the issue. The Electronic Frontier Foundation also explains why this matters for Wikipedia.

[Access to information] Could the policy make it harder to access or share information, including works that are freely licensed, protected by copyright, or in the public domain?

A good example of policy supporting access to freely licensed information is the 2021 UNESCO Recommendation on Open Science. It urges governments to reform copyright and data laws to enable open access, public domain use, and collaboration in order to enhance scientific research. This is a strong foundation for legal frameworks that support cocreation and citizen science. The International Federation of Library Associations and Institutions (IFLA) praised it for reinforcing libraries’ roles in equitable access. Implemented well, it ensures public funding results in public knowledge — not paywalled content.

[Privacy and safety] Could the policy potentially threaten user privacy by requiring the collection of sensitive, identifiable information like ages, real names, or contact information of Wikipedia’s volunteer editors and readers?

The UK Online Safety Act (OSA) and Australia’s Basic Online Safety Expectations (BOSE) are examples of laws that threaten user privacy by requiring websites to collect ages or real names. The collection, processing, and retention of such sensitive data increase the risk of a range of privacy harms, including identity theft, surveillance, and harassment. Journalists have reported on how this could undermine Wikipedia’s commitment to anonymity and privacy, potentially making both readers and volunteer editors less willing to access or contribute to Wikipedia.

[Free expression] Could the policy lead to potential surveillance and cause a chilling effect that discourages people from reading or editing Wikipedia?

Electronic mass surveillance, like that conducted by the United States National Security Agency’s (NSA) “Upstream” program, has been legally contested in a number of countries. It is one of many forms of surveillance that can chill freedom of expression by making people afraid to access or contribute to discussions of certain topics — even on an encyclopedia such as Wikipedia.

[Privacy and safety] Could the policy make it riskier for people to access, edit, and share content on Wikipedia by enabling governments to collect identifying information about volunteer editors or readers, leading to intimidation or retaliation?

The United Nations Convention Against Cybercrime is an international treaty that, if widely adopted, could be used by repressive governments to reach across national borders in order to prosecute political enemies, dissidents, and others who challenge authoritarian rule — including Wikipedia editors. Freedom House also explains that the treaty would make it easier for law enforcement agencies to obtain private companies’ electronic records and data, undermining the human rights of people outside of those agencies’ jurisdictions.

[Free expression] Could the policy limit the ability of volunteer editors to govern Wikipedia’s content and guidelines?

A 2021 bill in Chile could have severely threatened community-led models of platform governance. The bill’s one-size-fits-all approach would have imposed content moderation obligations designed for commercial platforms, including preemptive content takedown. This would have undermined the autonomy of volunteer editors in shaping content and guidelines. The CELE noted how such regulations risked threatening rights, chilling participation, and eroding the collaborative nature of websites — such as Wikipedia.

[Access to information] Could the policy restrict the free flow of information across borders, potentially limiting access to Wikipedia and its content?

During 2017–2020, the government of Turkey blocked Wikipedia in the country, denying more than 80 million people there access to an essential source of information that the rest of the world could read and contribute to. Freedom of expression and access to reliable information empower people to make better decisions, be more connected, and build sustainable livelihoods. This violation of human rights was ultimately condemned by the Turkish Constitutional Court, which ruled that access to the encyclopedia had to be restored. Since then, Turkish Wikipedia, which is viewed more than 150 million times a month, has grown by more than 474,000 articles.

Remember: We value your feedback, so please reach out to the Global Advocacy team with any questions, thoughts, and suggestions that you might have about the Wikipedia Test.

Together we can promote and protect the best parts of the internet!

Stay informed on digital policy, Wikipedia, and the future of the internet: Subscribe to our quarterly Global Advocacy newsletter.
