The Unilateral Decision on the Moderation Policy by the OSMF

First of all, I would like to express my sincere gratitude to the members of the OpenStreetMap Foundation (OSMF) for their ongoing dedication to the OSM project.

That said, I was genuinely surprised to learn that the OSMF adopted the Policy on Chatbots and Artificial Intelligence without a broader consultation with the community. What shocked me even more was the provision allowing any member of the moderation team to delete a user’s post based solely on their individual judgment.

While it’s possible that an appeal process may exist, the published policy makes no mention of such a safeguard. It simply states that any team member may, based on their own discretion, remove posts — without outlining clear criteria or steps for accountability.

I do agree with the intention behind the policy: to promote thoughtful, human-led conversations that support the forum’s role as a space for meaningful exchange. However, as someone who has recently used AI tools in my own research, I find it concerning that a policy with such far-reaching implications — including the potential for post deletion and even member bans — was introduced without prior notice, clear guidelines, or community input.

Under the section “Changes to the Moderation Policy”, it is stated: “The board will probably vote on this issue.” In my view, a board vote should not come before a thorough discussion with the wider OSM community. Shouldn’t the community — the very people who contribute to and sustain this project — be meaningfully consulted before major policy decisions are made?

Are we, the contributors, simply expected to follow top-down directives from the OSMF without question? Personally, I have always believed that the strength of the OSM ecosystem lies in the initiative, diversity, and commitment of its contributors. That’s why I find it difficult to understand how such a significant change could be introduced — even in a draft form — without open discussion.

Have I misunderstood how decision-making is supposed to work within the OSM community?

Do others here view this process as acceptable?
Is the pursuit of a well-meaning goal enough to justify sidelining transparency and due process?


Unfortunately, I may have overreacted due to the significant discrepancy between what I had understood about the OSM system and the recent policy decision.

Since I don’t fully understand the structure of OSM (including the roles of the OSMF, the OSM ecosystem, and contributors), I believe I need to first learn more about how this decision was made.

I’m particularly curious about how a policy related to the OSM forum was decided through discussions among a few moderators via a mailing list. (If it was thoroughly reviewed, I would certainly understand…)

I would also like to know exactly what the updated policy entails and the process that led to its adoption—who raised the issue, what the triggers were, what discussions took place, and through what procedure the decision was made. I assume regular contributors have the right to know about this, don’t they?

Who should I ask for an answer to this kind of question?

Should we form an OSM Contributors' Union or something?

“In the event that a suspected chatbot is found to be a human being, the normal process of appeal to the Board of Directors of the OpenStreetMap Foundation will apply”.
https://wiki.openstreetmap.org/wiki/Etiquette/Moderation_Team_Guidelines#Policy_on_Chatbots_and_Artificial_Intelligence

5 Likes

Could you stop ranting about OSM’s policies (or its contributors) for five minutes?

As for the policy… just write in imperfect English instead of translating your thoughts with AI.

6 Likes

I have not yet investigated it deeply, but my cursory understanding of the situation is that LLM spam accounts will be treated as spam accounts.

Definitely not, and AFAIK no one has tried to introduce an OSM version of papal infallibility.

Why do you think that anyone is trying to do this?

Board/Minutes/2025-06 - OpenStreetMap Foundation mentions “Due to an increasing number of Bot and AI posts to the forums…”

"the normal process of appeal to the Board of Directors of the OpenStreetMap Foundation will apply” in Etiquette/Moderation Team Guidelines - OpenStreetMap Wiki ?

I would start from reading Board/Minutes/2025-06 - OpenStreetMap Foundation and linked docs

I think that it makes sense if people (A) would comment and (B) would comment with at least a cursory understanding of the situation.

In my experience, public consultations in OSM have received relatively little feedback (and even less after taking (B) into account), even for more important and controversial issues.

Still, I think it could make sense.

4 Likes

Thank you for your thoughtful and considerate comment.

I would like to sincerely apologize once again for having gotten somewhat agitated. This was due to my limited understanding of the decision-making structure and policy-making processes within OSM, and also because what I read seemed to conflict so starkly with my prior understanding—that, although slow and often filled with disagreements, the community has generally developed its policies through discussions and deliberation.

In particular, I must admit that I became more emotional because I have long believed that there are people who are entirely excluded from the policy-making process, especially due to language barriers. This recent policy decision appeared, to me, as possibly another instance of such exclusion.
(One thing I truly wish to appeal for—though I realize it is not the main point of this discussion—is that more thought and care be given to those who, unintentionally, may be left out of the process of change in OSM. I will leave it at that for now.)

Before writing my previous comment, I tried my best to find all available links and related discussions. However, perhaps due to my language limitations, I was unable to find anything beyond the two links already mentioned above.

The only information I could access was contained in those two links. The “Internal board GitLab ticket 805” mentioned there is not a public page.
Therefore, I had no choice but to rely on the content visible in those two links, and that is why I expressed my desire to better understand the background discussions and context behind this policy decision.

Also, I am already aware of the content stating that “accounts used by chatbots will be suspended,” and I have no comment regarding that.
However, I find the definition of “reposting content generated by chatbots” to be unclear—surely it does not mean that any use of chatbot-generated text is completely prohibited?—and I do not understand why such judgment is left solely to a single member of the operations team.

Especially for people like myself who face language barriers, AI tools can be extremely helpful in overcoming those limitations and, moreover, in effectively communicating our intent to native English speakers. I believe that such tools should, in fact, be encouraged. The ambiguity surrounding what constitutes acceptable use, however, could cause unnecessary misunderstandings and conflict. (As we can already see an example of above.)

Again, regardless of whether this decision followed the proper rules or procedures, I believe it is important to gather opinions and feedback from forum users in advance—especially since it directly and significantly affects the way contributors engage in the forum.
I also believe it should be announced to the users once more before it is finalized.

Furthermore, although I have not seen the full text of the decision, based on the brief messages available in the two public links, it seems that the decision has already been mostly made and is simply awaiting approval by the board—which makes me all the more concerned.

I sincerely hope that no members are left out of the policy-making process in OSM, and that the process is carried out as transparently as possible.
Despite the noise, the slowness, and the complexity, I truly hope that OSM remains an ecosystem shaped and built by its community members themselves.

Spammers have shifted to LLM-generated comments. I’m seeing an increasing number of brand new accounts posting where it is obvious both the ideas and text were generated by a LLM. Most users won’t be aware of this because the moderators do a good job at dealing with them. They are also sometimes caught by other anti-spam measures.

This is different than users using LLM-based tools to help communicate their ideas.

10 Likes

Chatbots are not people, and AI-produced spam doesn’t qualify as free speech. In addition, the @mods-general are not empowered by this to do anything more than they are already empowered to do by the Etiquette Guidelines and supporting documents; we’re merely streamlining the ability to declutter the forum of chatbot-produced and AI-produced spam without requiring all five members of the moderation team to vote. I should also point out that the moderation team usually (as in virtually always) acts after receiving complaints from members of the community about a post.

Furthermore, although I have not seen the full text of the decision, based on the brief messages available in the two public links, it seems that the decision has already been mostly made and is simply awaiting approval by the board—which makes me all the more concerned.

All documents related to guidelines for operation of and actions of the @mods-general are either posted to or linked from this page: Moderation team for talk and osmf-talk mailing lists - OpenStreetMap Foundation.

I just searched the OSM wiki for the word “chatbot” and the first item the search engine found was the policy on chatbots: Search results for “chatbot” - OpenStreetMap Wiki

imho: You can use a better prompt with a disclaimer.

Like this:

Translate to simple, polite, 
basic-level British English with extra clarity. 
Avoid unnecessary words, em-dashes, rare words, 
special quotes or characters, 
and AI-generated phrases.
The audience is the OpenStreetMap community, 
so use an OpenStreetMap ethos and bridge-building style.
I am Korean, so add a little Korean-English greeting
and goodbye - only if it will not be misinterpreted.
Important: Add this disclaimer at the end 
exactly as shown: [[disclaimer: AI translated text]]
"""
< --- your native korean text --> 
"""

Notes:

  • If your English level is “intermediate” or “upper intermediate”, adapt the prompt. (You need to understand the translated text!)
  • Add your OSM username/profile (with your OSM edit history) to your “About me” section. (Spam profiles do not edit OSM.)
  • But it’s best to get your translation prompt approved by the moderators, because this is only my interpretation.
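To make the disclaimer idea concrete, here is a minimal sketch in Python. Everything in it is my own invention for illustration (the function names, the template, and the helper are hypothetical, not part of any official OSM tooling); the only thing taken from the prompt above is the `[[disclaimer: AI translated text]]` tag, which is assumed to be kept verbatim at the end of a translated post.

```python
# Hypothetical helper sketching the disclaimer convention from the prompt above.
# The names and template are illustrative only, not official OSM tooling.

DISCLAIMER = "[[disclaimer: AI translated text]]"

# Condensed version of the translation prompt; {text} is filled in later.
PROMPT_TEMPLATE = (
    "Translate to simple, polite, basic-level British English with extra clarity. "
    "Avoid unnecessary words, rare words, and AI-generated phrases. "
    "The audience is the OpenStreetMap community. "
    "Important: add this disclaimer at the end exactly as shown: " + DISCLAIMER +
    '\n"""\n{text}\n"""'
)

def build_translation_prompt(native_text: str) -> str:
    """Wrap the writer's native-language text in the translation prompt."""
    return PROMPT_TEMPLATE.format(text=native_text)

def has_disclaimer(post: str) -> bool:
    """Check whether a (translated) post ends with the disclaimer tag."""
    return post.rstrip().endswith(DISCLAIMER)
```

A check like `has_disclaimer` is the kind of thing a reader (or, conceivably, a moderation aid) could apply mechanically, which is why keeping the tag byte-for-byte identical matters.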

I also hope a more detailed guide will be published, because many concepts in the suggested guidelines are not clear (though this is only a first iteration).
For example, “chatbot” could describe anything from a spam account (an AI spam agent) to a mapper using a translation assistant, and ambiguity invites inconsistent enforcement.


Relevant research (roughly: if you don’t use AI, you can’t detect it):

arXiv: People who frequently use ChatGPT for writing tasks are accurate and robust detectors of AI-generated text

The policy is fundamentally about the lack of constructive content that happens to be produced in a certain manner.

Each of us “uses” LLM chatbot output in the sense that we’re forced to read it increasingly wherever we turn on the Internet. Maybe those of us who abstain would have more difficulty detecting sophisticated chatbot usage, but so far the spammers aren’t sophisticated and aren’t trying very hard to obscure their methods.

3 Likes

Exactly right. Clutter (lack of substance that would contribute to the body of knowledge) is the target. Please note as well this line from the Moderation Team Guidelines:

Sharing of accurate information that contributes to general knowledge created through artificial intelligence, such as computer code, step-by-step instructions for procedures, and the like, is of course always permissible.

Using AI to assist a human writer with grammar and syntax is certainly not the target. Using AI to help a human writer translate from one language to another is certainly not the target. The target, again, is substance-free content that appears to have been generated by AI.

1 Like

Slightly off-topic, but I wish similar standards were applied to the osm.org diaries as well. There are many substance-free “my journey” posts, which makes it hard to find meaningful, human-written content.

1 Like

Does Discourse (or a plugin) support any sort of “user diary” natively? If it did then we’d gain a lot - better editing tools for one - and also get voting, translation, etc…

3 Likes