There are important differences between good-faith errors and intentional vandalism. Accidents can be widely felt too, but we view them in a different light than vandalism. We can mitigate good-faith errors in a variety of ways without affecting the distribution mechanism: user education, improved editor usability, better documentation, tagging scheme reform. There's room for improvement on all of these fronts, but on the whole we've managed accidents well.
The concern expressed in this thread is that we've been slow to respond to the rise in malicious edits with systematic measures. We have some guardrails against casual vandals, such as the iD and Rapid features that "lock" objects carrying Wikidata tags, which suggest some degree of notability. But to the extent that we have any countermeasures against more persistent vandals, they'll inherently grow outdated over time.
Today, our first line of defense against persistent, high-profile vandalism is a relatively small group of elite mappers who know how to use a third-party service[1] to detect spot fires and various arcane revert tools to extinguish them. It's easy to see how the rate of vandalism could outstrip our capacity to fight it as OSM becomes more prominent and vandalism techniques become more widely known. We can't say for sure if or when that will happen, but planning for this scenario wouldn't be wasted effort: it would also cut down on the effort we currently take for granted in fighting ordinary vandalism.
In discussions about the future of countervandalism, we're quick to dismiss approval systems, no matter how nuanced. That's only natural: many of us are familiar with the approval systems of other platforms, such as Google Maps, and we don't want to be like them. Gating volunteer contributions behind an approval system can fail too, sometimes disastrously. But an approval system is a sledgehammer, and there may be scalpels we haven't considered yet. My suggestion would be to investigate the countermeasures employed by similarly situated projects that also need to maintain a high degree of openness despite being prominent targets for vandalism.
To pick on a project I'm familiar with, many of the technical measures against vandalism in the MediaWiki ecosystem are responses to common behavioral patterns that the Wikipedia community has identified over the years. These measures also ended up helping the project with problems beyond vandalism. Abuse filters block many attempted acts of vandalism, but they also block SEO spam, a scourge we know well. CheckUser helps administrators block sockpuppets, whether for block evasion or for astroturfing in a content dispute. Revision scoring not only catches subtle vandalism but also lets Wikimedia claim they've been using AI all this time.
Our analogues to these tools would look different because of our data model, but they wouldn't be impossible to build, and I think we could introduce them without a backlash over egalitarianism or transparency.
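To make the abuse-filter idea a bit more concrete, here's a minimal sketch of what changeset-level rules might look like on our side. Everything in it is hypothetical: the `ChangesetSummary` fields, the thresholds, and the rules themselves are illustrations of the pattern, not a proposal for specific heuristics or a description of any existing OSM tool.

```python
# Hypothetical sketch of changeset-level "abuse filter" rules for OSM.
# Names, fields, and thresholds are made up for illustration only.

import re
from dataclasses import dataclass, field

@dataclass
class ChangesetSummary:
    """Aggregate facts about a changeset, as a filter would see them."""
    uid: int
    account_age_days: int
    deletions: int
    name_changes: list[tuple[str, str]] = field(default_factory=list)  # (old, new)

PROFANITY = re.compile(r"\b(badword1|badword2)\b", re.IGNORECASE)  # placeholder list

def rule_mass_deletion_by_new_account(cs: ChangesetSummary) -> bool:
    # Flag (not block) large deletions from accounts under a week old.
    return cs.account_age_days < 7 and cs.deletions > 200

def rule_profane_renames(cs: ChangesetSummary) -> bool:
    # Flag name= edits that introduce profanity absent from the old value.
    return any(
        PROFANITY.search(new) and not PROFANITY.search(old or "")
        for old, new in cs.name_changes
    )

RULES = [rule_mass_deletion_by_new_account, rule_profane_renames]

def flag_changeset(cs: ChangesetSummary) -> list[str]:
    """Return the names of the rules a changeset trips, for human review."""
    return [rule.__name__ for rule in RULES if rule(cs)]
```

The point is that, much like MediaWiki's abuse filters, rules of this kind only need aggregate signals we can already derive from changesets, and they can flag edits for review rather than rejecting them outright.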
Or does OSMCha count as second-party because of its OSMUS sponsorship? You get my point. ↩︎