What I did was set the tile.openstreetmap.org CDN to not consider any tiles before Wed Apr 10 2024 05:00:00 GMT (time after revert) as valid and instead retrieve new tiles. Light version of a shift-F5 or similar.
The render backend will then normally supply a fresh tile rendered after Wed Apr 10 2024 05:00:00 GMT, but if the render backend is overloaded / slow it might supply an older tile back to the CDN.
Most of the tiles with vandalism were rendered before Wed Apr 10 2024 05:00:00 GMT. If a tile with vandalism was viewed, it may still be in the local browser cache. The result is: if a user sees vandalism, then ctrl-F5 or shift-F5 is the best route for them to try to get a fresh, non-cached tile without vandalism.
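The cutoff rule described above can be sketched roughly like this (a minimal illustration only; the function names and the exact mechanism are assumptions, not the real tile.openstreetmap.org configuration):

```python
# Hypothetical sketch of the CDN-side rule: any cached tile rendered
# before the cutoff is treated as stale and re-fetched from the render
# backend. Illustrative only, not the actual OSM CDN code.
from datetime import datetime, timezone

# Time after the revert (from the post above)
CUTOFF = datetime(2024, 4, 10, 5, 0, 0, tzinfo=timezone.utc)

def is_tile_valid(rendered_at: datetime) -> bool:
    """A cached tile counts as valid only if rendered after the cutoff."""
    return rendered_at >= CUTOFF

def serve_tile(cached_rendered_at, fetch_fresh, serve_cached):
    # If the cached copy predates the cutoff (or there is none), ask the
    # render backend for a fresh tile. Note the caveat above: an
    # overloaded backend might still hand back an older render.
    if cached_rendered_at is None or not is_tile_valid(cached_rendered_at):
        return fetch_fresh()
    return serve_cached()
```

This only helps at the CDN layer; tiles already sitting in a user's browser cache still need a forced refresh (ctrl-F5/shift-F5), as described below.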
As described above, the actual data has probably already been fixed, but it'd be great if you could run through the steps above just to make sure?
There's a (fairly technical) description above of what happens between the data being fixed and you looking at a map, and according to that post the people in charge of the displayed maps are trying to fix the problem that you are seeing.
Most of these are/were horrible, but I must admit the "Andy Townsend is butch" one makes me laugh. It's nearly the etymology of Andy, and usually quite flattering besides.
Hey Andy, whether you've been called worse or not, I imagine it's still a horrible thing to experience. So, just want to say thanks for what you do.
That's the important bit - the data was fixed very quickly (with the exception of a few problems related to well-meaning people who didn't understand what was happening doing individual single-changeset reverts using online revert tools). Also, the limits on what "new" accounts can do worked well.
All of the discussion above was about people seeing things in local cache (mostly) or from the CDN (less often, but now fixed).
It's a shame that a lot of valuable time has to be spent to clean this scrap up, but it's good to know that there are people behind OSM able to handle such cases fast and effectively. Thanks to everybody helping to clean up the mess!
Hi, just checking in to see if people still don't think we have a cybersecurity problem
I note that a couple of petty vandals with sock puppet accounts can, in a few minutes, waste a lot of time for many other people. As usual, this will not be a wake-up call for the project to understand that it needs to think a lot harder about things like user trust and data integrity.
I am thankful for the folks who clean this stuff up, but I resent that they are wasting their time on it rather than pursuing more fruitful work.
I've no doubt people did the best they could with the tools available to them, but it seems a bit of a stretch to call the cleanup "fast and effective". Some users were still seeing the vandalism around a week after it occurred (?).
I'm sure this sort of thing is already being considered, but perhaps we need a "cache busting" mechanism that can force clients to request fresh tiles in this kind of situation?
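One common form of cache busting is appending a version parameter to tile URLs, so browsers and CDNs treat bumped tiles as entirely new resources. A minimal sketch of that general technique (this is an illustration, not an existing OSM feature; the `?v=` parameter is an assumption):

```python
# Sketch of version-parameter cache busting for tile URLs. Bumping
# cache_version after a mass revert would force every client to request
# a fresh tile, bypassing stale local and CDN caches. Illustrative only.
def tile_url(z: int, x: int, y: int, cache_version: int = 1) -> str:
    base = f"https://tile.openstreetmap.org/{z}/{x}/{y}.png"
    # A different query string makes caches treat this as a new resource.
    return f"{base}?v={cache_version}"
```

The trade-off is that bumping the version invalidates *every* cached tile at once, which could stampede the render backend; a real mechanism would likely need to be scoped by region or zoom level.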
I've said on several occasions that what the project needs is a cybersecurity professional. Someone with expertise in protecting digital assets. While our use case is fairly unique in the scheme of things, it is no less important to consider cybersecurity protections and how we might protect what's important to the project. It's naive to think that those of us who casually dabble in security at the minimal level needed to write code and launch websites are equipped to correctly assess and identify the best controls that balance protection with access. I put myself in that category as someone who knows just enough about Internet security to operate a public-facing web site, but definitely not enough to secure a popular and complex global enterprise.
So with that caveat,
I think the way it is now, where users can essentially spin up accounts anonymously, is fine; however, those new accounts should live in a highly restricted sandbox. In that sandbox, new users should be allowed to do the typical kinds of edits new users make: namely, low-volume and geographically compact.
I think we should let new accounts out of that sandbox when they do certain things. Link a cell phone or authenticate with two-factor authentication? Your trust goes up. Account ages and you make edits for a while? Your trust goes up. More trust = looser restrictions. This is a general principle, not a recipe.
There is probably some need for a (presently non-existent) feature where users can crowd-source the trust factor in a neutral way. Getting lots of endorsements from highly trusted users? The shackles come off faster.
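The graduated-trust idea above could be sketched something like this (a toy illustration; the signals, weights, and thresholds are all invented for the example, not a proposed design):

```python
# Toy illustration of graduated trust: signals raise a score, and higher
# scores unlock looser restrictions. All numbers here are placeholders.
def trust_score(account_age_days: int, edit_count: int,
                has_2fa: bool, endorsements: int) -> int:
    score = 0
    score += min(account_age_days // 30, 5)  # up to 5 points for account age
    score += min(edit_count // 50, 5)        # up to 5 points for edit history
    score += 3 if has_2fa else 0             # two-factor auth bumps trust
    score += min(endorsements, 5)            # crowd-sourced vouching
    return score

def restrictions_for(score: int) -> str:
    # More trust = looser restrictions (a principle, not a recipe).
    if score < 3:
        return "sandbox: low-volume, geographically compact edits only"
    if score < 8:
        return "normal editing with rate limits"
    return "unrestricted"
```

A real scheme would also need a mechanism for trusted overseers to override the score for legitimate exceptional cases (mapathons, import accounts, and so on).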
You're asking good questions but I think it's a trap to think that there's a simple formula on how to do this. These days, people get college degrees in cybersecurity yet somehow we don't seem to have anyone around on the project applying that expertise to our rampant vandalism problem.
I can do my taxes, but that doesn't qualify me to audit a company's financial statements. I can brew a cup of coffee in the morning, but that doesn't qualify me to hire baristas and open a coffee shop. I can dig a hole in the ground, but it doesn't make me an archaeologist. Likewise, I can write code, but that doesn't make me competent in designing a cybersecurity plan for a major global project. But I can certainly recognize that we have a problem.
I think our failing here is thinking that it's easy and not asking for the right expertise.
Maybe - but we as a community need to decide what sort of usage is and is not OK.
Someone external to OSM may well be able to help with detecting and preventing access and usage that we don't want - but without knowledge of the OSM community they won't know what we're trying to prevent and what we're trying to explicitly encourage - see for example the discussions elsewhere in this forum with e.g. new armchair contributors "just trying to add 1000 round buildings after some emergency".
I hope we can agree that drawing roads thousands of miles long with obscenities on them is so clearly in the "not OK" bucket that it doesn't need discussion. Let's leave the "squishy gray area" stuff to the humans already adjudicating edits and put some controls in to protect against the "obviously wrong" stuff.
I find it hard to believe that there aren't people "internal to OSM" (whatever that means) with this type of expertise.
It's a strawman argument that someone is going to come in like a bull in a china shop and break things with some kind of ham-fisted approach that's wrong for our community. Yes, by all means let's not ask for help because we're afraid we won't get good help.
…which is a case that can easily be resolved with the right access and privilege controls, and the ability for trusted overseers like yourself to increase user privileges in exceptional cases when it's assessed that users are editing in good faith.
Yes, securing and protecting our data while still maximizing the ability of users to contribute and minimizing the administrative burden is hard work. But if done right, it will be less work than the constant whack-a-mole and bad press we get every time some potty-mouthed kid at a keyboard has the brilliant idea to draw long lines and type in obscenities.
Which, by the way, could easily be detected with some basic heuristics, like a dirty-word list applied to new users that triggers an automatic account lockout. We won't get it right immediately, but I'm sure we could compile a list of triggers that are clear vandalism and work out the false positives over time.
Imagine:
"Hello, NewUser123. This is the automated system at OpenStreetMap. Your account has been locked because your account is new and we detected edits that appear to be vandalism. If you think this message is in error, please email data@openstreetmap.org and reference ticket number 123456789"
Gentlemen, please tone it down a notch. You are both right, in a certain sense.
Brian has a point that we do need security experts to advise us what kind of measures we should apply to prevent large-scale vandalism…
…but then, Andy is also right that we first ought to define "business requirements" for how to separate potentially harmful activity from normal good-faith editing practices, identify gray areas, specify use cases where exceptions may apply (mapathons which involve newbies working under the auspices of experienced users, alternative accounts for automated editing…), and so on.
Before the recent string of incidents (Ukraine/Russia name clashes; vandalism from new accounts belonging to experienced long-term abusers; to name the most prominent ones), our basic defense was "assume good faith": trust that malevolent actors are just not interested in OSM, and that any revert or DWG intervention will be sufficiently quick. However, that worldview is obviously too naive for today's world, and we have to work in both directions (cybersecurity expertise and subject-area knowledge) to reduce the future risk.
In my experience this is a problem the OSM community faces quite often: outsiders who would like to help (be it with validation, data sources, imports, software development, or even communications work) are not as useful as we (or they) would've hoped. It takes intentional effort to learn how the OSM community operates, due to it being a semi-anarchy. Having to hand-hold outsiders slows things down.
There are people who can easily get into any new field and not experience any mental blocks, but they're rare.
If it can't be done in English, someone will use their own language, so we'll have to include many languages and the dirty-word list will get very, very long.
That's my first thought, but I have no idea whether such a list is workable in the real world.
Otherwise it sounds like a good idea to me.
Also, limiting a changeset to 1 km² (which sounds rather complicated to implement) and limiting way-segment length (between two connected nodes) to 1 km both sound reasonable to me.
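The segment-length check, at least, is straightforward to sketch with the standard haversine great-circle distance (the 1 km threshold is the one suggested above; the function names are illustrative):

```python
# Sketch of the proposed limit: reject way segments longer than 1 km
# between two connected nodes, using haversine great-circle distance.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000
MAX_SEGMENT_M = 1_000  # the 1 km limit proposed above

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def segment_ok(node_a, node_b):
    # node_a and node_b are (lat, lon) tuples
    return haversine_m(*node_a, *node_b) <= MAX_SEGMENT_M
```

Legitimate long ways (coastlines, power lines, long rural roads) would trip such a limit, so it would presumably apply only to low-trust accounts rather than globally.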