How about limiting new accounts?

Sure, though the overuse of reverting is a big issue in my experience, and one of the reasons I don’t really contribute to Wikipedia - your mileage may of course vary.

Going back to the comment I was actually replying to, I don’t believe there is a “critical gap in expertise” in OSM which makes us unable to implement technical measures like this. There have been organisational issues which have made the response in this case slower than it could have been, and I guess that’s where I’d like to see some brainpower applied. In terms of lessons learned, I have high confidence that EWG/sysadmins will consider what changes need to be made to the site code, but perhaps a little less that OSMF will develop a response protocol in time for the next incident.

3 Likes

Given that there are 7 people on the OSMF Board, I wonder which ²⁄₃₅th of another board member you think Mateusz has eaten. :rofl: I know some board members are more vocal than others, but that doesn’t mean they don’t exist!

1 Like

Well, I stand fractionally corrected then :crazy_face:

I would like to disagree, either in whole or at least with the interpretation of the words “cyber” and “security”.

Moderation (anti-vandalism) has little to do with “cyber security”. Anti-vandalism tools are not really security products, and they do not require anyone with a security background on board. In theory I could write a simple bot in a few days to process changes, run a filtering engine/analysis on them, and alert someone, or even trigger some kind of automagic revert, but first, I do not have the time at the moment, and second, it would really be much better to organise this rather than have people solving it individually in possibly conflicting ways.
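
To make that concrete, here is a minimal sketch of what such a bot could look like, assuming the public OSM API 0.6 changeset query endpoint; the heuristics and thresholds are purely illustrative, not an actual anti-vandalism ruleset:

```python
# Minimal sketch of a changeset-monitoring bot: poll recently closed
# changesets and flag ones matching simple, made-up heuristics.
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone

import requests

API = "https://api.openstreetmap.org/api/0.6/changesets"

def fetch_recent_changesets(minutes=10):
    """Return <changeset> elements closed within the last N minutes."""
    since = (datetime.now(timezone.utc) - timedelta(minutes=minutes)).strftime("%Y-%m-%dT%H:%M:%SZ")
    resp = requests.get(API, params={"time": since, "closed": "true"}, timeout=30)
    resp.raise_for_status()
    return ET.fromstring(resp.content).findall("changeset")

def looks_suspicious(cs):
    """Toy heuristics: huge edit count, or a bounding box spanning continents."""
    if int(cs.get("changes_count", 0)) > 5000:
        return True
    if all(cs.get(k) for k in ("min_lon", "max_lon", "min_lat", "max_lat")):
        lon_span = float(cs.get("max_lon")) - float(cs.get("min_lon"))
        lat_span = float(cs.get("max_lat")) - float(cs.get("min_lat"))
        if lon_span > 90 or lat_span > 45:
            return True
    return False

if __name__ == "__main__":
    for cs in fetch_recent_changesets():
        if looks_suspicious(cs):
            # A real bot would alert a channel or queue the changeset for review.
            print(f"flag changeset {cs.get('id')} by {cs.get('user')}")
```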

I would even guess that the current tools provide adequate output to handle that, even more so if someone would explain to volunteer editors how to coordinate their efforts. I am not very active here because I do not really encounter this kind of abuse.

In general, yes, but in a project as visible as OSM, abuse can run along a spectrum from casual graffiti by bulbasaur gardeners all the way up to massive, sustained attacks by well-resourced organizations. Even if that isn’t quite what we’re dealing with right now, we should consider it a wake-up call: OSM’s well-intentioned vulnerabilities make us an easy target during real-world conflicts. Not everyone needs to concern themselves with such weighty issues, but someone should have a security hat on.

4 Likes

Does Wikipedia ever publicly name and shame, or pursue legal action?

Wikipedia’s editor community has previously named and shamed organizations for trying to use the site as a public relations channel – most notably, various members of the U.S. Congress. But that only had an impact because those legislators had a reputation to defend (and embarrassingly didn’t always know what their staffers were up to). Vandalism is a different beast with different incentives. Even if the OSMF could identify a specific malicious actor, it would only have legal recourse in some jurisdictions, and exercising that recourse wouldn’t be without cost. So at best, this would be a complementary approach alongside technical defenses and mitigations.

1 Like

We are getting a lot of traffic and interest in this topic because it has clearly surpassed the level of ordinary moderation, and the number of people actively working against this one bot shows that this is an attack.

The term cyber-security is apt and appropriate to mention here: securing the login system, and indeed the actual data all our volunteers put into OSM, is something that could very much be improved upon.

I’m very happy to see small steps being taken; the rate-limiting features added by Tom are great and make total sense. I doubt you’ll find anyone going against that. Well, other than those who are doing the vandalism anyway :slight_smile:

This topic (and I’m the one who started it) was indeed aimed at the Foundation, and I guess at the board as the decision-making people there. The goal is to get the ball rolling on essentially what we’ve seen now: rate-limiting, at a somewhat more advanced level than we’ve seen so far.

The idea is that a new account has editing rights that match its age. A day-zero mapper will do a LOT less than one who has logged many mapping days. This is a natural form of rate-limiting that nobody would actually complain about, since the limits would likely never be set so low that genuine day-zero mappers hit them.
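
As a purely illustrative sketch of that idea (none of these tier names or numbers come from OSM; they are placeholders), the allowance could simply be looked up from the account’s age:

```python
# Illustrative account-age-based editing allowances: allowances grow with
# account age, and day-zero mappers stay well under them during normal editing.
from dataclasses import dataclass
from datetime import date

@dataclass
class Allowance:
    changesets_per_day: int
    edits_per_changeset: int

# Hypothetical tiers keyed by minimum account age in days.
TIERS = [
    (0, Allowance(changesets_per_day=20, edits_per_changeset=500)),
    (7, Allowance(changesets_per_day=100, edits_per_changeset=5_000)),
    (30, Allowance(changesets_per_day=500, edits_per_changeset=50_000)),
]

def allowance_for(account_created: date, today: date) -> Allowance:
    """Pick the highest tier whose minimum age the account has reached."""
    age_days = (today - account_created).days
    current = TIERS[0][1]
    for min_age, allowance in TIERS:
        if age_days >= min_age:
            current = allowance
    return current
```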

But I’m not entirely sure whether the board and the foundation have even considered this approach. The replies in this topic are positive about the idea, but are we ever going to see it? Or will the idea be dismissed as not having majority consensus?

So, apart from this needing a coder, what are the chances of “limiting new accounts” actually being implemented?

Would you care to define exactly what you mean by limiting, please?

Very high. In fact, it is already partially implemented and released.

Rate limits are part of limiting new accounts, as new accounts have lower limits.

See, for example, openstreetmap-website/app/models/user.rb at 75bde83a138226179059551e386561640adc285d · tomhughes/openstreetmap-website · GitHub - new users have lower limits, which grow (up to a point) as users make more comments.

As a result, a brand-new account with normal activity is fairly unlikely to be affected, accounts with more activity have higher limits, and a brand-new vandal account will hit the limits earlier.
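
As a rough paraphrase of that general approach (the real logic is Ruby in user.rb and uses different names and constants than these made-up ones), the allowance starts small and grows with recorded activity, up to a ceiling:

```python
# Allowance grows linearly with past activity and is capped at a ceiling.
# All constants here are illustrative, not the values used by the site.
def hourly_limit(past_contributions: int,
                 base: int = 10,
                 per_contribution: int = 2,
                 ceiling: int = 1000) -> int:
    return min(base + per_contribution * past_contributions, ceiling)

# A brand-new vandal account hits the small base limit quickly, while an
# account with history gets more headroom.
assert hourly_limit(0) == 10
assert hourly_limit(600) == 1000
```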

1 Like

Please remember that limits will have an unobvious side effect: they will make vandalism harder to detect.

Can you be more specific about which measures will have that effect? (Other than that reducing the volume of vandal attacks results in less vandalism to be found in the first place.)

The 10 km altitude view is that a user account of a certain age (or maturity) belongs to a person whose mapping matches that maturity. This means that an account that is one day old will be limited in, say, mapping a whole new forest, but normal day-1 activity will be well within the limits; regular users should not even know these limits exist.

What specific limits would work best should probably come out of some data-mining of account history over the last couple of years. Any numbers I state on what I think would work would be pure guesswork, so I’ll skip numbers.
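
For illustration, the kind of data-mining meant here could be as simple as taking the distribution of how much genuine day-one accounts actually edit and picking a high percentile as the candidate limit; the sample data below is invented:

```python
# Pick a day-one edit limit that almost no legitimate newcomer would ever hit,
# using a high percentile of historical day-one activity (stand-in data).
from statistics import quantiles

edits_on_first_day = [3, 5, 8, 1, 40, 12, 2, 7, 150, 9, 4, 6, 11, 2, 30, 5]

# quantiles(n=100) returns the 1st..99th percentile cut points; index 98 is
# the 99th percentile.
candidate_limit = quantiles(edits_on_first_day, n=100)[98]
print(f"proposed day-one edit limit: about {candidate_limit:.0f} edits")
```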

Limits that would be very useful are:

  • number of changesets per hour/day.
  • number of new points per changeset/day.
  • number of updated/added properties per changeset/day.
  • number of comments the user can make (this looks to be done already).
  • Limiting the area a single changeset can cover, meaning you can’t edit Russia and New Zealand in the same changeset (see the sketch after this list).
    Notice that this one requires editors to add matching restrictions as well, in order to avoid upsetting users.
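
The last item in the list could look roughly like the following check; the threshold and function name are invented for illustration, and a real implementation would also need to handle the antimeridian:

```python
# Reject a changeset whose bounding box spans too large an area.
MAX_BBOX_DEGREES = 5.0  # hypothetical maximum extent in either direction

def changeset_bbox_ok(min_lon, min_lat, max_lon, max_lat) -> bool:
    """True if the changeset's bounding box stays within the allowed extent."""
    return ((max_lon - min_lon) <= MAX_BBOX_DEGREES
            and (max_lat - min_lat) <= MAX_BBOX_DEGREES)

# Editing within one city passes; touching Russia and New Zealand (or here,
# Moscow and Auckland) in the same changeset does not.
assert changeset_bbox_ok(13.3, 52.4, 13.5, 52.6)         # Berlin-sized bbox
assert not changeset_bbox_ok(37.6, -36.8, 174.8, 55.7)   # Moscow to Auckland
```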

We’ll probably need more bots with (secret) rulesets to find and flag anything that looks weird for human inspection.

But I also disagree, on some level, that it would become harder to detect. It is like graffiti under a bridge: the trick is not to have the authorities detect it, the trick is to empower citizens to fix it.

Abuse is less obvious/visible if it’s stretched evenly across multiple days or weeks.

Thank you!
Now look back two steps, look at your list, and tell me: which of these would help at all with the current problem?

As far as I can see, none of them, which is why I’m asking. (Making one edit per zombie account would trigger none of these.) If that’s right, I wonder why this topic even exists, since it doesn’t serve as a solution to the problem in the OP at all. Mixing one problem with a solution to a different problem is not easy to follow or debate, especially since half of the people are talking about the former and the other half about the latter.

At the moment, the primary way to detect this type of vandalism is its scale. We should not be complacent, because such vandalism - especially ideologically motivated vandalism - will not stop. It will just become smarter.

All of them. Just check out the actual changesets that are being reverted if you have any such doubt.

Interesting. It seems you didn’t read the OP, or notice who the OP is. :rofl:

…I often see cases where there is a problem and people start suggesting “solutions” that do not help with it; they may address a different problem, or, more often than not, they do not solve anything but give a false sense of having more control.

Limiting “new users” is not an obvious “solution” to “some” problems. It is also obvious that the phrase “limiting new users” is pretty loosely defined here. It may well mean:

  • Limiting registrations (based on various metrics, a few of which are reliable).
  • Limiting already registered accounts based on activity, age, and various metrics which are not very reliable.
  • Deciding what and how to limit, which is not very obvious (and it’s not friendly to throw API errors at new users).
  • Manual oversight. All such mechanisms need continuous control, supervision, and a lot of manual handling of false positives (and false negatives as well); without that, they quickly become a tool that blocks new users in general.

I believe it is good to ask the developers to consider various limiting methods, but I don’t believe any of them should be pushed without thinking extensively about the consequences (not just for established editors and abusers) and matching them up with the data we already have about historical usage patterns. I strongly believe OSMF does that; maybe it’s just not well communicated. But I see little point in handing out advice without examining all of that.

You seem to imply that nobody is doing anything about it. Maybe it’s just a lack of information?