Overpass API performance issues

How is it now? It seems to have been getting worse for a while.

I’ve noticed that 2 of the 4 servers of ā€œprivate.coffeeā€ are not up to date: their ā€œtimestamp_osm_baseā€ values are ā€œ2026-01-15T18:15:15Zā€ and ā€œ2026-01-06T16:09:29Zā€ (today: 2026-01-17). The servers’ names appear to be ā€˜h8’ and ā€˜h9’. Server selection seems to be round-robin (RR) via DNS.
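For reference, turning a ā€œtimestamp_osm_baseā€ value into a lag figure is straightforward. A minimal sketch (the ā€œnowā€ below is hard-coded to the date mentioned above; in practice you would use the current time):

```python
from datetime import datetime, timezone

def lag_hours(timestamp_osm_base: str, now: datetime) -> float:
    """Return how many hours an instance's data lags behind `now`."""
    base = datetime.strptime(
        timestamp_osm_base, "%Y-%m-%dT%H:%M:%SZ"
    ).replace(tzinfo=timezone.utc)
    return (now - base).total_seconds() / 3600

# Hard-coded reference time for this example (2026-01-17, see above).
now = datetime(2026, 1, 17, 18, 0, tzinfo=timezone.utc)
for ts in ("2026-01-15T18:15:15Z", "2026-01-06T16:09:29Z"):
    print(f"{ts}: {lag_hours(ts, now):.1f} h behind")
```

With the two timestamps above this reports roughly two days and eleven days of lag, respectively.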

I created an issue some days ago.

N.B. That’s why I did not yet switch to private.coffee for PTNA’s on-demand analysis (GTFS vs OSM comparison)

2 Likes

FWIW the spike in ā€œNetstat, established onlyā€ in August hasn’t reoccurred, but the Main Overpass API instance has been unreliable for some time now. Suggesting that people ā€œUse alternative serversā€ is unhelpful, as most people’s use of the Overpass API isn’t direct; it’s via third-party tools that either ā€œjust failā€ or produce unpredictable results when the Overpass API they’re using is unreliable**.

As I mentioned elsewhere I’ve ā€œjust used something elseā€ when I’ve wanted to use ā€œsomething like Overpassā€ (e.g. Postpass), but alas I suspect that @SimonPoole was right to say ā€œThe surprising thing is more that the public Overpass instances have been able to stave off collapse for so longā€.

How feasible would ā€œrequiring people to be signed into OSM before using Overpassā€ be?

** and of course it really is the third-party tools that should catch this - some are better than others.

3 Likes

Please create a new post. It seems unrelated. Overpass API/status - OpenStreetMap Wiki

Agreed, for the nightly PTNA reports I’m switching to planet extracts; there’s no need for ā€œup-to-date by the minuteā€ data when mappers are asleep while their region gets analysed at 3 AM their time.

For the other reports (the on-demand report), PTNA uses both simple and more complex structured data:

  • boundary relations → can be served by ā€œpostpassā€, I’m interested in their ways only
  • route_master and route relations, ā€œpostpassā€ is not suitable (I discussed this with Frederik)
    • so I need ā€œup-to-date by the minuteā€ Overpass data, as mappers want to see the positive impact of their work (uploaded some minutes ago)

But these details are off-topic.

1 Like

I’ve created a new thread - even though people observe it as the same issue, the cause of the August 2025 issue was distinct.

1 Like

This is a debatable claim. It seems to me that the problem lies elsewhere. Let’s just look at the error message users see for Overpass Turbo:

It’s bad. Overpass Turbo could check the HTTP error code and show a link to https://osm.wiki/Overpass_API#Public_Overpass_API_instances if it’s 5xx.
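The proposed check is simple to sketch. Overpass Turbo itself is JavaScript, so the Python below is only an illustration of the suggested logic, not its actual code; the `error_hint` function and the 429 branch are hypothetical additions:

```python
# Hypothetical error handler: map an HTTP status code to a user-facing hint.
# The wiki URL is the one suggested above.
ALTERNATIVES_URL = "https://osm.wiki/Overpass_API#Public_Overpass_API_instances"

def error_hint(status_code: int) -> str:
    if 500 <= status_code < 600:
        # Server-side failure: the instance may be overloaded or down.
        return (f"Server error ({status_code}). The instance may be "
                f"overloaded; see {ALTERNATIVES_URL} for other public instances.")
    if status_code == 429:
        # Rate limited: retrying elsewhere won't necessarily help.
        return "Too many requests: wait before retrying."
    return f"Request failed with HTTP {status_code}."

print(error_hint(504))
```

The point is only that a 5xx response is distinguishable from a client-side problem, so the frontend could react differently.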

My experience with people using Overpass Turbo is that they are hearing about other servers for the first time.

2 Likes

How would that help? That is a list of Overpass API endpoints, not Overpass Turbo ones. Pointing a web browser at one of those just gives:

The data included in this document is from www.openstreetmap.org. The data is made available under ODbL.
Error: encoding error: Your input contains only whitespace.

Let’s imagine I’ve just got the Overpass API error above in Overpass Turbo. What should I do to run the same Overpass Turbo query against a different server?

Edit: Until @Mateusz_Konieczny’s post below I had no idea that any Overpass Turbo instance could be pointed at any other Overpass API one. I suspect that I wasn’t the only person in OSM who didn’t know that :slight_smile:

2 Likes

change it in Overpass Turbo settings, accessible from top bar, second button from right

(screenshot: the settings button in the Overpass Turbo top bar)

then, near the top of the window that appears, the second line currently allows you to try a different server

the problem is that, when I was looking for alternatives, I found no server that was both up to date and working well

I can only echo ā€œThe surprising thing is more that the public Overpass instances have been able to stave off collapse for so longā€.

4 Likes

That’s a good point. I’ve edited this wiki page to make those links unclickable.

But I just wanted to get the idea across. We need to show the user what they can do: use a different frontend, change settings, and so on.


Or optimize the query. I found an interesting tip from the documentation in the previous thread: Commons

The server admits a request if and only if it is going to use in both criteria at most half of the remaining available resources.
For the maximum memory usage, the default value is 512 MiB.

Now I’ve experimented a bit and realized that for many requests [out:json][maxsize:16Mi] is enough, and I’ve never received a 504. You can even get away with 1Mi if you’re asking for something like node[...]({{bbox}}). 64Mi was enough for me to request all buildings in St. Petersburg (though, as expected, Overpass Turbo froze for me :)
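If you generate queries from a script, prepending such limits is a one-liner. A minimal sketch (the helper name `with_limits` and the example query are mine; 16Mi mirrors the value from the experiments above, and the right value depends on the query):

```python
# Sketch: prepend conservative resource limits to an Overpass QL query body.
def with_limits(query_body: str, maxsize: str = "16Mi", timeout_s: int = 25) -> str:
    """Prefix an Overpass QL body with out:json, timeout and maxsize settings."""
    return f"[out:json][timeout:{timeout_s}][maxsize:{maxsize}];\n{query_body}"

q = with_limits('node["amenity"="drinking_water"](59.9,30.2,60.0,30.4);out;')
print(q)
```

Tightening maxsize per query, rather than relying on the 512 MiB default, should make the admission check quoted above far more likely to let the request through.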

Perhaps this is just a hack for the current rate limit mechanism, and if everyone defaults to 64Mi, we’ll end up back where we are. But perhaps this is low-hanging fruit for optimization.

But it’s worth noting that this method has its problems. For example, requests to maps.mail.ru will be blocked by Tracking Protection in Firefox. The list also includes overpass.openstreetmap.ru, which has not been operating for a long time.

1 Like

Well it’s certainly not getting any better: osm db request count (Munin :: localdomain :: localhost.localdomain :: osm db request count)

There seems to have been a significant uptick in the abuse in November.

I feel we may be in the era where, if you need Overpass for your process, you’re going to have to run your own. I know I’ve had to set up my own AU instance to keep my scripts from constantly breaking.

EDIT: Ha, turns out I just repeated what Simon already said Undiscussed mass edits of sport=rugby_union and sport=rugby_league - #8 by SimonPoole

2 Likes

The situation seems to get worse and worse…
Maybe a funding campaign is needed to beef up or add servers that run overpass-api?

PS: I have tried to use other instances than overpass-api.de but none is as reliable as this one (before the timeout errors started to happen more and more frequently, that is).

In the case of a funding campaign, it would make more sense to gather funding to develop some way of limiting overuse (authentication? API keys? accounts with some good way to limit multi-accounts?)

rather than funding misbehaving broken DDOSers

1 Like

How did they show their unreliability? Just for reference, the Overpass server from mail.ru is much more powerful than the main server. And I doubt it’s as popular.

What type of queries did you perform?

Kumi instances had corrupted/severely outdated data when I tried to use them in the past.

For completeness, the timestamp returned from each of the listed public ones is as follows:

Main Overpass API instance
"timestamp_osm_base": "2026-03-06T12:00:58Z",

VK Maps Overpass API instance (Russia) 
"timestamp_osm_base": "2026-03-06T12:00:58Z",

Swiss Overpass API instance
"timestamp_osm_base": "112530",

Private.coffee Overpass Instance
"timestamp_osm_base": "2026-03-06T12:03:00Z",

Britain and Ireland Overpass Instance
"timestamp_osm_base": "2026-03-05T21:20:40Z",

MapRVA Overpass server
"timestamp_osm_base": "2026-03-05T21:11:14Z",

I’m not sure what the Swiss number is. It’s not a date and doesn’t match https://download.geofabrik.de/europe/switzerland-updates/000/004/715.state.txt either.
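Anyone scripting a comparison like the list above needs to guard against exactly this kind of malformed value. A minimal sketch (the helper name `parse_osm_base` is mine):

```python
from datetime import datetime

def parse_osm_base(ts: str):
    """Return a datetime if ts is a valid ISO-8601 Zulu timestamp, else None.

    Guards against malformed values like the Swiss "112530" above.
    """
    try:
        return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")
    except ValueError:
        return None

print(parse_osm_base("2026-03-05T21:20:40Z"))
print(parse_osm_base("112530"))  # malformed, returns None
```

A monitoring script would then treat a `None` result as ā€œinstance unhealthyā€ rather than crashing or silently comparing garbage.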

There are lots of 504 HTTP errors on OSM Latest Changes nowadays.
Edit: I think I misuderstood. You are asking how the other instances showed their unreliability. They often returned no data or it took them extremely long to respond.

adiff queries

One of the big advantages of an openly accessible API is that it allows people to build very simple browser-based tools without any backend infrastructure to store API keys or handle authentication. Tools like Overpass Turbo or the above-mentioned OSM Latest Changes work precisely because they can query the API directly from the browser.

Requiring API keys would raise the barrier to entry quite a bit. Many small utilities, prototypes, or personal tools would either not exist or would need a server just to proxy requests.

It would also make things like tutorials, shared queries, and quick experiments harder, since users could no longer simply run a query immediately without setting up credentials first. The current openness is likely one of the reasons why so many small but useful tools have emerged around Overpass.

1 Like

It would, but basically that ship has sailed. We live in a world where the infrastructure exists to allow the easy (and relatively cheap) scaling up of any query, and the scourge of residential proxies allows bad actors to obfuscate the source.

Depending on how the login is handled, it needn’t be much of a chore for actual real human users. I rarely see a login prompt for https://community.openstreetmap.org because I am usually already logged in via OAuth2; that could also apply to (say) an Overpass API server. The separately hosted ā€œscript that uses Overpassā€ doesn’t need its own OAuth2 mechanism; the authentication transaction is between the user and the scarce resource.

Overpass Turbo is a good example - I run an Overpass API server and I don’t see network requests from the IP address of wherever https://overpass-turbo.eu/ happens to be; I see them from the user’s IP address.

3 Likes