Overpass API performance issues

How is it now? The situation seems to have been getting worse for a while.

I’ve noticed that 2 of the 4 servers of ā€œprivate.coffeeā€ are not up to date: their ā€œtimestamp_osm_baseā€ values are ā€œ2026-01-15T18:15:15Zā€ and ā€œ2026-01-06T16:09:29Zā€ (today: 2026-01-17). The servers’ names appear to be ā€˜h8’ and ā€˜h9’. Server selection seems to be round-robin (RR) via DNS.
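
For anyone who wants to check this themselves, here is a minimal sketch (not PTNA code; the private.coffee endpoint URL and the use of Python with the requests library are my assumptions) that asks an instance for its timestamp_osm_base and prints how far its data lags behind:

```python
# Sketch: report how far an Overpass instance's data lags behind "now".
# The endpoint URL below is an assumption; adjust it for the instance you
# want to check.
from datetime import datetime, timezone

import requests

ENDPOINT = "https://overpass.private.coffee/api/interpreter"  # assumed URL

def data_age_hours(endpoint: str) -> float:
    """Return the age of the instance's data in hours."""
    # Any query with [out:json] returns an "osm3s" metadata block that
    # includes timestamp_osm_base, so a tiny query is enough.
    response = requests.post(
        endpoint, data={"data": "[out:json];node(1);out;"}, timeout=30
    )
    response.raise_for_status()
    base = response.json()["osm3s"]["timestamp_osm_base"]
    ts = datetime.strptime(base, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - ts).total_seconds() / 3600

if __name__ == "__main__":
    # With round-robin DNS, repeated requests may land on different backends,
    # so run this a few times to spot nodes that lag behind.
    for _ in range(4):
        print(f"data is {data_age_hours(ENDPOINT):.1f} hours behind")
```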

I created an issue a few days ago.

N.B. That’s why I have not yet switched to private.coffee for PTNA’s on-demand analysis (GTFS vs OSM comparison).


FWIW, the spike in ā€œNetstat, established onlyā€ in August hasn’t recurred, but the main Overpass API instance has been unreliable for some time now. Suggesting that people ā€œuse alternative serversā€ is unhelpful, as most people’s use of the Overpass API isn’t direct; it’s via third-party tools that either ā€œjust failā€ or produce unpredictable results when the Overpass API they’re using is unreliable**.

As I mentioned elsewhere I’ve ā€œjust used something elseā€ when I’ve wanted to use ā€œsomething like Overpassā€ (e.g. Postpass), but alas I suspect that @SimonPoole was right to say ā€œThe surprising thing is more that the public Overpass instances have been able to stave off collapse for so longā€.

How feasible would ā€œrequiring people to be signed into OSM before using Overpassā€ be?

** and of course it really is the third-party tools that should catch this - some are better than others.


Please create a new post; it seems unrelated. See Overpass API/status - OpenStreetMap Wiki.

Agreed. For the nightly PTNA reports I’m switching to planet extracts; there’s no need for ā€œup-to-date by minuteā€ data when mappers are asleep at 3 AM local time, which is when their region gets analysed.

For the other, on-demand reports, PTNA uses both simple and more complex structured data:

  • boundary relations → can be served by ā€œpostpassā€; I’m interested in their ways only
  • route_master and route relations → ā€œpostpassā€ is not suitable here (I discussed this with Frederik); see the sketch below
    • so I need ā€œup-to-date by minuteā€ Overpass data, as mappers want to see the positive impact of their work (uploaded some minutes ago)
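
To make the second point concrete, here is a hedged sketch (not PTNA’s actual query; the endpoint, the network name, and the Python/requests client are placeholders) of the kind of nested request meant here: route_master relations plus their member route relations fetched in one go:

```python
# Illustration only, not PTNA's real query: fetch route_master relations of a
# placeholder network together with their member route relations.
import requests

ENDPOINT = "https://overpass-api.de/api/interpreter"  # example public instance

QUERY = """
[out:json][timeout:180];
relation["route_master"]["network"="Example Network"]->.masters;
relation(r.masters)->.routes;   // route relations that are members of the masters
(.masters; .routes;);
out body;
"""

response = requests.post(ENDPOINT, data={"data": QUERY}, timeout=200)
response.raise_for_status()
print(len(response.json()["elements"]), "relations fetched")
```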

But these details are off-topic.


I’ve created a new thread - even though people perceive it as the same issue, the cause of the August 2025 issue was distinct.


This is a debatable claim. It seems to me that the problem lies elsewhere. Let’s just look at the error message users see in Overpass Turbo:

It’s bad. Overpass Turbo could check the HTTP error code and show a link to https://osm.wiki/Overpass_API#Public_Overpass_API_instances if it’s 5xx.
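
To sketch the idea (hedged: Overpass Turbo itself is JavaScript, so this Python/requests example only illustrates the pattern, and the mirror list is just an example drawn from the public list): on a 5xx response, try another public instance and, if everything fails, point the user at the wiki page above.

```python
# Sketch of "check the HTTP status and react to 5xx" in a small client.
# The mirror list is illustrative; the authoritative list is the wiki page.
import requests

INSTANCES = [
    "https://overpass-api.de/api/interpreter",
    "https://overpass.kumi.systems/api/interpreter",  # example mirror
]
WIKI_LIST = "https://osm.wiki/Overpass_API#Public_Overpass_API_instances"

def run_query(query: str) -> dict:
    last_error = None
    for endpoint in INSTANCES:
        response = requests.post(endpoint, data={"data": query}, timeout=180)
        if response.status_code >= 500:
            # Server-side trouble: note it and try the next instance instead
            # of silently failing like many third-party tools do.
            last_error = f"{endpoint} answered HTTP {response.status_code}"
            continue
        response.raise_for_status()  # a 4xx means the query itself is broken
        return response.json()
    raise RuntimeError(f"All instances failed ({last_error}); see {WIKI_LIST}")

result = run_query('[out:json];node["amenity"="cafe"](48.20,16.36,48.21,16.38);out;')
print(len(result["elements"]), "elements")
```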

In my experience, people using Overpass Turbo are hearing about other servers for the first time.


How would that help? That is a list of Overpass API endpoints, not Overpass Turbo ones. Pointing a web browser at one of those just gives:

The data included in this document is from www.openstreetmap.org. The data is made available under ODbL.
Error: encoding error: Your input contains only whitespace.

Let’s imagine I’ve just got the Overpass API error above in Overpass Turbo. What should I do to run the same Overpass Turbo query against a different server?

Edit: Until @Mateusz_Konieczny’s post below I had no idea that any Overpass Turbo instance could be pointed at any other Overpass API one. I suspect that I wasn’t the only person in OSM who didn’t know that :)


Change it in the Overpass Turbo settings, accessible from the top bar, second button from the right:

[screenshot of the Overpass Turbo top bar]

Then, near the top of the window that appears, the second line (currently) lets you try a different server.

The problem is that, when I was looking for alternatives, I found no server that was both up to date and working well.

I can only echo ā€œThe surprising thing is more that the public Overpass instances have been able to stave off collapse for so longā€.


That’s a good point. I’ve edited this wiki page to make those links unclickable.

But I just wanted to get the idea across. We need to show the user what they can do: use a different frontend, change settings, and so on.


Or optimize the query. I found an interesting tip from the documentation in the previous thread: Commons

The server admits a request if and only if it is going to use in both criteria at most half of the remaining available resources.
For the maximum memory usage, the default value is 512 MiB.

Now I’ve experimented a bit and realized that for many requests [out:json][maxsize:16Mi] is enough, and I’ve never received a 504. You can even get away with 1Mi if you’re asking for something like node[...]({{bbox}}). 64Mi was enough for me to request all buildings in St. Petersburg (but, as expected, Overpass Turbo froze for me :))
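
For reference, a minimal sketch of what such a request looks like when sent directly to an instance with Python/requests; the endpoint, bounding box, and tag are arbitrary placeholders, and the Mi suffix is written exactly as in the examples above:

```python
# Sketch: the same kind of request with an explicit, low memory budget.
import requests

ENDPOINT = "https://overpass-api.de/api/interpreter"  # example public instance

# A node query over a small bbox needs far less than the 512 MiB default,
# so declaring a 16 MiB budget is plenty and leaves the server more headroom
# to admit other requests.
QUERY = """
[out:json][timeout:60][maxsize:16Mi];
node["amenity"="drinking_water"](59.92,30.28,59.96,30.36);
out;
"""

response = requests.post(ENDPOINT, data={"data": QUERY}, timeout=90)
response.raise_for_status()
print(len(response.json()["elements"]), "elements")
```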

Perhaps this is just a hack around the current rate-limit mechanism, and if everyone defaults to 64Mi, we’ll end up back where we started. But perhaps it’s low-hanging fruit for optimization.

But it’s worth noting that this method has its problems. For example, requests to maps.mail.ru will be blocked by Tracking Protection in Firefox. Also on the list is overpass.openstreetmap.ru, which has not been operational for a long time.


Well, it’s certainly not getting any better: osm db request count (Munin :: localdomain :: localhost.localdomain :: osm db request count)

There seems to have been a significant uptick in the abuse in November.

I feel we may be entering the era of ā€œif you need Overpass for your process, you’re going to have to run your ownā€. I know I’ve had to set up my own AU instance to keep my scripts from constantly breaking.

EDIT: Ha, it turns out I just repeated what Simon already said in Undiscussed mass edits of sport=rugby_union and sport=rugby_league - #8 by SimonPoole.
