How is it now? It seems to have been getting worse for a while.
I've noticed that 2 of the 4 servers of "private.coffee" are not up to date: their "timestamp_osm_base" values are "2026-01-15T18:15:15Z" and "2026-01-06T16:09:29Z" (today: 2026-01-17). The servers' names could be "h8" and "h9". Selection seems to be round-robin (RR) DNS-wise.
I created an issue some days ago.
N.B. That's why I have not yet switched to private.coffee for PTNA's on-demand analysis (GTFS vs OSM comparison).
FWIW the spike in "Netstat, established only" in August hasn't reoccurred, but the Main Overpass API instance has been unreliable for some time now. Suggesting that people "use alternative servers" is unhelpful, as most people's use of the Overpass API isn't direct; it's via third-party tools that either "just fail" or produce unpredictable results when the Overpass API they're using is unreliable**.
As I mentioned elsewhere, I've "just used something else" when I've wanted to use "something like Overpass" (e.g. Postpass), but alas I suspect that @SimonPoole was right to say "The surprising thing is more that the public Overpass instances have been able to stave off collapse for so long".
How feasible would "requiring people to be signed into OSM before using Overpass" be?
** and of course it really is the third-party tools that should catch this - some are better than others.
Please create a new post. It seems unrelated. Overpass API/status - OpenStreetMap Wiki
Agreed; for the nightly PTNA reports, I'm switching to planet extracts. There is no need for "up-to-date by the minute" data when mappers are asleep at 3 AM their time, which is when their region gets analysed.
For the other reports (the on-demand reports), PTNA uses both simple and more complex structured data:
- boundary relations: can be served by "postpass"; I'm interested in their ways only
- route_master and route relations: "postpass" is not suitable (I discussed this with Frederik)
- so I need "up-to-date by the minute" Overpass data, as mappers want to see the positive impact of their work (uploaded some minutes ago)
But these details are off-topic.
I've created a new thread - even though people see it as the same issue, the cause of the August 2025 issue was distinct.
This is a debatable claim. It seems to me that the problem lies elsewhere. Let's just look at the error message users see for Overpass Turbo:
It's bad. Overpass Turbo could check the HTTP error code and show a link to https://osm.wiki/Overpass_API#Public_Overpass_API_instances if it's 5xx.
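A minimal sketch of what such a check could look like, assuming the tool can see the HTTP status code of the failed request (the function name, the 429 case, and the message wording are illustrative, not Overpass Turbo's actual code):

```python
# Sketch: map an Overpass HTTP error code to a user-facing hint.
# The URL is the wiki page mentioned above; everything else is illustrative.

ALTERNATIVES_URL = "https://osm.wiki/Overpass_API#Public_Overpass_API_instances"

def error_hint(status_code: int) -> str:
    """Return a short message for a failed Overpass request."""
    if 500 <= status_code <= 599:
        # Server-side failure: this instance is struggling,
        # so pointing at the list of other public endpoints makes sense.
        return (f"Server error {status_code}. "
                f"Consider another instance: {ALTERNATIVES_URL}")
    if status_code == 429:
        # Rate limited: retrying on the same server later may suffice.
        return "Too many requests. Wait a bit and try again."
    return f"Request failed with HTTP {status_code}."
```

The point is just that 5xx (server broken) and 429 (client throttled) call for different advice, and only the former should push the user toward a different server.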
My experience with people using Overpass Turbo is that they are hearing about other servers for the first time.
How would that help? That is a list of Overpass API endpoints, not Overpass Turbo ones. Pointing a web browser at one of those just gives:
The data included in this document is from www.openstreetmap.org. The data is made available under ODbL.
Error: encoding error: Your input contains only whitespace.
Let's imagine I've just got the Overpass API error above in Overpass Turbo. What should I do to run the same Overpass Turbo query against a different server?
Edit: Until @Mateusz_Konieczny's post below, I had no idea that any Overpass Turbo instance could be pointed at any other Overpass API one. I suspect that I wasn't the only person in OSM who didn't know that.
Change it in the Overpass Turbo settings, accessible from the top bar, second button from the right.
Then, near the top of the window that appears, the second line currently allows you to try a different server.
The problem is that, when I was looking for alternatives, I found no server that was up to date and working well.
I can only echo "The surprising thing is more that the public Overpass instances have been able to stave off collapse for so long".
That's a good point. I've edited this wiki page to make those links unclickable.
But I just wanted to get the idea across. We need to show the user what they can do: use a different frontend, change settings, and so on.
Or optimize the query. I found an interesting tip from the documentation in the previous thread: Commons
The server admits a request if and only if it is going to use in both criteria at most half of the remaining available resources.
For the maximum memory usage, the default value is 512 MiB.
Now I've experimented a bit and realized that for many requests [out:json][maxsize:16Mi] is enough for me, and I've never received a 504. You can even get away with 1Mi if you're asking for something like node[...]({{bbox}}). 64Mi was enough for me to request all buildings in St. Petersburg (but, as expected, Overpass Turbo froze for me :)
Perhaps this is just a hack around the current rate-limit mechanism, and if everyone defaults to 64Mi, we'll end up back where we are. But perhaps this is low-hanging fruit for optimization.
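As a rough sketch of both ideas, assuming the client assembles its own Overpass QL settings line (the helper names and the simplified resource accounting are my own illustration, not the server's actual implementation):

```python
# Sketch: prepend a reduced [maxsize:...] setting to an Overpass QL query,
# and illustrate the "at most half of the remaining resources" admission
# rule quoted above. Numbers and names are illustrative only.

def with_maxsize(query: str, mib: int) -> str:
    """Prefix a query with out:json and an explicit memory cap."""
    return f"[out:json][maxsize:{mib}Mi];\n{query}"

def admitted(requested_mib: int, remaining_mib: int) -> bool:
    """Admission rule: a request may use at most half of what remains."""
    return requested_mib <= remaining_mib / 2

q = with_maxsize('node["amenity"="cafe"]({{bbox}});out;', 16)
```

Under this reading, a 16Mi request is admitted while 64Mi or more of the default 512MiB remains free, which is why shrinking maxsize makes a request much easier for the server to accept.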
But it's worth noting that this method has its problems. For example, requests to maps.mail.ru will be blocked by Tracking Protection in Firefox. Also on the list is overpass.openstreetmap.ru, which has not been operational for a long time.
Well, it's certainly not getting any better: osm db request count (Munin :: localdomain :: localhost.localdomain :: osm db request count)
There seems to have been a significant uptick in the abuse in November.
I feel we may be in the era of "if you need Overpass for your process, you're going to have to run your own". I know I've had to set up my own AU instance to keep my scripts from constantly breaking.
EDIT: Ha, turns out I just repeated what Simon already said Undiscussed mass edits of sport=rugby_union and sport=rugby_league - #8 by SimonPoole
The situation seems to get worse and worse…
Maybe a funding campaign is needed to beef up or add servers that run overpass-api?
PS: I have tried to use other instances than overpass-api.de but none is as reliable as this one (before the timeout errors started to happen more and more frequently, that is).
In the case of a funding campaign, it would make more sense to gather funding to develop some way of limiting overuse (authentication? API keys? accounts with some good way to limit multi-accounting?)
rather than to fund misbehaving, broken DDoSers.
How did they show their unreliability? Just for reference, the Overpass server from mail.ru is much more powerful than the main server. And I doubt it's as popular.
What type of queries did you perform?
Kumi instances had corrupted or severely outdated data when I tried to use them in the past.
For completeness, the timestamp returned from each of the listed public ones is as follows:
Main Overpass API instance
"timestamp_osm_base": "2026-03-06T12:00:58Z",
VK Maps Overpass API instance (Russia)
"timestamp_osm_base": "2026-03-06T12:00:58Z",
Swiss Overpass API instance
"timestamp_osm_base": "112530",
Private.coffee Overpass Instance
"timestamp_osm_base": "2026-03-06T12:03:00Z",
Britain and Ireland Overpass Instance
"timestamp_osm_base": "2026-03-05T21:20:40Z",
MapRVA Overpass server
"timestamp_osm_base": "2026-03-05T21:11:14Z",
I'm not sure what the Swiss number is. It's not a date and doesn't match https://download.geofabrik.de/europe/switzerland-updates/000/004/715.state.txt either.
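A small sketch of how one might compare those values programmatically, assuming the ISO-8601 "Z" format shown in the list above (the helper names are my own):

```python
# Sketch: compute how far behind a reference time an instance's
# "timestamp_osm_base" is. Values that are not ISO-8601 dates
# (like the Swiss "112530" above) are reported as unparseable.

from datetime import datetime, timezone

def parse_osm_base(value: str):
    """Parse a timestamp_osm_base value, or return None if it isn't a date."""
    try:
        parsed = datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ")
        return parsed.replace(tzinfo=timezone.utc)
    except ValueError:
        return None

def lag_hours(value: str, reference: str):
    """Hours the instance lags behind the reference, or None if unparseable."""
    ts, ref = parse_osm_base(value), parse_osm_base(reference)
    if ts is None or ref is None:
        return None
    return (ref - ts).total_seconds() / 3600
```

For example, measured against the main instance's "2026-03-06T12:00:58Z", the Britain and Ireland value "2026-03-05T21:20:40Z" is about 14.7 hours behind, while the Swiss "112530" simply fails to parse.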
There are lots of 504 HTTP errors on OSM Latest Changes nowadays.
Edit: I think I misunderstood. You are asking how the other instances showed their unreliability. They often returned no data, or it took them extremely long to respond.
One of the big advantages of an openly accessible API is that it allows people to build very simple browser-based tools without any backend infrastructure to store API keys or handle authentication. Tools like Overpass Turbo or the above-mentioned OSM Latest Changes work precisely because they can query the API directly from the browser.
Requiring API keys would raise the barrier to entry quite a bit. Many small utilities, prototypes, or personal tools would either not exist or would need a server just to proxy requests.
It would also make things like tutorials, shared queries, and quick experiments harder, since users could no longer simply run a query immediately without setting up credentials first. The current openness is likely one of the reasons why so many small but useful tools have emerged around Overpass.
It would, but basically that ship has sailed. We live in a world where the infrastructure exists to allow the easy (and relatively cheap) scaling up of any query, and the scourge of residential proxies allows bad actors to obfuscate the source.
Depending on how the login is handled, it needn't be much of a chore for actual human users. I rarely see a login prompt for https://community.openstreetmap.org because I am usually already logged in via OAuth2; that could also apply to (say) an Overpass API server. The separately hosted "script that uses Overpass" doesn't need its own OAuth2 mechanism; the authentication transaction is between the user and the scarce resource.
Overpass Turbo is a good example - I run an Overpass API server and I don't see network requests from the IP address of wherever https://overpass-turbo.eu/ happens to be; I see them from the user's IP address.

