Is there any consistency check for OSM data?


I have a problem: some data in Unterlüß, Germany (N:52.8292,E:10.2830-N:52.8558,E:10.3273) seems to be buggy. At least that is what the map provider (4UMaps) told me. Their renderer generates a PNG file that my Firefox can display ( ), but my Android app Locus Map aborts the map download when it receives this PNG. Every time I want to get a new set of map tiles, I have to download all the tiles around it, but not this one. That's very annoying! My question is: is there any consistency check that could give me a hint about what is wrong with the OSM data in Unterlüß? Then I would fix the buggy data there.


If the application is crashing on a PNG which works for other applications, that application is broken.

PNGs are just bitmap images. If there were a fundamental error in the OSM internal data, the renderer would fail to produce a PNG at all. If the data is structurally valid but doesn't represent a sensible map, you will still get a displayable PNG; the image just won't make sense to a human user.
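Structural validity of a PNG can be checked independently of what the image shows, by walking the chunk list and verifying each CRC, much as pngcheck does. A minimal stdlib-only sketch (the tiny 1x1 test image is synthetic, just to make the example self-contained):

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def chunk(ctype, data):
    # A PNG chunk: 4-byte length, 4-byte type, data, CRC over type+data.
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def make_png():
    # Minimal 1x1 RGBA image: IHDR + IDAT + IEND.
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 6, 0, 0, 0)
    raw = b"\x00" + b"\x00\x00\x00\xff"  # filter byte + one pixel
    return (PNG_SIG + chunk(b"IHDR", ihdr)
            + chunk(b"IDAT", zlib.compress(raw))
            + chunk(b"IEND", b""))

def check_png(data):
    """Walk the chunks; return their names, or raise on structural damage."""
    if data[:8] != PNG_SIG:
        raise ValueError("bad signature")
    names, pos = [], 8
    while pos < len(data):
        if pos + 8 > len(data):
            raise ValueError("truncated chunk header")
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        body = data[pos + 8:pos + 8 + length]
        if len(body) != length or pos + 12 + length > len(data):
            raise ValueError("truncated chunk data")
        (crc,) = struct.unpack(">I", data[pos + 8 + length:pos + 12 + length])
        if crc != zlib.crc32(ctype + body):
            raise ValueError("CRC mismatch in " + ctype.decode())
        names.append(ctype.decode())
        pos += 12 + length
    if names[-1] != "IEND":
        raise ValueError("missing IEND")
    return names

print(check_png(make_png()))  # ['IHDR', 'IDAT', 'IEND']
```

Running this over a downloaded tile would flag truncation or corruption immediately, which is exactly what a renderer bug in the OSM data could never produce.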

I tried that image with three different programs and none of them complained.

Hi hadw,

it’s not as simple as that. Here is what the Locus Map developer wrote to me:

You see, the tile 2674.png is invalid, even though some or most display software can render it. And the 4UMaps owner/provider told me that this can happen if the OSM data in that area is buggy. He had a similar issue in Paris, France, and in fact he found a path there tagged as a tunnel, which was wrong in its surrounding context. In such cases his renderer aborts rendering and generates an invalid PNG file.

So my question is whether there is a check function that can find such bugs in OSM data.


That would indicate that the renderer is broken.

Given that we don’t have the slightest idea what “such bugs” are (the example you mention is clearly not a “bug”), they are going to be very difficult to find.


I don’t know if this will solve any of the problems, but you can use Osmose to find errors in tagging.
For your invalid tile region, it would be this link:
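To point Osmose (or any slippy-map frontend) at the problem area, the centre of the bounding box from the original question can be computed from its corners. A quick sketch; the `#zoom=…&lat=…&lon=…` fragment is a typical slippy-map URL convention, assumed here, not necessarily the exact Osmose link format:

```python
# Bounding box from the original question (Unterlüß, Germany)
south, west = 52.8292, 10.2830
north, east = 52.8558, 10.3273

# Centre of the box
lat = (south + north) / 2
lon = (west + east) / 2

# Assumed slippy-map anchor format (zoom level chosen to roughly fit the box)
fragment = "#zoom=14&lat=%.4f&lon=%.4f" % (lat, lon)
print(fragment)
```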

$ wget
--2017-06-10 17:11:37--
Resolving (…
Connecting to (||:80… connected.
HTTP request sent, awaiting response… 200 OK
Length: 65549 (64K) [image/png]
Saving to: ‘2674.png.1’

2674.png.1 100%[=====================>] 64.01K 27.5KB/s in 2.3s

2017-06-10 17:11:46 (27.5 KB/s) - ‘2674.png.1’ saved [65549/65549]

$ pngcheck -v /tmp/2674.png.1
File: /tmp/2674.png.1 (65549 bytes)
chunk IHDR at offset 0x0000c, length 13
256 x 256 image, 32-bit RGB+alpha, non-interlaced
chunk sRGB at offset 0x00025, length 1
rendering intent = perceptual
chunk gAMA at offset 0x00032, length 4: 0.45455
chunk pHYs at offset 0x00042, length 9: 3779x3779 pixels/meter (96 dpi)
chunk IDAT at offset 0x00057, length 65442
zlib: deflated, 32K window, fast compression
chunk IEND at offset 0x10005, length 0
No errors detected in /tmp/2674.png.1 (6 chunks, 75.0% compression).

$ pngcheck
PNGcheck, version 2.3.0 of 7 July 2007,

$ md5sum /tmp/2674.png.1
6853018ea86d6cf1b3accd6cdfa6467d /tmp/2674.png.1
$ shasum /tmp/2674.png.1
cee34a7dade1a0818bf49dd2302f1092089c9f99 /tmp/2674.png.1

In addition, an IDAT truncation would produce a truncated image, even if the browser gave no diagnostic.

I would suspect either an error in whatever is used to download it, or a truncated version has got stuck in a cache between you and the server.

As has already been pointed out, an error in the map would not have resulted in a structurally invalid PNG, so even if your pngcheck diagnostic were valid, there would be no error in the map to find and fix.

[Edited to show direct download rather than the original browser download, to show that the browser is not repairing the file.]

HTTP/1.1 200 OK
Cache-Control: max-age=1209600
Content-Type: image/png
Last-Modified: Thu, 25 May 2017 18:36:30 GMT
Accept-Ranges: bytes
ETag: “f139a1d385d5d21:0”
Server: Microsoft-IIS/7.5
X-Powered-By: ASP.NET
Date: Sat, 10 Jun 2017 16:48:00 GMT
Content-Length: 65549

[Edited to add HTTP meta data. pngcheck -x run to confirm that the file that matches these headers is good.]

What I have just noticed is that the image is 13 bytes longer than 64K. My guess is that something between the server and your pngcheck has a 64K size limit.
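If something in the chain really does cap transfers at 64 KiB, the damage is predictable from the pngcheck offsets above: the file is 65549 bytes, so a cut at 65536 loses the last 13 bytes, clipping the IDAT CRC and dropping the IEND chunk entirely. A sketch of that arithmetic (chunk layout reconstructed from the pngcheck output, whose offsets point at each chunk's type field):

```python
FILE_SIZE = 65549        # from pngcheck / Content-Length
LIMIT = 64 * 1024        # 65536

lost = FILE_SIZE - LIMIT
print(lost)  # 13

# IDAT type field at 0x57, then 4-byte type, 65442 data bytes, 4-byte CRC
idat_crc_end = 0x57 + 4 + 65442 + 4
iend_start = idat_crc_end         # IEND length field follows immediately
iend_end = iend_start + 12        # 4 len + 4 type + 0 data + 4 CRC

assert iend_end == FILE_SIZE      # the layout accounts for every byte
# A 64 KiB cut would clip the IDAT CRC and drop IEND entirely:
assert idat_crc_end > LIMIT and iend_start > LIMIT
```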

Your other example, at 45337, is significantly below 64K. The difference will be the amount of detail in the image, which affects the level of compression possible. E.g. your “good” image gets 82.7% compression, whereas your “bad” image gets a mere 75%. In general you can expect the level of detail to increase with time, and the compressibility to decrease.
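The percentages pngcheck reports can be reproduced from the tile geometry: a 256x256 RGBA tile is 4 bytes per pixel plus one filter byte per row of uncompressed data, i.e. 262400 bytes raw. A sketch of that calculation, as far as the reported figures let me reconstruct it:

```python
# Raw size of a 256x256 RGBA tile: 4 bytes/pixel + 1 filter byte per row
RAW = 256 * 256 * 4 + 256  # 262400 bytes

def compression(file_size):
    """Percentage saved relative to the uncompressed pixel data."""
    return round(100 * (1 - file_size / RAW), 1)

print(compression(65549))  # 75.0 -> the "bad" (detailed) tile
print(compression(45337))  # 82.7 -> the "good" tile
```

Both results match the figures quoted above, which supports the point that the "bad" tile is simply more detailed, not broken.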

In this case, it is probably the topography shading that is causing the poor compression. Whilst the map itself is best represented by lossless compression, like PNG, the shading would be better handled with JPG, although that would produce DCT artefacts in the line art of the map. I don’t know if the slippy map tools being used to view it can assemble multiple tile layers, but from a technical point of view, the best approach would be to compose the image in the browser from a JPG and a PNG. I don’t see implementing that as within the terms of reference of OSM, so, if possible, you would want to ask your map service provider about it. In any case, PNGs are not restricted to 64K, so it is only a performance issue, not a functionality issue.


Hi all,

many thanks for all your answers!

@hadw: Your pngcheck of my invalid tile prompted me to retry the failed download, and this time it succeeded! Obviously someone has fixed the problem in the meantime. I don’t know what problem the 4UMaps renderer had before. Maybe that software was buggy and has now been fixed, or perhaps Locus Map had a 64K size limit and no longer does (I have just installed a new version).

So the check function I asked for is no longer needed. But if I run into a similar problem again, I will try the links that xXMapperXx and muralito posted here.



The HTTP metadata says the tile hasn’t changed since 25 May, so any apparent fix at your end since 30 May is probably the result of a web cache that had a bad copy and has since reloaded it.