Confirm overpass diffs are being applied


I would like to confirm that my overpass install is actually receiving updates and applying them correctly. Are there logfiles I can consult? Below are some artifacts that are hopefully relevant…

I see what appears to be an issue in the replicate/fetch_osc.log whenever I reboot…

fetch_osc()@2020-10-04 03:33:54: upstream_delay 4207364
fetch_osc()@2020-10-04 08:29:28: upstream_delay 4207364
fetch_osc()@2020-10-04 09:02:57: upstream_delay 4207364
fetch_osc()@2020-11-07 10:58:19: upstream_delay 4207364
fetch_osc()@2021-02-21 11:14:47: upstream_delay 4207364
fetch_osc()@2021-05-16 03:59:56: upstream_delay 4207364
fetch_osc()@2021-07-05 07:08:20: upstream_delay 4207364
fetch_osc()@2021-07-23 16:41:15: upstream_delay 4207364

Could a huge, never-changing upstream_delay prevent diffs from being fetched? Perhaps I have the source URL misconfigured or something.
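For what it's worth, this is the kind of sanity check I had in mind: compare the upstream sequence number against my local replicate_id. The state.txt content below is a made-up sample in the standard planet replication format; in practice it would come from a curl against the configured diff URL.

```shell
# Sample state.txt as served by planet replication endpoints
# (in practice: curl -s "$DIFF_URL/state.txt"). Values are illustrative.
state='#Mon Jul 26 02:50:00 UTC 2021
sequenceNumber=4581000
timestamp=2021-07-26T02\:50\:00Z'

local_id=4207364  # the value my scripts keep logging

upstream_id=$(printf '%s\n' "$state" | sed -n 's/^sequenceNumber=//p')
echo "upstream=$upstream_id local=$local_id lag=$((upstream_id - local_id))"
```

If the lag never shrinks between runs, nothing is being applied.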

The following appears in my db/apply_osc_to_db.log file, repeating ever since I installed in Sept 2020…

$ tail apply_osc_to_db.log
2021-07-26 02:53:52: updating from 4207364
2021-07-26 02:53:57: updating from 4207364
2021-07-26 02:54:02: updating from 4207364
2021-07-26 02:54:07: updating from 4207364
2021-07-26 02:54:12: updating from 4207364
2021-07-26 02:54:17: updating from 4207364
2021-07-26 02:54:22: updating from 4207364
2021-07-26 02:54:27: updating from 4207364
2021-07-26 02:54:32: updating from 4207364
2021-07-26 02:54:37: updating from 4207364

This seems like a problem, but I’m not sure… I’m running version 0.7.56.7, so I’m not sure why a version 0.7.55 file is in play here and elsewhere…

$ cat osm-3s_v0.7.56.7/bin/osm_base.out
File_Error File exists 17 /osm3s_v0.7.55_osm_base Dispatcher_Server::1
File_Error Address already in use 98 /opt/osm/overpass/db//osm3s_v0.7.55_osm_base Unix_Socket::4
File_Error Address already in use 98 /opt/osm/overpass/db//osm3s_v0.7.55_osm_base Unix_Socket::4

My cron job does successfully start these 3 processes…

$ ps -ef | grep overpass
ec2-user 3386 1 0 Jul25 ? 00:00:04 /opt/osm/overpass/osm-3s_v0.7.56.7/bin/dispatcher --osm-base --attic --rate-limit=2 --space=10737418240 --db-dir=/opt/osm/overpass/db
ec2-user 3387 1 0 Jul25 ? 00:00:00 bash /opt/osm/overpass/osm-3s_v0.7.56.7/bin/ 4207364 /opt/osm/overpass/replicate
ec2-user 3388 1 0 Jul25 ? 00:00:04 bash /opt/osm/overpass/osm-3s_v0.7.56.7/bin/ /opt/osm/overpass/replicate 4207364 --meta

I do not find a “diffs” folder with pending updates.

Any help would be most appreciated!

You’re downloading daily diffs, but your replicate_id value 4207364 is only valid for minutely diffs. The current daily diff id is only around 3200, so your settings are way off. You need to decide what you actually want: minutely or daily updates?
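For reference, a replicate_id maps onto the replication directory layout on the server by zero-padding it to nine digits and splitting it into groups of three, so you can see at a glance which file a given id points at (a sketch; the layout is the standard planet.osm one):

```shell
# Convert a replicate_id into the xxx/xxx/xxx path used in the
# replication directories (e.g. .../replication/minute/004/207/364.osc.gz).
id_to_path() {
  p=$(printf '%09d' "$1")
  echo "${p%??????}/$(echo "$p" | cut -c4-6)/${p#??????}"
}

id_to_path 4207364   # the minutely id from the logs above → 004/207/364
id_to_path 3200      # roughly the current daily id       → 000/003/200
```

So 4207364 only exists under the minutely endpoint; under the daily endpoint there is no such sequence yet, which is why nothing gets applied.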

Fantastic! That must be the problem. Many thanks.

A couple more related questions… (feel free to RTFM if there are docs available somewhere that I was unable to find)…

  1. If I want to stick to daily updates, do I only have to update the replicate_id to a daily value (shortly before install date)? Or are there other changes required?

  2. As diffs are applied, does the replicate_id get changed to the next daily Id in the sequence? Or is the algorithm entirely different?

  3. Is there a record/log somewhere that lists the replicate_ids applied?

  4. What happens if the server is rebooted in the middle of a daily update? Does the db cleanly roll back the transaction and re-apply it?

Thanks again,

Yes, you would need to set replicate_id to match the daily diff sequence. This can get somewhat tricky: you want to avoid skipping data, and also avoid loading older object versions over newer ones.

I’d recommend checking the exact data timestamp of your planet file via “osmium fileinfo -e”, and then making sure the diff timestamps match (i.e. they don’t differ by hours and, in particular, don’t leave any gaps). In the worst case, you would have to process a few minutely diffs first before switching to daily diffs.
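To make the “timestamps must match” check concrete, something like this works (GNU date; the two timestamps below are placeholders — substitute the osmium fileinfo -e output and the candidate diff’s state.txt timestamp):

```shell
# Gap in hours between the planet file's data timestamp and the
# candidate diff's state timestamp. Both values here are examples only.
planet_ts='2020-09-01T00:00:00Z'   # from: osmium fileinfo -e planet.osm.pbf
diff_ts='2020-09-01T06:00:00Z'     # from the daily diff's state.txt

gap_h=$(( ( $(date -ud "$diff_ts" +%s) - $(date -ud "$planet_ts" +%s) ) / 3600 ))
echo "gap: ${gap_h}h"   # anything beyond a few hours means skipped data
```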

Make sure you keep a copy of your current data, so you can iterate quickly in case of any issues.

replicate_id gets increased automatically, and the overall process is logged in apply_osc_to_db.log; that’s all handled by one of the shell scripts.
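So the quickest way to see progress is to pull the last id mentioned in that log, e.g. (the sample lines follow the format from your paste, but the increasing ids here are illustrative — a healthy setup shows the id climbing):

```shell
# Extract the most recent replicate_id that the update script worked on.
# Sample log lines in the same format as apply_osc_to_db.log; ids are made up.
log='2021-07-26 02:53:52: updating from 3201
2021-07-26 02:53:57: updating from 3202
2021-07-26 02:54:02: updating from 3203'

printf '%s\n' "$log" | awk '{print $NF}' | tail -n 1   # → 3203
```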

Reboot should work. I haven’t tested it often enough to rule out all potential issues, though.