OSRM in a System Container

I’ve been running OSRM as a Docker container for almost a year. While my current setup serves me “well enough”, I am also curious about other ways of running/hosting this service (locally). Would the routing service perform better when run in a system container (LXD/Incus)?

I am running OSRM on bare metal, and for QA I run it on demand in a Docker container.

I don't think ANY of these containerization methods will show a significant performance difference compared to each other.

As soon as you start abstracting it into a system/virtual machine image you will lose performance, because memory-mapping operations need to traverse two kernels and two sets of page tables. That's the low-single-digit percentage loss typical of virtualization.


That is definitely relevant for me, since my Docker host is a VM. I’m already taking on a (potential) performance hit, I suppose.

Your VM is the performance hit - but as I said, a low-single-digit percentage. It's not as if removing it gives an n-fold increase in speed.

If you are looking for a 50% speed increase, that's definitely the wrong place to look.


1 Like

If you are using this Dockerfile for running OSRM, I would suggest:

  • Upgrading to the latest Debian version (from bullseye to bookworm, or even the latest testing or unstable Debian release). This update modernizes the entire environment, including the latest gcc and clang compilers.
  • Creating CPU-architecture-optimized binaries (e.g. AVX-512, ARMv9) by adding build options such as -march=native.
  • Further optimization with Profile Guided Optimization (PGO).
  • If you are on x86-64 hardware, you could also try using Intel’s optimized clearlinux docker image, as it may offer additional performance benefits.
  • etc …
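As a rough sketch of the first two points, here is what a native-tuned source build of osrm-backend could look like on Debian bookworm. This assumes the upstream CMake build and its usual dependencies (Boost, TBB, Lua, bzip2); package names and the repository URL are taken from the Project-OSRM GitHub organization, and the exact dependency list may differ between OSRM versions.

```shell
# Illustrative build of osrm-backend with CPU-native optimizations.
# Run inside a Debian bookworm container or host.
apt-get update && apt-get install -y build-essential cmake git \
    libboost-all-dev libtbb-dev liblua5.4-dev libbz2-dev

git clone https://github.com/Project-OSRM/osrm-backend.git
cd osrm-backend && mkdir build && cd build

# -march=native tunes for the build host's CPU (AVX-512 on capable
# hardware); the resulting binary is NOT portable to older CPUs.
cmake .. -DCMAKE_BUILD_TYPE=Release \
         -DCMAKE_CXX_FLAGS="-march=native -mtune=native"
make -j"$(nproc)"
```

Note that a `-march=native` binary must be built on (or for) the same CPU generation it will run on, which is one reason the official images don't ship this way.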

Additionally, you can experiment with Docker alternatives like Podman and LXD/LXC,
and you can optimize the docker run command itself.
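For the last point, a minimal sketch of a tuned `docker run` invocation: the flags shown (`--cpuset-cpus`, `--memory`, `--ulimit memlock`) are standard Docker options, but the image name, core range, memory cap, and data path are illustrative and need to be adapted to your host and dataset.

```shell
# Illustrative tuned docker run for osrm-routed.
# Pinning CPUs avoids scheduler migration; the memlock ulimit
# lets the process keep its memory-mapped data resident.
docker run -d --name osrm \
    --cpuset-cpus="0-3" \
    --memory=16g \
    --ulimit memlock=-1:-1 \
    -v "$(pwd)/data:/data" \
    -p 5000:5000 \
    osrm/osrm-backend \
    osrm-routed --algorithm mld /data/map.osrm
```

Whether CPU pinning helps depends on how busy the host is; it tends to matter more on a shared VM host than on dedicated hardware.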

1 Like