Need help configuring multiple-continent data within a single instance

Hi there,

We have a requirement where North American data along with European data has to be set up within a single AWS EC2 instance.
In the future, in order to support globalization, multi-continent data should be set up for our app.

Right now we have North American data. However, as our app needs to work in Europe as well, we need to integrate European data into the same instance where the North American data has been set up. We tried downloading the European data into the instance that holds the North American data, but we ran into issues like “Geometry not found”.

We need help with this. We'd appreciate it if anyone could chip in and provide the necessary information!

Thanks in advance…

It would be very helpful if you could describe what software you’re using. We have no idea if you’re working with a map renderer, geocoder, router, or something else. You’ll also need to describe how you got to where you are (e.g. import scripts, installation guides, etc.).


We already have the North America planet extract installed successfully, and we are correctly getting speed limits when we query lat/longs.
However, we now want the South America extract installed within the same instance.
The following error occurred while doing so:

compute_geometry: Way 22573975 used in relation 3725897 not found.

The script we used is below:

#set -euo pipefail
set -o xtrace


aws_access_key_id=Confidential(my own key)
aws_secret_access_key=Confidential(my own key)

overpass_aws_access_key_id=Confidential(my own key)
overpass_aws_secret_access_key=Confidential(my own key)

# for mail job

smtppassword=Confidential(my own key)

# private ssh key for git

privateSSHKey=Confidential(my own key)


# public ssh key for git

publicSSHKey=Confidential(my own key)

# configuration setup for awscli access

# set up command-line access to aws


mkdir -p ~/.aws

# write awscli credentials and config (the heredoc bodies containing
# the actual keys were redacted from the post)
cat > ~/.aws/credentials <<EOF
EOF

cat > ~/.aws/config <<EOF
EOF
sudo apt-get update
#sudo apt-get -y upgrade
sudo apt-get -y install python
sudo apt-get -y install python-pip
sudo apt-get -y install supervisor
sudo pip install awscli


sudo apt-get update
sudo apt-get -y install g++ make expat libexpat1-dev zlib1g-dev
sudo apt-get -y install apache2
sudo a2enmod cgi

tar -zxvf osm-3s_v$osmVersion.tar.gz

cd osm-3s_v$osmVersion

./configure CXXFLAGS="-O2" --prefix=$EXEC_DIR

make install

#download planetfile india
#sudo mv asia-latest.osm.bz2 ./…/

#populate data


cat < ~/

export DB_DIR=~/osm-3s/v0.7.54/dbdir
export EXEC_DIR=~/osm-3s/v0.7.54/execdir
export PLANET_FILE=~/south-america-latest.osm.bz2
sudo rm -f /dev/shm/osm3s_v0.7.54_osm_base
sudo rm -f /home/ubuntu/osm-3s/v0.7.54/dbdir/osm3s_v0.7.54_osm_base
$EXEC_DIR/bin/dispatcher --osm-base --db-dir=$DB_DIR

# write the supervisor config (the heredoc body was omitted from the post)
cat > ~/overpassdispatcher.conf <<EOF
EOF

cat > ~/default.conf <<'EOF'
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    #ExtFilterDefine gzip mode=output cmd=/bin/gzip
    DocumentRoot /var/www/html/

    # This directive indicates that whenever someone requests a URL under /api/,
    # Apache2 should refer to what is in the local directory [YOUR_EXEC_DIR]/cgi-bin/
    ScriptAlias /api/ /home/ubuntu/osm-3s/v0.7.54/execdir/cgi-bin/

    # This specifies some directives specific to the directory: [YOUR_EXEC_DIR]/cgi-bin/
    <Directory "/home/ubuntu/osm-3s/v0.7.54/execdir/cgi-bin/">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            # For Apache 2.2:
            #  Order allow,deny
            # For Apache 2.4 and later:
            Require all granted
            #SetOutputFilter gzip
            #Header set Content-Encoding gzip
    </Directory>

    ErrorLog /var/log/apache2/error.log

    # Possible values include: debug, info, notice, warn, error, crit, alert, emerg
    LogLevel warn

    CustomLog /var/log/apache2/access.log combined
</VirtualHost>
EOF


#supervisor configure
sudo cp ~/overpassdispatcher.conf /etc/supervisor/conf.d/
sudo supervisorctl start overpassdispatcher

#apache configure
sudo cp ~/default.conf /etc/apache2/sites-available/
sudo /etc/init.d/apache2 restart
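For reference, the actual populate step is missing between the environment exports and the dispatcher start above. With Overpass API 0.7.x the usual way to load an extract is via the init_osm3s.sh helper that ships in the source tree; a sketch, assuming the same paths as the exports used in the script:

```shell
# sketch: initial Overpass database population (paths assumed to
# match the DB_DIR/EXEC_DIR/PLANET_FILE values used in the script)
export DB_DIR=~/osm-3s/v0.7.54/dbdir
export EXEC_DIR=~/osm-3s/v0.7.54/execdir
export PLANET_FILE=~/south-america-latest.osm.bz2

# init_osm3s.sh decompresses the .bz2 itself and writes the database
$EXEC_DIR/bin/init_osm3s.sh $PLANET_FILE $DB_DIR $EXEC_DIR

# then start the dispatcher against the populated database
nohup $EXEC_DIR/bin/dispatcher --osm-base --db-dir=$DB_DIR &
```

During such an import, “Way … used in relation … not found” messages are typically just warnings about objects that lie outside the extract's boundary.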

You can't reliably load data from two different datasets into the same planet_osm tables because there may be conflicts. The appropriate way to do this is to merge your North American data with your European data before importing. Suitable tools for the merge are osmosis and osmconvert.

If you don't want to do this, the alternatives are:

  • Import Europe into differently named tables (e.g. eu_osm_). A knowledgeable DBA could then alter names within the database and build views so that both datasets are queried together.
  • Create a separate PostGIS database for Europe. Same problem as above, but no realistic possibility of merging the data through views.
  • Use tricks within osmosis to convert the current Europe data to look like an OSC update on top of North America. No idea if this would work; it requires deep knowledge of OSM and osmosis.

Basically the best approach is to re-import your database with merged North America & Europe data. Note that adding Europe will increase the DB size by as much as 200% and may impact import processes.
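For what it's worth, a merge along those lines might look like this (file names are examples; osmosis auto-detects .bz2 compression from the file extension, while osmconvert works best on .pbf/.o5m input):

```shell
# sketch: merge two continent extracts before (re-)importing
# with osmosis (reads and writes compressed XML directly):
osmosis --read-xml file=north-america-latest.osm.bz2 \
        --read-xml file=europe-latest.osm.bz2 \
        --merge \
        --write-xml file=merged.osm.bz2

# or with osmconvert, which is much faster on .pbf/.o5m:
osmconvert north-america-latest.osm.pbf -o=na.o5m
osmconvert europe-latest.osm.pbf -o=eu.o5m
osmconvert na.o5m eu.o5m -o=merged.osm.pbf
```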

Thanks for the update. I will try to merge the South American OSM file with the North American one using either osmosis or osmium, and will let you know whether we reach a resolution.

Well, this question is about Overpass API, so there aren’t any planet_osm tables whatsoever.

Those are warnings which occur during the import process. They shouldn’t abort the import itself.

Running Overpass API on an extract is challenging, and using multiple extracts will probably expose other (new) issues, unless you manage to merge the different extracts outside of Overpass and provide proper .osc files. The major issue here is that this setup is untested and unsupported!

Why don’t you go for a full planet instead and save yourself a lot of effort getting this going?
Without attic data that’s somewhere in the 150 GB range (including attic: 240 GB).
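If you do go the full-planet route, you don’t even have to run the import yourself: the Overpass sources include a clone script that downloads a ready-made database. A sketch (the script and flags are from the Overpass source tree; double-check them against your version):

```shell
# sketch: clone a ready-made Overpass database instead of running
# a multi-day planet import (download_clone.sh ships with Overpass)
$EXEC_DIR/bin/download_clone.sh \
    --db-dir=$DB_DIR \
    --source=https://dev.overpass-api.de/api_drolbr/ \
    --meta=no
```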

Thanks for the valuable info. One small question: if I go for the full planet download of around 150 GB as you mentioned, is that size compressed or decompressed?

That’s compressed (with zlib). You can see the actual file sizes in the clone directory:

As a first step, you can ignore all file names with “attic” in them, as they are only needed if you request data for some point in the past. I don’t have the exact figure in mind; it might be somewhere in the 100-150 GB range.

Due to certain constraints on the size of the AWS instance, which is about 500 GB, I am going for the more complex approach of merging two different planet OSM files.
This is the approach I am following:

  1. Northamerica.osm.bz2 - decompress it to .osm using bzip2 -d (bz2 size: 12 GB; decompressed .osm size: 161 GB)
  2. Southamerica.osm.bz2 - decompress it to .osm using bzip2 -d (bz2 size: 2.33 GB; decompressed .osm size: 31 GB)
  3. Merge Northamerica.osm and Southamerica.osm using osmosis to get merged.osm (merged.osm size: 168 GB)
  4. Re-compress merged.osm to .bz2 due to the size limitations of my AWS instance
  5. Then import this merged.osm.bz2 into the respective DB directories

Among the above steps, I got stuck at step 4, where I tried to compress merged.osm to merged.osm.bz2.

The operation aborted many times; I don't know exactly what the issue is.
Can someone help in this context?
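One way to sidestep steps 1, 2, and 4 entirely (and the disk pressure that is likely aborting the compression) is to let osmosis read the .bz2 extracts directly and write a compressed result, so the 160+ GB uncompressed .osm files never have to sit on disk. Osmosis picks the compression method from the .bz2 extension by default; a sketch:

```shell
# sketch: merge the compressed extracts in one streaming pass;
# no uncompressed intermediate files are written to disk
osmosis --read-xml file=Northamerica.osm.bz2 \
        --read-xml file=Southamerica.osm.bz2 \
        --merge \
        --write-xml file=merged.osm.bz2
```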

Honestly, if you cannot afford the cost of a more powerful AWS instance that will also scale into the future with the ever-growing OpenStreetMap database in mind, why not set up your own local server in your organization? This will not break the bank, and has the added advantage of you gaining detailed knowledge of running your own (virtualized) PostgreSQL database server, instead of relying on a resource-constrained, pre-configured, expensive cloud server.

I set up my own 1.4 terabyte Oracle VirtualBox Ubuntu 18.04 / PostgreSQL 10.6 / PostGIS 2.4 instance (soon to be upgraded to PostgreSQL 11.1), running on a lowly 4-core desktop i5 with 16 GB RAM serving as a Windows 10 host for VirtualBox, with an additional 80 GB of Windows “virtual memory” set on a SATA-connected SSD (the Windows virtual memory is not actually necessary to run the VirtualBox instance, as that has its own swap space defined in Ubuntu, but I set it just in case, so Windows has ample memory).

The VirtualBox instance runs from a 2TB Samsung EVO external drive, connected through USB 3.1 gen 2 (10 Gb/s), and has 12 GB RAM set for the instance, and additionally 80 GB swap space set in Ubuntu. This all runs fine as a backend database server for local testing.

I connect to the database over a 1Gb switch from my i7 laptop, and manage the server via remote desktop from the laptop.