I’m building a Next.js web app that uses IoT collars to track animals, and I want to render a 3D viewer of a user’s property. The idea is: the user enters a location, and the app renders a ~2km² 3D map of the surrounding area.
I originally explored OpenStreetMap data for this (especially land use and roads), but had trouble integrating it effectively with a 3D scene using Three.js / React Three Fiber. I’m also using elevation data (Terrarium PNGs) and land cover (IO-LULC COGs) from AWS Open Data.
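For context, this is roughly how I'm turning a Terrarium tile into a height array on the client (simplified sketch; the canvas-based decode is just what I've ended up with, not necessarily the best approach):

```ts
// Terrarium encoding: height (m) = (R * 256 + G + B / 256) - 32768
async function decodeTerrariumTile(url: string): Promise<Float32Array> {
  const img = new Image();
  img.crossOrigin = "anonymous";
  img.src = url;
  await img.decode();

  const canvas = document.createElement("canvas");
  canvas.width = img.width;
  canvas.height = img.height;
  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(img, 0, 0);

  const { data } = ctx.getImageData(0, 0, img.width, img.height);
  const heights = new Float32Array(img.width * img.height);
  for (let i = 0; i < heights.length; i++) {
    const r = data[i * 4];
    const g = data[i * 4 + 1];
    const b = data[i * 4 + 2];
    heights[i] = r * 256 + g + b / 256 - 32768;
  }
  return heights; // displaces a subdivided PlaneGeometry in the R3F scene
}
```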
My main goals:
- Map my own custom textures to different land types (grass, woods, water, etc.)
- Add custom 3D models (e.g., animals, buildings) on the terrain
- Load only a small area around a lat/lon (not the entire globe like CesiumJS does); see the tile-math sketch after this list
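For that last goal, my rough plan is plain slippy-map tile math: convert the lat/lon to Web Mercator XYZ tile coordinates and fetch a small block of tiles around it. The zoom level and radius here are just my guesses for ~2 km of coverage:

```ts
// Standard slippy-map / Web Mercator tile math.
function latLonToTile(lat: number, lon: number, zoom: number) {
  const n = 2 ** zoom;
  const x = Math.floor(((lon + 180) / 360) * n);
  const latRad = (lat * Math.PI) / 180;
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
  );
  return { x, y };
}

// Grab a 3x3 block of tiles around the user's point. At zoom 14 a tile is
// ~2.4 km wide at the equator, so this should comfortably cover ~2 km.
function tilesAround(lat: number, lon: number, zoom = 14, radius = 1) {
  const { x, y } = latLonToTile(lat, lon, zoom);
  const tiles: { x: number; y: number; z: number }[] = [];
  for (let dx = -radius; dx <= radius; dx++) {
    for (let dy = -radius; dy <= radius; dy++) {
      tiles.push({ x: x + dx, y: y + dy, z: zoom });
    }
  }
  return tiles;
}
```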
I’d love to hear if anyone has:
- Experience integrating OSM landuse/road data into a 3D rendering pipeline
- Advice on working with vector tiles or converting OSM extracts for 3D use
- Recommendations on tools or workflows that play well with React/Three.js
I’m asking here because I’d still like to use OSM for roads and land use, if possible, but I’m unsure how best to extract and tile the data for this use case.
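For reference, the direction I've been leaning is a small Overpass extract for just the bbox around the point, roughly like this (untested sketch; the tag selection is only a guess at what I'd actually need), and then triangulating/extruding the returned geometry in Three.js:

```ts
// Fetch roads, landuse, and water for a small bbox from the Overpass API.
// Bbox order in Overpass QL is (south, west, north, east).
async function fetchOsmExtract(
  south: number,
  west: number,
  north: number,
  east: number
) {
  const bbox = `${south},${west},${north},${east}`;
  const query = `
    [out:json][timeout:25];
    (
      way["highway"](${bbox});
      way["landuse"](${bbox});
      way["natural"="water"](${bbox});
    );
    out geom;
  `;
  const res = await fetch("https://overpass-api.de/api/interpreter", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: "data=" + encodeURIComponent(query),
  });
  return res.json(); // "out geom" returns each way with its node coordinates
}
```

I'm not sure whether something like this is reasonable for production, or whether I should be pre-processing OSM extracts into vector tiles instead, which is really the heart of my question.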