TL;DR:
I benchmarked my OpenStreetMap processing pipeline (Osmosis → Osmium → Tilemaker) across four machines. Updating the planet takes ~16–21 minutes, while generating worldwide vector tiles (z14) takes between 1h45 and 6h18 depending on hardware and parameters. With tuned Tilemaker parameters and sufficient RAM I was able to reduce tile generation to ~1h45.

The Full Story
Across my various ResultMaps for OpenStreetMap (OSM) contributions, several different tools and data formats appear in my processes and workflows. Since around mid-2024, vector tiles have been part of my setup. Initially, my motivation for learning about them was the idea of providing such tiles for disaster response (for example during the 2023 Türkiye earthquake). Later, I started integrating vector tiles directly into several of my quality assurance tools (for example NeisBot in 2024). My goal was twofold: A) to reduce the load my services place on the OSM API and B) to become more independent of other sources such as the Overpass API. In this blog post, I want to share some processing times and show how long different OSM tools take to update and process data on various hardware setups.

Why I Prefer Planet Files Over a Database for OSM Processing
In several of my talks and workshop sessions I often say: the OSM ecosystem provides such good tools for keeping OSM data up to date that my first choice is not a database. Previously I used Osmosis; currently I rely mostly on Osmium. Both are excellent tools that simply deliver, as the timing results later in this post show. I use Osmosis to download the latest OSM replication changes. After that, I update my planet file and history dump using Osmium. For generating my vector tiles I use Tilemaker.
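A minimal sketch of this update cycle, assuming generic file names (my actual paths and working directory differ); by default it only prints the commands it would run:

```shell
#!/bin/sh
# Sketch of the update cycle: fetch the latest replication diffs with
# osmosis, then apply them to the local planet file with osmium.
# File names here are assumptions, not my real paths.
set -eu

CHANGES=changes.osc.gz
PLANET=planet.osm.pbf

fetch_cmd="osmosis --rri --wxc $CHANGES"
apply_cmd="osmium apply-changes -v $PLANET $CHANGES -o $PLANET.new"

if [ "${DRY_RUN:-1}" = "1" ]; then
    # Dry run: just show what would be executed.
    echo "$fetch_cmd"
    echo "$apply_cmd"
else
    $fetch_cmd
    $apply_cmd
    mv "$PLANET.new" "$PLANET"   # the updated planet becomes the new baseline
fi
```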

Setup and Hardware
In total I tested four different machines to represent different hardware configurations:

- Mac Mini (2023): Apple M2 Pro, 32 GB RAM
- Mac Studio (2023): Apple M2 Max, 96 GB RAM
- Ubuntu machine 1 (2024): AMD Ryzen 7965WX, 512 GB RAM, two NVMe drives
- Ubuntu machine 2 (2025): AMD Ryzen 9955WX, 256 GB RAM, two NVMe drives

For updating OSM data I use Osmosis to download replication files and Osmium to update my planet and history planet files. Both Osmosis and Osmium were installed either through distribution packages or compiled from source via GitHub. For my custom worldwide vector tiles I use Tilemaker, compiled from the latest GitHub sources on all four machines. My configuration and Lua scripts are almost entirely based on OpenMapTiles [Config] [LUA]. During processing I generate worldwide tiles only for zoom level 14. For my QA/QS workflows this has proven to be a very reasonable compromise between detail and processing effort.
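For context, a trimmed sketch of what the settings section of such a Tilemaker configuration can look like. The values shown here (name, version, description) are illustrative assumptions, and the actual layer definitions from OpenMapTiles are omitted:

```json
{
  "settings": {
    "minzoom": 0,
    "maxzoom": 14,
    "basezoom": 14,
    "include_ids": true,
    "compress": "gzip",
    "name": "Worldwide vector tiles (z14)",
    "version": "1.0",
    "description": "Planet tiles for QA workflows"
  },
  "layers": { }
}
```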

Benchmark Results
Osmosis command to fetch the latest OSM changes: osmosis --rri --wxc changes.osc.gz
Average runtime over seven days:

            Mac Mini   Mac Studio   Ubuntu1   Ubuntu2
Seconds           40           45        44        40
Std. Dev.          9            9         8         6

Osmium command to apply changes: osmium apply-changes -v planet-old.osm.pbf changes.osc.gz -o planet.osm.pbf
Average runtime over seven days:

            Mac Mini   Mac Studio   Ubuntu1   Ubuntu2
Minutes        21:34        21:32     21:19     16:03
Std. Dev.      9 sec        4 sec     8 sec     2 sec

Tilemaker command to generate vector tiles: tilemaker --shard-stores --store ./store --input ./planet.osm.pbf --config ./my-config.json --process ./my-process.lua --output ./planet.mbtiles
Average runtime over three days:

            Mac Mini   Mac Studio   Ubuntu1   Ubuntu2
Hours          06:18        04:09     02:50     03:11
Std. Dev.      2 min        1 min     1 min     1 min

According to the documentation, I could also use the --compact parameter (see Running). However, since I want to include the OSM element IDs in my tiles, this option is currently not suitable for my use case. I also experimented with the number of threads used by Tilemaker. The best results were achieved when setting the number of threads to roughly 50% of the available CPU threads.
Tilemaker using 50% of system threads: tilemaker --threads 24 --shard-stores --store ./store --input ./planet.osm.pbf --config ./my-config.json --process ./my-process.lua --output ./planet.mbtiles
Average runtime over three days:

            Ubuntu1   Ubuntu2
Hours         02:25     02:49
Std. Dev.     0 min     0 min
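The 50% rule does not have to be hard-coded. A small sketch that derives the thread count from `nproc` on Linux (on macOS, `sysctl -n hw.ncpu` would be the equivalent); the remaining file arguments are omitted here:

```shell
# Compute ~50% of the available CPU threads for Tilemaker's --threads flag.
half_threads() {
    n=$1
    t=$(( n / 2 ))
    if [ "$t" -lt 1 ]; then t=1; fi   # never drop below one thread
    echo "$t"
}

# nproc reports the logical CPU count, so on a 48-thread machine
# this yields --threads 24, matching the command above.
THREADS=$(half_threads "$(nproc)")
echo "would run: tilemaker --threads $THREADS ..."
```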

With my configuration and the larger RAM setup on one of the Ubuntu machines I achieved the following result: tilemaker --threads 24 --fast --no-compress-nodes --no-compress-ways --materialize-geometries --input ./planet.osm.pbf --config ./my-config.json --process ./my-process.lua --output ./planet.mbtiles

            Ubuntu1
Hours         01:45
Std. Dev.     1 min

This is currently the command I use to generate my tiles. It is worth mentioning that this setup uses the Tilemaker sources from around February 2024. With the latest sources I was somehow unable to reproduce this speed (see GitHub issue). As an additional note: on my second Ubuntu server, Osmium currently takes about 32 minutes on average to update a historical planet file (~160 GB).
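The history update mentioned above uses the same osmium mechanism as the regular planet update. A sketch with assumed file names; `--with-history` (`-H`) tells osmium to treat the input as a history file and keep all object versions instead of collapsing each object to its latest state:

```shell
# Sketch (file names assumed): applying the same daily diffs to the
# ~160 GB history planet with osmium's history mode enabled.
history_cmd="osmium apply-changes --with-history -v history-old.osm.pbf changes.osc.gz -o history.osm.pbf"
echo "$history_cmd"
```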

Conclusion and Future Improvements
I think the results show that Tilemaker performance is influenced by both CPU speed and memory bandwidth. While the Apple machines perform well, the Ubuntu systems with large amounts of RAM benefit most from aggressive Tilemaker parameters and multi-threading. Anyway, I hope this blog post and the numbers presented here are helpful to others. I am confident that I did not mix up the processing times, but of course mistakes are always possible. I am also definitely not a Linux or parameter-tuning expert, although according to several GenAI prompts there should still be some room for further optimization. I am not sure how much impact using multiple NVMe drives has in this setup; based on my server monitoring, I do not observe significant I/O bottlenecks when comparing it with the RAM-heavy setup.

When it comes to Linux kernel tweaks such as memory cache tuning, hugepages, or filesystem optimization, I am completely out of my depth. If you have experience with kernel tuning, filesystem optimizations, or Tilemaker parameter tuning for large OSM datasets, I would love to hear about your setup.
