At the recent Hack Weekend in Karlsruhe I made some progress on a project I had been contemplating for quite some time, and with further work over the last weeks the first results can now be presented.
The main motivation for this was the problem of representing waterbodies in the OpenStreetMap standard style at low zoom levels. For a long time the standard OSM map has shown the coastlines at all zoom levels, based on data processed by Jochen and me, but other water areas only at zoom level 6 and higher. The reason for this limitation is that rendering them at the lower zoom levels in the same way as at the higher zoom levels would be very expensive in terms of computing resources.
Various solutions – or better: workarounds – have been proposed for this problem:
- Using a different low-detail data set for the low zoom levels – this is the usual lazy approach taken by many maps, but with poor results regarding accuracy and cross-scale consistency. Commonly used data sets for this purpose, like Natural Earth, are often of exceptionally bad quality by today's standards.
- Applying aggressive polygon size filtering, i.e. only rendering the largest geometries – an approach that is not advisable for OSM data because of the way water areas are mapped in OSM, and that would also be highly biased and distorting.
- Tagging large or important waterbodies in a different way, either as coastline or with a newly created tag – of course, manipulating the database to address technical shortcomings of the rendering system is not a very good idea.
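To illustrate why the size-filtering workaround is problematic, here is a minimal sketch of what such a filter amounts to. The geometry helper, the example polygons and the threshold are all illustrative assumptions, not from the post:

```python
def polygon_area(ring):
    """Planar area of a closed ring of (x, y) vertices via the shoelace formula."""
    area = 0.0
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def filter_by_area(polygons, min_area):
    """Keep only polygons at or above min_area.  Smaller waterbodies vanish
    entirely -- which is why this approach is biased: a dense cluster of
    small lakes can be more significant for the map than one large lake,
    yet it disappears wholesale."""
    return [p for p in polygons if polygon_area(p) >= min_area]

# Example: two lakes, only the larger one survives a threshold of 10 units.
lakes = [
    [(0, 0), (6, 0), (6, 6), (0, 6)],   # area 36
    [(0, 0), (2, 0), (2, 2), (0, 2)],   # area 4
]
kept = filter_by_area(lakes, min_area=10)  # only the first lake remains
```

Note also that OSM waterbodies are frequently split into many adjacent polygons (e.g. multipolygon relations broken up for editing convenience), so a naive per-polygon area test can drop parts of what is semantically one large waterbody.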
Generally speaking, my techniques for geometric generalization of map data already solve this problem, but the subjective choices involved make using such an approach a sensitive matter. And a decent generalization of inland waterbodies is not really possible without a structural analysis of the river network, which is a time-consuming endeavour that cannot easily be performed on a daily basis. So the approach had to be less expensive and also more conservative and neutral in its results. The solution I have now implemented had been on my mind for quite some time, but until recently I never really found the time to fully work it out.
Looking back at things now makes me realize that what ultimately came out of this project is actually fairly peculiar from a technical perspective. This is likely useful for many people who render digital maps at coarse scales – but the fact that this approach makes sense also says something fairly profound about the way we currently render maps and the limits of these methods.
If this sounds confusing, you can read up on the whole background story – there you will also find links to the (for the moment still somewhat experimental) processed data.
The implementation of the technique introduced there is also available.