Dark area in the Caribbean Sea in the Green Marble 4

July 21, 2024
by chris

Color representation and noise in satellite images

In my announcement of the Green Marble 4 global satellite image mosaic i mentioned that i am moving to a 32 bit per channel color representation in the processing of the data. Here i want to explain the background of this development a bit.

Color representation basics

Color images in computer systems, for example on websites like this, are commonly represented with 8 bit per channel. That allows for 256 different levels of intensity per channel – or, as it is commonly advertised, for 16.7 million different colors in a color image. That is fairly coarse and only works reasonably well because these intensity levels are defined non-linearly, in a way that happens to roughly match the physiology of human color perception. I won’t go into the details of that here – it has to do with the physical characteristics of the CRTs that were used as computer displays.
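As a rough sketch of what this non-linear encoding looks like, here is the well documented sRGB transfer function – which is only an approximate formalization of what historical CRTs actually did, used purely for illustration here:

```python
# Illustration: the non-linear encoding behind 8 bit color.
# These are the standard sRGB transfer functions; the loop at the end
# shows how much linear light a few 8 bit levels correspond to.
# The levels are packed densely in the dark tones, roughly matching
# human brightness perception.

def srgb_encode(linear: float) -> float:
    """Map linear light (0..1) to a non-linear sRGB value (0..1)."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(encoded: float) -> float:
    """Inverse transform: non-linear sRGB value back to linear light."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

for level in (1, 2, 16, 128, 255):
    print(f"8 bit level {level:3d} -> linear light {srgb_decode(level / 255):.6f}")
```

Note that with this encoding roughly the lower half of the 256 levels covers only about a fifth of the linear intensity range – with a linear 8 bit encoding the dark tones would visibly posterize.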

Anyway – for recording images in cameras this has long been insufficient and most digital cameras use 12 or 14 bit color representations (equivalent to 4096 or 16384 levels), older models sometimes also 10 bit (1024 levels). Earth observation satellites roughly match this development.

These raw values are typically cast into a 16 bit representation for further processing – both in digital photography and with earth observation satellites. When you download optical satellite imagery today for analytic applications this is almost always in the form of 16 bit per channel data.

Reflectance representation conventions

The most common form of distributing optical satellite imagery is as reflectance values. Reflectance is a unit-less quantity where a value of 1.0 means that at a certain point in the image as much light is recorded as you’d expect to come from a horizontal surface that is a perfect diffuse reflector under the lighting conditions the image is recorded in.

By almost universal convention these values are scaled with a factor of 10000 for representation in 16 bit values. Sometimes an offset is applied as well, to be able to represent negative reflectances in unsigned 16 bit values – but that is not of much interest here.

Many readers might ask: Why use a factor of only 10000 when the full range of 16 bit values (65536 levels) is available? The reason is that reflectance values routinely exceed 1.0. This can be easily understood based on the definition i gave above. With a low sun position, a mountain slope facing the sun will reflect significantly more light than a horizontal surface. So even if it is not a perfect diffuse reflector it will frequently exceed a reflectance of 1.0. Hence you practically need significant headroom above a reflectance of 1.0 – which is why a scale factor of 10000 makes sense.
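As a sketch of the convention in code – the scale factor of 10000 is the near-universal one discussed above, while the offset value of 1000 used below is just an illustrative placeholder (actual products define their own offset, if any):

```python
import numpy as np

SCALE = 10000  # near-universal convention for reflectance in 16 bit integers

def encode_reflectance(r: np.ndarray, offset: int = 0) -> np.ndarray:
    """Scale reflectance values to unsigned 16 bit integers.

    A non-zero offset allows slightly negative reflectances (which
    atmospheric compensation can produce) to be stored in an unsigned
    type.  The offset here is a placeholder, not a product standard.
    """
    scaled = np.round(r * SCALE + offset)
    return np.clip(scaled, 0, 65535).astype(np.uint16)

def decode_reflectance(v: np.ndarray, offset: int = 0) -> np.ndarray:
    """Convert stored 16 bit values back to reflectance."""
    return (v.astype(np.float64) - offset) / SCALE

# A slightly negative value, a very dark ocean value, land, and a
# sun-facing slope exceeding 1.0 - all fit into the representation:
r = np.array([-0.005, 0.0002, 0.25, 1.7])
enc = encode_reflectance(r, offset=1000)
print(decode_reflectance(enc, offset=1000))
```

With an offset of 1000 the representable range works out to -0.1 up to about 6.45 – plenty of headroom above 1.0, at a resolution of 1/10000.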


The next question you might ask: Is this representation of reflectance values (integers with a scale factor of 10000) sufficient for an accurate representation of the recorded data?

The answer to that is yes – as long as

  • we are talking about individual images.
  • we are talking about data in the visible range of the spectrum.

And most importantly: This is also still going to apply in the future with further improvements in sensor technology.

The reason for that lies in the Earth atmosphere. Whenever a satellite image is recorded it will inevitably contain not only light from the Earth surface but also from the Earth atmosphere. We can try to compensate for the atmosphere part when processing the images – but that compensation is for the bulk effect only. It does not eliminate the noise.

All signal recorded by a satellite image sensor, whether it comes from the Earth surface or from the atmosphere, is subject to noise. And with noise here i do not mean noise from the sensor or from the signal processing in the satellite – i am talking about noise that is already present in the light before it reaches the satellite: photon shot noise. This noise is unavoidable and inherently limits the dynamic range of satellite image data. Because of that a dynamic range of 10000 in the data representation is – under the constraints i listed – more than sufficient for an accurate representation of satellite image data.


That is not the end of the story of course. The photon shot noise i discussed follows a well known mathematical characteristic: It is proportional to the square root of the signal. In other words: You can reduce the amount of noise relative to the signal and thereby improve the signal-to-noise ratio and the dynamic range by recording more light. But because of the square root relationship you need a lot more light to have a substantial effect.
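The square root relationship is easy to verify with a small Poisson simulation – a generic sketch, not modeled on any particular sensor:

```python
import numpy as np

# Photon shot noise follows Poisson statistics: for a mean photon count
# S the standard deviation is sqrt(S), so the SNR is S/sqrt(S) = sqrt(S).
# Averaging N independent images collects N times the light and hence
# improves the SNR by a factor of sqrt(N).

rng = np.random.default_rng(42)

def snr(mean_photons: float, n_images: int, n_pixels: int = 200_000) -> float:
    """Empirical SNR of the per-pixel mean over n_images Poisson samples."""
    samples = rng.poisson(mean_photons, size=(n_images, n_pixels))
    per_pixel_mean = samples.mean(axis=0)
    return per_pixel_mean.mean() / per_pixel_mean.std()

print(snr(100.0, 1))    # single image: SNR near sqrt(100) = 10
print(snr(100.0, 100))  # 100 images: SNR near 10 * sqrt(100) = 100
```

So to double the SNR you need four times the light – which is why combining large numbers of images is the practical route.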

There are two potential ways to exploit this possibility:

  • You can build larger satellites with larger optics. That is rather costly of course.
  • You can combine multiple images.

The latter is what i do when producing a pixel statistics mosaic like the Green Marble. And when combining thousands of individual images in areas with very low surface reflectance you can reach the limits of the standard integer representation of reflectance values with a scale factor of 10000. Here is a practical example to illustrate that. First the area in standard tone mapping.

Caribbean Sea rendering in the Green Marble 4 with standard tone mapping

This shows several fairly dark reefs in an even darker sea area of the Caribbean Sea between Jamaica and Nicaragua/Honduras. With a brighter tone mapping this becomes more clearly visible.

Caribbean Sea rendering in the Green Marble 4 with brighter tone mapping

The open ocean away from the reefs is of a very dark blue color with an extremely low red channel reflectance. And if we further emphasize the contrast in this area we can actually get to see the residual noise and work out the difference between the Green Marble 3 and 4 here.

Caribbean Sea rendering in the Green Marble 4 with strongly emphasized contrast

Caribbean Sea rendering in the Green Marble 3 with strongly emphasized contrast

In comparison, the Green Marble 4 has a lower noise level overall – but note in particular that it lacks the posterization visible in the Green Marble 3. The banding visible in both images is the result of images with different viewing directions being combined with less-than-perfect compensation for the differences in view geometry.


Practically reaching the limit of the standard integer surface reflectance representation in visible range satellite image aggregation – as i have demonstrated here – marks a significant milestone in satellite image processing methodology. So far the practical relevance for users of the Green Marble is small. Even users of the linear surface reflectance data who do their own custom color processing will in most cases be able to work with the 16 bit version as before without measurable disadvantages.
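To put a rough number on that limit – a sketch of the arithmetic only, with illustrative reflectance values that are not taken from the actual Green Marble data:

```python
# The smallest step of the scale-10000 integer representation is a
# reflectance of 0.0001.  For bright land pixels that is a negligible
# relative error, but for very dark ocean pixels - where averaging
# thousands of images has pushed the residual noise below that step -
# it becomes a large relative error and shows up as posterization.

SCALE = 10000

def quantize(r: float) -> float:
    """Round-trip a reflectance value through the 16 bit convention."""
    return round(r * SCALE) / SCALE

for r in (0.25, 0.00023):  # bright land pixel vs. dark ocean red channel
    q = quantize(r)
    rel_err = abs(q - r) / r
    print(f"{r:.5f} -> {q:.5f}  relative error {rel_err:.1%}")
```

The bright value survives unchanged while the dark value loses more than a tenth of its magnitude to rounding – exactly the regime the contrast-stretched images above operate in.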

What this, however, demonstrates is that pixel statistics mosaicing methods – in addition to their main function of assembling a uniform, representative depiction of the Earth surface from images that are individually incomplete and flawed due to less-than-perfect viewing conditions – are in principle capable of generating images of higher inherent quality than the original individual images used as a source.

Green Marble 4 with synthetic relief shading

July 20, 2024
by chris

Announcing the Green Marble 4

I am happy to announce another update of my global satellite image mosaic – the Green Marble.

The Tian Shan mountains in the Green Marble 4

Version 4 of the Green Marble does not differ visually from its predecessor as much as versions 2 and 3 did from theirs, since it continues to use the same principal data sources as before, that is Sentinel-3 OLCI data and MODIS images. It still represents a significant improvement in quality, in particular with substantially reduced noise levels in the water depiction. It updates and expands the source data for both the water and land depiction until late 2023.

Since the range of dates of source images processed is quite large to ensure high quality results, short term changes on the Earth surface do not usually manifest strongly in the Green Marble. So the data update is primarily visible in the form of long term trends, which are often relatively subtle. Here is an example of irrigation farming in southern Egypt:


Changes in irrigation farming in southern Egypt (large: GM 2, GM 3, GM 4)

In addition to the data update there are also significant changes in data processing, fixing various smaller flaws and improving color uniformity. The biggest change is the move to a 32 bit per channel color representation in the assembly of the mosaic. This avoids the accumulation of rounding errors that led to some color distortion in the water depiction of previous versions, and it avoids running into the limitations of the standard 16 bit integer reflectance value representation in satellite image processing. I am going to discuss the background of this in a separate blog post.
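The rounding error issue can be sketched in a few lines – the chain of gain factors below is purely illustrative and is not the actual processing of the mosaic assembly:

```python
import numpy as np

# Sketch: why repeated intermediate rounding to 16 bit integers can
# accumulate into visible color distortion in dark areas, while a
# 32 bit float pipeline keeps the error negligible.  The gain factors
# are made-up stand-ins for a chain of processing steps.

SCALE = 10000
gains = [1.013, 0.991, 1.007, 0.989, 1.004, 0.996, 1.011, 0.993]

def process_int16(r: float) -> float:
    """Apply the gain chain, rounding to an integer after every step."""
    v = round(r * SCALE)
    for g in gains:
        v = round(v * g)
    return v / SCALE

def process_float32(r: float) -> float:
    """Apply the same chain in 32 bit floating point."""
    v = np.float32(r)
    for g in gains:
        v = np.float32(v * np.float32(g))
    return float(v)

exact = 0.0007  # a dark ocean reflectance, computed exactly for reference
for g in gains:
    exact *= g

print(process_int16(0.0007), process_float32(0.0007), exact)
```

For a dark value like this the integer pipeline loses the combined effect of the gain adjustments entirely to rounding, while the float pipeline tracks the exact result to well below the 1/10000 quantization step.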


Improvements of coastal water color depiction between Green Marble version 3 and 4 (large: GM 3, GM 4)

The Green Marble 4 global satellite image mosaic is available for licensing for use in your projects now. Have a look at the Green Marble product page for more information and sample images.

The demo maps with the Green Marble in different rendering variants are updated as well:

Eastern India and Southern China in the Green Marble 4

Patagonia in the Green Marble 4

Twenty years OpenStreetMap - revisiting observations and predictions

July 17, 2024
by chris


Five years ago i wrote about the 15th anniversary of OSM and my outlook on the next fifteen years. We now have the 20th anniversary. It is a bit early to review my predictions – merely a third of the time span covered has passed – but i still want to newly consider some of the topics i contemplated five years ago.

My big topic for the 15th anniversary was the idea that a separation might occur in the future between the original core idea of OpenStreetMap – the sharing and communication of local geographic knowledge by a cross cultural community of mappers through egalitarian, self determined cooperation – and the project carrying the name OpenStreetMap.

This scenario consisted of two components:

  • the (back then increasingly visible) trend for OpenStreetMap to move away from the fundamental ideas and values it was created on.
  • the possibility that this development might lead people to share and exchange their geographic knowledge increasingly through means outside of OpenStreetMap.

I want to look at both parts here a bit through the somewhat different perspective i have today, five years later.

OpenStreetMap changing

Previously my hypothesis that OpenStreetMap is – at large – moving clearly away from its original ideas and values was primarily backed by observable trends in mapping: that the data in OpenStreetMap as well as the edits made were increasingly not founded in personal local knowledge of the people making the edits and their own, intrinsic desire to share that knowledge. It seems quite clear to me today, five years later, that this trend has not reverted, although i also have the distinct impression that it has not accelerated either. OpenStreetMap continues to attract mappers sharing their local geographic knowledge and it is good to see that this happens with an increasingly diverse geographic and cultural distribution. Bulk data additions and systematic volume edits not backed by local geographic knowledge have expanded significantly as well – hence no reversal of the trend – but it can’t be said that they are substantially further displacing local knowledge based mapping at this time. Or in other words: You could say the situation has stabilized a bit.

So much for the actual development of mapping in OpenStreetMap. Things look very different when you look at how OpenStreetMap is communicated in public and hence how the wider public perceives OpenStreetMap these days. This means public communication by individual community members, by the OpenStreetMap Foundation as well as – and this is probably the largest part – by third party organizations, be that businesses of all sizes, public institutions or other organizations of all kinds.

If you study this body of public communication about OpenStreetMap these days then the traditional ideals and values of OpenStreetMap are almost completely absent. You can find these being discussed in communications of individual community members with no organizational affiliations as well as some academic studies from the social sciences sector. But the vast majority of organizational communication about OpenStreetMap (and that is what tends to have the largest reach) presents OpenStreetMap these days as a collection of useful geodata that is largely produced by volunteers (with volunteers meaning unpaid workers and not implying either local knowledge or self-determination in what they map and how they map it).

Recent public communication of the OpenStreetMap Foundation is a good example of that – the page created for the 20th anniversary frames OpenStreetMap exactly along these lines.

Normally you’d expect such a clear and almost universal trend in public communication to lead to mappers increasingly internalizing this framing of OpenStreetMap. Interestingly, this does not seem to be the case though. As mentioned, OpenStreetMap these days continues to recruit new mappers who map in the traditional way, sharing their local knowledge of the areas around them in a self determined fashion and in egalitarian cooperation with the other mappers around them. They must have learned this in one of several ways:

  • from legacy documentation that still exists
  • from other mappers by imitation/direct communication
  • from their local community through decentralized small group communication
  • through their own bootstrapping, approaching mapping in a way that seems natural to them.

That this is possible and practically happening at scale despite organized public communication in most cases drawing a very different picture of OpenStreetMap is a very positive trend.

The separation

The second element of my prediction from five years ago was the idea that the people who value the original idea of OpenStreetMap – the cooperative collection and sharing of local knowledge – might move to doing this increasingly outside of OpenStreetMap. So far there is not much concrete tendency in that direction visible. And, as discussed already, at the moment the traditional mapping values seem astonishingly strong in practical mapping in OpenStreetMap.

What this, however, indicates – and this is different from what i discussed five years ago – is that there is a widening schism within OpenStreetMap between how practical mapping and the social interactions that facilitate it actually happen and the perception of OpenStreetMap and mapping in the organizational world around OpenStreetMap.

It is best not to see this schism as a complete separation of communication bubbles with one being completely disconnected from the other, it is more like a strong cultural divide.

Whether this schism is the precursor to a true separation as discussed in the past, or whether it can be a stable way to combine the social necessities of cross cultural cooperation in the form of the traditional values of mapping with the economic necessities of capitalism, is not yet decided. It will probably largely depend on whether people on the organizational side of the schism develop enough respect and appreciation for the social necessities on the other side. I am sceptical here (as are probably many of my readers). But i also like to point out that a lot of the organizations around OpenStreetMap have many smart people working for them who are, in principle, capable of understanding these fragile inter-relationships and accordingly are able to make responsible decisions. The question is going to be: Will they?

An experiment in social cooperation

The ultimate long term question (and long term here means this is not a matter that can be determined reliably in a 15 or 20 year time frame) is whether the cooperative collection of local geographic knowledge across cultures in an egalitarian and self determined fashion is something that can work at scale. As hinted at in the comments of my 15th anniversary post this is not clear, even if it has worked amazingly well during much of the past 20 years. The null hypothesis would be: it can’t – OpenStreetMap will necessarily either vanish or turn into a classical organization with a social hierarchy (i.e. become non-egalitarian), at best with a framework of representative democracy of some sort.

I would very much like to see the thoughts of others on this question, especially if they differ from the null hypothesis.

Trends for the future of map design in OpenStreetMap

April 20, 2024
by chris


After having looked at the history of digital map design in general in the first part of this short series and at the current situation of map design in OpenStreetMap in the second part, i am now, in this last part, going to look specifically at what the OpenStreetMap Foundation seems to be pursuing in that field at the moment and in what direction that is likely to go.

The OSMF’s history in providing maps

The OSMF has for some time followed a quite erratic course with regard to rendering and providing maps for the OSM community. Historically, a significant percentage of the OSMF’s IT resources has gone into rendering the standard map layer on And while there has been discussion, and ultimately a policy was developed, on who is allowed to use that tile service under what conditions, the OSMF has never really stated a clear purpose for that tile service or the conditions for choosing OpenStreetMap Carto as the style for it. There is a set of requirements for tile layers to be included on – which are mostly hosted externally – but nothing to this date defines what qualifies OSM-Carto to be the style the OSMF invests substantial resources in to render as a map. I have, as an OSM-Carto maintainer, repeatedly criticized this omission, because it also means the OSMF takes no responsibility for the decision to do that.

Back in 2021 a poll of the OSM community made by the OSMF board included a question about the future of map services provided by the OSMF, for the first time indicating to the OSM community that they were contemplating making changes in that field. The question was very vague and ill-defined (which i criticized back then) and the answers were accordingly inconclusive. The term vector tiles had already become a magic word back then, without much of a well defined meaning, into which everyone seemed to project their hopes and dreams.

Later that year the OSMF announced that they would – on the operational level – do some testing in that field. But even upon inquiry no overall strategy was mentioned that this initiative would be part of, and it became clear that this was an initiative from operations and not part of a larger strategy of the OSMF at that time. The interesting part here is that, while this announcement was only about a relatively small operational investment into testing, with nothing in terms of money being involved at all, there were quite immediate reactions from economically interested parties, substantially lobbying to have their pet frameworks considered by the OSMF as well.

Long story short: Back in 2021 it was clear (a) that the OSMF had no substantial strategy for the future of the map services it provides, (b) that there were already a lot of people projecting their hopes into the vague concept of vector tiles and loudly calling for the OSMF to make these hopes magically come true – while the whole matter became increasingly political because economically interested parties began to see the chance to profit and started lobbying the OSMF to adjust plans in their favor – and (c) that actual map design was not going to play even the most marginal role in the whole process. After that there was not much publicly visible progress for some time. The strategic plan decided on in July 2021 included only a development platform for vector tiles by volunteers – which seems to reflect the board’s reading of the survey results at that time: to leave the matter to the community to pursue.

Something changed at some point in 2022. In November there was a short mention of the matter in one of the meeting minutes. From then on a fairly remarkable timeline of discussions and decisions can be reconstructed from the minutes that paints quite a rich tapestry of the inner workings of the OSMF. Although this is of little relevance for the topic of this blog post i highly recommend taking the time to read through all of that.

The meaningful information w.r.t. the topic of this post is that, when the OSMF contracted Paul Norman to do the work that these days seems to be coming to an end, they still apparently did not actually have a strategy of their own regarding the OSMF’s map services. And it is unclear if they have adopted what Paul sketched in his proposal (published here – rather vague in terms of strategic goals) as the OSMF’s de facto strategy or if they have meanwhile developed some other kind of plan. What is, however, quite self evident is that they will certainly want to roll out the results of spending EUR 42k in some form. And this is what i will base the following considerations on – i am not making any further assumptions beyond that.

The map serving process

What i am going to do here is discuss what that likely means for the OSM community. To explain that, here is first how the current setup of the standard tile layer on is run (based on OSM-Carto):

The practical map rendering process and mapper feedback loop with OSM-Carto (or similar styles) – link goes to larger version

I illustrated which parts of this setup are controlled by whom: The gray rectangle titled Map design is controlled by the OSM-Carto map designers and maintainers. The Mappers using the map decide which parts to view and what to map, and the OSMF (Operations) is in control of the OSM database and the map rendering chain until the rendered tiles are delivered to the map user. Some might be irritated that i split up the rendering process with the tile data (internal only) step – this is to make the process better comparable to the vector tiles setup shown below. The split is not visible to operations, it exists only internally in Mapnik. It is, however, visible to the map designer, who separately designs the query to generate the tile data and the rules to draw the tile from this data as formulated in the CartoCSS code.

The important thing to note is that the rendering database in case of OSM-Carto is deliberately kept generic. This allows you to render several completely different styles from the same database, to make almost any style changes without reloading the rendering database and it also simplifies systematically testing the style as a whole because the database import does not need to be part of this testing.

In the case of OSM-Carto the processes within the map design domain are relatively simple. For comparison: here is how this looks with the Alternative Colors style.

Differences with a more complex style like the AC-Style – link goes to larger version

Like a lot of other map styles the AC-Style extends the rendering database with some style specific data. In my case these are some SQL functions, mostly for tag interpretation and geometric processing, and the tree symbology (which needs to be in the database to work around limitations in Mapnik – see the blog post discussing tree rendering). But this is static data, it does not change when the main database content changes.

Here is now how the whole process would change if Paul’s work were rolled out as developed (based on my understanding of what has been communicated – i have not actually tested this setup myself).

The expected map rendering process and mapper feedback loop with minutely updated vector tiles (as currently pursued by the OSMF) – link goes to larger version

From the operational side this means – in a nutshell – that the last part of the traditional rendering chain as shown for OSM-Carto above is cut off and outsourced to the map user. This substantially reduces the resources required for serving the tiles and at the same time allows customizing the actual rendering of the tiles without this requiring additional resources.

That is – in short – the whole magic of vector tiles. In reality it is of course not all that simple – for multiple reasons.

First: For the outsourcing of rendering to the map viewer to work you need to massively reduce the volume of data transferred and involved in rendering. In the diagrams shown i indicated high volume data with the wide red arrows. And the tile data internally processed by Mapnik is typically of even substantially larger volume than what is in the rendering database (which tends to be many hundreds of GB for a full planet import), because the queries do geometry processing that frequently adds data volume. In the case of client side rendered vector tiles this volume needs to be massively boiled down with the help of lossy data compression. And despite these efforts (and their negative side effects) the bandwidth requirements for viewing one of these styles are usually much higher than for a traditional raster tile based map.

Second: The rendering database is not generic any more. That is not inevitable but practically i am not aware of any client side rendered vector tile setup that uses a generic rendering database.

Third: Map design as a domain in control of parts of the whole process is gone. That requires a more detailed discussion of course.

Above, i explained that one of the main promises of client side rendered vector tiles is that the actual rendering can be customized for every map viewer. But the ability to customize the actual rendering does not mean that you can fully customize the map design. In reality, most of the map design decisions are implemented in the vector tile generation, partly because of the need to keep the tile sizes small. This means that everything not necessary for the specific map design chosen needs to be stripped from the data. Or seen the other way round: Trying to introduce any level of map design genericity into client side rendered vector tiles will often have a massive impact on bandwidth requirements.

Practically, all client side rendered maps i have seen so far have a division between the people/organization in control of the vector tile generation and those developing the client side part of the map design (which i labeled style design in the diagram). And the vector tile generation furthermore tends to be under control of software developers and not map designers – hence i labeled that part software development. In the setup Paul has developed this part is further split into multiple components under control of different people.

What this means is that map designers would, in such a setup, get marginalized to mere style designers who can play a bit with colors, line widths and point symbol design. The real control over map design would sit with the software developers in control of schema and vector tile production, whose main concern is not going to be map design but operational efficiency and keeping the size of the vector tiles small – the technocratic direction as i described it in my discussion of OSM-Carto. Theoretically, you could argue, the style designers could be put hierarchically above the software developers in control of the vector tile schema, requiring the latter to adhere to the wishes of the former. But in today’s OSMF this has zero chance of happening, as evidenced by the fact that map design aspects played no role at all in the whole process of developing this setup. As i wrote in the discussion of OSM-Carto, from my perspective the only way for sustainable progress in map design would be to bring the technocratic and the artistic direction together in a single cooperative project with clear common goals everyone subscribes to.

Beyond the magic

So what remains of the magic of vector tiles and its promises in light of this? The lower resource requirements for operating a tile server are indeed there. But only if you look at it purely from the micro-economic perspective of the tile server operator. From a macro-economic perspective the resource requirements are actually higher in sum, because the actual rendering is done separately for every map user. Client side rendered vector tiles are not Tiles@home.

But don’t get me wrong, there are of course benefits of client side rendered maps. One major one is the ability to implement more interactive maps based on the semantic data transferred to the client. And if you invest the bandwidth to include additional data in the tiles, customizing the map can be a real benefit. But there is no magic here, this all comes with costs.

The real question, however, is what this means for map design. If i put aside the organizational division between style design and vector tile generation for the moment, what remains is the question of what changes if you design for a client side renderer compared to a Mapnik based rendering chain. There are still several really big issues here i am afraid. I already explained those in more detail in my post on what rule based map design requires from the tools it uses.

But now comes the twist some readers probably don’t expect: All of this does not mean the OSMF did something wrong by investing in implementing a toolchain for minutely updated vector tiles – apart, of course, from the board epically failing in the procurement process for this on so many different levels. If i ignore that for the moment, then investing in this is a completely sound decision. Client side rendered vector tiles are – independent of their practical relevance for the OSM community – currently the main technique for producing interactive web maps used by commercial map providers. And these commercial actors, while having invested substantially in the toolchains for this style of map production, have very little interest in minutely updates. Those are a relatively unique interest of the OSM community. Hence it is a reasonable idea for the OSMF to chime in and serve this specific need.

If the OSMF now feels that this reasoning is not sufficient to justify the project and instead feels the need to roll out a cheaply thrown together postmodern OpenMapTiles/Mapbox clone with minutely updates as the main map on, that would be regrettable. But it would not really mean much in the grand scheme of things. It would surely not solve the structural problems in practical map design in and around the OSM community that i explained in the previous post.

The current state of map design in OpenStreetMap

April 18, 2024
by chris

In the previous post i discussed a bit of the history of digital map design, in particular also outside of OpenStreetMap. I also wrote in the past about the history of OpenStreetMap Carto, the cartographic face of OpenStreetMap and without doubt the most influential map design project in OpenStreetMap – today as well as historically. But it is neither the only map design endeavor in and around OpenStreetMap, nor is it going to be in its current position forever. In this post i am going to discuss various other map design projects – historic and present – that exist in and around OpenStreetMap and that have substantially shaped or are shaping map design in OpenStreetMap.

Beyond OSM-Carto

I am going to leave a more concrete discussion of commercial OSM based maps out of this post. As discussed in the previous post about the history of digital map making, the domain of web maps is strongly dominated by big tech companies and their digital platforms. These have mostly optimized their maps for cost efficiency – in terms of operational costs for serving the maps just as in terms of development costs. In summary, these maps are largely simplistic in design, dominated by relatively coarse visual elements and lacking conscious attention to detail and semantic nuance. Seen as art they feature elements in particular of primitivism and dadaism – as well as a kind of digital technological surrealism. It is fitting in my opinion to say that these maps form the core of post modern cartography.

Examples of commercial OpenStreetMap based web map styles: Maptiler and Mapbox

Because of their dominance in today’s practical use of maps they have a significant influence on contemporary map design but at the end of the day they contribute very little to the advancement of the state of the art in map design in general. The big problem is – as i have pointed out in the past – that the technological basis of today’s digital map design is largely financed by these companies, which hampers the development of map design beyond that.

What i do want to mention in that context is probably the most important precursor of today’s corporate cartography – the avant-garde of post modern cartography, you might say. That is quite clearly the web maps work of Stamen from – in parts – more than 10 years ago.

Early map styles by Stamen: Watercolor, Terrain and Toner

There is another quite substantial set of maps that are commercially developed, usually by smaller businesses, for specific use cases, that i am also going to skip here. For most of these, the main connection to OpenStreetMap is that they use OSM data. Collectively, this body of map design work is quite significant for the OSM community, as it demonstrates a broad range of use cases for OpenStreetMap data to the wider public and the mapper community. But individually, these maps tend to have rather limited reach compared to the maps of the big tech platforms, and their individual significance for OpenStreetMap design wise is rather small, especially since the vast majority are not open source. Reviewing this set of map styles from a map design perspective would probably also be interesting, but i consider it out of scope here.

And finally, i am also going to leave out the various specialized QA maps and overlays produced by the OSM community – some of these feature noteworthy design techniques – but overall their purpose is so different from that of normal maps that they are not really that relevant here.

Historically, the origin of OSM based map design was Osmarender. Osmarender was the primary platform for visualization of OSM data and mapper feedback in the early days of the project and has enormously shaped mapping in OpenStreetMap as well as OSM based map design – which can still to some extent be observed in present day maps. Unfortunately, documentation of these early days of OpenStreetMap is generally still very poor.

Historic example of the Osmarender map design

Especially together with the Tiles@home framework facilitating distributed rendering of maps, Osmarender was a fairly revolutionary project which anticipated many later developments and already tried to address some problems that we still struggle with today. It was also the first attempt of the OSM community at cooperative map design, and the experience gained with it significantly influenced the approach to subsequent projects.

I won’t discuss OSM-Carto and its XML predecessor style here – for that please refer to my previous posts. I am going to look at the various OSM-Carto derivatives a bit though that form a significant part of the OSM map design ecosystem.

I will start with the German OSM-Carto fork, maintained by Sven Geggus. This OSM-Carto variant features various color and design differences from the parent project, in particular:

  • A road coloring scheme derived from traditional German road maps
  • Various different symbols specific to German cartographic culture
  • Green instead of purple boundaries
  • Rendering tracktype=1 tracks similar to service roads
  • Different rendering of footways/cycleways/paths

In addition, the German style features a sophisticated label language selection and transliteration system. This is implemented during database import, meaning the style uses a different database schema than OSM-Carto.
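To make the idea of import-time label selection concrete, here is a hypothetical sketch (in Python for illustration; the actual implementation of the German style differs in detail and includes real transliteration) of a fallback chain over OSM name tags:

```python
# Hypothetical sketch of import-time label selection with fallbacks.
# Tag keys follow OSM conventions, but the rules here are invented
# for the example and are not the German style's actual logic.

def select_label(tags, target_lang="de"):
    """Pick a label for the target language, falling back gracefully."""
    localized = tags.get("name:" + target_lang)
    name = tags.get("name")
    if localized and name:
        # Show both when the localized name differs from the local one.
        return localized if localized == name else localized + " (" + name + ")"
    if localized:
        return localized
    if name:
        # A real implementation would transliterate non-Latin scripts here.
        return name
    return tags.get("int_name")

print(select_label({"name": "Москва", "name:de": "Moskau"}))  # Moskau (Москва)
```

Doing this once at database import, rather than at render time, is what forces the style onto a different database schema than OSM-Carto.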

The German OSM-Carto fork is the only one of the larger forks that is regularly kept in sync with upstream changes. This constrains the ability to implement larger and more invasive design changes, because they would make synchronizing with upstream changes more difficult. Some color changes in landcover and water rendering that had been implemented in the style at some point were dropped again because of that.

German OSM-Carto fork

The French map style, in contrast to the German one, is a hard fork that split off from OSM-Carto a long time ago. It therefore retains in particular many of the original colors of early OSM-Carto for buildings, landcover and roads. It is maintained by Christian Quest and, apart from some symbology adjustments to French cartographic tradition, comes with a number of innovative feature extensions and rendering techniques. Some have been adopted by OSM-Carto in the meantime – like the depiction of trees and the rendering of details on golf courses. Others remain unique to the French style. Specifically:

  • The rendering of street lamps using circular symbols similar to those for trees.
  • Rendering of pedestrian crossings on roads.
  • Rendering of sport specific linework on sport pitches.
  • Normalization and abbreviation of French names.
  • Low zoom landcover depiction based on scaled raster rendering.

French OSM-Carto fork (high zoom level)

French OSM-Carto fork (low zoom level)

Then there is the AJT-Style by Andy Townsend, an OSM-Carto fork with a distinct British and pedestrian focus. The AJT-Style departed from OSM-Carto design wise even before the French one did. It is noteworthy for likely being the most feature rich style in OpenStreetMap overall, with a strongly differentiated rendering of various attributes of roads and paths (like British rights of way) and a huge number of different classes of point features being shown with point symbols.

Design wise, the style is relatively traditional, with pixel painted raster symbols. And much of the symbology uses relatively cryptic and not necessarily very intuitive encodings to represent semantic differences. In contrast to OSM-Carto, which is explicitly designed to be intuitively usable without a legend, the AJT style deliberately accepts requiring a legend and puts formal clarity of differentiation above intuitive readability.

Technically, the most remarkable aspect of the AJT-Style is the way tag interpretation is performed at data import, through successive conditional in-place manipulation of attributes in a Lua script more than 10k lines long.
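The general pattern of such import-time tag rewriting might be sketched as follows (in Python for illustration – the AJT-Style itself uses Lua inside the osm2pgsql import, and the specific rules and the synthetic "public_right_of_way" value here are invented for the example):

```python
# Illustrative sketch only -- not the AJT-Style's actual rules.

def transform_tags(tags):
    """Successively rewrite tags in place so the style sheet only has to
    match a small set of canonical values."""
    # Fold deprecated synonyms into a canonical value the style knows.
    if tags.get("highway") in ("unsurfaced", "byway"):
        tags["highway"] = "track"
    # Combine several tags into a synthetic value the style can match directly.
    if tags.get("highway") == "path" and tags.get("designation") == "public_footpath":
        tags["highway"] = "footway"
        tags["footway"] = "public_right_of_way"  # hypothetical synthetic value
    return tags

print(transform_tags({"highway": "unsurfaced"}))  # {'highway': 'track'}
```

The benefit of this approach is that the complexity of OSM tagging is absorbed once at import, keeping the style rules themselves comparatively simple.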

British/rural pedestrian focused OSM-Carto fork by Andy Townsend

And for completeness: There is of course also my own Alternative Colors style – which i have discussed extensively here in past posts so i will only mention it in passing in this post.

Finally, i want to mention one OSM-Carto fork that is not strictly open source – which i cover here primarily because it is meanwhile featured on the OpenStreetMap website. I am talking about the Tracestrack style. This is an OSM-Carto derived collage of design components from OSM-Carto and OpenTopoMap, remixed/overlaid with separately designed additional features and design modifications. The add-on features and design modifications have a distinct urban European cultural focus – contrasting quite clearly with the global orientation of OSM-Carto.

I call the Tracestrack style not strictly open source because, although they have published some code components of the style and licensed them under an open license, this covers only parts of the style. The openly licensed parts do not allow you to independently reproduce the map they offer. While this is in principle totally legitimate as far as OSM-Carto is concerned (which is CC-0), the only partial open licensing is potentially at odds with the more constraining license of OpenTopoMap (CC-BY-SA).

OSM-Carto derived style by Tracestrack

So far these were OSM-Carto forks – which are, implementation wise, based on the OSM-Carto code. There is another class of OSM-Carto derivatives that are mere OSM-Carto look-alikes: they try to imitate the OSM-Carto design appearance-wise to some extent, but do so with an independent implementation. The depth of imitation varies – some merely try to resemble the color scheme in broad strokes, while others go into more depth to resemble the design of OSM-Carto. The ESRI client side rendered demo is one of the earliest and most ambitious projects in that regard.

Client side rendered OSM-Carto look-alike by ESRI based on a proprietary tool chain

The trend to imitate OSM-Carto design in technically independently developed map design projects is testimony to two things:

  • That the visual design of OSM-Carto has meanwhile turned into a kind of reference standard in OSM map design, independent of the concrete technical implementation.
  • That designing a rich OSM based map similar in scope to OSM-Carto is a lot of work that requires significant map design skills – so avoiding this substantial investment and building on what OSM-Carto has developed over many years is an attractive option (which does not seem to become less attractive with the increasing age of the project and its design).

Outside the range of OSM-Carto derivatives we have another particularly noteworthy project: OpenTopoMap. OpenTopoMap is being developed and maintained by Stefan Erhardt and Max Berger. Technologically, the project is remarkable because it is the only actively maintained style written directly in Mapnik XML. This makes editing the style rather awkward but compared to CartoCSS based styles it has the advantage that it can make full use of the Mapnik feature set while CartoCSS styles are limited to the subset of Mapnik features supported by Carto.

Design wise, OpenTopoMap is different from other OSM map styles because it aims to resemble and follow many of the design paradigms of traditional topographic maps. This means in particular deliberately using only a limited set of colors. But it goes beyond a mere imitation of traditional maps: it extends the traditional design paradigms with OSM specific ideas.

OpenTopoMap also features a number of other substantial innovations:

  • Context dependent orientation of train station and saddle symbols.
  • Visualization of the view directions of viewpoints.
  • Importance rating of peaks and parking areas based on isolation.
  • Curved labeling of polygons.

OpenTopoMap at low zoom levels

OpenTopoMap at higher zoom levels
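The isolation based importance rating mentioned above can be sketched as follows (a sketch of the assumed principle, not OpenTopoMap's actual code): a peak matters more the farther away the nearest higher peak is, so isolated summits get labeled at lower zoom levels than secondary summits of a massif.

```python
# Sketch of isolation based importance rating for peaks.
# The peak data and coordinates here are made up for the example.
import math

def isolation(peak, peaks):
    """Distance to the nearest higher peak; infinite for the highest peak."""
    higher = [p for p in peaks if p["ele"] > peak["ele"]]
    if not higher:
        return math.inf
    return min(math.hypot(p["x"] - peak["x"], p["y"] - peak["y"]) for p in higher)

peaks = [
    {"name": "A", "x": 0, "y": 0, "ele": 2900},
    {"name": "B", "x": 3, "y": 4, "ele": 2500},
    {"name": "C", "x": 10, "y": 0, "ele": 2600},
]
# Label peaks in order of decreasing isolation at low zoom levels.
ranked = sorted(peaks, key=lambda p: isolation(p, peaks), reverse=True)
print([p["name"] for p in ranked])  # ['A', 'C', 'B']
```

The same principle applies to parking areas: of a cluster of nearby candidates, only the most isolated (or largest) one needs a symbol at low zoom.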

In the context of maps with relief depiction it is also important to mention an oldtimer: TopOSM by Lars Ahlzen. The map is no longer operational and its development ceased more than ten years ago, but it was a significant step in the development of OSM based maps with relief depiction.

The historic TopOSM style

Another noteworthy contributor to the OSM map design ecosystem is the style of – an outdoor oriented map style written in a unique mixture of TypeScript and Mapnik XML by Martin Ždila. It is one of the most carefully designed OSM based map styles with relief shading (which is otherwise often not very well executed in Mapnik based styles).

The outdoor map style

Then we have, as a relative newcomer to the field of OSM based maps, the OpenStreetMap Americana project. This is a client side rendered style based on the commercially developed and maintained OpenMapTiles data schema. Because OpenStreetMap Americana has been newly created and not derived from a pre-existing map style, it is not yet really a complete map style. Technically, it is fairly unique as a style written in Javascript – some would even say it is therefore, strictly speaking, not really a map style but a piece of software implementing map rendering on top of Maplibre GL as a rendering library.

Its use of the OpenMapTiles data schema and client side rendering through Maplibre GL design wise puts the project relatively close to the post modern ideas of the corporate maps discussed above. Since the US cartographic tradition that Americana tries to connect to in its design has substantial post modern elements even in its pre-digital manifestations (much more than European cartography at least), this is a relatively good match. But it is also visible that, despite Americana being a relatively young and, by its own goals, not yet feature complete project, it already struggles significantly with the technical limitations of the map rendering framework used and the commercially developed data schema it is built on.

OpenStreetMap Americana style

One particularly noteworthy design innovation of OpenStreetMap Americana is the highway shield library it contains. Pictorial highway shields are an important component of the US cartographic tradition, and Americana meanwhile contains the most extensive collection of highway shield designs in digital form, with coverage way beyond the US. This work is unfortunately fairly strongly tied to the Javascript based framework of the style and therefore not easy to reuse in other map design projects.

I also want to mention Map Machine by Sergey Vartanov – which is both a map rendering engine and a map style. It is noteworthy in particular for its extensive set of point symbols for a highly differentiated depiction of features in the map, competing in that regard with the AJT-Style mentioned above.

Map Machine rendering example

And finally, this list of maps is closed by another rather unusual project – the Straßenraumkarte Neukölln by Supaplex030. This is kind of a personal niche project so far, being limited to a small area in Berlin and – while it is in principle open source – not really suitable for independent deployment. But it is, map design wise, pretty innovative. It is more of a feasibility study of what large scale map design based on OSM data could look like.

Straßenraumkarte Neukölln


So much for a glimpse into the current state of map design in and around the OpenStreetMap project. This is not meant as a complete list, more as a cross section with the aim of including most of the projects where major innovation in map design is happening or has happened during the past years in direct connection to OpenStreetMap. Still: If anyone feels that i have missed an important project, please mention it in the comments below.

What does this tell us? That there is a reasonable diversity of map design work to be found around OpenStreetMap if you look carefully enough, although much of this is in a pretty precarious situation. Many of the projects mentioned show no or very little activity in recent years and practically all of them in some way struggle with the technical limitations of the tools used and quite a few resort to pretty exotic methods of script generating style rules to be able to develop the design they use. Also worth pointing out is of course that map design work in and around OpenStreetMap is massively dominated by European and North American projects and designers – much more so than mapping and also software development.

This situation is quite definitely not due to a lack of people interested in map design, or lacking the talent and ambition to work in it. The problem is that of the countless people who start looking into designing OSM based maps every month, very few reach a level where they actually become innovative and produce something that expands the state of the art in OSM based map design and could therefore realistically make it onto this list. The reason is that the map design tools we have are universally insufficient for that. Or in other words: Essentially everything that can reasonably be done with the tools at hand for map design and map rendering has already been done a long time ago, and to actually do something new you typically have to navigate an obstacle course of limitations and flaws in rendering engines and styling languages – a requirement that a large percentage of would-be map designers are typically going to pass on.

What the main limitations are that, from my perspective, stand in the way of moving map design in OpenStreetMap to the next level and of practically getting talented and motivated people into OSM map design, i have explained some time ago.

I am not going to speculate when and how something along these lines might actually happen. Obviously i would very much like to see progress here. But if such progress should come out of the OSM community (and not – like most of the currently used map design and map rendering tools – originate largely from the outside from the investments of commercial map producers) that would require a broader realization from within the OSM community of the necessity to invest here. And that is clearly a long shot.

What i am going to do in the next post (which is going to conclude this short series on map design) is explaining a bit where the short term trend in map design in OpenStreetMap might be heading in light of the recent activities of the OpenStreetMap Foundation around map rendering.

April 16, 2024
by chris

Digital map design history

Regular readers of this blog know that i have in the past pointed out repeatedly and with strong emphasis that map design is a vital component in the cross cultural communication within the OpenStreetMap community. Accordingly the ability to further develop and innovate in that field in a self determined fashion independent of commercial users of OSM data is of critical importance for the future of OpenStreetMap.

Naturally, others have different views of what is specifically important in map design than me. That is not only something that i very much respect, i also think (and have pointed out in the past as well) that diversity in ideas and strategies in map design is likewise crucial for a healthy development of OpenStreetMap.

However, what i frequently notice when i see OSM community members talking about map design and related topics – like development of software related to map design – is a remarkable lack of awareness of the historic context, often combined with a tunnel vision on certain very short term economic interests rather than a long term strategic view of the matter.

The history of OpenStreetMap-Carto is something i have already covered quite in depth in past texts. I want to supplement that now with some broader looks at the history of digital map design in general. And while this is going to focus in particular on what is relevant for OpenStreetMap, it can certainly also be of interest for map designers outside of the OSM world.

My hope is that these thoughts will help OSM community members interested in map design a bit to overcome the simplistic perspective on map design matters that is too often prevalent in OpenStreetMap these days and to have a meaningful discussion on how to nurture and value innovation and quality in community maps in the project in a sustainable fashion. And if that should not come to pass, then these thoughts might still be of value for digital map designers in general in understanding how this field of work came to be in the state it is in now and where this might develop in the future.

The origins of digital maps

In principle, digital map design is much older than OpenStreetMap. But use of digital methods in map production in the early years happened predominantly not in the field of actual map design, but either in the data processing preceding the design work (like in the form of calculating statistics on measurements etc.) or in the physical production of the map, that is use of digital methods in reproduction and printing.

One interesting observation you can make from that period (which i would put roughly in the 1970s and 1980s) is that digital technology started to influence map design even before it was practically used in design work on a routine level. For example:

  • Use of color changed, it became more common to use a larger number of colors in maps because printing and reproduction technology made this possible.
  • Maps moved to using more abstract and more regular symbols and patterns. This trend has been fueled in particular by three things: (a) the widespread use of mechanical typewriters around that time and in the decades before and the relatively simple, standardized design in typesetting this introduced, (b) the use of dry-transfer methods like Letraset to create high quality labeling and other symbology in pre-digital map production with a constrained set of shapes and (c) the influence that early computer UI design had on design habits and fashions.

As you can see, there was a significant connection between the development in typesetting and the development of map design, and this influence continued as digital methods were also introduced more broadly into actual design work. Therefore, i want to take a short look at the development of digital typesetting at this point.

Excursion into digital typesetting

Typesetting is interesting as a point of comparison for the development of digital map design, so i want to quickly explain its history here.

Professional typesetting remained broadly a fully analog domain until the late 1970s. After that followed a rapid development of digitalization along three different lines:

  • The digitalization of high end professional typesetting.
  • Digital takeover of the typewriter market using general purpose personal computers as technological basis.
  • TeX and the underlying concept of fully automated typesetting based on a semantically structured definition of the content and generic styling rules.

The first line was a classical digitalization process in the form of virtualizing previously mechanical processes and then achieving productivity increases by better re-using previously performed work steps in virtualized form. This is very similar to countless other digitalization processes in other industrial production fields. Known software products in that field were for example Aldus PageMaker and QuarkXPress.

The second line forms a significant part of IT history with the early innovators WordStar and WordPerfect not having a long term economic success and the late copycat from Microsoft (Word) being able to dominate the market until today. What largely fueled this digitalization of the typewriter were two factors:

  • The use of general purpose personal computers instead of specialized machines promised much higher profit margins for software companies.
  • The promise to eliminate the costs of professional typesetting. That this remains an empty promise in many aspects to date did not prevent tools of this line (commonly called word processors) from taking over huge parts of the professional typesetting market – resulting in a substantial decline in the average typesetting quality of printed documents over the last 50 years.

The third line is different from the other two and fairly unique in the history of digital technology overall. In contrast to most other digitalization endeavors, Donald Knuth did not simply try to virtualize a pre-digital process paradigm but newly automated the full path from the author’s manuscript, formulated in a way that is convenient for the author, to the end product – a book with high quality typesetting.

That was a quick look into the history of digital typesetting – which i will come back to for comparison later.

Early digitally designed maps

Introduction of digital methods into actual map design on a larger scale came only somewhat later, in the 1990s and 2000s. This process showed strong similarities to the development of digital word processing, in the sense that it came with the promise – and was often rolled out with the aim – to eliminate the craft of professional map design (in analogy to professional typesetting).

Accordingly, the sector that largely pioneered digital map design in the early phase was the geo-sciences. Essentially every geo-sciences department at a university around the world at that time had its map design office staffed with professional cartographers tasked with producing maps to illustrate the work of the scientists. Now the scientists were already using digital methods to process the data they were working with and often thought of themselves as the better cartographers and of professional cartography as minor mechanical auxiliary work. Therefore these scientists were an easy and thankful target for the developing software sector around geodata processing. The tools that were developed and essentially sold like hotcakes at that time focused on interactive geodata manipulation and analysis and were offered with the promise that they would also take care of any visualization needs, without the need for specialized staff with specific map design knowledge. You can literally see this development in the map design featured in contemporary scientific publications, where visualizations created with such tools become more and more common during that time.

Overall, the GIS software sector with its technical focus then substantially displaced and marginalized the distinct professional domain of cartography, in the same way as digital word processing displaced the typography centered professional typesetting sector – at the cost of a substantial decline in map design quality.

The public mapping agencies followed essentially the same trend with a slight delay of a few years. Here cost cutting stood in the foreground as the driving factor. Over centuries in many countries official cartographic institutions had been an important device to demonstrate and project power and sovereignty – internally, but also externally, in particular in context of colonial and imperial ambitions. These efforts in many cases reached their peak in World War II but continued to stay important during the cold war period. With the end of the cold war and the spread of neo-liberal political agendas in western countries many of these national cartography programs were massively cut back – to a point that even maintaining national base cartography to previously established standards and keeping it up-to-date on that level became difficult.

Because this development in institutional map production came before or at the very beginning of digitalization we did not see a separate line of digitalization in professional cartography in the same way as we saw it in typesetting. Instead, digitalization in institutional cartography followed roughly the same route as in geo-sciences around GIS tools that focus on technical analytic and data manipulation capabilities. Some mapping agencies developed their own proprietary frameworks independent of or supplementing commercially available systems but they did not, for the most part, develop significant unique innovations, especially not in the domain of visual design.

In the products of public map agencies, these initial digitalization efforts came together with a decline in visual quality and design sophistication as well, in many cases leading to a complete loss of certain cartographic techniques in the repertoire and the institutional memory of these organizations. In some cases you can observe in the published maps how the map producers struggled with that and kept using old hand produced pre-digital layers and design components in their map long after digital production techniques had been rolled out otherwise because they lacked the tools to produce these digitally without an obvious major regression in the feature set of their maps. This in particular often applied to terrain depiction.

There is an additional line of digitalization specific to map production, that cannot equally be found in typesetting, that you can observe in the field of high visual quality maps and map like visualizations, usually produced by smaller independent design bureaus and independent cartographers. Here, digitalization came much later and made use of techniques and tools primarily developed in the field of digital art and graphics design. This is the only field so far where we have seen visual design innovation on a larger scale compared to the pre-digital state of the art. There has been some cross-over back from this domain into institutional map production, but cases where this happens are relatively rare.

The rise of interactive web maps

The big, disruptive development that hit the field of map production after the early digitalization steps that i have sketched above is the advent of automatically rendered interactive maps. This started in the mid 2000s and had a massive influence on all kinds of map design from the 2010s onward.

The analogy to typesetting becomes more murky here – though you could say that the equivalent to this development in typesetting is the spreading of hypertext and the World Wide Web as a major medium of publication and consumption of typeset text.

The main characteristics of this development were:

  • Maps becoming completely detached from a permanent physical manifestation and being produced primarily for consumption on digital display devices.
  • In connection to that, abandoning the map sheet concept and moving to a seamless and/or tile based production paradigm.
  • Replacing the limited set of scales maps were produced in – which varied between different cartographic traditions and each of which had its own specific map design paradigms – with a continuous sequence of scales separated by a factor of two.
  • Introducing the concept of navigating the map interactively, both in the spatial domain and across scales.
  • Abandoning the diversity of map projections used in traditional cartography in favor of universally standardizing on a single projection (and consequently, in most cases, essentially abandoning polar region cartography altogether).
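The last two points can be made concrete with the standard web map tiling math: the single projection virtually all web maps standardized on is Web Mercator, and with 256-pixel tiles the ground resolution halves with every zoom level – the "factor of two" sequence of scales. A small illustrative sketch:

```python
# Ground resolution of the Web Mercator tiling scheme -- each zoom level
# doubles the number of tiles per axis and thus halves the meters per pixel.
import math

EARTH_RADIUS = 6378137.0  # WGS84 semi-major axis, used spherically

def ground_resolution(zoom, latitude_deg=0.0, tile_size=256):
    """Meters per pixel at a given zoom level and latitude."""
    circumference = 2 * math.pi * EARTH_RADIUS
    return (circumference * math.cos(math.radians(latitude_deg))
            / (tile_size * 2 ** zoom))

for z in (0, 1, 2):
    print(z, round(ground_resolution(z), 1))  # 156543.0, 78271.5, 39135.8
```

Note the latitude factor: because of the single standardized projection, nominal scale varies strongly with latitude, and the scheme breaks down entirely toward the poles – which is the technical root of web maps essentially abandoning polar region cartography.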

As interactive web maps gained practical importance, the tech companies offering these became the main producers of practically used maps, taking over that role from public map agencies and traditional commercial map publishers. This is interesting because the tech companies of course, especially initially, depended completely on the traditional map producers for the data they used to create these maps. And as the traditional map producers became increasingly aware of this development, they began to view themselves increasingly less as map producers and more as the producers and owners of the underlying data, often leading to a tighter grip on said data by these institutions. This is the situation from which OpenStreetMap was born and became popular. But i don’t want to write about the history of cartographic data production here but about map design – so this just as a side note.

The automated production of maps was not initially an inherent part of this trend. But it soon became a major reason why this development was so disruptive. The characteristics of web maps as described above made automated rendering of the map very attractive, and accordingly the early years of web maps in the late 2000s saw the establishment of many of the basic automated rendering techniques that are ubiquitous in digital maps today – like the drawing of roads with round line caps and line joins as a simple method to create a visually consistent depiction of a road network from a data representation as a simple line graph, without context dependent adjustment of the drawing method. And, as in the case of GIS software, most of the underlying paradigms did not come from traditional cartography or graphics design but from technical applications – like CAD systems. Overall, the graphical paradigms that the production of web maps was based on back then – and that largely continue to form the main structure of rendering frameworks today – roughly matched the basic feature set of high level 2d drawing libraries at that time. In a nutshell: think SVG 1.0, not PostScript. This is interesting in particular when you look at how rendering frameworks today often try to retrofit these paradigms onto the much lower level WebGL framework (often with rather limited success).

This second step in the digitalization of map design came with a further decline in design capabilities. During the first digitalization phase, it was mostly techniques that could not be expressed efficiently in digital form with the constrained technical feature set of the tools that were abandoned. Now, with automatically rendered maps, the problem was that everything shown in the map needed to be derived from a data representation and a generic set of drawing rules. Techniques that required either a complex or scale specific data representation, or drawing rules too complex to be efficiently formulated in the languages employed for that purpose, were dropped in this second phase.

Interactive web maps continue to expand in their domain of application these days, in particular within public mapping agencies. Significant progress has been made in the 2010s and the last years in expanding the interactive features of the web map paradigm in various forms, but in terms of map design capabilities, development has essentially plateaued. I have discussed where the limitations in automated map design are currently and what would be needed in terms of tools and their capabilities to move map design to the next level. For the large tech companies that continue to dominate the field of interactive web maps, however, innovation in actual map design capabilities is not a very lucrative field of investment.

This is where – as i pointed out in the past on several occasions – the FOSS and the OSM communities could and should come in, but unfortunately have not so far. Where development in OpenStreetMap currently stands in terms of map design i am going to discuss in the next post.


That was a quick (and certainly selective) run through the history of digital map design and i am sure, in the eyes of many knowledgeable readers, i have missed important parts of this history. One important bottom line i tried to point out is that the whole process of digitalization – with its undeniable advantages in increasing efficiency and making maps more accessible to a huge number of people – came at the cost of substantial losses in design abilities and cartographic techniques, many of which had been developed and refined to very high standards in the centuries before. Many of the design methods that have been lost (or given up – depending on your point of view) already went out of use several decades ago, so that the last people who mastered these techniques are no longer alive, or at least retired and no longer practicing them.

And it is not that these abandoned methods are inherently incompatible with digital application or even with use in automated processes. In most cases simply no one has so far invested in developing these methods for digital application – or even in the precursor to that: developing the frameworks and languages to formulate such methods in digital form.

Coming back to the initially discussed analogy between map design and typesetting: Donald Knuth and TeX have been a truly extraordinary blessing for the development of digital typesetting – one that continues to set the bar in that field to this date and that forms the basis of a remarkable collection of high quality typographic tools. And this was not just luck – Donald Knuth was the right person, with the needed background, skills and motivation, at the right time, which provided him with the freedom and the resources to pursue his project. Even if there were a Donald Knuth 2.0 today who was into map design, the social and economic circumstances would make it unlikely for him (or her) to develop a TeX for maps. But that is not a reason to give up hope – even if practically useful progress could take significantly longer than i would like. My main concern is that with every passing year the collective memory of traditional cartographic techniques – abandoned not because they are obsolete but because we so far lack the ability to continue using and further developing them digitally – fades away more and more.

Addresses and Entrances

April 11, 2024
by chris

In this post i am going to introduce what i worked on at the last Karlsruhe Hack Weekend in terms of map design, as well as how i continued that work in the weeks afterwards.

What i worked on is improving the depiction of addresses and entrance nodes.


With entrance nodes, OpenStreetMap documents the location of entrances to buildings and their function. A node is placed on the outline of a building polygon and tagged entrance=*, potentially with access tags and address information in addition.

Entrance nodes have been used for quite some time in OpenStreetMap (since around 2012). They are an element of in-detail mapping of built-up areas and naturally come after the mapping of buildings. But in contrast to buildings – where data imports and automated mapping are fairly widespread in OpenStreetMap – entrances have so far been almost exclusively the domain of on-the-ground mapping based on local knowledge.

OpenStreetMap Carto has shown entrance nodes since 2018 with small square symbols. entrance=main and entrance=service are rendered with distinct symbols, while entrance=home and entrance=staircase are rendered identically to entrance=yes.

Entrances in OSM-Carto at z18 and z19 – link goes to double resolution version


Addresses are among the three most widely mapped things in OpenStreetMap – after buildings and roads/paths.

A lot has been written on address mapping in OpenStreetMap by others, in particular on its importance for the usefulness and success of the project and on the question of how to determine valid addresses of features from the data, given the variety of address systems used in different parts of the world.

The peculiar thing is that – while there is broad agreement on what tags to use for the different components of addresses, including various regional particularities – there is no agreement on the question whether an address is a feature (a distinct data object that has meaning on its own – and that abides by the One feature, one OSM element principle) or whether it is a set of secondary attributes to be applied to other features.

This lack of consensus on one of the most widely mapped things in OpenStreetMap has quite substantial consequences for map rendering – which will be a significant part of what this blog post deals with.

Of the address data in OpenStreetMap, OSM-Carto primarily renders housenumbers (addr:housenumber). It meanwhile also shows (in addition to addr:housenumber if that is tagged):

  • addr:housename (used when there is no housenumber and an individual address is identified by a locally unique name of the building)
  • addr:flats
  • addr:unit (only if there is no addr:flats)

All of these are shown in a single uniformly formatted labeling string when tagged on either a node or a building polygon, without any consideration of context. They are shown with lower priority than most other labels and symbols.

Address label rendering in OSM-Carto at z19
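The compound labeling described above can be illustrated with a small sketch. This is not OSM-Carto's actual implementation (which lives in SQL/CartoCSS) – the function name and the plain space separator are assumptions for illustration:

```python
def compound_address_label(tags):
    """Sketch of a compound address label: housenumber (or housename
    as a fallback), followed by flats (or unit as a fallback), joined
    into one string without further context."""
    parts = []
    if "addr:housenumber" in tags:
        parts.append(tags["addr:housenumber"])
    elif "addr:housename" in tags:
        # housename is only used when there is no housenumber
        parts.append(tags["addr:housename"])
    if "addr:flats" in tags:
        parts.append(tags["addr:flats"])
    elif "addr:unit" in tags:
        # unit is only shown when there is no addr:flats
        parts.append(tags["addr:unit"])
    return " ".join(parts)
```

This also makes the ambiguity visible that is criticized further below: from a label like "12 1-6" alone you cannot tell which part is the housenumber and which the flats.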

Rendering issues

There are a number of issues with the current rendering of addresses and entrances in OSM-Carto:

  • The rendering scheme for addresses incentivizes mappers to map addresses with separate address nodes – a mapping practice that is common, but not universally followed – and to position these addresses as labeling nodes where, between the other features in the map, there is space for an address label.
  • Combining several different address tags into a single labeling string leads to confusing labels with unclear meaning. See 3568.
  • The address labels start with low priority – but at z17, meaning that POI symbols/labels starting later will hide the address. See 3904.
  • More generally: Address labels are shown with lower priority than POI symbols/labels, but at the same location. This means that an address tagged on a POI that is rendered is typically not shown.
  • If a housenumber is tagged on multiple features – like on a building polygon, an entrance node and a separate address node – it is potentially shown several times. While this can be considered useful feedback to indicate the duplicate tagging, it does not actually point out most of the duplicates because they are on POIs – where the housenumber is usually not shown (see previous points).
  • Long addr:housenumber/addr:flats/addr:unit values containing lists of several numbers lead to large labels. See 4155, 4160.
  • Addresses tagged on entrance nodes (which is a common practice) lead to address labels drawn over the entrance symbol. See 3038.
  • Additional address information on entrance nodes is not shown. See 4687.
  • Some well defined values for entrance=* (entrance=garage, entrance=shop) are not shown.
Some of the issues with OSM-Carto address and entrance rendering. From left to right: Ambiguous semantics of compound address labels, overlapping entrance symbol and address label, housenumber shown in duplicate when tagged on entrance and building/address node, housenumber not shown for POI, manual address labeling with separate address node

Improved entrance node symbology

The square symbol for entrance nodes has proven to be a simple but effective method to show entrances that integrates well into the overall style. I have extended this concept to differentiate more entrance types than OSM-Carto does and to use enlarged symbols at z20+ for better readability.

The different entrance types and their visualizations at z19 and z20 – link goes to double resolution rendering

The symbols for entrance=entrance, entrance=exit and entrance=staircase are context dependent, adjusted to the orientation of the building edge they are placed on. How this works can be seen below. For the staircase variant the symbol is selected from two horizontally mirrored variants so that the stairs symbol crosses the building outline at an angle that leads to the best readability.

Context adjusted entrance symbols for entrance=entrance, entrance=exit and entrance=staircase at z20
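How such a context adjustment could work can be sketched as follows – a hypothetical simplification, not the actual style code; the 90 degree threshold and the symbol names are placeholder assumptions:

```python
import math

def edge_angle(p1, p2):
    """Orientation in degrees of the building edge from p1 to p2,
    used to rotate the entrance symbol to match the outline."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def staircase_variant(angle):
    """Pick one of two horizontally mirrored stairs symbols so the
    stairs cross the outline at a readable angle. The 90 degree
    threshold is a placeholder, not the actual style rule."""
    return "stairs-mirrored" if (angle % 180.0) > 90.0 else "stairs"
```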

New address labeling concept

More extensive than the entrance node symbology improvements are the changes made to address labeling. Here is the basic scheme of the new labeling.

New labeling scheme for addresses on address nodes and entrance nodes in various contexts at z19 – link goes to z20 version

What has changed compared to OSM-Carto is in particular the following:

  • Address labels on entrance nodes are not centered any more but drawn next to the node.
  • addr:housename labels are not shown on entrance nodes (where they don’t really make sense).
  • addr:unit and addr:flats are shown as separate labels in different directions relative to the labeling location (top right, bottom right) and with a somewhat different label style. If no housenumber is tagged, a small dot is shown as a reference point.
  • On entrance nodes, the ref is shown in addition.
  • Multiple identical housenumbers on or within the same building polygon are de-duplicated.
  • addr:housename modifies the label arrangement to ensure sufficient space for a longer name.
  • Label sizes and arrangement are adjusted for z20+ for better readability.

In addition, housenumbers are now also shown when a different symbol or label prevents an address label from being shown at the usual location. This is achieved by trying different offsets of the label in different directions with the help of mapnik anchors, which i already discussed in previous posts.
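The fallback placement can be sketched as a simple "first free position wins" loop – a minimal illustration of the idea, not the actual mapnik implementation (which works declaratively via anchors):

```python
def overlaps(a, b):
    """Axis-aligned overlap test for boxes given as (x1, y1, x2, y2)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def place_label(size, anchor, offsets, placed):
    """Try offsets (dx, dy) in order around the anchor point and return
    the first label box that does not collide with an already placed
    box; None if every candidate position is blocked."""
    w, h = size
    x0, y0 = anchor
    for dx, dy in offsets:
        box = (x0 + dx, y0 + dy, x0 + dx + w, y0 + dy + h)
        if not any(overlaps(box, p) for p in placed):
            return box
    return None
```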

As currently implemented, the de-duplication of housenumbers is only active between the building polygon, entrance nodes on the building and POIs within it. Several POIs with the same housenumber within a building that does not itself carry that housenumber get labeled separately if space permits. This ensures that in buildings without a unique housenumber it is shown which POIs have which housenumber – since some might share one number while others have a different one.
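A minimal sketch of this de-duplication rule, assuming the candidate features for one building have already been collected (the function and data layout are hypothetical):

```python
def deduplicate_housenumbers(building_housenumber, features):
    """features: (kind, housenumber) pairs for entrance nodes on the
    building and POIs within it. Housenumbers identical to the one
    already labeled on the building are suppressed; differing ones
    keep their own label (space permitting - not modeled here)."""
    return [(kind, hn) for kind, hn in features
            if hn != building_housenumber]
```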

Display of housenumbers on POIs with de-duplication (z19)

From left to right we have here:

  1. Building with addr:housenumber and entrance node with same addr:housenumber – the housenumber is only shown once on the building.
  2. With name tag on the building in addition – building name is displayed, as well as housenumber above.
  3. Generic shop type (shop=*) tagged on building in addition – shop gets shown with symbol and name label, housenumber left of symbol.
  4. Same with shop=supermarket – housenumber location gets adjusted for the larger symbol.
  5. shop=supermarket as a separate node instead (also with same addr:housenumber) – housenumber gets displayed for the building only, moved because of the symbol.
  6. Same with a different housenumber on the shop – both housenumbers are shown.

The logic to display building names is modified to avoid name labels being shown when the name tag is ambiguous. If a shop or office is tagged on a building, it is not clear whether the name tag is the name of the building or of the shop/office. You can see that in the case of offices, where OSM-Carto at z17 shows the name as the building name – while at the higher zoom levels the same name is shown as the office name. My change avoids this:

Same as above, but for offices and at z17, z18 and z19 – showing how names on buildings are only labeled if they non-ambiguously indicate a building name

And long lists in addr:housenumber, addr:unit and addr:flats are reformatted and shortened as needed:

Abbreviation and re-formatting of long lists of numbers in addr:housenumber, addr:unit and addr:flats
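The reformatting of long number lists could look roughly like this – a hypothetical Python sketch, not the actual implementation; it collapses consecutive runs into ranges and truncates anything beyond a maximum number of parts:

```python
def abbreviate_number_list(value, max_parts=4):
    """Collapse runs of consecutive integers in a semicolon-separated
    tag value into ranges and truncate overly long results, e.g.
    '1;2;3;4;10;12' becomes '1-4;10;12'."""
    try:
        nums = [int(p) for p in value.split(";")]
    except ValueError:
        return value  # non-numeric values are left unchanged
    runs = []
    start = prev = nums[0]
    for n in nums[1:]:
        if n == prev + 1:
            prev = n  # extend the current consecutive run
        else:
            runs.append((start, prev))
            start = prev = n
    runs.append((start, prev))
    parts = [f"{a}-{b}" if a != b else str(a) for a, b in runs]
    if len(parts) > max_parts:
        parts = parts[:max_parts] + ["..."]  # shorten long lists
    return ";".join(parts)
```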

Real world examples

Here are a few practical examples based on actual real world data – links go to the corresponding OSM-Carto rendering:

Spiekeroog, Germany at z18

St. Petersburg, Russia at z18

Verona, Italy at z19

Portland, US at z19

Freiburg, Germany at z19


What i have shown here are two things:

The first is the entrance symbol modifications, where i demonstrated how a relatively compact symbol can be varied in subtle ways to convey a more differentiated classification, including context dependent designs based on the orientation of the building wall the symbol is placed on.

The second is the improvement of address labeling in the various aspects i discussed above in more detail. This relies substantially on the use of mapnik anchors – which i discussed and used previously – for conditional rendering of labels. The techniques used here are in particular:

  • De-duplication of multiple identical housenumbers on the same building.
  • Adaptive arrangement of different address label components depending on which tags are present (address tags and entrance tag).
  • Trying alternative placements of housenumber labels in the presence of other blocking labels.
  • Not interpreting name tags as building names when ambiguously tagged together with POI tags.
  • Abbreviating long lists of numbers in address tags.

There is still a lot of room for improvement with the address labels. In particular, the strategy for trying alternative placements to show housenumbers next to POIs is pretty basic and could be improved, especially in dense labeling situations at the lower zoom levels, where many housenumbers currently end up pretty far away from their mapping location – to the point where it can be confusing.

The most controversial change might be the de-duplication. A point can be made that this paves over inconsistencies in tagging in violation of the One feature, one OSM element principle. But as i already discussed, the OSM community is divided on the matter. And even if there were agreement among mappers that addresses are features and not secondary tags – implementing such an agreement against the massive collective pressure and resources of the SEO crowd is likely not going to be successful. OpenStreetMap will most certainly have to live with the fact that different semantic concepts will be used in parallel for address mapping for the foreseeable future. De-duplication is needed to deal with that in map rendering. And in contrast to other de-duplication problems, we here have the advantage of the building as a clear geometric frame of reference.

As usual the style modifications discussed here are available on the Alternative Colors style repository.

Antarctic images for Mapping

March 18, 2024
by chris

More Antarctic images for mapping

I processed another set of satellite images of the Antarctic for mapping in OpenStreetMap – further extending the fraction of the Antarctic for which up-to-date images are available to mappers.

As before – depending on how quick the editors are in adding the new images to their databases – these might soon be available in your editor of choice. But you can also add them manually based on the links provided on my preview map.

A few highlights:

Representation of mappers in OSMF membership

February 4, 2024
by chris

As i have indicated in my pre-AGM comment at the end of last year, i intend to look at developments in the OpenStreetMap Foundation less on an acute level regarding current events and to focus more on long term developments, trying to help people better understand those. This is the first post i am writing under that paradigm.

In 2019 i last looked at the membership structure of the OSMF and how well it represents the active mappers in OpenStreetMap in their geographic distribution. Since then others have analyzed the numbers as well – but i thought that after nearly five years a look along the same lines as back then might be useful.

Now there are people who dismiss this kind of analysis as irrelevant because it is just about mapper representation and does not consider non-mapping contributors to OpenStreetMap. But in most cases this argument seems primarily used to justify a continuation of the existing cultural dominance in the OSMF. Because, evidently, non-mapping contributions to OpenStreetMap typically require knowledge of and familiarity with the English language much more than mapping does. So in my eyes, basing the geographic representation in the OSMF on the geographic distribution of mapping would be as close as it practically could get to a balanced representation of the OSM community overall in that regard.

The numbers used are taken from recently published data. Unfortunately the OSMF still has no automated regular reporting of membership statistics and i did not want to take up time from the MWG volunteers by asking them for numbers specifically for me. Keep in mind that there are different concepts of membership that can be analyzed – either the paid up members (those who are hypothetically eligible to vote in a general meeting at the moment) or all formal members (including those in grace – meaning those who last renewed their membership between one and two years ago and who are therefore still considered formal members despite not being eligible to vote).

Like in 2019, i put the OSMF memberships per country of residence in relation to the OSM mapping contributor statistics published by Pascal Neis. Keep in mind that those are based on where mappers are active, not where they are from (which is typically not known). And since Pascal meanwhile also publishes estimates of how many of the active mappers are part of organized mapping activities, i did my analysis separately for all mappers and for only the non-organized mappers.

Otherwise, the columns are mostly identical to those in 2019:

  • OSMF members: number of OSMF members from that country (normal and associate, paid and active contributors) according to data provided by the MWG
  • Mappers/Day: average number of mappers from that country active per day according to Pascal’s statistics (averaged over the last 52 weeks)
  • expected: expected number of OSMF members from that country assuming proportional representation and same total member count (1929)
  • representation: percentage of actual representation compared to expected (100 indicates proportional representation)
  • mismatch: difference between expected and actual number of OSMF members, negative values indicate too few members for proportional representation
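The derived columns follow directly from the raw counts. A small sketch of the calculation (function name and data layout are my own, not from the published data):

```python
def representation_stats(members_by_country, mappers_by_country):
    """For each country return (expected members, representation in
    percent, mismatch), assuming OSMF membership proportional to the
    number of active mappers per day."""
    total_members = sum(members_by_country.values())
    total_mappers = sum(mappers_by_country.values())
    stats = {}
    for country, mappers in mappers_by_country.items():
        expected = total_members * mappers / total_mappers
        actual = members_by_country.get(country, 0)
        representation = 100.0 * actual / expected if expected else 0.0
        # negative mismatch: too few members for proportional representation
        stats[country] = (expected, representation, actual - expected)
    return stats
```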

Because the total number of OSMF members has significantly increased since 2019, the absolute numbers are a bit difficult to compare – it is mostly the representation numbers that can be compared in that regard.

What i included in addition this time are the other countries not on the list – those where there are mapping activities but from where there are no OSMF members at all. This is the gray line. Of course for these the representation is zero.

Here are the numbers, sorted by average active mappers/day:

Representation of mappers in OSMF membership – 2024, link goes to larger version

Here is a CSV file with the data from that table.

What i see when looking at the data is in particular:

  • The strength of the over-representation of the most strongly represented countries has, overall, substantially decreased. If we exclude the countries with very few mappers and <5 OSMF members (for which the representation calculation is extremely inaccurate) there is only one country now with more than 300% representation (Luxembourg, 413%/402%) and the two largest countries - both in terms of mapping and in terms of OSMF membership numbers (Germany and US) - have both decreased in their over-representation (the US very strongly, Germany somewhat less). The UK is on exactly the same level as in 2019 (254%) while the Netherlands have increased quite a bit (from 170% to 207%).
  • More importantly the representation of the most severely under-represented countries in 2019 (Poland, Indonesia and Russia) has increased substantially. This is most impressive for Poland (8% to 75%) but also for Indonesia (9% to 29%, 34% if you exclude the organized mapping activities). Russia went from 11% to 28%.
  • The most under-represented single countries with a large number of active mappers now are Russia, China and Iran. Japan has more or less retained its level of representation, but is still quite low among the more active countries with 41% representation.
  • The most important observation is the same as in 2019 – that the countries with relatively small mapping activity (the long tail) are collectively severely under-represented. This is most obvious in the form of the line for ‘others’ – which represents all the countries with no member in the OSMF at all. These countries together would, based on their mapping activities, deserve about 150 seats in the OSMF membership.

The most significant factor that has influenced the representation in the past years is without doubt the active contributor memberships, which remove the hurdle of needing to pay to be an OSMF member.

This was just a quick look – readers should of course make their own observations and draw their own conclusions. Feel welcome to comment with your thoughts below.

All of this is of course only on the subject of geographic representation. And even if this aspect is further improved and we ultimately achieved a proportional representation of the whole OSM community in other aspects as well, that is no guarantee that the OSMF is run according to the needs of the community.

A big issue, which i already hinted at by separating out the organized mapping activities, is the over-representation of people with an OSM related business or career interest. Among mappers this is more or less equivalent to organized mapping – and although we have no fully reliable data on which mapping activities belong to that, thorough analysis of mapping behaviour can provide a good estimate. We do not, however, have any reliable information on which OSMF members have OSM related business or career interests. What we can see meanwhile is that among people active in the OSMF in a form visible to the outside observer (both volunteers acting on their own initiative and members of appointed bodies) the hobbyists are clearly in a minority now and their percentage is further decreasing. But whether this can be extrapolated to the OSMF membership is not quite clear. If it can, that might be a more severe representation problem than the geographic distribution. If not, the mismatch between the social structure of the OSMF members and the people active in the OSMF is likely to create problems of its own.

3d view of the Pyrenees based on Musaicum EU-plus

January 15, 2024
by chris

Musaicum EU-plus – additional layers

Back in July i introduced the Musaicum EU-plus – a 10m resolution satellite image mosaic of Europe. I have now finalized two additional variants of this image that i would like to introduce here.

Both of them are fairly specialized data sets, so i have not so far listed them among my products. I produced them mostly for internal use in the production of my own higher level visualization products. But they are also available for external use on request.

Shading compensated image mosaic

I already provided a preview of this when introducing the standard mosaic in July. In addition to the standard surface reflectance image product, i am also offering a shading compensated version. This is not based on the very poor quality L2A data provided by ESA but uses a custom algorithm i developed specifically for visualization applications.

Musaicum EU-plus shading compensated in the Western Alps – click for larger version, original, not shading compensated for comparison

Musaicum EU-plus shading compensated in the Pyrenees – click for larger version, original, not shading compensated for comparison

This shading compensated version of the mosaic has now been produced and evaluated for the full coverage area of the Musaicum EU-plus.

Musaicum EU-plus shading compensated – coverage area

Vegetation and water map

In addition to the visual color images i also produced a fractional landcover data set with the same grid specifications – comparable to similar data sets i had produced for regional mosaics before.

This data specifies the fractions of a pixel covered by

  • herbaceous vegetation
  • woody vegetation
  • water
  • bare ground

Visualized using different colors for the different landcover classes, this looks like the following:

Fractional landcover data visualization – Western Alps

Fractional landcover data visualization – Pyrenees

This data is likewise available for the full coverage area of the Musaicum EU-plus but should be regarded as more experimental with respect to the herbaceous/woody vegetation split, in particular at higher latitudes, where low sun angles and the resulting differences in illumination make a consistent classification difficult.

Application examples in maps

Here are a few examples of how this new data can be used in map production. These are in web mercator projection at z13 and z12.

Musaicum EU-plus image – click for larger area

Musaicum EU-plus shading compensated with artificial northwest shading – click for larger area

Musaicum EU-plus shading compensated with artificial northwest shading and contours – click for larger area

Landcover rendering based on Musaicum EU-plus fractional landcover layer – click for larger area

Musaicum EU-plus image – click for larger area

Musaicum EU-plus shading compensated with artificial northwest shading – click for larger area

Musaicum EU-plus shading compensated with artificial northwest shading and contours – click for larger area

Landcover rendering based on Musaicum EU-plus fractional landcover layer – click for larger area

Musaicum EU-plus image – click for larger area

Musaicum EU-plus shading compensated with artificial northwest shading – click for larger area

Landcover rendering based on Musaicum EU-plus fractional landcover layer – click for larger area

Application examples in 3d

One of the main uses of the shading compensated imagery is of course the production of 3d views with freely adjustable lighting, independent of the light direction when the images were taken. Here is a demonstration of that.

Pyrenees with early morning lighting

Pyrenees with afternoon lighting

Pyrenees with late afternoon lighting

Pyrenees with evening lighting

And another example from the Alps – compare also to this older view.

Bernese and Uri Alps with afternoon lighting

Bernese and Uri Alps with morning lighting

Further 3d rendering examples:

Ordesa Valley, Pyrenees

Pennine Alps, Italy/Switzerland

Mt. Etna on Sicily, Italy

Apart from the Musaicum EU-plus these visualizations are using data from: OpenStreetMap contributors – ODbL, L’Institut national de l’information géographique et forestière (IGN) – Licence Ouverte 2.0, Istituto Nazionale di Geofisica e Vulcanologia (INGV) – CC-BY 4.0, Bundesamt für Landestopografie (swisstopo), Centro Nacional de Información Geográfica (CNIG)