November 15, 2022
by chris
1 Comment

The OpenStreetMap Foundation in 2022 – looking back

We are approaching the end of 2022, which means the annual general meeting of the OpenStreetMap Foundation is coming up and it is time again to look at what happened during the last year and to update my outlook for the future of the organization.

I am – as i indicated previously in other blog posts – increasingly reluctant to write more in depth on OSMF matters. Apart from the increasing commercialization of OSMF politics and the increasing dominance of lobbying in political deliberation in the OSMF, which i have mentioned and discussed many times before, i also see a progressive decline in the extent and the standard of public discourse on OSMF politics. Outside of the commentary i provide and a few sporadic, isolated remarks of individuals on various – often volatile – channels, substantial analysis and open exchange of diverging views, and of the arguments behind them, has almost completely vanished from the public sphere of communication. I will get to some aspects and consequences of that development in more detail in the following discussion of the last year.

The funny thing is that i of course have my own business interests in the wider context of OpenStreetMap so i imagine i could (and maybe should?) feel at home and among like-minded people in today’s OSMF. But i don’t. I find the whole idea of political lobbying for economic interests both appalling and pointless in the context of a project like OpenStreetMap. The idea that i could use anything but arguments and reason presented in open public discussion (open in particular to be countered and refuted by anyone interested) to influence decisions made within the OSMF to my – direct or indirect – benefit is completely out of the question for me. That applies to both decisions potentially for my direct benefit (i.e. trying to get into the inner circle of people whose work we know and enjoy) and to trying to influence policy decisions in the OSMF in general in a direction that seems likely to be to my benefit.

Clouds over water with sailboat

I like to emphasize that i am not naive in that regard. Of course in the business world arguments and reason are often not the main factors in decisions that are made. My argument against this kind of lobbying is not only on the moral level but also that it is pointless. As i have outlined in my past discussion of the corporate takeover of the OSMF – if the process of making policy and money spending decisions in the OSMF is a matter of (mostly non-public) negotiations of interests rather than an open exchange of arguments and reason, it would be an illusion to believe that a small actor like me (or any SME for that matter) could successfully lobby for interests that do not align fully with those of the big corporate actors with multi-million Euro lobbying budgets at their disposal. In business, you have to pick the battles you can win. And i can hold my ground in an open struggle of arguments and reason even against a multi-billion Euro giant with a lobbying team of a hundred people, but i stand no chance if this is not about who has the best arguments for their ideas but who puts most power behind their interests.

And the question where the OSMF stands on that scale – between decision making based on arguments and reason and negotiation of interests – is, as usual, a significant part of what i will be looking at in the following.

Changes in the 2022 board

The past year in the OSMF (by which i mean the political year of the organization, starting with the annual general meeting (AGM), rather than the calendar year) started with the former board chairperson, Allan Mustard, leaving the board. As i have already indicated in a previous blog post, with Allan departing, the last board member with a distinctly non-technical background has left the board. And while this on its own is not such a major event (he was just one of seven board members after all), it marked the conclusion of a long-term trend in the OSMF.

I disagreed with Allan on a lot of questions of OSMF politics and i have articulated that disagreement on many occasions – as those of you who regularly read this blog have probably seen. But at least Allan regularly exposed and presented his views to public scrutiny. As a relative outsider, he knew that while he brought in valuable perspectives that were otherwise missing on the board, he also depended on the input of people with more specific inside knowledge to make meaningful decisions – even if that meant listening to people (like for example me) with whom he fundamentally disagreed on many things.

That openness to actively seek and listen to diverse and diverging views and perspectives is of course only the first step in the process of making good governance decisions. The next one would be to question your own views and preconceptions in light of the diverse arguments received and to discuss the conflicting ideas broadly and with an open mind, to ultimately make good decisions based on balanced reasoning and an open struggle of arguments. This second part – in particular questioning your own views and preconceptions and discussing the conflicting ideas broadly – is, from what i observed, what Allan struggled more with. But, overall and in retrospect, his public communication can be considered a shining light compared to that of the current board members.

The reason why i emphasize this here is that in the upcoming board election, and in the process of four new OSMF board members starting their terms that will follow, these abilities could be worth looking at.

clouds over water with sailboats

What happened in the past year

Overall, quite a lot of things have happened in the OSMF during the past year, and largely due to the very poor public communication of the board and the lack of interest from the OSMF membership, much of this has flown under the radar. The remarkable circumstances of some of the decisions (i will get to some cases in the following) have led me to conclude that hardly any OSMF member currently feels inclined to do any substantial supervision of the work of the board these days. A comprehensive writeup of what has happened, from an independent outside perspective, would therefore probably be valuable. I, however, have neither the time nor the inclination to really do that, so what follows is just a somewhat subjective and sparse cross section.

clouds over water

Present throughout the whole year was the never-ending story of takeover protection/membership prerequisites. In short: At the 2020 AGM the membership, on suggestion of the board, passed two resolutions: (a) one mandating the board to work on a set of proposals to require OSMF members to have contributed to the project in some meaningful way before they can be accepted as members and (b) one mandating the board to investigate the risk of paid voting in the OSMF membership and potential measures to safeguard against that possibility. Both resolutions mandated completion of the tasks within a year.

The board members apparently could not agree on how to proceed on these resolutions, so they appointed a special committee on takeover protection to investigate the paid-voting and takeover risks. Nothing that committee produced was publicly disclosed though. There was apparently a report prepared internally in late 2021/early 2022, but no information about it was disclosed to the members.

Before the 2021 AGM, the board realized they could hardly go into the AGM without having acted in any substantial way on the resolutions of the members – which, as mentioned, ironically had originally been approved by the members on the initiative of the board. So they went into the AGM with the promise that there would be an extraordinary general meeting in April, at which the board wanted to provide a proposal for membership prerequisites for the members to decide on. It was already kind of foreseeable that this would be unlikely to happen, and not surprisingly the plans for the April general meeting were cancelled. Now in late 2022 we are in a similar situation as last year, so what the board did at the October board meeting was to rush out a survey of the OSMF members (so not a binding vote like they had intended for April) about what the members think of the decision the board had already made in June – which, however, has so far apparently not been implemented. And this is also not what either of the original 2020 resolutions called for, which was proposals to be submitted to the members at an AGM.

This probably sounds to outsiders like i am making this up as a parody – but i am not.

clouds over water with sailboats

My own take on this: Adding the requirement for a voting member of the OSMF to have shown some interest in, and to have gained at least some superficial familiarity with, what OpenStreetMap is all about is prudent, and 15 lifetime mapping days is a reasonable cutoff for that. But this could have been done a long time ago without making a fuss and dragging it out for years, taking everyone's time with repeated votes and surveys. If the board wanted to ask the members what they think about this idea, they could have done what they were mandated to do in the 2020 resolutions: make a resolution proposal for the AGM and let the members actually decide. Making a non-binding survey about a single policy question a few weeks before the AGM, where they could have presented it as a resolution proposal, is disrespectful and patronizing. By first dragging their feet for two years, then making a decision that is decidedly not what the resolutions called for, and then, instead of actually implementing it, asking the members in a survey what they think about it, the board is, i think, seriously damaging their relationship with the OSMF members.

And no one should for a second believe that such a requirement will substantially limit corporate influence on the OSMF.


A rather significant development during the past year was the rollout of the behavior regulation system and of the centralized communication platform. I have commented on both more specifically already. As indicated in those comments, observing the hierarchical and centralized community management ideas from the OSMF colliding with some of the more diverse and independent parts of the OSM community is rather interesting – though technically cumbersome, because you have to either use the mailing-list mode with its broken references (so without any threading, as a flat stream of messages) or the inefficient, patronizing, unstructured and poorly readable web interface. While the traditional communication infrastructure provided by the OSMF in the past (specifically the forum and the mailing lists) was managed strictly from the bottom in a decentralized fashion, the new platform has been rolled out under tight centralized management, with a community manager paid by HOT. Every local group of people that wants to have their own space on the platform needs to undergo a formal procedure under the auspices of this central management before they get accepted, and they are obliged to continue to abide by the centrally imposed procedures after that as well. Despite these constraints, there has been quite a bit of uptake of the new platform by some local communities (including the German one), but it is also visible that quite a lot of filtering out of people happened as a side effect – partly because of the requirement to accept the top-down imposed social norms and procedures, partly because of the lack of a truly self-determined way to participate on these channels.

A lot more could be said about what can be observed there these days in terms of social dynamics, but i will leave that for another time.

clouds over water with sailboats and freight ship

In May, the board started the first board committee that includes non-board members – the Fundraising committee. The non-board members selected for that by the board are all Americans – evidently people whose work we know and enjoy, from the perspective of the Fundraising Committee chairperson. The committee's scope was later extended to also include budgeting – meaning that the OSMF's finances are now not only externally (through the main financiers of the organization) but also internally firmly in American hands.

clouds over water

Another thing that happened in the past year is that the OSMF board went ahead with hiring their first employee. In doing so, the board followed the pattern established with Martin Raifer (contracted as iD developer in 2021): (a) signing the contract without a formal board decision and (b) not publishing the contract terms. This is remarkable because an employment contract creates substantial longer-term obligations for the OSMF as a whole. Note that the OSMF also has no regular financial auditing with an auditor appointed by the members independently of the management. So all of these contracts and long-term financial obligations are entered into without any independent oversight. Apart from that, one of the important side effects is that this hiring decision has probably put an end to the considerations to move the OSMF out of the UK in the future, at least in full (because you can hardly have an employee in a country where you have no legal presence as a business). The whole idea of moving the OSMF is one of the things where apparently nothing happened during the past year – which is unfortunate, because, as discussed in the past, this would have been the chance to implement some really important structural changes in the OSMF that are difficult or impossible under UK corporate law.

clouds over water

Speaking of what has not happened during the last year – there is a long list of such things. And with the OSMF board being volunteers, you of course have to be careful to manage expectations here. So i will only pick a few things where the board has in the past put obligations on themselves and then failed to follow through, or where these obligations derive from the core of the OSMF mission. The takeover protection/membership prerequisites matter is already discussed above in more depth.

Not much happened either on the front of implementing the FOSS policy. In late 2021, the board had asked the FOSS special committee to do another review of the (mostly unchanged) status quo in 2022. It is unclear if anything happened on that front, but it is clear that the main uses of proprietary software and services remain unchanged, in particular the use of Google for mail (i am kind of curious, by the way, what would happen if a board member were ever elected who refuses to get a Google account…).

Nothing of substance has happened on the front of getting HOT to finally obtain a trademark license from the OSMF for use of the OpenStreetMap trademark. There seem to have been a lot of behind-the-scenes talks, but the bottom line seems to be that HOT still refuses to acknowledge the OSMF's exclusive right to the OpenStreetMap name and their need to license it – and that the OSMF board seems to let them get away with that, maybe in exchange for a wad of money.

Equally, nothing of substance has happened on the front of making the OSMF corporate members follow the Attribution Guidelines. This has been an open issue at least since the board adopted the Guidelines in 2021 (before that, the argument was: we need to finish the Guidelines first). There was apparently quite a bit of pushing the matter back and forth between the Licensing working group and the board. There have been pushes from the community for the board to finally do something – with no effect so far. This kind of confirms my assessment from last year that the OSMF is now inherently unable or unwilling to do anything substantial against the interests of its major financiers, even on something at the core of the OSM community's interests.

clouds over water

Also the whole matter of the software dispute resolution panel for the iD editor – boldly introduced in 2020 and finalized in early 2021 – came to a screeching halt soon after. None of the formal aspects of the panel as it was introduced have subsequently been followed by the board and the panel:

  • The staggered two-year terms of office of the panel members are not followed; the board instead recently decided (after nearly two years) to summarily extend the terms of all the original members indefinitely.
  • No conflict of interest rules have been created for the panel so far.
  • No discussion with the OSMF membership of the panel and the decision to install it has happened so far (it was meant to happen after one year).

As a reminder: Originally, when the panel was created, the board had asked the working groups if any of them would be interested in taking over that function; the Data working group had indicated interest, but the board rejected that and instead decided to create a dedicated panel appointed by themselves.

I think it is completely understandable that in light of this and of the now completely unclear governing rules of the panel, no other software project decided to opt into being supervised by it.

seagull flying

Something i have already mentioned in the past: The newly created Engineering working group made a rather promising start, developing a fairly elaborate project funding framework during the first part of the year – only to then start handing out money directly through single-tender action, without the open call for tenders mandated by the framework.

clouds over water

Final point – and i have left the most remarkable for last – the OSMF board continues to ignore the need for more meaningful anti-corruption rules and oversight. I have discussed the general problem extensively in the past and also, more recently, pointed out that one of the main problems seems to be the board's inability to recognize their inherent inability to reliably identify and acknowledge their conflicts of interest and to act appropriately upon them. But this year's non-public mid-month meeting in October moved this to a completely new level. During the meeting, a board member declared a conflict of interest on a matter discussed (a funding request by a third party) because the requesting party are “good friends” of his. Then, in the circular vote following that discussion, meant to decide whether to offer the money, he not only participated in the vote, he also cast the deciding vote (4 to 3) in support of the funding request.

In general, anti-corruption laws in European countries and the UK tend to be what we in German call toothless tigers. Even the least capable among corrupt people will usually be able to weasel around these laws and avoid violating them directly while doing their thing without serious constraints.

I am not claiming that any board member had bad intentions here – yet i cannot help but state that the board apparently managed to run afoul of anti-corruption principles in the most ostentatious way i can think of, and none of the board members seems to have seen any problem with that. And if that is the case, how on earth can anyone seriously assume that the practical procedures followed by the board would prevent any substantial and serious corruption from happening?

clouds over water with ships

Some positive things to end with

As in past years, this first part with the analysis of past year’s events in the OSMF is going to be followed by a discussion of trends visible and likely perspectives for the next years. But i don’t want this post to end on such a drastically negative point.

Not everything in the OSMF was bad in the past year. In particular, i saw some positive trends in some of the working groups. I already mentioned that the Engineering working group had a fairly good start initially, and with some more diligent fiduciary responsibility and oversight the failure mentioned above could have been avoided. The irony is that in the current constellation that oversight would have been the responsibility of the board – finally something where the board would have had a good reason to intervene in the work of a working group, but they did not.

I was also impressed that the State of the Map working group recently showed a remarkable amount of backbone towards the board's attempts to interfere with their work behind the scenes. I had renewed my past criticism of the current concept of the SotM conference again this year, and while i don't know if my comments played a role, it seems that for the specific question of the location of SotM 2023 the working group members discussed their options and the various arguments that should play into that choice in depth, and made the decision they considered best supported by the arguments – for Cameroon – despite knowing that it was going to be unpopular among many loud voices. This would strongly point in the direction of what i had in the past suggested as a possible future concept for international SotM conferences, namely to hook onto different regional conferences in a rotating fashion all over the world, with the international visitors coming in as guests visiting an event of regional design, rather than a traveling conference of ‘western’ design moving around the world to places considered compatible with that style of conference. And when the OSMF board then intervened, the working group made the prudent decision not to give in and instead said: Either we do it the way we consider to be the right one – or not at all. The lesson to learn from this: You need to accept stepping on people's toes if you want to initiate meaningful change on a larger scale. Whether this bold initiative will bear fruit of course remains to be seen in the coming year, and i am not too optimistic about that.

And finally, it was nice to observe that the Licensing working group this year actually started to more actively reach out to OSM data users about providing proper attribution as required under the ODbL. They understandably excluded the big corporate members, some of which fail to properly attribute, from that effort and pushed the responsibility to deal with those to the board – where, as discussed above, it still sits. This is of course highly problematic because it communicates the message: If you are a small OSM data user you have to abide by the license and the OSMF might come after you if you don't – but if you are large enough and have paid your dues to the OSMF, you get a free pass. That is not the fault of the working group of course. This activity is in particular noteworthy since, with the Attribution Guidelines, the board back in 2021 had not accepted the LWG draft as is but made substantial modifications to it.

This is the first part of a series of blog posts on this year in the OSMF – see also the second part with some thoughts on this year’s board election and a third part with observations on current trends and outlook.

clouds over water with sailboat

Green Marble 3 southwest China example

October 15, 2022
by chris

Green Marble version 3

I am pleased to announce a major update to my global satellite image product Green Marble.

The Green Marble is a medium resolution whole-earth image, offering unsurpassed uniformity in quality across the whole planet surface in both water and land coloring, based on recent data, and one hundred percent cloud free.

Green Marble 3 in southwest China

Green Marble 3 in southwest China (GM 2.1 for comparison)

The newly produced version 3 provides a complete update of the land surface depiction – now based primarily on Sentinel-3 data (like the water rendering already was in version 2.1) – using a completely new aggregation methodology based on experiences from earlier versions of the Green Marble as well as on techniques developed for regional mosaics.

From the user perspective, version 3 is also a huge improvement in quality, and i will provide some examples of that in the following.

Data sources

The first version of the Green Marble was produced exclusively from MODIS data. With both satellites carrying MODIS instruments reaching the end of their life, it has however become important for a future-proof update path to move to other image sources. For water surfaces, i had already moved to Sentinel-3 data in version 2 of the image. Apart from the foreseeable end of the supply of new MODIS observations, MODIS data also comes with various issues. In addition to the problems stemming from the age of the instruments and the fact that only one visible-light spectral band (red) is available in high spatial resolution, the more recent versions of the available MODIS surface reflectance data are subject to some pretty severe systematic errors. These essentially make the data unusable for visualization applications without investing significant effort into mitigating them. This already made production of the Green Marble from MODIS data quite difficult in version 2 and is also likely one major reason why you hardly see any newer large-area visual-color mosaics made from MODIS data any more.

Cascade Range and Palouse example (large: GM 2.1/GM 3)

Sentinel-3 land surface reflectance data has its own issues (as i have discussed), but most of the inaccuracies are random in nature and therefore not too troublesome when you do pixel statistics. The real problem is that Sentinel-3 Synergy data is incomplete due to the fairly silly masking of water areas. Because of that, i moved to processing Sentinel-3 images from the Level-1 data, using Synergy Level-2 data for calibration of the atmosphere compensation. This of course requires processing a much higher volume of data. Overall, about 750 TB of data were downloaded and processed for the production of the Green Marble version 3 – including 120 TB of MODIS data, 320 TB of Sentinel-3 OLCI Level-1 data, 170 TB of Sentinel-3 Synergy Level-2 data and 140 TB of Sentinel-3 OLCI Level-2 water reflectance data.
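To illustrate why random inaccuracies are tolerable when doing pixel statistics – this is not the actual Green Marble processing, just a minimal sketch of the general principle of robust per-pixel aggregation, assuming a stack of co-registered reflectance images with invalid or cloud-masked pixels marked as NaN:

```python
import numpy as np

def median_composite(observations):
    """Aggregate a stack of co-registered reflectance images
    (shape: n_obs x height x width x bands) into a single mosaic.

    The per-pixel median suppresses random errors and residual
    clouds (which mostly bias values upwards), while observations
    marked invalid (NaN) are simply ignored.
    """
    stack = np.asarray(observations, dtype=np.float64)
    return np.nanmedian(stack, axis=0)
```

Real aggregation for a product like this is of course much more involved – observation quality, seasonality and sun position all play a role – but the robustness of rank-based statistics against outliers is the reason random errors in the source data average out while systematic errors do not.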

MODIS data is still used primarily for the following purposes:

  • Cross calibration of colors with Sentinel-3 to improve color accuracy and reduce systematic errors in atmosphere compensation.
  • Supplementing Sentinel-3 data at high latitudes. Because Sentinel-3 records images at a lower sun position and has a stricter recording limit based on sun elevation, it provides less useful data at high latitudes.
  • Rendering of the Antarctic. Sentinel-3 data is incomplete for the Antarctic interior due to the orbit constraints, existing upstream data processing (cloud detection, atmosphere correction) is poor in this area and the ice shelves are largely not included in Synergy processing. Combined with the general high latitude constraints (see previous point) use of MODIS data was therefore much more practicable for the Antarctic.
  • Rendering of sea ice. Since sea ice is included in neither the water nor the land data processing workflows of Sentinel-3, use of Sentinel-3 data here would have required significant additional preprocessing work.
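The cross calibration mentioned in the first point can, in strongly simplified form, be thought of as fitting a per-band transformation between the two sensors from co-located, cloud-free samples. A minimal sketch using a plain linear least-squares fit – the actual calibration is, needless to say, more sophisticated and needs to be robust against outliers:

```python
import numpy as np

def fit_band_calibration(src_band, ref_band):
    """Fit gain and offset mapping one sensor's band values to a
    reference sensor, from 1d arrays of co-located samples."""
    design = np.stack([src_band, np.ones_like(src_band)], axis=1)
    gain, offset = np.linalg.lstsq(design, ref_band, rcond=None)[0]
    return gain, offset
```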

Northern Ural Mountains example (large: GM 2.1/GM 3)

As you can see in the samples, despite the switch of the primary data source, not that much has changed in the overall appearance of the image in terms of colors at small scale compared to the previous version – which is testimony to the highly consistent and accurate color depiction. Both the differences in atmosphere compensation (and the remaining systematic errors therein) and the different spectral characteristics of the two sensors lead to some color shifts in the results. Ultimately, neither the MODIS nor the OLCI instruments are ideal for accurate visual color representation.

Egypt example (large: GM 2.1/GM 3)

Processing improvements

In addition to the switch in the primary data source, the land data processing methodology was completely redesigned for the version 3 mosaic. This has led to quite significant improvements in the results, despite using a narrower data basis in terms of the number of years covered.

Apart from the improvements in quality that i will show examples of in the following, i first want to mention that the whole processing – and as a result the final product – is now available both with the illumination and shading as recorded and in a shading compensated version representing the illumination-independent color of the earth surface.

Egypt example (large: original shading/shading compensated)

In the previous versions of the Green Marble i had not produced these different variants (except for the polar regions in version 2), because by combining data from morning and afternoon observation time frames (from the Terra and Aqua satellites) most shading effects in the image data were already eliminated. With the move to using predominantly Sentinel-3 data, with its constant earlier-morning recording time, this changed.
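As a rough illustration of what shading compensation involves (my actual processing differs – this is the textbook Lambertian cosine correction, which notably ignores diffuse illumination and therefore breaks down at low sun positions):

```python
import numpy as np

def compensate_shading(reflectance, slope, aspect, sun_elevation, sun_azimuth):
    """Normalize observed reflectance to flat terrain under a simple
    Lambertian assumption.  All angles in radians; aspect and azimuth
    are measured clockwise from north."""
    sun_zenith = np.pi / 2.0 - sun_elevation
    # cosine of the local solar incidence angle on the tilted surface
    cos_i = (np.cos(sun_zenith) * np.cos(slope)
             + np.sin(sun_zenith) * np.sin(slope)
             * np.cos(sun_azimuth - aspect))
    # avoid amplifying noise in deeply shaded areas
    cos_i = np.clip(cos_i, 0.05, None)
    return reflectance * np.cos(sun_zenith) / cos_i
```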

Scotland example (large: original shading/shading compensated)

Based on the shading compensated image variant, renderings with customized shading can now be produced in much better quality.
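Given the shading compensated variant, re-lighting with a custom hillshade is then, in principle, just a per-pixel multiplication. A sketch – the hillshade here would come from any standard hillshading tool, and real tone mapping is of course more elaborate:

```python
import numpy as np

def apply_custom_shading(albedo, hillshade):
    """Re-light a shading compensated image (height x width x bands,
    values 0..1) with a custom illumination factor (height x width)."""
    return np.clip(albedo * hillshade[..., np.newaxis], 0.0, 1.0)
```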

Kamchatka example (large: original shading/custom shading)

Significant quality improvements are in particular visible at higher resolution, because noise levels have been substantially reduced almost everywhere.

Scotland example (large: GM 2.1/GM 3)

Chersky Range example (large: GM 2.1/GM 3)

This is the case even in desert regions, where it is often not that readily visible in the standard tone mapping, but where you can see a significant difference in a contrast-enhanced rendering.

Ennedi Range example (contrast enhanced, large: GM 2.1/GM 3)


To wrap up this announcement, i want to provide some historic and market context for the Green Marble as a product.

It has now been more than eight years since i announced the first version of the Green Marble in 2014. During these years, and over the different updates and improvements i provided to the image, it has remained unique in its market segment. Essentially all market competitors concentrate on higher spatial resolution products with lower quality in just about every other aspect (such as absence of visible clouds, color quality and consistency, completeness of coverage, noise levels). That puts me in many ways in a comfortable position, but it also means that i have rather limited information about market needs and about where users and potential users of the Green Marble see deficits and room for improvement.

The improvements i developed in version 3 and before were designed based on user feedback and my own assessment of where there are deficits and where things should be improved. But i regard this as a rather limited perspective on the product and its value for the user. So i would be highly interested in feedback from users and potential users of the Green Marble – either in the comments below or via mail. If the principal idea behind the Green Marble as a global satellite image mosaic is appealing to you and for your use case – especially that it does not focus on high spatial resolution but puts other aspects of quality in the foreground – what dimensions of quality are the most important for you? This would be very interesting to know, in particular if it includes aspects i have so far not put focus on.

Mauritania in Green Marble 3

As usual, you can find the updated specifications page for the Green Marble mosaic on Existing customers with a Green Marble license are eligible for a reduced-price update to the new version. If you are interested in using the Green Marble in your application, contact me through the form there or via email. An interactive map in Mercator projection for further browsing can also be found on

Patagonia in Green Marble 3

October 5, 2022
by chris

Autumn and spring in polar regions

In addition to the recent autumn colors images, here are two views of autumn and spring in the polar regions of the northern and southern hemispheres. The first shows the Ellsworth Mountains with a low sun position, just after the end of the southern winter and the beginning of the observation season in September.

Ellsworth Mountains in late September 2022

The second one shows southern Baffin Island with Nettilling Lake after the first snow in early October.

Nettilling Lake in early October 2022

Both views are produced from Landsat 8 data and you can find them in the catalog.

September 30, 2022
by chris

Blog moving

As you may have already noticed, this blog has moved to a new location. This is mostly to also support https connections (for which a separate subdomain is inconvenient). The move was complicated by the blog location not being easy to change in WordPress. I now finally got around to implementing that – together with an upgrade of the WordPress software, which has itself become quite a hassle as well.

All links going to the old location should get redirected to the new place.

Hope everything works as before. If there are any problems please let me know in the comments.

September 22, 2022
by chris

Autumn colors 2022

Autumn 2022 is starting on the northern hemisphere, so here are two impressions of early fall colors, the first from Kamchatka:

Shiveluch, Kamchatka in Autumn 2022

Shiveluch detail

The second is from Canada, from the lower end of the Great Slave Lake near Fort Providence, where the Yellowknife Highway crosses the Mackenzie River:

Fort Providence and Mackenzie River in Autumn 2022

Fort Providence detail

Both are based on data from Sentinel-2 and you can find them in the catalog.

September 21, 2022
by chris

Landsat 7 last images

According to plans announced some time ago the USGS is going to stop recording images with Landsat 7 at the end of September, ending routine operation of the satellite after more than 20 years.

Like EO-1 – for which i discussed this matter in more depth – Landsat 7 has run out of fuel to maintain its orbit and has started drifting to earlier morning recording times. This drift has not yet progressed as far as with EO-1 in 2017, but it is quite clearly visible in the recorded images in the form of longer shadows.

Eastern Himalaya by Landsat 7 on 2022-08-12

Eastern Himalaya by Landsat 8 on 2022-08-13

In addition, the USGS lowered the orbit of the satellite some time ago to make room for Landsat 9.

Landsat 7 is in particular noteworthy historically because it is Landsat 7 imagery that has shaped the public perception of Landsat as a source of earth observation images more than any of the other Landsat satellites. That is mostly because Landsat 7 was the newest Landsat satellite and the main source of Landsat imagery when the Landsat program moved to an open data distribution policy and during much of the period of popularization of Landsat data that followed (see the history of the Landsat satellites).

The strange thing about this is that Landsat 7 was struck by a major failure in its imaging system (known as the SLC failure – standing for scan line corrector) only four years after the beginning of observations in 1999. The failure massively reduced the usability of Landsat 7 data for many applications, in particular for visualization. So while the public image of Landsat has largely been formed by Landsat 7 recordings, it has predominantly been images from 1999-2003 that have contributed to that. These roughly 20-year-old images are still widely used in popular map services these days, most notably Bing Maps, but also nearly universally by map services covering the Antarctic (though you can of course meanwhile get a more up-to-date alternative ;-))

Malaspina Glacier by Landsat 7 on 2022-09-05

Malaspina Glacier by Landsat 9 on 2022-09-06

Technologically and in terms of image quality Landsat 7 is a bit of a relic from another time. Even compared to other satellites launched around the same time (like Terra in 1999 and EO-1 in 2000), Landsat 7 used conservative technology for its imaging system, with only minor improvements compared to what had already been used for Landsat 6 in 1993, which failed on launch.

The most significant constraint of Landsat 7 data in terms of image quality is the rather limited dynamic range of the sensor and as a result the high noise levels in dark areas and frequent overexposure in bright areas. The design tried to mitigate that limitation by offering two different amplification settings of the sensor which were chosen based on the expected brightness of the earth surface in the area. As a result of this limitation the USGS stopped recording areas with particularly high contrast (that is especially polar regions) and concentrated the recording capacity of Landsat 7 on the main lower latitude land masses.

Another noteworthy aspect of Landsat 7 was that its high resolution panchromatic spectral band extended across both the visible range and the near infrared, which made its use for producing high resolution natural color images difficult and subject to artifacts when the contrast in the near infrared differs significantly from that in visible light. It shares this characteristic with many higher resolution commercial satellite sensors – both back then and even today. You can see that in the images shown here comparing the appearance in Landsat 7 and Landsat 8 images.

Despite these limitations, Landsat 7 seems to have been fairly sufficient for the needs of most data users during most of the operational history of the satellite. The rather slow adoption of Landsat 8 by data users after 2013 strongly supports this impression. In a way that is also the business model of Landsat overall so far – continuity before innovation.

Chersky Range by Landsat 7 on 2022-09-05

Chersky Range by Landsat 8 on 2022-09-06

The future of Landsat

Landsat 7 is being directly replaced by Landsat 9 in its orbit and therefore in its recording schedule. I have not yet gotten around to writing about Landsat 9 in more depth. With this change we now have a two-satellite Landsat constellation with a combined revisit time of 8 days, with both satellites having almost the same capabilities.

What comes after Landsat 9 is still unclear. The most recent publicly available update i could find is here – but it does not provide much in substance, is still rather vague, and it is unclear to what extent what is presented there as the plan (which in a nutshell looks very much like a Sentinel-2 clone with additional spectral bands) has actually been decided on the actual decision making level. In particular everyone should keep in mind that during the early considerations for Landsat 10/Landsat Next the idea was discussed to depart from the full open data model. And there is so far no clear statement that this is no longer under consideration. The existence of Sentinel-2 makes it somewhat unlikely (because with a free alternative there would not be much of a business case for non-free data in a very similar quality and timeliness range). The 10m resolution number that has been widely communicated as a target for future Landsat systems is interesting in that regard, because this is – thanks to Sentinel-2 – now the dividing line between what is available as open data and what is only available commercially. If Landsat does not go beyond that, the status quo would essentially be preserved. If Landsat moved to a higher spatial resolution, however, that would massively cut into the domain of commercial satellite operators. In other words: it would probably be politically unfeasible for Landsat planners to try going beyond 10m spatial resolution.

On the other hand a future Landsat with a 10m base resolution would mean an increasing gap between low and high resolution systems with nothing in sight so far to bridge that gap. Let me explain: Over the past decades (essentially since 1999) the main global open data visible light image sources we had were:

  • MODIS (since 1999/2002, near daily coverage at two times of day) at 250m resolution
  • Landsat (since 1999, every 16 days per satellite – effectively 8 days with Landsat 5/7 operating together and again after Landsat 9 became operational) at 15m resolution (30m multispectral)

With both MODIS satellites reaching their end of life now and no direct replacement being planned we will in the future have from the US:

  • VIIRS (since 2011/2017 + another planned for 2022, daily coverage at 2(3) times of day) at 375m resolution
  • Landsat Next (probably 2030+, probably less than one week revisit interval) at 10m resolution

From the EU we have in addition:

  • Sentinel-3 OLCI with near daily coverage at 300m resolution
  • Sentinel-2 with 5 day revisit (but no complete recording of all land masses in practical operation) at 10m resolution

There is not much in between though, just the Japanese GCOM-C at 250m resolution and Amazonia-1 from Brazil at 60m resolution (which apparently is not operated for worldwide recording). But neither of them offers a substantially better revisit frequency than the two-satellite Sentinel-2 constellation. This means that as a data user you practically have to either work with a resolution of 375m or 300m or go to a much higher resolution and deal with the disadvantages of a low recording frequency and a large data volume to process.
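To put that gap into perspective, here is a back-of-the-envelope calculation (my own illustration, using only the nominal resolution numbers from above) of how the data volume per area covered scales with the ground resolution:

```python
# Relative number of pixels required to cover the same area at
# different ground resolutions - the volume scales with the
# inverse square of the pixel size.
def relative_volume(resolution_m: float, reference_m: float = 375.0) -> float:
    return (reference_m / resolution_m) ** 2

for res in (375, 300, 250, 60, 10):
    print(f"{res:>4} m: {relative_volume(res):8.2f}x the data volume of 375 m")
```

A 10m system hence requires roughly 1400 times the data volume per area of a 375m system – which illustrates why the missing middle ground between the two resolution classes matters for data users.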

September 3, 2022
by chris

Thoughts on the OpenStreetMap data model

In this post i want to write a bit about the OpenStreetMap data model. This has obviously been influenced by the discussion in the OSMF about making changes to that model, which i already mentioned in a previous post.

I am a bit reluctant to write about this here and i want to explain the reasons for that first. I am writing about this out of intrinsic motivation. On the one hand for the intellectual challenge of discussing a highly complex interdisciplinary topic. It involves engineering aspects obviously but also natural sciences (in the form of the physical geography that a large portion of the OSM data represents) and social problems (through the role the data model plays in social interaction in the OSM community). On the other hand i also believe that my thoughts on the matter can be valuable considerations for the OSM community so sharing them publicly could be of benefit for the project.

However, the OSMF has started the public discussion of their plans to make changes by moving the whole matter to the commercial level (by commissioning a paid study on the subject). That means taking part in the discussion with the OSMF on the matter will inevitably involve engaging with people who have economic motives for representing their views. While i do not categorically reject doing that (it can still provide better insights into the subject and can be of benefit for the OSM community at large), we are in the field of diminishing returns here. Fighting an uphill battle pro bono against economic interests – defending my views not against arguments and reason but against people who have an economic interest in not changing their view – is not a sustainable strategy in any way and typically does not support what i described above as my motivation to write about this matter here.

Long story short: I am writing this not to engage in a discussion with the OSMF about their commercial project to change the OSM data model but to have an open intellectual exchange with anyone interested, in the hope of advancing our collective understanding of the subject and of educating people who are interested in aspects and context of the topic they might not be aware of.

Writing this while knowing that many of the people who have the most influence on how the OSM data model will develop are – based on past observations – not very likely to be open to arguments and reasoning that challenge their views on the matter is painful – hence my reluctance to do so. I still do it because i know that quite a few people in the OSM community are interested in diverse views on topics like this and value being confronted with ideas and perspectives, especially if they differ from their own.

So much for the introduction – let’s get to the subject.

What is the OSM data model?

I want to start by defining what i want to actually talk about – because that seems to be quite a bit different from what is being discussed in the OSMF.

The OSM data model i want to discuss here is the form in which mappers engage in the act of mapping in OpenStreetMap. Technically it is the data format of the OpenStreetMap API. That is the interface through which mappers receive data from the central OpenStreetMap database and through which they submit back their changes of that data to the project.

On the social level the OSM data model is the language in which mappers in OpenStreetMap from all over the world perform the act of cooperative mapping. No matter what kind of tools a mapper uses or in what human language the UI of that tool is labeled, the OSM data model is the underlying common standard based on which mappers communicate through the act of mapping itself. That should give a bit of an idea of how fundamental it is for the functioning of the project on a very basic level.

The OSM data model is neither necessarily the format in which OpenStreetMap data is distributed to data users (which at the moment happens to be the same) nor is it necessarily the same format in which data is stored in the central OSM database (which at the moment is close to the API data model – though there are smaller differences – like the form in which coordinates are represented).
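As an illustration of such a difference in coordinate representation: several OSM tools store coordinates as fixed-point integers rather than floating point values (to my knowledge with a scaling factor of 10^7, which gives a resolution of roughly a centimeter at the equator). The following is my own simple sketch of the idea, not actual database code:

```python
# Fixed-point representation of geographic coordinates - an
# illustrative sketch, assuming the 10^7 scaling used by several
# OSM tools, not the actual storage code of the OSM database.
SCALE = 10_000_000

def to_fixed(degrees: float) -> int:
    """Convert a coordinate in degrees to a scaled integer."""
    return round(degrees * SCALE)

def to_degrees(fixed: int) -> float:
    """Convert a scaled integer back to degrees."""
    return fixed / SCALE
```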

What does the OSM data model currently look like?

The OSM data model is – in its basic paradigm and compared to other common forms in which geo-data is represented – a very generic, low level format. If i try to describe that in relatively simple terms i get:

  • Geographic locations (positions on the surface of the earth) as the fundamental components of geographic information are represented with so-called nodes with coordinate pairs (latitude, longitude) as attributes.
  • Relationships between different objects and concepts that are modeled with more than a single geographic location are represented by so-called relations – which contain references to other objects, potentially in a certain order and potentially with a certain role.
  • Sequences of geographic locations in a certain order can also be (and widely are) represented by so-called ways, which contain references to nodes.
  • All of these fundamental objects can have any number of attributes in the form of free-form key-value pairs – so-called tags.
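To make the structure above more tangible, here is a minimal sketch of these three object types in code (my own illustration of the concepts – not the actual API schema or serialization format):

```python
from dataclasses import dataclass, field

# A sketch of the basic OSM data model - illustrative only.

@dataclass
class Node:
    id: int
    lat: float                # latitude in degrees
    lon: float                # longitude in degrees
    tags: dict = field(default_factory=dict)

@dataclass
class Way:
    id: int
    node_refs: list           # ordered references to node ids
    tags: dict = field(default_factory=dict)

@dataclass
class RelationMember:
    type: str                 # "node", "way" or "relation"
    ref: int                  # id of the referenced object
    role: str = ""            # optional role of the member

@dataclass
class Relation:
    id: int
    members: list             # ordered list of RelationMember
    tags: dict = field(default_factory=dict)
```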

Most of this data model is – as mentioned – very generic. That means it makes only very few assumptions about the way geographic information is represented beyond the basic paradigms of geography itself. It is beyond doubt that this characteristic has largely contributed to the success of OpenStreetMap over the past >15 years. The positive effect of the generic nature is usually seen primarily in the free form tagging system. But that is mostly because the OpenStreetMap mapper community has concentrated on that in their activities and is mostly using tags to develop innovative ways to represent geographic information. Many of the other advantages of this model are so far severely underused and i will get to why that is the case further down.

What are the issues with the current model?

I think i already indicated in the way i presented the current OSM data model above that the concept of ways is kind of an outlier in the model. With the plans of the OSMF in mind i would go a step further and call it the ancestral sin of the OSM data model.

First of all, ways as a concept are superfluous in the OSM data model – a relation can represent what is represented with a way just as well. Or you can look at it the other way round: a way is just a hard-coded and technically more restrictive type of relation:

  • it can only have nodes as members
  • it cannot have more than 2000 of them
  • its members cannot have different roles

Beyond that ways are severely under-defined. Ways are usually interpreted to represent a sequence of straight line segments between its member nodes – but nowhere is it defined what that actually means. The most natural interpretation would be to consider the straight line between two nodes to be the shorter segment of the great circle running through those two nodes. Most tools processing ways however interpret a straight line to be a straight line in equirectangular projection (that is geographic coordinates interpreted as cartesian) or in Mercator projection – but that is not defined or documented in any way. If ways were just a type of relation defined on top of the basic OSM data model through consensus in the OSM community and could equally be changed or amended in their meaning through revised consensus, that would be different – we have plenty of similarly under-defined concepts in tagging schemes and relation types in OpenStreetMap. But as a hard-coded element of the low level data model this is highly problematic.
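To illustrate how far these interpretations can diverge, here is a small calculation (my own example with made-up coordinates) comparing the midpoint of a way segment under the equirectangular interpretation (plain coordinate averaging) with the great circle midpoint:

```python
import math

def great_circle_midpoint(lat1, lon1, lat2, lon2):
    """Midpoint of the shorter great circle segment between two points,
    computed by averaging the corresponding 3D unit vectors."""
    la1, lo1, la2, lo2 = map(math.radians, (lat1, lon1, lat2, lon2))
    x = math.cos(la1) * math.cos(lo1) + math.cos(la2) * math.cos(lo2)
    y = math.cos(la1) * math.sin(lo1) + math.cos(la2) * math.sin(lo2)
    z = math.sin(la1) + math.sin(la2)
    return (math.degrees(math.atan2(z, math.hypot(x, y))),
            math.degrees(math.atan2(y, x)))

# two nodes at 60 degrees north, 30 degrees of longitude apart
lat_eq, lon_eq = 60.0, 15.0  # equirectangular midpoint: plain averaging
lat_gc, lon_gc = great_circle_midpoint(60.0, 0.0, 60.0, 30.0)

# the two interpretations of the "straight line" diverge here by
# roughly 0.85 degrees of latitude - on the order of 95 km on the ground
print(f"equirectangular: {lat_eq:.3f}, great circle: {lat_gc:.3f}")
```

For short way segments (as are typical in OSM) the difference is of course far smaller, but the example shows that the interpretation is by no means a negligible detail.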

The other main technical or formal issue of the model is that nodes can have tags. Discussing this could take us quite a bit into an abstract philosophical domain. I will try to keep it brief, but be aware that this will selectively present only some of the arguments.

There is obviously a continuum of (or an infinite number of) geographic locations on the planet. The act of mapping consists essentially of two parts:

  • identifying locations which have meaning.
  • documenting what meaning these locations have.

These two activities are not necessarily identical, and the same location can have meaning in different contexts – for example, a location at the corner of a building has meaning as a corner point of the building, as one end of the artwork painted on the side of the building, and as a (concave) corner of the pedestrian area surrounding the building. That the OSM data model separates these two activities, by separately recording the geographic location (the node) and the meanings it has (by being a member of the ways/relations representing building, artwork and pedestrian area in the mentioned example), is one of its huge advantages. But this is only the case when the meaning is represented through ways or relations. When you have, for example, a path (way with highway=path) crossing a fence (way with barrier=fence) with a gate (node with barrier=gate), the node doubles as a location and as a carrier of meaning. That is not ideal because the two mapping activities described above are then not separately represented in the data.

Both of these issues are things you can hardly blame anyone for retroactively because they are the result of the historic development of the data model.

The OSM data model would improve a lot in its inner consistency as well as in practical handling if these two issues were fixed. That means the new data model would look like this:

  • Nodes would just contain coordinate pairs and have no tags. Real world features that are to be modeled geometrically with a single coordinate pair would be represented with a relation with the single node as member.
  • Ways would be eliminated and converted to a type of relation.
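As a minimal sketch (plain dictionaries, hypothetical structure – my own illustration, not a concrete proposal for an API format), the gate example from further up would then look like this:

```python
# Current model: the node doubles as a geographic location and as
# the carrier of the feature's meaning.
current = {
    "nodes": {
        1: {"lat": 50.0, "lon": 8.0, "tags": {"barrier": "gate"}},
    },
}

# Sketch of the revised model: nodes carry only coordinates; point
# features become relations with a single node as member.
revised = {
    "nodes": {
        1: {"lat": 50.0, "lon": 8.0},
    },
    "relations": {
        10: {"members": [("node", 1, "")], "tags": {"barrier": "gate"}},
    },
}
```

The location (node 1) could then equally be referenced by the relations representing the fence and the path, keeping the two activities of identifying a location and documenting its meaning separate in the data.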

I am of course not the first one to identify these two things as key points where the OSM data model can be improved. Ilya for example mentioned these ideas recently, independently of me. Most who analyze the OSM data model from the perspective of mapping will likely come to the same conclusion (or have done so in the past already).

Are these the only significant issues with the OSM data model that are worth addressing? Probably not. There are quite a few further things, mostly related to the way changes in the data are being represented and managed (with versions and changesets – things i kept out of the description above). But to keep things simple i intend to limit this post to matters of representation of geographic knowledge itself.

The struggles of working with a generic data model.

Of course the structure of the low level data model is only half of the story.

The other side of the topic is the difficulties that result from having such a generic and low level data model as we have in OpenStreetMap for the process of building consensus on how we practically document geographic knowledge with that data model.

Again this topic is historically mostly discussed in the context of the free form tagging system in OpenStreetMap – which is the practically most visible unique characteristic of the OSM data model compared to ways to represent geographic information elsewhere. There is consensus that the free form tagging system was and is an important basis for OpenStreetMap having developed the way it did (in the positive sense). But, as i already discussed in a different post, this has also resulted in problems, and as OpenStreetMap grows it is going to be of fundamental importance to have a meaningful discussion on how the development of tagging in OpenStreetMap can scale and continue to function in a growing and increasingly diverse OSM community. I presented Tagdoc as a sketch for what i think would be a valuable component to facilitate better understanding and as a result more qualified decision making in tagging and tag development.

There is a subset of tagging that is of fundamental importance for OpenStreetMap being able to use its innovative data model to its true potential – and the OSM community’s struggle to develop a meaningful discussion and consensus building process on that has significantly held back OpenStreetMap for more than a decade.

This subset of tagging i am talking about are the relation types. As i outlined above, relations are the core feature of the generic OSM data model to implement higher levels of abstraction to represent concrete geographic knowledge.

Technically relation types are just tags. But the social dynamics around them in the OSM community are very different from normal tags.

For normal tags the conventions for how they are used and interpreted are influenced primarily by the following actors:

  • mappers through the use of the tags.
  • wiki activists (people engaged in editing of tag documentation on the OSM wiki and participating in proposal processes).
  • editor developers through decisions they make with tagging presets.
  • map style developers through decisions what tags to interpret and how to interpret them.
  • other data users through their tag interpretation practice.
  • QA tool developers.

For relation types the primary influences are very different:

  • editor developers through decisions for what types of relations they offer an editing interface to.
  • developers of data interpretation tools through decisions which types of relations they support and how they interpret them.
  • to some limited extent QA tool developers.

As you can see there is a fundamental difference. The practical use and meaning of tags is influenced by a wide range of actors with different backgrounds. This sometimes leads to chaotic situations – which some people despise and call for more authoritarian tagging paradigms. But overall it has served OpenStreetMap quite well – even if there are and continue to be problematic concentrations of power and influence in that system as well.

For relation types however there is a clear gatekeeper role of a very small set of people who practically decide (though they might not actually be aware of that) which relation types are ‘permitted’ (and as a result are practically used in significant volume) and what the mapping conventions are for those. And all of these people are software developers.

As a result of this we essentially have only a handful of relation types in OpenStreetMap that are (a) practically used in significant volume and that are (b) used with a significant level of consistency that allows data users to interpret them in a meaningful way. And all of those have been around already for more than ten years now.

This is the big practical problem in my eyes. In contrast to tags – where power and influence over conventions is widely distributed, grassroots inventions of new tags have a reasonable chance to be successful, and separating good from bad ideas by mappers voting with their feet usually works – no such mechanism exists for relation types. This cripples the OSM community’s ability to develop the way it maps the world wide geography in an innovative fashion and prevents it from making full use of the potential of the data model. This is not a problem of the data model itself but stems from its low level nature, which makes it necessary to develop higher level conventions on top of it – something the OSM community is, on a social level, currently unable to do.

This is a hard problem to solve and i have no solution to present here. But i know what is not a solution: to give up and to move from the generic and low level data model we have to a constrained higher level model of point, linestring and polygon geometries.

There are quite a few secondary problems that have turned up as a result of the OSM community essentially having been unable to advance the development of relation types – or of other higher level conventions for representing geographic relations. One prominently visible problem is the overuse of multipolygon relations for representing concepts that could be represented much more efficiently and in a way that is easier to maintain. As a simple (though maybe somewhat tongue-in-cheek) example: think of large islands (like Greenland or Madagascar). These are currently represented in OpenStreetMap with very large multipolygon relations with thousands of member ways. These multipolygon relations however contain almost no information that is not already in the database otherwise. If you transferred the tags of the relation to a node – either anywhere within the island or on its coastline – that would carry all the information in a much more compact form. I am not necessarily saying that this is a change that should be made specifically for islands – arguments could be made for keeping the relation here, even from a mapper perspective. My point is that the OSM community currently lacks the fundamental ability to make such a change (or any substantial change in higher level mapping conventions beyond the atomic tagging of individual features). And this is essentially a different side of the same problem. Technically fiddling with the low level data model will not help with that.

What about the other problems?

If you have read the report on changing the OSM data model commissioned by the OSMF you might wonder why so little of what is identified there as problems is discussed here and why most of what i discuss here is not of much importance in that study. That is because i look at the OSM data model purely as the language in which mappers communicate and exchange their work while that study:

  • is almost exclusively concerned with the needs and the difficulties of global scale data processing on the data user side,
  • makes the up-front assumption that the mapping data model and the format of distribution of OSM data in bulk to the data user necessarily need to be identical and
  • essentially proposes to eliminate the function of the OSM data model as the language of exchange between mappers. Instead it suggests that the data editing tools create an abstraction layer between the data format of the API and the paradigms of representing geographic information used by the mappers.

This is the wrong approach in my eyes. The solution to issues on the data user side in processing OSM data on a global scale is simple: distribute the data in a form that avoids these problems. Doing so would probably be simplified quite a bit if the changes to the OSM data model i sketched above (elimination of ways and of tags on nodes) were implemented. That would also eliminate the problem of having two different means of representing polygons – closed ways and multipolygon relations.

Where we need to work on solutions is the social problem of developing and maintaining higher level conventions on top of the low level data model OpenStreetMap is based on. That is not an engineering problem of course. While technical means and tools are likely to be useful in implementing solutions to that once we have identified and agreed on them, people with technical skills and qualifications should resist the urge to focus on the hope of finding primarily technical solutions to this. That is not going to work.


That was – despite having again grown into a lengthy text, as you have probably become quite used to from me – a very quick run through a rather complex interdisciplinary topic that obviously only scratches the surface. That means it will be easy for people who want to selectively maintain a different perspective on the matter to dismiss my thoughts as too superficial. And that is fine. But make no mistake: that i try to present the topic in a brief and condensed (and hopefully not too cryptic) way does not mean i have not thought it through in more depth. Those who have discussed the OSM data model with me in the past know that i have always approached the topic with an open mind. And i still do. I present no solution that i claim is the right one, i merely present an analysis of the problem.

The main points you as a reader should probably take away from this post are:

  • If you look at the matter from the most important side to consider – that is, from the perspective of mapping – it looks very different from how the OSMF seems to have looked at it so far.
  • The main problems are not with the technical structure of the OSM data model (though there are some quite obvious things where this could be improved) but in the ability of a massively growing and increasingly diverse OSM community to productively use the possibilities of the data model to be innovative and efficient in cooperatively mapping the world wide geography.

August 17, 2022
by chris

Over engineering

In the past years i have more and more moved to posting my commentary on political developments in the OpenStreetMap Foundation towards the end of the year, before the OSMF board elections, instead of commenting on things as they happen during the year. I am going to make an exception here by writing down some observations and thoughts on more recent trends and developments i consider particularly noteworthy.

If you have followed some of my more recent OpenStreetMap related posts here you might have observed that i have put an increasing emphasis on pointing out that the value the OSM community places on technical work, in contrast to non-technical and in particular intellectual work, is quite seriously out of balance, and that this is increasingly affecting the OSM community’s ability to handle the various challenges it faces.

My recognition of this trend and its effects has been reinforced in particular by the OSMF more recently moving strongly towards an increasing dominance of technical interests and viewpoints. On the level of the OSMF board this manifested in particular in the departure of Allan Mustard from the board at the end of last year. Allan was the last remaining board member with a distinctly non-technical professional background. Everyone on the board now has a technical professional background, most of them specifically in the domain of IT and software development. Allan’s departure from the board is of course not the singular cause of this shift in the OSMF – it is more the conclusion of a long term trend which, on the board level, started much earlier, with people with a broader non-technical perspective increasingly resigning and the OSMF membership increasingly electing people with technical backgrounds.

At least to me it became increasingly clear in the last months how much of a paradigm shift this is and how much of an impact on actions and decisions of the OSMF this could have in the future.

Beyond the composition of the OSMF board this trend is best visible in the form of the shift in expenses for paid work. Until late 2020 the OSMF’s main regular expenses for paid work were administrative assistance and accounting. This has completely shifted since then with the contracting of a software developer for iD and the hiring of a sysadmin. The exact amounts of money spent here are not known (contracts are no longer published and we will probably have to try to reverse engineer this info from the financial reports at the end of the year). In addition there has also been an increasing volume of non-regular paid technical work (in particular the three hand-picked projects in the aftermath of the microgrant program). The newly revived Engineering Working Group now has a EUR 50k budget for paid work – the highest of any working group after Operations, if i am not mistaken.

A project of the Engineering Working Group is also what i want to discuss in more detail here. Earlier this year the EWG decided to commission a study on changing the OSM data model.

The specific details and motives for this are unclear – the minutes do not reveal substantial information on that. What we know is that this study was contracted through single tender action (the contract terms are not disclosed, not even the exact aim of the study) and not through the EWG’s project funding framework.

The study the EWG has contracted has now been published and this is what i want to discuss a bit more in depth here. I am not going to comment on the technical aspects in substance but i want to share a few thoughts on the economic and social context of the whole thing.

I am – by education – an engineer myself, with a master’s degree and a PhD in mechanical engineering. During my early years in engineering i had – like many other engineers – a tendency to look down on consulting companies doing feasibility studies and receiving quite significant amounts of money for those while having, so to speak, no real street credibility in the domain of engineering. Furthermore these consulting companies often did not predominantly seem to employ engineers but people with a background in economics or social sciences.

This negative view of the young engineer in me has significantly changed since then. And in case you wonder: this change happened already before i became a consultant myself ;-). While i think many larger generic consulting companies (which is what many people have in mind when they hear the term consulting) have very questionable business practices and models, i have meanwhile developed a significant appreciation of the work of smaller specialized consultancies, in particular in producing feasibility studies, and consider their role in our society with its highly specialized competencies and division of labour to be quite essential.

My impression is that what the OSMF apparently did here is the approach of a naive young engineer towards doing a larger project: contracting an experienced engineer with practical work experience in implementing this kind of project for a feasibility study, with the aim of finding out how they can make the project work.

There are very good reasons why this approach is not typically taken.

One obvious reason is that the contracted engineer has, if they are also a likely candidate for being contracted to implement the project, a clear conflict of interest. This is one of the main reasons why feasibility studies tend to be done by independent consultancies who have no stake in the actual implementation of the project.

The second important reason is that in most larger engineering projects the main risks and obstacles are typically not technical in nature. To truly assess the feasibility of a project, the risks involved and the resources required, you need experience outside the domain of engineering. You need to consider the broader social and economic context of the project. This is one reason why consulting companies frequently employ social scientists and scholars from the humanities.

I am not sure if it is realistic to expect people in the EWG and on the OSMF board to realize they took the wrong approach here and to recognize the need to revise that. But i know that there are plenty of people in the larger OSM community who have a broader perspective on the matter and who will therefore likely be interested in a more critical commentary on the approach taken to this project. In any case i thought it prudent to make this comment early to give everyone the chance to consider it.

What impact does this have on the actual matter of the OSM data model and its future development? I obviously have my thoughts on that too. But this is beyond the scope of this blog post.

Florence in Spring 2022

August 11, 2022
by chris

State of the Map 2022

In about a week the OSMF’s State of the Map conference is going to start, and to avoid anyone looking for me in vain there (i have been at the last four State of the Map conferences – the last two of which were virtual events): i am not going to be there this year.

This has multiple reasons, one of the most relevant being that this year’s event is – more than in any of the last four years – an event quite ostentatiously targeted at wealthy people and people whose visit is paid for by a third party. It is perfectly fine to have such an event, and i wish everyone at the conference an enjoyable and successful time, but this makes it much less attractive for me to visit.

With the strong limitations of the last two years on meeting in larger groups, i would this year have liked to meet in person with various people involved in OpenStreetMap whom i could not meet in the past years, including quite a few who are likely going to be in Florence despite the fairly steep costs of a visit there during the height of the holiday season, with hotel and travel prices at their peak. But to be frank – what i enjoyed most in previous years was in particular meeting and talking to economically more marginalized people.

I played a bit with the idea of visiting Florence but not going to the commercialized conference, instead just meeting with people outside the organized event and visiting and getting to know the city otherwise. But as said, there were other reasons speaking against it. And while i could have afforded a visit to Florence in August, that is not equally the case for many other people active in OpenStreetMap. Choosing Florence in August as the place and time where people from the OSM community have the chance to meet me in person would also have made a statement as to whom i am mostly interested in meeting, even when done detached from the SotM conference.

In past years i have repeatedly criticized that affordability of the visit is not a criterion in the planning of the conference, not even on paper – neither for picking the place nor for picking the date. This fact is of course perfectly in line with the OSMF’s diversity statement, which disallows discrimination by almost anything, but quite explicitly not by wealth.

Anyway – for those in the OSM community who would like to meet me in person – i plan to be at the Karlsruhe Hack Weekend in September, after also having been absent there for the past two years. Despite the limited number of participants allowed, there is still room for people to come there, Karlsruhe is just a two hour train ride from Frankfurt Airport and there is a good selection of decent and affordable places to stay in Karlsruhe.

I also generally look with interest at the announcements of any other events in the OSM community that have been planned with the visible intent and desire to be affordable and accessible also for less wealthy people and to be truly welcoming for a diverse audience in the original sense of the word. For the past two years the options to visit such events have been rather limited, but i hope this will get better again in the future.

June 5, 2022
by chris
1 Comment

Concluding remarks on tree rendering

This is part four of a four-part series of blog posts on the depiction of trees in maps – see part 1 for a discussion of tree rendering in traditional maps and plans, part 2 for the mapping of trees in OpenStreetMap and rendering in OSM based maps and part 3 for the discussion of a new concept of tree rendering.

The tree rendering implementation i showed and explained in the previous part is essentially just a sketch demonstrating a few of the design concepts you can explore once you truly disregard the technical constraints of common automated map rendering tools. Is this too much for a general purpose map style? Maybe.

As i mentioned again recently, the whole world of automated rule-based map design is, once you leave the pre-made paths already scouted by tech companies and the tool developers working for them, largely unexplored territory. Working on the tree rendering once more made me realize how much that is true especially for large map scales and high zoom levels. Once you start truly designing maps specifically for z19/z20 you are well into the traditional domain of architectural drawings and site plans – a whole new field of considerations and design ideas, as i tried hinting at in the first part with some examples. Most digital interactive map designers so far simply treat this scale range as an extrapolation of the smaller scales. This applies even more to all the modern vector tile maps where everything above z16/z17 is literally just the same rendering blown up to larger scales, plus more POI symbols and labels.

And i think this series of blog posts also once more demonstrates that the inherent, high level capabilities of currently available map rendering tools are very limited compared to what would in principle be possible and desirable for producing well readable maps. The CartoCSS + Mapnik toolchain i use here has the advantage that it provides the means (via PostGIS code) to implement this on a low level by hand, but practically such low level techniques are probably beyond the reach of many map designers. And using this method has its own limitations – apart from the obvious efficiency issues, you also cannot, for example, combine this kind of manual symbol processing with symbol/label collision detection and prevention.
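To illustrate what "manual symbol processing on a low level" means here, the following is a minimal sketch in Python – not my actual CartoCSS/PostGIS implementation – of one such task: distributing individual tree symbol positions at a regular spacing along a tree row line. In the toolchain discussed above this kind of computation would be done in SQL with PostGIS functions such as ST_LineInterpolatePoint; the point of the sketch is that the positions are computed in data space before rendering, which is exactly why the renderer’s built-in collision engine cannot take them into account.

```python
from math import hypot

def interpolate(p0, p1, t):
    """Point at fraction t along the segment p0-p1."""
    return (p0[0] + (p1[0] - p0[0]) * t, p0[1] + (p1[1] - p0[1]) * t)

def place_symbols_along_line(line, spacing):
    """Return evenly spaced symbol positions along a polyline.

    line: list of (x, y) vertices; spacing: target distance between symbols.
    The actual spacing is adjusted so the symbols exactly fill the line,
    as you would typically want for a row of trees.
    """
    # cumulative distance from the start of the line to each vertex
    lengths = [0.0]
    for a, b in zip(line, line[1:]):
        lengths.append(lengths[-1] + hypot(b[0] - a[0], b[1] - a[1]))
    total = lengths[-1]
    if total == 0:
        return [line[0]]
    # number of intervals, rounded to best match the target spacing
    n = max(1, round(total / spacing))
    positions = []
    for i in range(n + 1):
        d = total * i / n
        # locate the segment containing distance d and interpolate within it
        for j in range(len(line) - 1):
            if lengths[j + 1] >= d:
                seg_len = lengths[j + 1] - lengths[j]
                t = 0.0 if seg_len == 0 else (d - lengths[j]) / seg_len
                positions.append(interpolate(line[j], line[j + 1], t))
                break
    return positions
```

For example, a straight 10 unit line with a target spacing of 5 yields three symbol positions, at the ends and the midpoint. A real implementation would go on to emit these as point geometries for the renderer to draw tree symbols on.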

So once again my strong suggestion to the OSM and FOSS communities, which i have already stated on various occasions: You need to strategically invest in advanced map design tools for automated rule-based map rendering. And if you do so while looking at things purely from a software developer perspective, or by following in the footsteps of big tech companies, you will fail. The proprietary software producers have learned this lesson. They actively involve and actively listen to map designers in their strategic planning and research. Their target users so far are predominantly still the traditional map creators, to whom map design means manual processing of concrete data and not the development of automated rules to process generic data as i discuss it here. But this is going to change. So do not miss this window of opportunity.

Flowering tree