On imitated problem solving

As many of you know, for a few years now there has been a new trend in remote sensing and cartography called Artificial Intelligence or Machine Learning. Like many similar hypes, what is communicated about this technology is rarely based on hard facts and is largely dominated by inflated marketing promises and wishful thinking. I want to provide here a bit of context that is often missing in discussions on the matter and that is important to understand when you consider the usefulness of such methods for cartographic purposes.

AI or Machine Learning technologies are nothing new; when i was at university these were already well established in the information sciences. The name has been misleading from the beginning though, since Intelligence and Learning imply an analogy to human intelligence and learning that does not really exist.

A good analogy to illustrate how these algorithms work is that of a young kid being mechanically trained: Imagine a young kid that has grown up with no exposure to a real world environment. This kid has learned basic human interaction and language but has no significant experience of the larger world and society beyond this.

Now you start watching TV with that kid and every time there is a dog on screen you call out “Oh, a dog” and encourage the kid to follow your example. After some time you let the kid continue on its own as a human dog detector.

This is pretty much what AI or Machine Learning technologies do – except of course that the underlying technological systems are still usually much less suited to this task than the human brain. But that is just a difference in degree and could be overcome with time.
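To make the analogy concrete, here is a minimal sketch of this kind of "mechanical training" in code – supervised learning of a dog detector. All names and the random data are hypothetical and purely illustrative; the point is that the model only ever sees pixel values and the labels called out by the trainer, with no concept of what a dog actually is.

```python
# A minimal, illustrative sketch of supervised training (hypothetical data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
frames = rng.random((200, 64 * 64))     # 200 tiny flattened image frames (made up)
calls = rng.integers(0, 2, size=200)    # the trainer's calls: 1 = "Oh, a dog", 0 = silence

model = LogisticRegression(max_iter=1000)
model.fit(frames, calls)                # the "training" phase: associate pixels with calls

# Afterwards the kid/model is left alone as a detector: it maps new pixels to
# "dog"/"no dog" purely by statistical association with the past calls.
new_frame = rng.random((1, 64 * 64))
print("dog" if model.predict(new_frame)[0] == 1 else "no dog")
```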

The important thing to realize is that this is not how a human typically performs intellectual work.

To use an example closer to the domain of cartography – imagine the same scenario as above, but with the kid being trained to detect buildings on satellite images. And now consider the same task being performed by a diligent and capable human, like the typical experienced OpenStreetMap mapper.

The trained kid has never seen a real world building from the outside. It has no mental image associated with the word “building” called out by its trainer except for what it sees on the satellite images.

Experienced OSM mappers however have an in-depth knowledge of what a building is – both in the real world and in the abstract classification system of OpenStreetMap. If they see an empty swimming pool on an image they will be able to deduce from the shadows that this is not a building – even if they have never seen a swimming pool before. This typical qualified human interpretation of an image is based on an in-depth understanding of what is visible in the image, connecting it to the huge base of real world experience a human typically has. This allows humans to solve specific problems they have never been confronted with before, based on knowledge of universal principles like logic and the laws of physics.

As already indicated in the title of this post, AI or Machine Learning is in a way the imitation of problem solving in a cargo-cult-like fashion – like the kid in the example above, who has no understanding of what a dog or a building is beyond the training it receives and tries to imitate afterwards. This is also visible in the funny errors you get from such systems – usually funny because they are stupid from the perspective of a human.

Those in decision making positions at companies like Facebook and Mapbox who try to push AI or Machine Learning into cartography (see here and here) are largely aware of these limitations. If they truly believed that AIs can replace human intelligence in mapping they would not try to push such methods into OSM; they would simply build their own geo-database using these methods, free of the inconvenient rules and constraints of OSM. The reason they push this into OSM is that on their own these methods are pretty useless for cartographic purposes. As illustrated above, for fundamental reasons they produce fairly blatant and stupid errors, and even if the error rate is low that usually ruins things for most applications. What would you think of a map where one percent of the buildings are in the middle of a road or a river? Would you trust a self-driving car that uses a road database where 0.1 percent of the roads lead into a lake or a wall?
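A bit of back-of-the-envelope arithmetic shows why even these small percentages matter at map scale. The numbers below are made-up round figures purely for illustration, not statistics from any real dataset:

```python
# Hypothetical round numbers to illustrate how low error rates still mean
# large absolute error counts at the scale of a whole map.
buildings = 10_000_000          # assumed number of buildings in a mapped region
road_km = 1_000_000             # assumed total road network length in km

misplaced = buildings * 0.01    # 1 percent of buildings placed in a road or river
bad_roads = road_km * 0.001     # 0.1 percent of road length leading into a lake or wall

print(f"{misplaced:,.0f} misplaced buildings")     # -> 100,000
print(f"{bad_roads:,.0f} km of misleading roads")  # -> 1,000
```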

What Facebook & Co. hope for is that by pushing AI methods into OSM they can get the OSM community to clean up the errors their trained mechanical kids inevitably produce and thereby turn the practically pretty useless AI results into something of practical value – or, to put it more bluntly, to change OSM from being a map by the people for the people into a project of crowd sourced slave work for the corporate AI overlords.

If you follow my blog you know i am not at all opposed to automated data processing in cartography. I usually prefer analytical methods to AI-based algorithms though, because they produce better results for the problems i am dealing with. But one of the principles i try to follow strictly in that regard is never to base a process on manually post-processing machine generated data. The big advantage of fully automated methods is that you can scale them very well. But you immediately lose this advantage if you start introducing manual post-processing, because that does not scale in the same way. If you ignore this because crowd sourced work from the OSM community comes for free, that indicates a pretty problematic and arrogant attitude towards this community. Computers should perform work for humans, not the other way round.

If you are into AI/machine learning and want OSM to profit from it there are a number of ways you can work towards this in a constructive way:

  • make your methods available as open source to the OSM community to use as they see fit.
  • share your experience using these methods by writing documentation and instructions.
  • make data like satellite imagery available under a license and in a form that is well suited for automated analysis (see the sketch after this list). This in particular means:
    • without lossy compression artefacts
    • with proper radiometric calibration
    • with all spectral bands available
    • with complete metadata
  • develop methods that support mappers in solving practically relevant problems in their work rather than looking for ways to get mappers to fix the shortcomings of the results of your algorithms.
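As a rough sketch of what "well suited for automated analysis" can mean in practice, here is what one would want to be able to verify when opening a released scene. The file name is hypothetical and the rasterio library is simply one common way to read such data:

```python
import rasterio  # common library for reading geospatial raster data

# Hypothetical path to a released multi-band scene.
with rasterio.open("scene.tif") as src:
    print("spectral bands:", src.count)    # all bands available, not just an RGB composite
    print("data type:", src.dtypes[0])     # e.g. uint16 with calibrated values, not 8-bit lossy JPEG
    print("georeferencing:", src.crs)      # coordinate reference system
    print("metadata tags:", src.tags())    # acquisition and calibration metadata
```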

In other words: You should do exactly the opposite of what Facebook and Mapbox are doing in this field.

I want to close this post with a short remark on the question of whether we will in the future get machines that can perform intelligent work significantly beyond the level of a trained kid. The answer is: we already have that, in the form of computer programs written to solve specific tasks. The superficial attractiveness of AI or Machine Learning comes from the promise that it can help you solve problems you might not understand well enough to specifically program a computer to solve them. I don’t consider this something that is likely to happen in the foreseeable future, because it would mean reproducing not just the complex lifelong learning process of an individual human being but also the millennia of cultural and technological evolution of human society as a whole.

What is quite possible though is that for everyday tasks we will in the future increasingly rely on this kind of imitated problem solving through AIs, and this way lose the ability to analyze and solve these problems ourselves based on a deeper understanding in the way described above. If that happens we would obviously also lose the ability to recognize the difference between a superficial imitated solution and a real in-depth solution of the problem. In the end a building will then simply be defined as that which the AI recognizes as a building.
