The project deals with the process of "map making with satellite imagery" and questions the
producers of these two media and their capacity for the limitation and interpretation,
as well as the influence, transmission, and acceptance, of data and power.
It is intended to show the paradox of “pattern recognition”, in which “satellite
imagery produces maps” and “maps produce satellite imagery” in a
circuit of different transformational processes, where machine learning is used to analyse a
huge data stream of satellite imagery.
The map is not the territory … but another version of reality. (Korzybski 1933)
Maps are as much a
reality of the world, in a particular, codified form, as they are objects with their own reality: a
materiality, a temporality, and a meaning as evident as the visceral reality of
the world itself. Maps represent knowledge space, shown from a bird’s-eye view. They visualize the
invisible by radically homogenizing something that is not homogeneous, through a medium consisting of
line, point and surface. Maps also have potential as instruments of power: they stand in for
political and military power by representing the state borders between territories,
and they can repeat, legitimate and construct the differences of classes and social self-understanding.
Maps cannot be realized without interpretation, generalization and simplification, like the symbol
“tree” for a forest or a “red circle” for a city. The same is true for the data used in “remote
sensing”. Data is always translated into the form in which it is presented. The images, lists, graphs, and
maps that represent those data are all interpretations, and there is no such thing as neutral data. Data
is always collected for a specific purpose, by a combination of people, technology, money,
commerce, and government. The phrase "data visualization," in that sense, is a bit redundant, because
data is already a visualization.
Machine learning is nowadays used in the map-making process of geographic information
systems (GIS) to observe the Earth’s surface, detect anomalies in real time, and
identify changes over short timeframes, e.g. industrial trends, mining activities, gas and oil
extraction areas, urban trends, agricultural crop yields, and so on. These systems are based on a model
in which predefined knowledge is used to generate new knowledge, which opens the topic of the “paradox of
pattern recognition”: to identify a pattern, the pattern itself must already be predefined in some way.
To identify cloud formations for a weather forecast, for example, means first having classified
thousands of clouds.
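To make this paradox concrete, here is a minimal, purely illustrative sketch in Python. It is not the installation's actual system: the class names, feature values, and the nearest-neighbour rule are all invented for this example. It shows only the core point that a classifier can never name anything outside the labels humans predefined in its training data.

```python
# Illustrative sketch: a classifier can only assign labels that were
# predefined by humans in its training data. All names and values invented.
import math

# Predefined knowledge: hand-labelled examples of (brightness, texture) features.
TRAINING_DATA = [
    ((0.9, 0.1), "cumulus"),
    ((0.8, 0.7), "cirrus"),
    ((0.3, 0.4), "stratus"),
]

def classify(features):
    """1-nearest-neighbour: return the predefined label of the closest example."""
    _, label = min(TRAINING_DATA, key=lambda ex: math.dist(ex[0], features))
    return label

# A new observation can only ever be named after an already-known class.
print(classify((0.85, 0.15)))  # closest to the "cumulus" example
```

However large the dataset grows, the set of possible answers stays fixed in advance: the pattern must be predefined to be recognized.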
The installation shows a neural network based on a self-feeding circuit, in which predefined knowledge
in the form of "training data" is used in the process of map making. Every newly generated piece of data is
written directly back into the training data from which the network draws its knowledge, which means that
every decision of the network is based on decisions that were made before.
The machine draws a map and generates a satellite image out of it, which in turn creates a new map and thereby a new satellite image, in an endless loop.
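The circuit described above can be sketched in a few lines of Python. This is a toy model, not the installation's code: "maps" and "satellite images" are reduced to single numbers, and the two translation networks are replaced by an invented noisy copy function. What it demonstrates is only the data flow, where each output is written straight back into the training data.

```python
# Toy sketch of the self-feeding circuit. Data and functions are invented
# stand-ins; only the feedback structure mirrors the installation.
import random

random.seed(0)  # make the illustrative run repeatable

training_data = [0.5]  # the predefined knowledge the system starts from

def generate(sample):
    """Stand-in for one map -> image (or image -> map) translation step:
    reproduce the input with a small interpretation error."""
    return sample + random.uniform(-0.05, 0.05)

for step in range(10):
    # Each new output is derived only from previously generated data...
    new_sample = generate(training_data[-1])
    # ...and is written straight back into the training data,
    # so every future decision rests on earlier decisions.
    training_data.append(new_sample)

print(len(training_data))  # the seed value plus ten generations
```

Because nothing outside the loop ever enters the dataset, small interpretation errors accumulate from generation to generation: the system's "world" drifts away from any external territory.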
This process points out the fact that we are highly focused on numbers and tend to see them as objective, unambiguous and free of interpretation. In doing so, a
blindness arises towards the processes by which data is generated, fed by the assumption that numbers speak for
themselves. Not only does the collection of data leave room for interpretation; the computing processes allow further interpretations as well. Thus numbers are viewed as the world
itself, forgetting that they only represent a model of the world. This model, however,
means that people adapt their behavior to the model's expectations and concentrate on delivering the expected numbers.
A "Lexicon" is printed with the contents of the training data of the neural network, to give the visitor insight into the "knowledge" on which the decisions of the network are based.
The accuracy of the neural network therefore depends entirely on where the training data comes
from, how old and how large it is, and on the variation within the dataset.