(Image: Polynesian Sailing Map)
Polynesian sailors developed elaborate techniques for long-distance sea travel long before their European counterparts. They mapped out the elevation of the stars; they followed the paths of migrating birds; they observed sea swells and tidal patterns. The techniques were often passed down from generation to generation through the medium of song. They are still taught to this day (in some locations). In 1976, there was a famous proof of their effectiveness when Mau Piailug, a practitioner of the techniques, steered a traditional sailing canoe nearly 3,000 miles from Hawaii to Tahiti without relying on more modern methods of navigation.
These Polynesian sailing techniques provide a perfect real-world illustration of distributed cognition theory. According to this theory, cognition is not something that takes place purely in the head. When humans want to perform cognitive tasks, they don’t simply represent and manipulate the cognition-relevant information in their brains; they also co-opt features of their environment to assist them in the performance of those tasks. In the case of the Polynesian sailors, it was the migratory patterns of birds, the movements of the sea and the elevation of the stars that assisted the performance. It was also the created objects and cultural products (e.g. songs) that they used to offload the cognitive burden and transmit the relevant knowledge down through the generations. In this manner, the performance of the cognitive task of navigation became distributed between the individual sailor and the wider environment.
Generally speaking, there are three features of the external environment that can assist in the performance of a cognitive task:
Cognitive Artifacts: Intentionally designed objects that are used in the performance of the task, e.g. a map, a calendar, an abacus, or a textbook.
Naturefacts: Natural objects, events or states of affairs that get co-opted into the performance of a cognitive task, e.g. the paths of migrating birds and the elevation of the stars.
Other Cognitive Agents: Other humans (or, possibly, robots and AI) that can perform cognitive tasks in collaboration/cooperation with one another.
I think it is important to understand how all three of these cognitive-assisters function and to appreciate some of the qualitative differences between them. One thing that distributed cognition theory enables you to do is to appreciate the complex ecology of cognition. Because cognition is spread out across the agent and its environment, the agent becomes structurally coupled to that environment. If you tamper with or alter one part of the external cognitive ecology, it can have knock-on effects elsewhere within the system, changing the kinds of cognitive task that need to be performed, and altering the costs/benefits associated with different styles of cognition (I discussed this, to some extent, in a previous post). Understanding how the different cognitive assisters function provides insight into these effects.
In the remainder of this post, I want to take a first step towards understanding the complexity of our cognitive ecology by taking a look at Richard Heersmink’s proposed taxonomy of cognitive artifacts. This taxonomy gives us some insight into one of the three relevant features of our cognitive ecology (cognitive artifacts) and enables us to appreciate how this feature works and the different possible forms it can take.
The taxonomy itself is fairly simple to represent in graphical form. It divides all cognitive artifacts into two major families: (i) representational and (ii) ecological. It then breaks these major families down into a number of sub-types. These sub-types are labelled using a somewhat esoteric conceptual vocabulary. The labels make sense once you have mastered the vocabulary. The remainder of this post is dedicated to explaining how it all works.
1. Representational Cognitive Artifacts
Cognition is an informational activity. We perform cognitive tasks by acquiring, manipulating, organising and communicating information. Consequently, cognitive artifacts are able to assist in the performance of cognitive tasks precisely because they have certain informational properties. As Heersmink puts it, the functional properties of these artifacts supervene on their informational properties. One of the most obvious things a cognitive artifact can do is represent information in different forms.
‘Representation’ is a somewhat subtle concept. Heersmink adopts C.S. Peirce’s classic analysis. This holds that representation is a triadic relation between an object, a sign and an interpreter. The object is the part of the world that the sign is taken to represent, the sign is that which represents the world, and the interpreter is the one who determines the relation between the sign and the object. To use a simple example, suppose there is a portrait of you hanging on the wall. The portrait is the sign; it represents the object (in this case you); and you are the interpreter. The key thing about the sign is that it stands in for something else, namely the represented object. Signs can represent objects in different ways. Some forms of representation are straightforward: the sign simply looks like the object. Other forms of representation are more abstract.
Heersmink argues that there are three main forms of representation and, as a result, three main types of representational cognitive artifact. The first form of representation is iconic. An iconic representation is one that is isomorphic with or highly similar to the object it is representing. The classic example of an iconic cognitive artifact is a map. The map provides a scaled-down picture of the world. The visual imagery on the map is supposed to stand in a direct, one-to-one relation with the features in the real world. A lake is depicted as a blue blob; a forest is depicted as a mass of small green trees; a mountain range is depicted as a series of humps, coloured in different ways to represent their different heights.
The second form of representation is indexical. An indexical representation is one that is causally related to the object it is representing. The classic example of an indexical cognitive artifact would be a thermometer. The liquid within the thermometer expands when it is heated and contracts when it is cooled. This results in a change in the reading on the temperature gauge. This means there is a direct causal relationship between the information represented on the gauge and the actual temperature in the real world.
The third form of representation is symbolic. A symbolic representation is one that is neither iconic nor indexical. There is no discernible relationship between the sign and the object. The form that the sign takes is arbitrary and people simply agree (by social convention) that it represents a particular object or set of objects. Written language is the classic example of a symbolic cognitive artifact. The shapes of letters and the order in which they are presented bear no direct causal or isomorphic relationship to the objects they describe or name (pictographic or ideographic languages are different). The word ‘cat’, for example, bears no physical similarity to an actual cat. There is nothing about those letters that would tell you that they represented a cat. You simply have to learn the conventions to understand the representations.
The different forms of representation may be combined in any one cognitive artifact. For example, although maps are primarily iconic in nature, they often include symbolic elements such as place-names or numbers representing elevation or distance.
2. Ecological Cognitive Artifacts
The other family of cognitive artifacts are ecological in nature. This is a more difficult concept to explain. The gist of the idea is that some artifacts don’t merely provide representations of cognition-relevant information; rather, they provide actual forums in which information can be stored and manipulated. The favourite example of this — one originally posed by the distributed cognition pioneer David Kirsh — is the game of Tetris. For those who are not familiar, Tetris is a game in which you must manipulate differently shaped ‘bricks’ (technically known as ‘zoids’) into sockets or slots at the bottom of the game screen so that they form a continuous line of zoids. Although you could, in theory, play the game by mentally rotating the zoids in your head, and then deciding how to move them on the game screen, this is not the most effective way to play the game. The most effective way to play the game is simply to rotate the shapes on the screen and see how they will best fit into the wall forming at the bottom of the screen. In this way, the game creates an environment in which the cognition-relevant manipulation of information is performed directly. The artifact is thus its own cognitive ecology.
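Kirsh’s point can be made concrete with a toy sketch (my own illustration, not taken from Kirsh or Heersmink): rotating a zoid is a trivial grid transform that the game performs on screen, sparing the player from performing it in imagination.

```python
# Toy illustration (not from the original papers): a Tetris 'zoid' as a
# grid of 0s and 1s, and the clockwise rotation the game screen
# performs externally so the player never has to do it mentally.
def rotate_cw(zoid):
    """Rotate a zoid (a list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*zoid[::-1])]

s_zoid = [
    [0, 1, 1],
    [1, 1, 0],
]

# Print the zoid in each orientation so it can simply be *seen*.
for grid in (s_zoid, rotate_cw(s_zoid)):
    for row in grid:
        print("".join("#" if cell else "." for cell in row))
    print()
```

Seeing each orientation printed out, rather than simulating it in the head, is exactly the kind of externalised manipulation the game screen provides.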
Heersmink argues that there are two main types of ecological cognitive artifact. The first is the spatial ecological artifact. This is any artifact that stores information in its spatial structure. The idea behind it is that we encode cognition-relevant information into our social spaces, thereby obviating the need to store that information in our heads. A simple example would be the way in which we organise clothes into piles in order to keep track of which clothes have been washed, which need to be washed, which have been dried, and which need to be ironed. The piles, and their distribution across physical space, store the cognition-relevant information. Heersmink points out that the spaces in which we encode information need not be physical/real-world spaces. They can also be virtual, e.g. the virtual ‘desktop’ on your computer or phone screen.
The other kind of ecological cognitive artifact is the structural artifact. I don’t know if this is the best name for it, but the idea is that some artifacts don’t simply encode information into physical or virtual space; they also provide forums in which that information can be manipulated, reorganised and computed. The Tetris game screen is an example: it provides a virtual space in which zoids can be rearranged and rotated. Another example would be Scrabble tiles: constantly reorganising the tiles into different pairs or triplets makes it easier to spot words. The humble pen and paper can also, arguably, be used to create structures in which information can be manipulated and reorganised (e.g. writing out the available letters and spaces when trying to solve a crossword clue).
This, then, is Heersmink’s taxonomy of cognitive artifacts. One thing that is noticeable about it (and this is a feature, not a bug) is that it focuses on the properties of the artifacts themselves, not on how humans use them. It is, thus, an artifact-centred taxonomy, not an anthropocentric one. Also, the taxonomy does not divide the world of cognitive artifacts into a set of jointly exhaustive and mutually exclusive categories. As is clear from the descriptions, particular artifacts can sit within several of the categories at one time.
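The non-exclusivity of the categories can be made vivid with a minimal sketch (the category labels are Heersmink’s; the encoding and the example classifications are my own illustration), in which each artifact maps to a set of sub-types and so can span both families at once.

```python
# Sketch encoding of the taxonomy. Labels come from Heersmink; the data
# structure and the example classifications are illustrative assumptions.
TAXONOMY = {
    "representational": {"iconic", "indexical", "symbolic"},
    "ecological": {"spatial", "structural"},
}

# Categories are not mutually exclusive: each artifact maps to a *set*
# of sub-types, and may belong to both families.
ARTIFACTS = {
    "map": {"iconic", "symbolic"},            # imagery plus place-names
    "thermometer": {"indexical"},
    "laundry piles": {"spatial"},
    "pen and paper": {"symbolic", "structural"},
}

def families(artifact):
    """Return the families an artifact belongs to via its sub-types."""
    subtypes = ARTIFACTS[artifact]
    return {fam for fam, subs in TAXONOMY.items() if subtypes & subs}

print(families("pen and paper"))  # spans both families
```

Here pen and paper comes out as both representational (symbolic marks) and ecological (a structure in which information is manipulated), matching the point that the categories overlap.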
Nevertheless, I think the taxonomy is a useful one. It sheds light on the different ways in which artifacts can figure in our cognitive tasks, it makes us more sensitive to the rich panoply of cognitive artifacts we encounter in our everyday lives, and it can shed light on the propensity of these artifacts to enhance our cognitive performance. For example, symbolic cognitive artifacts clearly have a higher cognitive burden associated with them. The user must learn the conventions that determine the meaning of the representations before they can effectively use the artifact. At the same time, the symbolic representations probably allow for more complex and abstract cognitive operations to be performed. If we relied purely on iconic forms of representation we would probably never have generated the rich set of concepts and theories that litter our cognitive landscapes.