
Sunday, September 18, 2016

Competitive Cognitive Artifacts and the Demise of Humanity: A Philosophical Analysis




David Krakauer seems like an interesting guy. He is the president of the Santa Fe Institute in New Mexico, a complexity scientist and evolutionary theorist with a noticeable interest in artificial intelligence and technology. I first encountered his work — as many recently did — via Sam Harris’s podcast. On the podcast, he articulated some concerns he has about the development of artificial intelligence, concerns he also set out in a recent (and short) article for the online magazine Nautilus.

Krakauer’s concerns are of interest to me. They echo the concerns of others like Nicholas Carr and Evan Selinger (both of whom I have written about before). But Krakauer expresses his concerns using an interesting framework for thinking about the different kinds of cognitive artifact humans have created over the course of history. In essence, he argues that cognitive artifacts come in two flavours: complementary and competitive. We are creating more and more competitive cognitive artifacts (i.e. AI), and he thinks this could be a bad thing.

What I hope to do in this article is examine this framework in more detail, explaining why I think it might be useful and where it has some shortcomings; then I want to reconstruct Krakauer’s argument against competitive cognitive artifacts and subject it to critical scrutiny. In doing so, I hope to highlight the similarities between Krakauer’s argument and the others mentioned above. This is important because arguments of this type are incredibly common in popular debates about technology and are, I believe, often misunderstood.


1. Complementary and Competitive Cognitive Artifacts
Krakauer takes his cue from Donald Norman’s 1991 paper ‘Cognitive Artifacts’. This paper starts by noting that one of the distinctive traits of human beings is that they can ‘modify the environment in which they live through the creation of artifacts’ (Norman 1991, quoting Cole 1990). When I want to dig a hole, I use a spade. The spade is an artifact that allows me to change my surrounding environment. It amplifies my physical capacities. Cognitive artifacts are artifacts that ‘maintain, display or operate upon information in order to serve a representational function’. A spade would not count as a cognitive artifact under this definition (though the activity one performs with the spade is clearly cognitively mediated) but much contemporary technology does.

Indeed, one of Norman’s main contentions is that cognitive artifacts are ubiquitous. Many of the cognitive tasks we perform on a daily basis are mediated through them. Paper and pen, map and compass, abacus and bead: these are all examples of cognitive artifacts. All digital information technologies can be classified as such: they operate upon information and create representations (interfaces) that we then use to interact with and understand the world. The computer on which I type these words is a classic example. I could not do my job — nor, I suspect, could you — without the advantages that these cognitive artifacts bring.

But there are different kinds of cognitive artifact. Contrast the abacus with a digital calculator. Very few people use abaci these days, though they are still common in some cultures. They are external scaffolds that allow human beings to perform simple arithmetical operations. Sliding beads along a wireframe, in different directions, with upper and lower decks used to identify orders of magnitude, can enable you to add, subtract, multiply, divide and so forth. Expert abacus users can often impress us with their computational abilities. In some cases they don’t even need the physical abacus. They can recreate its structure, virtually, in their minds and perform the same computations at speed. The artifact represents an algorithm to them through its interface — i.e. a ruleset for making something complex quite simple — and they can incorporate that algorithm into their own mental worlds.
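To see what it means for the algorithm to live in the representation, here is a minimal sketch of bead arithmetic (the digit-list representation and the abacus_add helper are my own illustration, not anything drawn from Krakauer or Norman):

```python
def abacus_add(a_digits, b_digits):
    """Add two numbers held as lists of digits, least significant rod
    first, carrying a bead to the next rod whenever a column overflows,
    just as an abacus user would."""
    rods = [0] * (max(len(a_digits), len(b_digits)) + 1)
    for i, d in enumerate(a_digits):
        rods[i] += d
    for i, d in enumerate(b_digits):
        rods[i] += d
    for i in range(len(rods) - 1):
        if rods[i] >= 10:        # the column overflows:
            rods[i] -= 10        # clear ten beads on this rod
            rods[i + 1] += 1     # and push one bead on the next rod
    return rods

print(abacus_add([7, 4], [5, 8]))   # 47 + 85 = 132, read as rods [2, 3, 1]
```

Every step an expert performs in their head corresponds to a visible manipulation of the rods, which is why the procedure can eventually be internalised.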

The digital calculator is rather different. It also helps us to perform arithmetical operations (and other kinds of mathematical operation). It thereby amplifies our mathematical ability. A human being with a calculator could tell you what 1,237 x 456 was in a very short period of time. But if you took away the calculator the human probably wouldn’t be able to do the same thing on their own. The calculator works on an algorithmic basis, but the representation of the algorithms is hidden beneath the user interface. If you take away the calculator, the human cannot recreate — re-represent — the algorithm inside their own minds. There is no virtual analogue of the artifact.
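The same contrast can be put in code. In the sketch below (both function names are my own invention), the opaque call stands in for the calculator, while the explicit long multiplication is the kind of procedure a person could internalise:

```python
def multiply_opaque(a, b):
    return a * b              # the 'calculator': an answer, no visible method

def multiply_schoolbook(a, b):
    """Long multiplication made explicit: take one digit of b at a time,
    scale by its place value, and sum the partial products."""
    total, place = 0, 1
    while b > 0:
        b, digit = divmod(b, 10)
        total += a * digit * place
        place *= 10
    return total

print(multiply_opaque(1237, 456))      # 564072
print(multiply_schoolbook(1237, 456))  # 564072, same answer, visible steps
```

Both return the same answer; the difference is that only the second exposes a representation of the algorithm that a user could take away with them.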


The difference between the abacus and the calculator is the difference between what Krakauer calls complementary and competitive cognitive artifacts. In the article I read, he isn’t terribly precise about the definitions of these concepts. Here’s my attempt to define them:

Complementary Cognitive Artifacts: These are artifacts that complement human intelligence in such a way that their use amplifies and improves our ability to perform cognitive tasks; moreover, once the user has mastered the physical artifact, they can use a virtual/mental equivalent to perform the same cognitive task at a similar level of skill, e.g. an abacus.

Competitive Cognitive Artifacts: These are artifacts that amplify and improve our ability to perform cognitive tasks while we have use of the artifact, but that leave us no better (and possibly worse) at performing the cognitive task once the artifact is taken away, e.g. a digital calculator.


Put another way, Krakauer says that complementary cognitive artifacts are teachers whereas competitive cognitive artifacts are serfs (for now anyway). When we use competitive artifacts, they improve upon (i.e. compete with) an aspect of our cognition. We use them as tools (or slaves) to perform tasks in which we are interested; but then we become dependent on them because they are better than us. We don’t work with them to improve our own abilities.

Here’s where I must enter my first objection. I find the distinction Krakauer draws between these two categories both interesting and useful. He is clearly getting at something true: there are different kinds of cognitive artifact, and they affect how we perform cognitive tasks in different ways. But the binary distinction seems simplistic, and the way in which Krakauer characterises complementary cognitive artifacts seems limiting. I suspect there is really a spectrum of cognitive artifacts out there, ranging from ones that genuinely improve or enhance our internal cognitive abilities at one end to ones that genuinely compete with and replace them at the other.

But if we are going to stick with a more rigid classification system, then I think we should further subdivide the ‘complementary’ category into two sub-types. I don’t have catchy names for these sub-types, but the distinction I wish to make can be captured by referring to ‘training wheels’-like cognitive artifacts and ‘truly complementary’ cognitive artifacts. The kinds of complementary artifact used in Krakauer’s discussion are of the former type. Remember when you learned to ride a bike. Like most people, you probably found it difficult to balance. Your parents (or whoever) would have attached training wheels to your bike initially as a balance aid. Over time, as you grew more adept at the physical activity of cycling, the training wheels would have been removed and you would eventually be able to balance without them. Krakauer’s reference to cognitive artifacts that can eventually be replaced by a virtual/mental equivalent strikes me as analogous. The physical artifact is like a set of training wheels; the adept user doesn’t need them.

But is there not a separate category of truly complementary artifacts? Ones that can’t simply be taken away or replaced by mental simulacra, and don’t compete with or replace human cognition? In other words, are there not cognitive artifacts with which we are genuinely symbiotic? I think a notepad and pen falls into this category for me. I could, of course, think purely ‘in my head’, but I am so much better at doing it with a notepad and pen. I can scribble and capture ideas, draw out conceptual relationships, and map arguments using these humble technologies. I would not be as good at thinking without these artifacts; but the artifacts don’t replace or compete with me.




2. The Case Against Competitive Cognitive Artifacts
I said at the outset that this had something to do with fears about AI and modern technology. So far the examples have been of a less sophisticated type. But you can probably imagine how Krakauer’s argument develops from here.

Artificial intelligences (narrow, not broad) are the fastest-growing example of competitive cognitive artifacts. The navigational routing algorithms used by Google Maps; the purchase recommendation systems used by Netflix and Amazon; the automated messaging apps I covered in my conversation with Evan Selinger: all these systems perform cognitive tasks on our behalf in a competitive way. As these systems grow in scope and utility, we will end up living in a world where things are done for us, not by us.
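To make the worry concrete, here is a toy version of the kind of algorithmic recommender at issue (the ratings data, the similarity rule and the recommend helper are all invented for illustration; no real system works exactly like this):

```python
ratings = {
    "alice": {"Solaris": 5, "Alien": 4, "Amelie": 1},
    "bob":   {"Solaris": 5, "Alien": 5, "Brazil": 4},
    "carol": {"Solaris": 1, "Amelie": 5, "Brazil": 5},
}

def recommend(user, k=1):
    """Rank items the user hasn't seen, weighted by the ratings of
    users with similar taste on the items they share."""
    mine = ratings[user]
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        shared = set(mine) & set(theirs)
        if not shared:
            continue
        # Crude similarity: inverse of the mean absolute rating gap.
        gap = sum(abs(mine[i] - theirs[i]) for i in shared) / len(shared)
        sim = 1 / (1 + gap)
        for item, rating in theirs.items():
            if item not in mine:
                scores[item] = scores.get(item, 0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))   # ['Brazil'] -- the judgment is exercised for us
```

The point is not the particular scoring rule; it is that the ranking, and hence the choice, happens entirely behind the interface. This troubles Krakauer: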

We are in the middle of a battle of artificial intelligences. It is not HAL, an autonomous intelligence and a perfected mind, that I fear but an aggressive App, imperfect and partial, that diminishes human autonomy. It is prosthetic integration with the latter — as in the case of a GPS App that assumes the role of the navigational sense, or a health tracker that takes over decision-making when it comes to choosing items from a menu — that concerns me.
(Krakauer 2016)

He continues by drawing an analogy with the story of the Lotus Eaters from Homer’s The Odyssey:

In Homer’s The Odyssey, Odysseus’s ship finds shelter from a storm on the land of the lotus eaters. Some crew members go ashore and eat the honey-sweet lotus, “which was so delicious that those [who ate it] left off caring about home, and did not even want to go back and say what happened to them.” Although the crewmen wept bitterly, Odysseus reports, “I forced them back to the ships…Then I told the rest to go on board at once, lest any of them should taste of the lotus and leave off wanting to get home.” In our own times, it is the seductive taste of the algorithmic recommender system that saps our ability to explore options and exercise judgment. If we don’t exercise the wise counsel of Odysseus, our future won’t be the dystopia of Terminator but the pathetic death of the Lotus Eaters. 
(Krakauer 2016)

This is evocative stuff. But the argument underlying it all is a little opaque. The basic idea appears to work like this:


  • (1) It is good (for us) to create and use complementary cognitive artifacts; it is bad (or could be bad) to create and use competitive cognitive artifacts.
  • (2) We are creating more and more competitive cognitive artifacts.
  • (3) Therefore, we are creating a world that will be (or could be) bad for us.


This is vague, but it has to be since the source material is vague. Clearly, Krakauer is concerned about the creation of competitive cognitive artifacts. But why? Their badness (or potential badness) lies in the way they sap us of cognitive ability and leave us no smarter without them. In other words, their badness lies in our dependency on them, which affects our agency and responsibility (our autonomy). What’s not clear from Krakauer’s account is whether this is bad in and of itself, or whether it only becomes bad once the volume and extent of the cognitive competition crosses some threshold. For reasons I get into below, I assume it must be the latter rather than the former, because in certain cases it seems like we should be happy to replace ourselves with artifacts.

Now that the argument is laid bare, its similarities with other popular anti-AI and anti-automation arguments become obvious. Nicholas Carr’s main argument in his book The Glass Cage concerns the degenerative impact of automation on our cognitive capacities. Carr worries that over-reliance on automating, smart technologies will reduce our ability to perform certain kinds of cognitive task (including complex problem-solving). Evan Selinger’s anti-outsourcing argument is similar: it concerns the ethical impact of outsourcing certain kinds of cognitive labour to a machine (though Selinger’s argument is more subtle and more interesting for reasons I explore in a moment).
Krakauer’s argument is just another instance of this objection, dressed up in a different conceptual frame.

Is it any good?


3. The Changing Cognitive Ecology Problem
In a way, Krakauer’s argument is as old as Western Civilisation itself. In the Platonic dialogue The Phaedrus, Plato’s Socrates laments the invention of writing and worries about the cognitive effects that will result from the loss of oral culture:

For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.

Seems quaint and old-fashioned, doesn’t it? Critics of anti-automation arguments always point to this passage. They think it highlights how misguided and simplistic Krakauer’s (or Carr’s or Selinger’s) views are. No one now looks back and laments the invention of writing. Indeed, I think we can all agree that it has been enormously beneficial. It is far better at preserving culture and transmitting collective wisdom than oral traditions ever were. I think I can safely say that having access to high-quality written materials makes me a smarter, better person. I wouldn’t have it any other way (though I acknowledge some books have had a negative impact on society). I gain by having access to so much information: it enables me to understand far more of the world and generate new and hopefully interesting ideas by combining bits and pieces of what I have read. Furthermore, books didn’t really undermine memory in the way that Socrates imagined. They simply changed what it was important to remember. There were still (until recently, anyway) pressures to remember other kinds of information.

The problem with Krakauer’s view is deep and important. It is that competitive cognitive artifacts don’t just replace or undermine one cognitive task. They change the cognitive ecology, i.e. the social and physical environment in which we must perform cognitive tasks. This is something that Donald Norman acknowledged in his 1991 paper on cognitive artifacts. There, his major claim was that such artifacts neither amplify nor replace the human mind; rather, they change what the human mind needs to do. Think about the humble to-do list. This is an artifact that helps you to remember. But the cognitive act of remembering with a to-do list is very different from the cognitive act of remembering without one. With the to-do list, three separate tasks must be performed: creating the list, storing it, and looking it up when need be. Without the list, you just search your mind for the information (perhaps through the use of associative cues). The same net result is produced, but the ecology of tasks has changed. These changes are not something that can be evaluated in a simple or straightforward manner. The process of changing the cognitive ecology may remove or eliminate an old cognitive task, but doing so can bring with it many benefits. It may enable us to focus our cognitive energies on other tasks that are more worthy uses of our time and effort. This is what happened with the invention of writing. The transmission of information via the written word meant we no longer needed to dedicate precious time and effort to the simple act of remembering that information. We could dedicate time and effort to thinking up new ways in which that information could be utilised.
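Norman’s point can be made concrete with a toy sketch (both versions below are my own illustration of the changed task ecology, not anything from his paper):

```python
import json

# Remembering without the artifact: one internal task (encode, then recall).
memory = set()

def remember_unaided(item):
    memory.add(item)

def recall_unaided():
    return memory                 # search the mind (here, a set) directly

# Remembering with a to-do list: the same net result, but a different
# ecology of tasks -- create the entry, store the list, look it up.
def create_entry(todo, item):
    todo.append(item)             # task 1: write the item down

def store(todo, path="todo.json"):
    with open(path, "w") as f:
        json.dump(todo, f)        # task 2: keep the list somewhere safe

def look_up(path="todo.json"):
    with open(path) as f:
        return json.load(f)       # task 3: consult the list when need be
```

Same output, different decomposition: nothing in the comparison says the three-task version is worse; it only shows that the mind’s job has changed.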

The Canadian fantasy author R. Scott Bakker describes the ‘cognitive ecology’ problem well in his recent response to Krakauer. As he puts it:

What Plato could not foresee, of course, was the way writing would fundamentally transform human cognitive ecology. He was a relic of the preliterate age, just as Krakauer (like us) is a relic of the pre-AI age. The problem for Krakauer, then, is that the distinction between complementary and competitive cognitive artifacts—the difference between things like mnemonics and things like writing—possesses no reliable evaluative force. All tools involve trade-offs. Since Krakauer has no way of knowing how AI will transform our cognitive ecology he has no way of evaluating the kinds of trade-offs they will force upon us. 
(Bakker 2016)

And therein lies the rub for Krakauer et al: why should we fear the growth of competitive cognitive artifacts when their effects on our cognitive ecology are uncertain, and when similar technologies have, in the past, been beneficial?

It is a fair point, but I think the cognitive ecology objection has its limitations too. It may highlight problems with the generalised version of the anti-automation argument that Krakauer seems to be making, but it fares less well against more specific versions of the argument. For instance, Evan Selinger’s objections to technological outsourcing tend to be much more nuanced and focused. I covered them in detail before, so I won’t do so again here. In essence, Selinger argues that certain types of competitive cognitive artifact might be problematic insofar as the value of certain activities may come from the fact that we are present, conscious performers of those activities. If we are no longer present, conscious performers of the activities — if we outsource our performance to an artifact — then we may denude them of their value. Good examples of this include affective tasks we perform in our interpersonal relationships (e.g. messaging someone to remind them how much you love them) as well as the performative aspects of personal virtues (e.g. generosity and courage). By tailoring the argument to specific cases, you end up with something more powerful.

In addition to this, I worry about the naive use of historical examples to deflate concerns about present-day technologies. The notion that you can simply point to the Phaedrus, laugh at Socrates’ quaint preliterate views, and then warmly embrace the current wave of competitive cognitive artifacts seems wrong to me. There may be crucial differences between what we are currently doing with technology and what has happened in the past. Just because everything worked out before doesn’t mean everything will work out now. This is something that has been well-thrashed out in the debate about technological unemployment (proponents of which are frequently ridiculed for believing that this time it will be different). The scope and extent of the changes to our cognitive ecology may be genuinely unprecedented (it certainly seems that way). The assumption behind the cognitive ecology objection is that humans will end up occupying a new and equally rewarding niche in the new cognitive ecology, but who is to say this is true? If technology is better than humans in every cognitive domain, there may be no niches to find. Perhaps we are like flightless birds on some cognitive archipelago: we have no natural predator right now but things could change in the not-too-distant future.

Finally, I worry about the uncertainty involved in the coming transitions. We must make decisions in the face of uncertainty — of course we must. But the notion that we should embrace rampant AI despite (or maybe because of) that uncertainty seems wrong to me. Commitment to technological change for its own sake seems just as naive as reactionary conservatism against it. There must be a sensible middle ground where we can think reasonably and rationally about the evaluative trade-offs that might result from the use of competitive cognitive artifacts, weigh them up as best we can, and proceed with hope and optimism. Throwing ourselves off the cliff in the hopes of finding some new cognitive niche doesn’t feel like the right way to go about it.
