
Wednesday, September 21, 2016

Pushing Humans off the Loop: Automation and the Unsustainability Problem



There is a famous story about an encounter between Henry Ford II (CEO of the Ford Motor Company) and Walter Reuther (head of the United Automobile Workers Union). Ford was showing Reuther around his factory, proudly displaying all the new automating technologies he had introduced to replace human workers. Ford gloated, asking Reuther ‘How are you going to get those robots to pay union dues?’. Reuther responded with equal glee ‘Henry, how are you going to get them to buy your cars?’.

The story is probably apocryphal, but it’s too good a tale to let truth get in the way. The story reveals a common fear about technology and the impact it will have on human society. The fear is something I call the ‘unsustainability problem’. The idea is that if certain trends in automation continue, and humans are pushed off more and more productive/decision-making loops, the original rationale for those ‘loops’ will disappear and the whole system will start to unravel. Is this a plausible fear? Is it something we should take seriously?

I want to investigate those questions over the remainder of this post. I do so by first identifying the structure of the problem and outlining three examples. I then set out the argument from unsustainability that seems to follow from those examples. I close by considering potential objections and replies to that argument. My goal is not to defend any particular point of view. Instead — and as part of my ongoing work — I want to identify and catalogue a popular objection/concern to the development of technology and highlight its similarities to other popular objections.

[Note: This is very much an idea or notion that I thought might be interesting. After writing it up, I'm not sure that it is. In particular, I'm not sure that the examples used are sufficiently similar to be analysed in the same terms. But maybe they are. Feedback is welcome]


1. Pushing Humans off the Loop
Let’s start with some abstraction. Many human social systems are characterised by reciprocal relationships between groups of agents occupying different roles. Take the relationship between producers (or suppliers) and consumers. This is the relationship at the heart of the dispute between Ford and Reuther. Producers make or supply goods and services to consumers; consumers purchase and make use of the goods and services provided by the producers. The one cannot exist without the other. The whole rationale behind the production and supply of goods and services is that there is a ready and willing cadre of consumers who want those goods and services. That’s the only way that the producers will make money. But it’s not just that the producers need the consumers to survive: the consumers also need the producers. Or, rather, they need themselves to be involved in production, even if only indirectly, in order to earn an income that enables them to be consumers. I have tried to illustrate this in the diagram below.



The problem alluded to in the story about Ford and Reuther is that this loop is not sustainable if there is too much automation. If the entire productive half of the loop is taken over by robots, then where will the consumers get the income they need to keep the system going? (Hold off on any answers you might have for now — I’ll get to some possibilities later)

When most people think about the unsustainability problem, the production-consumption relationship is the one they usually have in mind. And when they think about that relationship, they usually only focus on the automation of the productive half of the relationship. But this is to ignore another interesting trend in automation: the trend towards automating the entire loop, i.e. production and consumption. How is this happening? The answer lies in the growth of the internet of things and the rise of ‘ambient payments’. Smart devices are capable of communicating and transacting with one another. The refrigerator in your home could make a purchase from the robot personal shopper in your local store. You might be the ultimate beneficiary of the transaction but you have been pushed off the primary economic loop: you are neither the direct producer nor the direct consumer.
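To make the idea of an automated purchase loop concrete, here is a minimal sketch in Python. Everything in it (the device classes, method names and prices) is invented for illustration; it does not correspond to any real IoT or payment API:

```python
# Toy sketch of an 'ambient payment': a smart fridge restocks itself by
# transacting with a store's automated agent. All classes, names and prices
# are invented for illustration; this is not a real IoT or payment API.

class PaymentToken:
    """Stands in for the owner's pre-authorised payment credentials."""
    def __init__(self, balance):
        self.balance = balance

    def authorise(self, amount):
        if amount <= self.balance:
            self.balance -= amount
            return True
        return False

class StoreAgent:
    """The 'robot personal shopper' on the supply side of the loop."""
    def __init__(self, prices):
        self.prices = prices

    def sell(self, item, quantity, token):
        cost = self.prices[item] * quantity
        return quantity if token.authorise(cost) else 0

class SmartFridge:
    """The consuming device: it notices shortages and buys without asking."""
    def __init__(self, stock, threshold, token):
        self.stock, self.threshold, self.token = stock, threshold, token

    def restock(self, store):
        for item, qty in self.stock.items():
            if qty < self.threshold:
                self.stock[item] += store.sell(item, self.threshold - qty, self.token)

# The human owner ultimately drinks the milk but appears nowhere in the loop.
fridge = SmartFridge({"milk": 0}, threshold=2, token=PaymentToken(10.0))
fridge.restock(StoreAgent({"milk": 1.20}))
print(fridge.stock)  # {'milk': 2}
```

The point of the sketch is structural: the purchase decision, the payment and the supply all happen device-to-device, with the human reduced to a background beneficiary.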

It’s my contention that it is this trend towards total automation that is the really interesting phenomenon. And it’s not just happening in the production-consumption loop either. It is happening in other loops as well. Let me give just two examples: the automation of language production and interpretation in the speaker-listener loop, and the automation of governance in the governor-governed loop.

The production and interpretation of language takes place in a loop. The ‘speaker’ produces language with which he or she hopes to cause some effect in the mind of the ‘listener’ — without the presumption of a listener there is very little point to the act. Likewise, the ‘listener’ interprets the language based on the presumption that there is a speaker who wishes to be understood, and based on what they have learned about the meaning of language from living in a community of other speakers and listeners. Language lives and breathes in a vibrant and interconnected community of speakers and listeners, with individuals often flitting back and forth between the roles. So there is, once again, a symbiotic relationship between the two sides of the loop.



Could the production and interpretation of language be automated? It is already happening in the digital advertising economy. This is a thesis that Pip Thornton (the research assistant on the Algocracy and Transhumanism Project that I am running) has developed in her work. It is well known that Google makes its money from advertising. What is perhaps less well-known is that Google does this by commodifying language. Google auctions keywords to advertisers. Different words are assigned different values based on how likely people are to search for them in a given advertising area (space and time). The more popular the word in the search engine, the higher the auction value. Advertisers pay Google for the right to use the popular words in their adverts and have them displayed alongside user searches for those terms.
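Google's real ad auction is more elaborate (it weighs bids against quality scores, among other things), but the core mechanism just described can be sketched as a toy second-price keyword auction. All the advertiser names and figures below are invented:

```python
# Toy second-price keyword auction. Real keyword auctions (Google's included)
# also weigh ad quality and position; this sketch only shows how demand for
# a popular search term translates into the price advertisers pay for it.

def run_auction(bids):
    """bids: {advertiser: bid per click}. Winner pays the runner-up's bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# A popular keyword attracts many bidders and high bids; an unpopular one
# attracts few, so the 'value' of a word tracks search demand.
winner, price = run_auction({"AdCo": 5.40, "BrandX": 4.90, "CheapAds": 1.10})
print(winner, "wins the keyword 'insurance' and pays", price, "per click")
```

This is what it means to say the word itself is commodified: the price attaches to the term, not to any particular piece of copy.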

This might sound relatively innocuous and uninteresting at first glance. Language has always been commodified and advertisers have always, to some extent, paid for ‘good copy’. The only difference in this instance is that it is Google’s PageRank algorithm that determines what counts as ‘good copy’.
Where the phenomenon gets interesting is when you start to realise that this has resulted in an entire linguistic economy in which both the production and interpretation of language are slowly being taken over by algorithms. The PageRank algorithm functions as the ultimate interpreter. Humans adjust their use of language to match the incentives set by that algorithm. But humans don’t do this quickly enough. An array of bots is currently at work stuffing webpages with algorithmically produced language and clicking on links in the hope of tricking the ranking system. In very many instances neither the producers nor the interpreters of advertising copy are humans. The internet is filled with oddly produced, barely comprehensible webpages whose linguistic content has been tailored to the preferences of machines. Human web-surfers often find themselves in the role of archaeologists stumbling upon these odd linguistic tombs.

Automation is also taking place in the governor-governed relationship. This is the relationship that interests me most and is the centrepiece of the project I’m currently running. I define a governance system as any system that tries to nudge, manipulate, push, pull, incentivise (etc.) human behaviour. This is a broad definition and could technically subsume the two relationships previously described. More narrowly, I am interested in state-run governance systems, such as systems of democratic or bureaucratic control. In these systems, one group of agents (the governors) sets down rules and regulations that must be followed by the others (the governed). It’s less easy to describe this as a reciprocal relationship. In many historical cases, the governors are rigidly separated from the governed and by necessity have significant power over them. But there is still something reciprocal about it. No one — not even the most brutal dictator — can govern for long without the acquiescence of the governed. The governed must perceive the system to be legitimate in order for it to work. In modern democratic systems this is often taken to mean that they should play some role in determining the content of the rules by which they are governed.



I have talked to a lot of people about this over the years. To many, it seems like the governor-governed relationship is intrinsically humanistic in nature. It is very difficult for them to imagine a governance system in which either or both roles becomes fully automated. Surely, they say, humans will always retain some input into the rules by which they are governed? And surely humans will always be the beneficiaries of these rules?

Maybe, but even here we see the creeping rise of automation. Already, there are algorithms that collect, mine, classify and make decisions on data produced by us as subjects of governance. This leads to more and more automation on the governor-side of the loop. But the rise of smart devices and machines could also facilitate the automation of the governed side of the loop. The most interesting example of this comes in the shape of blockchain governance systems. The blockchain provides a way for people to create smart contracts. These are automated systems for encoding and enforcing promises/commitments, e.g. the selling of a derivative at some future point in time. The subjects of these smart contracts are not people — at least not directly. Smart contracts are machine-to-machine promises. A signal that is recorded and broadcast from one device is verified via a distributed network of other computing devices. This verification triggers some action via another device (e.g. the release of money or property).
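As a structural illustration (and only that; this is not how any particular blockchain platform implements contracts), a smart contract can be sketched as a conditional action that fires automatically once a quorum of independent verifiers agrees that a signal occurred:

```python
# Toy smart-contract escrow: a broadcast signal is checked by a quorum of
# independent validators, and successful verification automatically triggers
# an action. A structural sketch only, not any real blockchain's mechanics.

class SmartContract:
    def __init__(self, validators, quorum, on_verified):
        self.validators = validators  # stand-ins for network nodes
        self.quorum = quorum          # how many must agree
        self.on_verified = on_verified
        self.executed = False

    def submit_signal(self, signal):
        votes = sum(1 for validate in self.validators if validate(signal))
        if votes >= self.quorum and not self.executed:
            self.executed = True
            self.on_verified()        # e.g. release money or property

# Example: release funds once a delivery signal is verified by 2 of 3 nodes.
def validate(signal):
    return signal.get("event") == "delivery_confirmed"

contract = SmartContract([validate, validate, validate], quorum=2,
                         on_verified=lambda: print("funds released"))
contract.submit_signal({"event": "delivery_confirmed"})
```

Notice that no human appears anywhere in the promise, the verification or the enforcement. That is the sense in which smart contracts are machine-to-machine promises.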

As noted in other recent blog posts, blockchain-based smart contracts could provide the basis for systems of smart property (because every piece of property in the world is becoming a ‘smart’ device) and even systems of smart governance. The apotheosis of the blockchain governance ideal is the hypothetical distributed autonomous organisation (DAO), which is an artificial, self-governing agent, spread out across a distributed network of smart devices. The actions of the DAO may affect the lives of human beings, but the rules by which it operates could be entirely automated in terms of their production and implementation. Again, humans may be indirect beneficiaries of the system, but they are not the primary governors or governed. They are bystanders.


2. The Unsustainability Argument
Where will this process of automation bottom out? Can it continue indefinitely? Does it even make sense for it to continue indefinitely? To some, the trend toward total automation cannot be understood simply in terms of its causes and effects. To them, there is something much more fundamental and disconcerting going on. Total automation is a deeply puzzling phenomenon — something that cannot and should not continue to the point where humans are completely off the loop.

The Ford-Reuther story seems to highlight the problem in the clearest possible way. How can a capitalistic economy survive if there are no human producers and consumers? Surely this is self-defeating? The whole purpose of capitalism is to provide tools for distributing goods and services to the humans that need them. If that’s not what happens, then the capitalistic logic will have swallowed itself whole (yes, I know, this is something that Marxists have always argued).

I call this the unsustainability problem and it can be formulated as an argument:


  • (1) If automation trend X continues, then humans will be pushed off the loop.

  • (2) The loop is unsustainable* without human participation.

  • (3) Therefore, if automation trend X continues we will end up with something that is unsustainable*.


You’ll notice that I put a little asterisk after unsustainable. That’s deliberate. ‘Unsustainable’ in this context is not necessarily to be understood in its colloquial sense, though it can be. Unsustainable* stands for a number of possible concerns. It could be literal unsustainability, in the sense that the trend will eventually lead to some breaking point or crash point. This is common in certain positive feedback loops: consider, for example, the positive feedback loop that causes the hyperinflation of currencies. If the value of a currency inflates like it did in Weimar Germany or, more recently, Zimbabwe, then you eventually reach a point where the currency is worthless in economic transactions. People have to rely on another currency or have recourse to barter. Either way, the feedback loop is not sustainable in the long-term. But unsustainable* could have more subtle meanings. It may be that the trend is sustainable in the long-term (i.e. it could continue indefinitely), but that if it did so you would radically alter the value or meaning attached to the activities in the loop, so much so that they would seem pointless or no longer worthwhile.
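To see why such a loop has a breaking point, consider a deliberately crude simulation of the inflationary spiral. The parameters are arbitrary; nothing here models Weimar Germany or Zimbabwe specifically:

```python
# Deliberately crude positive-feedback loop: rising prices prompt more money
# printing, which pushes prices higher still. All parameters are arbitrary;
# this models the shape of the spiral, not any historical episode.

price_level = 1.0
money_supply = 1.0

for month in range(1, 25):
    money_supply *= 1.0 + 0.5 * price_level  # print money to chase prices
    price_level = money_supply               # prices track the money supply
    if price_level > 1_000:
        print(f"month {month}: currency effectively worthless")
        break
```

Each pass through the loop feeds the next, which is exactly what makes the dynamic self-undermining rather than self-sustaining.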

To give some examples, the unsustainability argument applied to the producer-consumer case might involve literal unsustainability, i.e. the concern might be that it will lead to the capitalistic system breaking down; or it might be that it will radically alter the value of that system, i.e. it might force a change in the system of private property. In the case of the speaker-listener loop, the argument might be that automation misses the point of what a language is, i.e. that a language is necessarily a form of communication between two (or more) conscious, intentional agents. If there are no conscious, intentional agents involved, then you no longer have a language. You might have some form of machine-to-machine communication, but there is no reason for that to take the form of language.


3. Should the Unsustainability Problem Concern Us?
I want to close with some simple critical reflections on the unsustainability argument. I’ll keep these fairly general.

First, I want to talk a bit more about premise (1). There are various ways in which this may be false. The simple fact that there is automation of the tasks typically associated with a given activity does not mean that humans will be pushed off the loop. As I’ve highlighted on other occasions, the ‘loops’ referred to in debates about technology are complicated and break down into multiple sub-tasks and sub-loops. Take the production side of the producer-consumer relationship. Productive processes can usually be broken down into a series of stages which often have an internal loop-like structure. If I own a business that produces some widgets, I would usually start the productive process by trying to figure out what kinds of widgets are needed in the world; I would then acquire the raw materials needed to make those widgets, develop some productive process, release the widgets to the consumers, and then learn from my mistakes/successes in order to refine and improve the process in the future. When we talk about the automation of production, there is a tendency to ignore these multiple stages. It’s rare for them all to be automated; consequently, it’s likely that humans will retain some input into the loops.

Another way of putting this point is to say that technology doesn’t replace humans; it displaces them, i.e. changes the ecology in which they operate so that they need to do new things to survive. People have been making this point for some time in the debate about technology and unemployment. The introduction of machines onto the factory floors of the Ford Motor Company didn’t obviate the need for human workers; it simply changed what kinds of human workers were needed (skilled machinists etc.). But it is important that this displacement claim is not misunderstood. It doesn’t mean that there is nothing to worry about or that the displacement won’t have profound or important consequences for the sustainability of the relevant phenomenon. The human input into the newly automated productive or consumptive processes might be minimal: very few workers might be needed to maintain production within the factory and there might be limited opportunity for humans to exercise choice or autonomy when it comes to consumer-related decisions. Humans may be involved in the loops but be reduced to relatively passive roles within them. More radically, and possibly more interestingly, the automation trends may subsume humans themselves. In other words, the humans may not be displaced by technology; they may become the technology itself.

This relates to the plausibility of premise (2). This may also be false, particularly if unsustainability is understood in its literal sense. For example, I don’t see any reason to think that the automation of language production and interpretation in online advertising cannot continue. It may prove frustrating for would-be advertisers, and it may seem odd to the humans who stand on the sidelines watching the system unfold, but the desire for advertising space and the scarcity of attention suggest to me that, if anything, there will be a doubling down on this practice in the future. This will certainly alter the activity and rob it of some of its value, but there will still be the hope that you can find someone who is paying attention to the process. The same goes for the other examples. They may prove sustainable with some changed understanding of what makes them worthwhile and how they affect their ultimate beneficiaries. The basic income guarantee, for instance, is sometimes touted as a way to keep capitalism going in the face of alleged unsustainability.

Two other points before I finish up. Everything I have said so far presumes that machines themselves should not be viewed as agents or objects of moral concern — i.e. that they cannot directly benefit from the automation of production and consumption, or governance or language. If they can — and if it is right to view them as beneficiaries — then the analysis changes somewhat. Humans are still pushed off the loop, but it makes more sense for the loops to continue with automated replacements. Finally, as I have elaborated it, the unsustainability problem is very similar to other objections to technology, including ones I have covered in the recent past. It is, in many ways, akin to the outsourcing and competitive cognitive artifacts objections that I covered here and here. All of these objections worry about the dehumanising potential of technology and the future relevance of human beings in the automated world. The differences tend to come in how they frame the concern, not in its ultimate contents.

Sunday, September 18, 2016

Competitive Cognitive Artifacts and the Demise of Humanity: A Philosophical Analysis




David Krakauer seems like an interesting guy. He is the president of the Santa Fe Institute in New Mexico, a complexity scientist and evolutionary theorist, with a noticeable interest in artificial intelligence and technology. I first encountered his work — as many recently did — via Sam Harris’s podcast. In the podcast he articulated some concerns he has about the development of artificial intelligence, concerns which he also set out in a recent (and short) article for the online magazine Nautilus.

Krakauer’s concerns are of interest to me. They echo the concerns of others like Nicholas Carr and Evan Selinger (both of whom I have written about before). But Krakauer expresses his concerns using an interesting framework for thinking about the different kinds of cognitive artifact humans have created over the course of history. In essence, he argues that cognitive artifacts come in two flavours: complementary and competitive. We are creating more and more competitive cognitive artifacts (i.e. AI), and he thinks this could be a bad thing.

What I hope to do in this article is examine this framework in more detail, explaining why I think it might be useful and where it has some shortcomings; then I want to reconstruct Krakauer’s argument against competitive cognitive artifacts and subject it to critical scrutiny. In doing so, I hope to highlight the similarities between Krakauer’s argument and the others mentioned above. I believe this is important because the argument is incredibly common in popular debates about technology and is often misunderstood.


1. Complementary and Competitive Cognitive Artifacts
Krakauer takes his cue from Donald Norman’s 1991 paper ‘Cognitive Artifacts’. This paper starts by noting that one of the distinctive traits of human beings is that they can ‘modify the environment in which they live through the creation of artifacts’ (Norman 1991, quoting Cole 1990). When I want to dig a hole, I use a spade. The spade is an artifact that allows me to change my surrounding environment. It amplifies my physical capacities. Cognitive artifacts are artifacts that ‘maintain, display or operate upon information in order to serve a representational function’. A spade would not count as a cognitive artifact under this definition (though the activity one performs with the spade is clearly cognitively mediated) but much contemporary technology does.

Indeed, one of Norman’s main contentions is that cognitive artifacts are ubiquitous. Many of the cognitive tasks we perform on a daily basis are mediated through them. Paper and pen, map and compass, abacus and bead: these are all examples of cognitive artifacts. All digital information technology can be classified as such. They all operate upon information and create representations (interfaces) that we then use to interact with and understand the world. The computer on which I type these words is a classic example. I could not do my job — nor, I suspect, could you — without the advantages that these cognitive artifacts bring.

But there are different kinds of cognitive artifact. Contrast the abacus with a digital calculator. Very few people use abaci these days, though they are still common in some cultures. They are external scaffolds that allow human beings to perform simple arithmetical operations. Sliding beads along a wireframe, in different directions, with upper and lower decks used to identify orders of magnitude, can enable you to add, subtract, multiply, divide and so forth. Expert abacus users can often impress us with their computational abilities. In some cases they don’t even need the physical abacus. They can recreate its structure, virtually, in their minds and perform the same computations at speed. The artifact represents an algorithm to them through its interface — i.e. a ruleset for making something complex quite simple — and they can incorporate that algorithm into their own mental worlds.
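The point about the abacus representing an algorithm through its interface can be made concrete. The bead movements just are digit-wise addition with carrying, which is why a practised user can run the same procedure mentally. Here is a minimal sketch, with one list entry per rod and the least significant rod first (a simplification of a real abacus, which splits each rod into upper and lower decks):

```python
# Minimal 'abacus' addition: one decimal digit per rod, least significant
# rod first. Moving the beads is just digit-wise addition with carrying,
# i.e. the algorithm the expert user eventually internalises.

def to_rods(n):
    return [int(digit) for digit in reversed(str(n))]

def abacus_add(a, b):
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        total = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(total % 10)  # beads left showing on this rod
        carry = total // 10        # push a bead onto the next rod
    if carry:
        result.append(carry)
    return result

rods = abacus_add(to_rods(347), to_rods(589))
print("".join(str(d) for d in reversed(rods)))  # 936
```

Because every step of the procedure is visible in the interface, the user can eventually discard the physical device and run the algorithm in their head.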

The digital calculator is rather different. It also helps us to perform arithmetical operations (and other kinds of mathematical operation). It thereby amplifies our mathematical ability. A human being with a calculator could tell you what 1,237 x 456 was in a very short period of time. But if you took away the calculator the human probably wouldn’t be able to do the same thing on their own. The calculator works on an algorithmic basis, but the representation of the algorithms is hidden beneath the user interface. If you take away the calculator, the human cannot recreate — re-represent — the algorithm inside their own minds. There is no virtual analogue of the artifact.


The difference between the abacus and the calculator is the difference between what Krakauer calls complementary and competitive cognitive artifacts. In the article I read, he isn’t terribly precise about the definitions of these concepts. Here’s my attempt to define them:

Complementary Cognitive Artifacts: These are artifacts that complement human intelligence in such a way that their use amplifies and improves our ability to perform cognitive tasks, and such that, once the user has mastered the physical artifact, they can use a virtual/mental equivalent to perform the same cognitive task at a similar level of skill, e.g. an abacus.

Competitive Cognitive Artifacts: These are artifacts that amplify and improve our ability to perform cognitive tasks when we have use of the artifact, but when we take away the artifact we are no better (and possibly worse) at performing the cognitive task than we were before, e.g. a digital calculator.


Put another way, Krakauer says that complementary cognitive artifacts are teachers whereas competitive cognitive artifacts are serfs (for now anyway). When we use them, they improve upon (i.e. compete with) an aspect of our cognition. We use them as tools (or slaves) to perform tasks in which we are interested; but then we become dependent on them because they are better than us. We don’t work with them to improve our own abilities.

Here’s where I must enter my first objection. I find the distinction Krakauer draws between these two categories both interesting and useful. He is clearly getting at something true: there are different kinds of cognitive artifact and they affect how we perform cognitive tasks in different ways. But the binary distinction seems simplistic, and the way in which Krakauer characterises complementary cognitive artifacts seems limiting. I suspect there is really a spectrum of different cognitive artifacts out there, ranging from ones that really improve or enhance our internal cognitive abilities at one end to ones that genuinely compete with and replace them at the other.

But if we are going to stick with a more rigid classification system, then I think we should further subdivide the ‘complementary’ category into two sub-types. I don’t have catchy names for these sub-types, but the distinction I wish to make can be captured by referring to ‘training wheels’-like cognitive artifacts and ‘truly complementary’ cognitive artifacts. The kinds of complementary artifact used in Krakauer’s discussion are of the former type. Remember when you learned to ride a bike. Like most people, you probably found it difficult to balance. Your parents (or whoever) would have attached training wheels to your bike initially as a balance aid. Over time, as you grew more adept at the physical activity of cycling, the training wheels would have been removed and you would eventually be able to balance without them. Krakauer’s reference to cognitive artifacts that can eventually be replaced by a virtual/mental equivalent strikes me as being analogous. The physical artifact is like a set of training wheels; the adept user doesn’t need them.

But is there not a separate category of truly complementary artifacts? Ones that can’t simply be taken away or replaced by mental simulacra, and don’t compete with or replace human cognition? In other words, are there not cognitive artifacts with which we are genuinely symbiotic? I think a notepad and pen falls into this category for me. I could, of course, think purely ‘in my head’, but I am so much better at doing it with a notepad and pen. I can scribble and capture ideas, draw out conceptual relationships, and map arguments using these humble technologies. I would not be as good at thinking without these artifacts; but the artifacts don’t replace or compete with me.




2. The Case Against Competitive Cognitive Artifacts
I said at the outset that this had something to do with fears about AI and modern technology. So far the examples have been of a less sophisticated type. But you can probably imagine how Krakauer’s argument develops from here.

Artificial intelligences (narrow, not broad) are the fastest growing example of competitive cognitive artifacts. The navigational routing algorithms used by Google maps; the purchase recommendation systems used by Netflix and Amazon; the automated messaging apps I covered in my conversation with Evan Selinger; all these systems perform cognitive tasks on our behalf in a competitive way. As these systems grow in scope and utility, we will end up living in a world where things are done for us not by us. This troubles Krakauer:

We are in the middle of a battle of artificial intelligences. It is not HAL, an autonomous intelligence and a perfected mind, that I fear but an aggressive App, imperfect and partial, that diminishes human autonomy. It is prosthetic integration with the latter — as in the case of a GPS App that assumes the role of the navigational sense, or a health tracker that takes over decision-making when it comes to choosing items from a menu — that concerns me. 
(Krakauer 2016)

He continues by drawing an analogy with the story of the Lotus Eaters from Homer’s The Odyssey:

In Homer’s The Odyssey, Odysseus’s ship finds shelter from a storm on the land of the lotus eaters. Some crew members go ashore and eat the honey-sweet lotus, “which was so delicious that those [who ate it] left off caring about home, and did not even want to go back and say what happened to them.” Although the crewmen wept bitterly, Odysseus reports, “I forced them back to the ships…Then I told the rest to go on board at once, lest any of them should taste of the lotus and leave off wanting to get home.” In our own times, it is the seductive taste of the algorithmic recommender system that saps our ability to explore options and exercise judgment. If we don’t exercise the wise counsel of Odysseus, our future won’t be the dystopia of Terminator but the pathetic death of the Lotus Eaters. 
(Krakauer 2016)

This is evocative stuff. But the argument underlying it all is a little opaque. The basic idea appears to work like this:


  • (1) It is good (for us) to create and use complementary cognitive artifacts; it is bad (or could be bad) to create and use competitive cognitive artifacts.
  • (2) We are creating more and more competitive cognitive artifacts.
  • (3) Therefore, we are creating a world that will be (or could be) bad for us.


This is vague, but it has to be since the source material is vague. Clearly, Krakauer is concerned about the creation of competitive cognitive artifacts. But why? Their badness (or potential badness) lies in how they sap us of cognitive ability and how they leave us no smarter without them. In other words, their badness lies in how we become too dependent on them. This affects our agency and responsibility (our autonomy). What’s not clear from Krakauer’s account is whether this is bad in and of itself, or whether it only becomes bad if the volume and extent of the cognitive competition crosses some threshold. For reasons I get into below, I assume it must be the latter rather than the former, because in certain cases it seems like we should be happy to replace ourselves with artifacts.

Now that the argument is laid bare, its similarities with other popular anti-AI and anti-automation arguments become obvious. Nicholas Carr’s main argument in his book The Glass Cage is about the degenerative impact of automation on our cognitive capacities. Carr worries that over-reliance on automating, smart technologies will reduce our ability to perform certain kinds of cognitive task (including complex problem-solving). Evan Selinger’s anti-outsourcing argument is similar. It worries about the ethical impact of outsourcing certain kinds of cognitive labour to a machine (though Selinger’s argument is more subtle and more interesting for reasons I explore in a moment).
Krakauer’s argument is just another instance of this objection, dressed up in a different conceptual frame.

Is it any good?


3. The Changing Cognitive Ecology Problem
In a way, Krakauer’s argument is as old as Western Civilisation itself. In the Platonic dialogue The Phaedrus, Plato’s Socrates laments the invention of writing and worries about the cognitive effects that will result from the loss of oral culture:

For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.

Seems quaint and old-fashioned, doesn’t it? Critics of anti-automation arguments always point to this passage. They think it highlights how misguided and simplistic Krakauer’s (or Carr’s or Selinger’s) views are. No one now looks back and laments the invention of writing. Indeed, I think we can all agree that it has been enormously beneficial. It is far better at preserving culture and transmitting collective wisdom than oral traditions ever were. I think I can safely say that having access to high-quality written materials makes me a smarter, better person. I wouldn’t have it any other way (though I acknowledge some books have had a negative impact on society). I gain by having access to so much information: it enables me to understand far more of the world and generate new and hopefully interesting ideas by combining bits and pieces of what I have read. Furthermore, books didn’t really undermine memory in the way that Socrates imagined. They simply changed what it was important to remember. There were still (until recently anyway) pressures to remember other kinds of information.

The problem with Krakauer’s view is deep and important. It is that competitive cognitive artifacts don’t just replace or undermine one cognitive task. They change the cognitive ecology, i.e. the social and physical environment in which we must perform cognitive tasks. This is something that Donald Norman acknowledged in his 1991 paper on cognitive artifacts. There, his major claim was that such artifacts neither amplify nor replace the human mind; rather, they change what the human mind needs to do. Think about the humble to-do list. This is an artifact that helps you to remember. But the cognitive act of remembering with a to-do list is very different from the cognitive act of remembering without one. With the to-do list, three separate tasks must be performed: creating the list, storing it, and looking it up when need be. Without the list you just search your mind for the information (perhaps through the use of associative cues). The same net result is produced, but the ecology of tasks has changed. These changes are not something that can be evaluated in a simple or straightforward manner. The process of changing the cognitive ecology may remove or eliminate an old cognitive task, but doing so can bring with it many benefits. It may enable us to focus our cognitive energies on other tasks that are more worthy uses of our time and effort. This is what happened with the invention of writing. The transmission of information via the written word meant we no longer needed to dedicate precious time and effort to the simple act of remembering that information. We could dedicate time and effort to thinking up new ways in which that information could be utilised.
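Norman's point can be put in almost programmatic terms: the artifact swaps one internal task (recall) for three external ones. A trivial sketch:

```python
# The to-do list as a cognitive artifact: the single internal act of
# remembering becomes three external tasks.

todo = ["buy milk", "email Pip"]        # task 1: create the list
saved = "\n".join(todo)                 # task 2: store it (paper, file, app)
for item in saved.split("\n"):          # task 3: look it up when need be
    print("remember to:", item)
```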

The Canadian fantasy author R Scott Bakker describes the ‘cognitive ecology’ problem well in his recent response to Krakauer. As he puts it:

What Plato could not foresee, of course, was the way writing would fundamentally transform human cognitive ecology. He was a relic of the preliterate age, just as Krakauer (like us) is a relic of the pre-AI age. The problem for Krakauer, then, is that the distinction between complementary and competitive cognitive artifacts—the difference between things like mnemonics and things like writing—possesses no reliable evaluative force. All tools involve trade-offs. Since Krakauer has no way of knowing how AI will transform our cognitive ecology he has no way of evaluating the kinds of trade-offs they will force upon us. 
(Bakker 2016)

And therein lies the rub for Krakauer et al: why should we fear the growth of competitive cognitive artifacts when their effects on our cognitive ecology are uncertain and when similar technologies have, in the past, been beneficial?

It is a fair point but I think the cognitive ecology objection has its limitations too. It may highlight problems with the generalised version of the anti-automation argument that Krakauer seems to be making, but it fares less well against more specific versions of the argument. For instance, Evan Selinger’s objections to technological outsourcing tend to be much more nuanced and focused. I covered them in detail before so I won’t do so again here. In essence Selinger argues that certain types of competitive cognitive artifact might be problematic insofar as the value of certain activities may come from the fact that we are present, conscious performers of those activities. If we are no longer present conscious performers of the activities — if we outsource our performance to an artifact — then we may denude them of their value. Good examples of this include affective tasks we perform in our interpersonal relationships (e.g. messaging someone to remind them how much you love them) as well as the performative aspects of personal virtues (e.g. generosity and courage). By tailoring the argument to specific cases you end up with something more powerful.

In addition to this, I worry about the naive use of historical examples to deflate concerns about present-day technologies. The notion that you can simply point to the Phaedrus, laugh at Socrates’ quaint preliterate views, and then warmly embrace the current wave of competitive cognitive artifacts seems wrong to me. There may be crucial differences between what we are currently doing with technology and what has happened in the past. Just because everything worked out before doesn’t mean everything will work out now. This is something that has been well-thrashed out in the debate about technological unemployment (proponents of which are frequently ridiculed for believing that this time it will be different). The scope and extent of the changes to our cognitive ecology may be genuinely unprecedented (it certainly seems that way). The assumption behind the cognitive ecology objection is that humans will end up occupying a new and equally rewarding niche in the new cognitive ecology, but who is to say this is true? If technology is better than humans in every cognitive domain, there may be no niches to find. Perhaps we are like flightless birds on some cognitive archipelago: we have no natural predator right now but things could change in the not-too-distant future.

Finally, I worry about the uncertainty involved in the coming transitions. We must make decisions in the face of uncertainty — of course we must. But the notion that we should embrace rampant AI despite (or maybe because of) that uncertainty seems wrong to me. Commitment to technological change for its own sake seems just as naive as reactionary conservatism against it. There must be a sensible middle ground where we can think reasonably and rationally about the evaluative trade-offs that might result from the use of competitive cognitive artifacts, weigh them up as best we can, and proceed with hope and optimism. Throwing ourselves off the cliff in the hopes of finding some new cognitive niche doesn’t feel like the right way to go about it.

Tuesday, September 13, 2016

Philosophical Disquisitions Newsletter 1st Edition



I have started a monthly newsletter. It's going to feature the best content from my blog and wider academic work each month, plus some bonus content that I won't share on the blog (e.g. thought experiment of the month, recommended reading, news and videos etc). You can see the first edition here. If you like it, I would be much obliged if you would consider signing up for the monthly email.

LINK: Philosophical Disquisitions Monthly 1st Edition

LINK: Sign up

Sunday, September 11, 2016

Is Robust Moral Realism a kind of Religious Belief?




Robust moral realism is the view that moral facts exist, but that they are not reducible to non-moral or natural facts. According to the robust realist, when I say something like ‘It is morally wrong to torture an innocent child for fun’, I am saying something that is true, but whose truth is not reducible to the non-moral properties of torture or children. Robust moral realism has become surprisingly popular in recent years, with philosophers like Derek Parfit, David Enoch, Erik Wielenberg and Russell Shafer-Landau all defending versions of it.

What is interesting about these philosophers is that they are all avowedly non-religious in their moral beliefs. They don’t think there is any connection between morality and the truths of any particular religion. Indeed, several of them are explicitly atheistic in their moral outlook. In a recent paper, however, David Killoren has argued that robust moral realism is a kind of religious belief: one that must be held on faith and that shares other properties with popular religions. At the same time, he argues that it is an ‘excellent’ kind of religious belief, one that could be attractive to the non-religious and religious alike.

I want to look at the argument Killoren uses to defend this point of view.


1. Three Features of Robust Moral Realism
Before we get into the meat of Killoren’s argument, we need to have a clearer characterisation of robust moral realism. Go back to my earlier example of the proposition ‘It is morally wrong to torture an innocent child for fun’. Suppose you and I are discussing this proposition one day. You happen to agree with it. Killoren argues that if we were robust realists then we would share three key beliefs about the nature of our agreement about this claim:

Non-naturalism: We would both believe that the fact stated by ‘it is morally wrong to torture an innocent child for fun’ is a non-natural fact. I explained this above but I can be more precise here. The key thing is that we would believe that it is an irreducibly normative fact. It may supervene on natural facts, but it is distinct from and not identical with those natural facts.

Objectivism: We would both believe that the truth of the statement is not dependent on our moral attitudes. In other words, its being true does not depend on our believing it to be true. It is mind-independent. Killoren argues that objectivism in this sense entails that moral facts are independent from two distinct moral attitudes: (i) our moral beliefs and (ii) our moral seemings (i.e. the fact that it seems like X is true).

Optimism: We would both believe that we do in fact know that the statement ‘it is morally wrong to torture an innocent child for fun’ is true. We are consequently optimistic about the truth of our moral beliefs. Killoren is, again, more precise here in saying that optimism is the view that our deepest moral beliefs are true. So we may disagree at the margins (e.g. ‘we should give 10% of our income to charity’), but we agree about more fundamental moral claims (like claims about the torturing of innocent children).



Two things are worth noting about this triptych of beliefs. The first is that the commitment to non-naturalism comes with a significant cost. Killoren calls it the ‘non-naturalist’s handicap’. If non-naturalism is true, then it means that ‘moral facts do not play a contributory role in the best explanation of any natural facts’. This is troubling for otherwise non-religious naturalists because it means that their moral beliefs and attitudes are not best explained by the existence of true non-natural moral facts. If our minds are ultimately best explained by natural facts, then non-natural moral facts cannot feature in the explanation of the content of our minds. This ‘handicap’ is something that non-realists like Sharon Street have long complained about.

The other point is that the commitment to optimism is the only thing that saves robust realism from moral nihilism. As Killoren puts it, the first two commitments are really the standard features of robust realism. Everyone who calls themselves a robust realist will agree that they are committed to non-naturalism and objectivism. But those two commitments are ontologically neutral. One could accept them and still believe that no moral truths actually exist (e.g. because one is a metaphysical naturalist) or that we can never know what they are. Of course, no robust realist tends to accept this nihilistic view. They all think that moral truths are knowable and that we have a good grasp of the basic ones. So optimism is, implicitly, a feature of their view.


2. Robust Realism is a Type of Religious Belief
Now that we have a clearer sense of what robust realism entails, we can look at Killoren’s main argument. The first thing Killoren does is argue that robust realism requires a kind of faith. He has a lengthy discussion of faith in his paper. He notes that some accounts of faith are non-doxastic in nature, i.e. they hold that faith has nothing to do with our beliefs and everything to do with our desires and aspirations. On these non-doxastic accounts, faith is like hope (or some other positive attitude). But most accounts of faith include a doxastic element. Those are the accounts in which he is interested.

He then distinguishes between two types of doxastic faith:

Blind Faith: Belief in P in the absence of any evidence that P (or in the face of countervailing evidence that not-P).
Unscientific Belief: Belief in P in the absence of any scientific evidence that P (or in the face of countervailing scientific evidence that not-P).

Obviously, blind faith is much broader than unscientific belief. The idea is that there might be some evidence to support an unscientific belief and this evidence might make belief that P rationally defensible, even though P is not supported by or consistent with scientific beliefs. Blind faith involves belief in the absence of even unscientific evidence.

Killoren’s first argument is that robust realism requires faith because it requires unscientific belief. In defence of this he introduces something we can call the argument from the ‘explanatory superfluity entails faith’ principle. It works like this:


  • (1) If one believes that P even though one accepts that P does not play any contributory role in the best available explanations of any natural facts or phenomena, then one believes that P on faith. (the ESEF principle)
  • (2) Commitment to robust realism requires that one believe in moral truths even though one accepts that moral truths do not play any contributory role in the best available explanations of any natural facts or phenomena.
  • (3) Therefore, robust realism entails faith.


It is a trivial argument in many ways. The conclusion follows from the earlier characterisations of faith and robust realism. If you find those earlier definitions persuasive, then you’ll find the argument persuasive. Interestingly, Killoren devotes a good portion of his paper to defending the ESEF principle from competing accounts of faith. I found this section of the paper unnecessarily distracting, but if you are keen on learning more about the nature of faith then it might be worth reading. I was quite happy to accept the argument as it stands.

Obviously, to say that robust realism requires faith is already to suggest that it is a type of religion (since, for most people, ‘faith’ is practically synonymous with ‘religion’). But Killoren goes on to enumerate three additional properties shared by religions and robust realism.

The first property is belief in the supernatural. Not all religions require this, but most do. They believe that there exists a realm of facts that lies beyond the natural world. Usually, this realm is taken to consist of supernatural agents, i.e. gods, angels, demons and so on. Robust realism also requires a form of supernaturalism. The moral facts so beloved by the robust realist exist in a non-natural realm. This realm does not consist of supernatural agents, but it doesn’t thereby lose any entitlement to the ‘supernatural’ label. Or at least that’s what Killoren argues. I have a somewhat different view. I tend to think that supernaturalism is a mind-first (or agent-first) ontology. Following the likes of Paul Draper, I tend to think of supernaturalism as being equivalent to the view that ‘mental entities have explanatory priority’, i.e. that the natural world is ultimately explained by some mental entity; and I tend to think of naturalism as the opposite, i.e. that the mental is ultimately explained by the natural. On this definition of naturalism and supernaturalism, robust realism would actually end up falling under the scope of naturalism. Felipe Leon refers to this as ‘broad naturalism’. But I admit that this is, to a degree, definition-mongering. As long as you don’t believe that moral facts are ultimately explained by supernatural agents, then I’d be happy enough to say that robust realism is a species of supernaturalism.

The second property is guidance on how to live. Most religions purport to provide their believers with some set of principles about how they ought to live. Sometimes these principles are very detailed. Robust realism provides something similar to its believers. It is optimistic about the possibility of finding out how we ought to live. And the moral beliefs that are at the core of that position do attempt to provide some guidance on how to live.

The third property is organisation. Here Killoren is suggesting that most religions have an institutional or organisational structure that supports the belief system, ensures that it is promulgated and propagated, and defends it from attack. This seems obviously true of the religions we encounter. They all try to sustain their networks of belief through some set of social organisations (I honestly can’t think of a counterexample). He argues that robust realism also has an organisational structure. He states that robust realists organise themselves via philosophy departments, journals and conferences. In this way they sustain and defend their network of beliefs.

For all these reasons — faith, supernaturalism, guidance on how to live, and organisation — Killoren submits that robust realism is a type of religion.


3. Robust Realism as an Excellent Religion
And yet robust realism is clearly an unusual kind of religion. It doesn’t have the same eschatology or creation myths as most religions. Nor does it purport to provide us with a comprehensive worldview. But therein may lie its strength. Killoren closes his article by arguing that even if robust realism is a religious belief, it is, nonetheless, an excellent religious belief, particularly if you are normally averse to religious belief. There are three reasons for this.

The first is that robust realism is devoid of wishful thinking. Unlike most religions it doesn’t provide for salvation. No one is coming to save us from our moral sins or lead us into everlasting life. To the religiously inclined, this might seem like a disadvantage, but to the usually non-religious it probably won’t. They often like the idea of facing reality and avoiding false hope.

The second is that robust realism will never conflict with the results of scientific inquiry. This makes it unlike many (but not all) other religious beliefs. Many religions are criticised because their core tenets (e.g. creation stories, historical origin myths) conflict with the best available scientific information. This often puts the scientifically inclined off. Robust realists don’t need to worry about this. It is baked into their worldview that their moral beliefs can never conflict with the results of scientific inquiry.

The third is that robust realism provides some basis for morality. Non-religious people are often criticised for not having a coherent or defensible grounding for morality; robust realism provides one. Now, admittedly, Killoren falters in his support for robust realism on this score. He thinks that it provides a grounding for morality but that this grounding is less coherent than the grounding provided by more traditional religious beliefs (e.g. traditional theistic metaethics). I happen to disagree quite strongly. I have written about it at length before. In essence, I don’t think that religion provides a coherent and defensible grounding for morality. In fact, I think that most religiously-motivated metaethical views end up collapsing into a form of robust realism. I won’t get into the arguments here, but you can read about them elsewhere on the blog.

In addition to this, Killoren thinks that robust realism is epistemically defensible. His main argument for this is an argument from phenomenal conservatism:

Phenomenal Conservatism: If it seems to be the case (to us) that P is true, then this provides evidence in favour of P’s truth.

The idea is that when we reflect on our deepest moral beliefs (like ‘it is morally wrong to torture an innocent child for fun’) they seem to us to be true. This provides evidence for their truth. That evidence is defeasible. In other words, you could introduce other evidence to suggest that our phenomenology is misleading us as to the truth of such moral claims. Killoren thinks that it is possible to show that robust realism is not defeated by other sorts of evidence. He doesn’t state this in his paper, but he has contributed to the debate in other work.


4. Concluding Thoughts
I have already offered some doubts about Killoren’s central thesis, in particular his definition of supernaturalism. Let me close with two further critical reflections.

First, I’m not convinced that Killoren’s thesis is an interesting or important one. For one thing, I have doubts about the methodology. Enumerating the key properties of religious beliefs and then highlighting how those properties apply to other sets of beliefs seems to be of dubious merit. Killoren could just as easily argue that mathematical realism (or Platonism) is a species of religious belief. It is also committed to non-naturalism, objectivism and optimism. It requires belief in the supernatural (according to Killoren’s definition). And its believers are organised in the same way that robust realists are. I think it may also require faith in the sense that it requires unscientific belief: it’s true that mathematical equations feature in scientific explanations, but they don’t feature causally (i.e. no one thinks that Newton’s laws cause gravity to exist). Mathematical facts, according to the Platonist, do not play a contributory role in the explanation of natural phenomena; they merely help to describe them. But moral facts can do the exact same thing. Admittedly, mathematical realism doesn’t provide guidance on how to live (not directly anyway), but that means it lacks only one feature among many. Should it still count as a religion? Maybe. Is that an interesting claim? I’m not sure.

Second, I left Killoren’s article still thinking that the most important challenge facing the realist is the epistemological one. The explanatory handicap is a serious one: if moral facts play no contributory role in the best explanation of our moral beliefs, then how can we be sure that our moral beliefs are rationally justified? Killoren falls back on phenomenal conservatism at the end, but I’m not sure that is enough. I think Street’s Darwinian dilemma still bites on this point. I know there are responses to it (I have covered them on the blog before) but they all seem like a stretch to me. The responses from, say, Enoch or Skarsaune tend to require a belief that the telos of evolution is in some pre-ordained harmony with the non-natural moral facts. I don’t know if I can make sense of that view. Killoren defends another response in his paper ‘Moral Occasionalism’. According to him, moral reasons are a type of natural fact and they influence our moral seemings in such a way that our moral seemings match the moral facts without themselves causally interacting with those moral facts. I haven’t read the paper in detail, but again this sounds like quite an odd metaphysical view (which is not surprising, since it takes its lead from occasionalism, a view about the relationship between divine and natural causation).

Still, I should probably read the paper before dismissing it.

Saturday, September 10, 2016

Episode #11 - Sabina Leonelli on whether Big Data will revolutionise science


In this episode I talk to Sabina Leonelli. Sabina is an Associate Professor at the Department of Sociology, Philosophy and Anthropology at the University of Exeter. She is also the Co-Director of the Exeter Centre for the Study of the Life Sciences (Egenis), where she leads the Data Studies research strand. Her research focuses primarily on the philosophy of science, and in particular on the philosophy of data-intensive science. Her work is currently supported by the ERC Starting Grant DATA_SCIENCE. I talk to Sabina about the impact of big data on the scientific method, and about how large databases get constructed and used in scientific inquiry.

You can listen below. You can also download here, or subscribe via Stitcher and iTunes (just click add to iTunes).


Show Notes

  • 0:00 - 1:40 - Introduction
  • 1:40 - 10:19 - How the scientific method is traditionally conceived, and how data features in that traditional conception.
  • 10:19 - 13:40 - Big Data in science
  • 13:40 - 18:30 - Will Big Data revolutionise scientific inquiry? Three key arguments
  • 18:30 - 24:13 - Criticisms of these three arguments
  • 24:13 - 29:20 - How model organism databases get constructed in the biosciences
  • 29:20 - 36:30 - Data journeys in science (Step 1): Decontextualisation
  • 36:30 - 41:20 - Data journeys in science (Step 2): Recontextualisation
  • 41:20 - 47:15 - Opacity and bias in databases
  • 51:55 - 57:00 - Data journeys in science (Step 3): Usage
  • 57:00 - 1:00:30 - The Replicability Crisis and Open Data
  • 1:00:30 - End - Transparency and legitimacy and dealing with different datasets
 

Relevant Links

  • 'What difference does quantity make? On the Epistemology of Big Data in Biology' by Sabina Leonelli

Thursday, September 8, 2016

Will human enhancement cause problems for interpersonal communication?




China Miéville’s novel Embassytown is a challenging and provocative work of science fiction. It is set in Embassytown, a colonial outpost of the human-run Bremen empire, located on Arieka, a planet on the edge of the known universe. The native alien race, the Ariekei, have an unusual language: they have two speaking orifices and so speak two words at the same time. They cannot communicate with the humans and other alien races who live on the planet, because none of these races can speak two words at once. Not even two humans working together can accomplish the feat, because then the two words are not spoken by one shared mind. The Ariekei can only understand when both words are spoken by one individual.

To overcome the communication problems, humans have created a genetically engineered group of Ambassadors. The Ambassadors are identical twins who share an empathic link. They can pull off the trick of communicating in the Ariekei language. The plot of the novel revolves around a new Ambassador who is not a pair of genetically engineered twins. For some unknown reason, their speech is intoxicating and addictive to the Ariekei, which leads to a social crisis on the planet.

I’ll say no more about the novel. It’s worth reading if you ever get the chance. What is interesting about it for present purposes, however, is the way in which it illustrates an important link between our biology and the form and content of our language. Humans have one voice box under the control of one brain. This means our language is necessarily limited to a single channel of speech: one word is spoken at a time. We can understand what is being said as long as the speech follows this single-channel format. If multiple words are spoken at once (as sometimes happens in heated debates or crowded rooms), understanding becomes difficult.

But what if we start tinkering around with our biology? What if we change our body shape and size? What if we add or subtract new senses and cognitive capacities? Could this lead to communication problems? Advocates of human enhancement often encourage such tinkering, but they rarely consider the implications this might have for interpersonal communication. They rarely consider the possibility of a communication-breakdown between enhanced and non-enhanced individuals.

There’s one noteworthy exception to this. Laura Cabrera and John Weckert’s 2012 paper ‘Human Enhancement and Communication: On Meaning and Shared Understanding’ makes the case for communication problems arising from human enhancement. I want to take a look at the arguments presented in that paper in the remainder of this post.


1. The Basic Argument: Shared Lifeworlds and Human Communication
I actually spoke to Laura about the arguments presented in that paper for an upcoming episode of the podcast I am doing. In the course of that interview, I offered my own reformulation of her main argument. Laura accepted my reformulation so I want to start out with that.

The argument proceeds from the assumption that human communication depends on shared lifeworlds. That is to say, in order for humans to communicate meaningfully with each other, they must share a common frame of reference. This is something that linguists and philosophers of language often highlight. The spoken or written word is a highly compressed vehicle for communication. In order to make sense of what is said, both the speaker and the listener need to share a whole host of background assumptions about how the world works, how humans relate to that world, and how ideas and metaphors shape our understanding. Thus, if I said to you ‘’tis fierce wet outside, bring an umbrella’, you might have some sense of what I am saying, but maybe not the whole sense. You know enough about the weather and how human beings experience it to know that you would like to protect yourself from the rain. You know that rain is ‘wet’, so that’s probably what I am referring to. But you probably don’t (unless you’re from Ireland) know what I mean by ‘fierce’ wet. This is an idiomatic phrase commonly used to describe particularly inclement weather. I can fill you in on that idiomatic quirk, and that will help you get the full sense of the meaning.

So you share some of your lifeworld with me, but not all of it. We can communicate and understand each other because of the shared components, but we may hit the occasional bump.




From that starting presumption, the argument proceeds to inquire into the foundations of our shared lifeworld. To some extent (as in the ‘fierce wet’ example) it depends on a shared cultural history. But more broadly, it depends on us being similarly situated in the world, i.e. having a similar physical, emotional and cultural relationship with our environments. To the extent that enhancement technologies could alter how we are situated in the world, e.g. by changing our bodies, senses or culture, they could affect this shared lifeworld. This could lead to serious communication problems if the changes wrought by enhancement technologies are sufficiently radical.

This allows us to flesh out the remainder of the argument. It works like this:


  • (1) Human communication depends on us having a shared lifeworld.
  • (2) Having a shared lifeworld depends (to some extent) on having similar bodies, similar perceptual equipment, similar cognitive capacities, and a similar socially embedded nature.
  • (3) Some (radical) human enhancement technologies could have dramatic effects on our bodies, perceptual equipment and cognitive capacities.
  • (4) Therefore, some (radical) enhancement technologies could affect our shared lifeworld.
  • (5) Therefore, some (radical) human enhancement technologies could lead to communication problems.


I want to look at premises (2) and (3) in more detail.


2. Three Routes to Communication Breakdown
Premises (2) and (3) are the key to the argument and they can be treated as a pair. Premise (2) makes claims about how our shared lifeworld gets constructed; premise (3) suggests that enhancement technologies could affect the construction process. Cabrera and Weckert use three main examples to illustrate this point.

The first has to do with the role of the body (its size and shape) in the construction of a shared lifeworld. The authors use an example from Lewis Carroll’s Alice in Wonderland to illustrate the point. Alice has spent her day being shrunk and enlarged. It has been a confusing and upsetting experience. She meets a caterpillar and tries to explain the situation to him:

’I can’t explain myself, I’m afraid, sir’ said Alice, ‘because I am not myself, you see’
’I don’t see’, said the Caterpillar.
’I’m afraid I can’t put it more clearly’, Alice replied very politely, ‘for I can’t understand it myself to begin with; and being so many different sizes in a day is very confusing.’
’It isn’t’, said the Caterpillar.

The suggestion from Cabrera and Weckert is that Alice and the Caterpillar are experiencing something of a communication breakdown because of the differences in their lifeworlds. For Alice (a human being), increasing and decreasing in size so often in one day is a very strange experience. For the Caterpillar it is a common one. That’s how caterpillars get around: by constantly elongating and contracting. Indeed, caterpillars undergo even more radical physical transformations at other points in their lifecycle. The difference in how their bodies relate to the world thus causes communication problems. This, of course, ignores other problems with the whole idea of a human speaking to a caterpillar. Some philosophers think the idea is complete nonsense (even in a thought experiment). Wittgenstein was a famous proponent of this view, once remarking that if a lion could talk, we could not understand him. Our lifeworlds are just too different.

How is this relevant to the enhancement debate? Well, there is a significant segment of the transhumanist community that embraces the idea of ‘morphological freedom’, i.e. the freedom to change one’s body size and shape. Some of these changes might be minor and relatively unproblematic. But what if people start grafting wings onto their backs or prehensile tails onto their spines? These would result in more radical reorientations in how we relate to the world. Also, there are those who think we could someday upload our personalities and identities to a digital computer. This would lead to a type of disembodied existence. That would surely result in a very different kind of lifeworld. This is hinted at in the movie Her, where a physical human being falls in love with an intelligent operating system. They are able to speak to each other (that’s how the relationship is possible) but they clearly have very different lifeworlds and this leads to the eventual breakdown of the relationship.

The second example used by Cabrera and Weckert has to do with sensory experiences. Clearly, our senses impact upon our lifeworld. Think for a moment about the number of expressions in the English language that rely, directly or indirectly, on some visual metaphor, or attempt to report some visual experience. This mode of sensation has a powerful effect on the content of our language. But, interestingly, the absence of this sensation doesn’t seem to lead to communication problems. Those with congenital blindness seem to be able to engage in meaningful dialogue with those who can see. The philosophers Bryan Magee (sighted) and Martin Milligan (blind) tested this hypothesis in their book Sight Unseen. The book is a series of letters back and forth between both authors. The dialogue carried out in these letters seems perfectly ordinary. There are some philosophical disagreements — Magee insists that there is something that Milligan can never know when it comes to the raw experience of seeing red — but there is no clear moment of communication breakdown. Milligan knows what Magee is getting at; and Magee knows what Milligan is getting at.

Cabrera and Weckert contrast this real-world example with a fictional one. HG Wells’s short story The Country of the Blind tells of a traveler who stumbles into a valley in the Andes that has been cut off from the rest of the world for centuries. The inhabitants of the valley are all blind, and nobody alive remembers anything about the world of sight. The traveler has serious communication problems as a result. This suggests a hypothesis about when changes to our sensory experience will radically affect our communication:

Hypothesis: If you have a sense that the majority in your society lack, then you will have serious communication problems; if you lack a sense that the majority in your society have, then you will not.

This is the suggested difference between Milligan and the traveler in HG Wells’s story. Milligan has been raised in a sighted society. To make his way in that world, he has to attune himself to the lifeworld of the sighted people. He has to adopt their language and idioms. The traveler is inserted into a society that lacks sight. He has already acquired a different mode of communicating and understanding the world. It is very difficult for him to bridge the gap between his lifeworld and that of the valley-dwellers (at least in the short time available to him).

This is relevant to the enhancement debate because one thing that enhancement technologies could do is change how we experience and sense the world around us. Neural prosthetics could create new modes of sensation (e.g. seeing in ultraviolet or infrared; hearing different frequencies of sound). Augmented reality eyewear could potentially do something similar, e.g. by constantly displaying statistical predictions of how the people and objects in the world around us will behave. We might then, literally, see the future all the time (or, at least, as good an estimate as we can get of that future). If these changes are radical and sudden, and if relatively few people experience them, then we might end up with something like the communication problems depicted in HG Wells’s short story. I am, however, somewhat sceptical of this. I think the sensory changes brought about by enhancement technologies are unlikely to be that radical and sudden. The credibility of Wells’s story depends on the fact that the traveler was raised in a very different community with a very different lifeworld. If he was not, or if he had more time to acculturate, the communication problems might dissipate. So unless those with enhanced senses are sectioned off from the rest of society, this doesn’t seem to me like a major threat to communication.

The final example used by Cabrera and Weckert relates to cognitive capacities. They have a couple of really good illustrations in this section of their paper. I’ll just focus on one, having to do with memory. Our memory clearly affects our lifeworld. It is the fact that I remember who I am from day to day — that I can situate myself within a coherent life-narrative — and that I remember general facts about the world and how it works, that facilitates much communication. We see this most clearly in the case of people with terrible amnesias. Henry Molaison, a famous amnesiac case study, lost the ability to form new long-term memories back in the 1950s when he underwent radical surgery for his epilepsy. He lived the remainder of his life in a perpetual present. He could remember events that happened before the surgery, but nothing after. If you sat in a room with him for a couple of hours, he would get to know you and would seem to remember who you were. But if you went back the following day, he would have forgotten and you would need to start from scratch. This obviously created noticeable communication problems. Henry wasn’t building a life narrative in the same way the people around him were. This could lead to moments of great sadness and frustration.

Henry Molaison had a different lifeworld, maybe not radically different, but noticeably different, due to his amnesia. The same thing can happen in the opposite direction. Those with extremely good memories (eidetic memories) often experience the world in a very different way. We know this from some famous case studies, particularly Alexander Luria’s pioneering work The Mind of a Mnemonist. The book describes a real-life patient who could effectively remember everything that had ever happened to him. Luria’s patient is strikingly similar to the protagonist of Borges’s short story Funes, the Memorious. One passage from the story gives a sense of how different the lifeworld of someone with an extreme memory might be:

With one quick look, you and I perceive three wineglasses on a table; Funes perceived every grape that had been pressed into the wine and all the stalks and tendrils of its vineyard. He knew the forms of the clouds in the southern sky on the morning of April 30, 1882, and he could compare them in his memory with the veins in the marbled binding of a book he had seen only once…Nor were those memories simple — every visual image was linked to muscular sensations, thermal sensations, and so on. He was able to reconstruct every dream, every daydream he ever had. Two or three times he had reconstructed an entire day; he had never once erred or faltered but each reconstruction had itself taken an entire day.

Even so, the story is about the struggle to really understand what it would be like to live such a life. If you get two or more people with such extreme memories together, they may develop new communication styles that are better attuned to their lifeworlds. This is relevant to the enhancement debate, of course, because cognitive enhancement is such a prominent desire among proponents of human enhancement. But enhancing cognitive capacities could have knock-on effects on communication.



3. Why is this important?
This brings us to a concluding question: why is any of this important? I think Cabrera and Weckert do a good job of highlighting the potential impact of enhancement on communication, but why should we care? As they themselves note, the arguments they make are highly speculative. They are suggesting a possible area of inquiry and concern; they are not making a firm prediction.

But it is important because of the role that communication plays in our society. For better or worse, language is one of the distinctive attributes of human society. Languages are capacious and flexible modes of communication, far more expressive than animal signalling systems. They allow us to form very rich lifeworlds, replete with abstract theories and concepts, similes and metaphors, irony and humour, and so on. Sharing in these lifeworlds is part of the ethical glue that holds society together.

If the enhanced and the unenhanced have radically different lifeworlds, then there is some cause for concern. They may not be able to understand one another. They may lose empathy and regard for one another’s modes of existence. If I cannot really understand what it is like to be you, I may not be able to protect and care about your interests. Society is a collaboration, competition and compromise over these interests. If we cannot communicate with one another, we may be left with nothing but competition.

One final point before I go. Cabrera and Weckert briefly mention in their paper how other types of technology affect the way in which we communicate. But they don’t seem to entertain the possibility that these technologies could radically alter our lifeworlds. I think this is somewhat mistaken. ICTs are beginning to radically alter how we situate ourselves within, and understand, the world around us. Language is also an increasingly important commodity in the digital age. This is something Google appreciates intimately: they make their money by commodifying language. Sometimes this means they abstract language away from its original semantic context. Pip Thornton (currently working as a research assistant on my Algocracy and Transhumanism project) makes this point with her project on poetic language, and it is something I think needs to be considered in more detail.

Wednesday, September 7, 2016

Talking about Algocracy and Transhumanism on the Singularity Bros Podcast




I recently had the honour of appearing on the Singularity Bros podcast to discuss the work I am doing as part of the Algocracy and Transhumanism project. I had a very enjoyable and wide-ranging conversation with the hosts Zach and Scott about nominative determinism, algorithmic governance, self-driving cars, moral enhancement and the comedy of Louis CK. They also created the amazing graphic you see above to accompany the show.

You can listen at this link.


Monday, September 5, 2016

Computers and Law Special Edition on Algorithmic Governance




As part of the Algocracy and Transhumanism project I am running, my colleague Dr. Rónán Kennedy and I put together a special edition of the journal/magazine Computers and Law on the topic of algorithmic governance. It consists of a diverse range of articles on the increasingly prominent role of algorithms in decision-making, and the implications this has for the law. The special edition arose from a workshop we held on the topic back in March 2016.

Readers who have been following my work on this topic might be interested in reading it, particularly the contributions from other authors who share neither my views nor my analytical perspective. Details and short descriptions below:




  • Towards Open Government: Niall Ó Brolcháin asks how sovereign governments that were constituted in a bygone age can move into a new technological era that demands openness and transparency.



  • The Legal Consequences of Genetic Prediction: Aisling de Paor highlights some of the ethical and legal concerns arising from the use of algorithms based on genetic information, and advocates the need to appropriately control and regulate these new technologies.

  • When Algorithms Kill: Peter Gallagher considers the autonomous weapons debate in international humanitarian law.