Tuesday, June 28, 2016

The Machine Made Me Do It: Human responsibility in an era of machine-mediated agency




[This is the text of a talk I'm delivering at the ICM Neuroethics Network in Paris this week]

Santiago Guerra Pineda was a 19-year-old motorcycle enthusiast. In June 2014, he took his latest bike out for a ride. It was a Honda CBR 600, a sports motorcycle with some impressive capabilities. Little wonder then that he opened it up once he hit the road. But maybe he opened it up a little bit too much? He was clocked at over 150 mph on the freeway near Miami Beach in Florida. He was going so fast that the local police decided it was too dangerous to chase him. They only caught up with him when he ran out of gas.

When challenged to explain his actions, Pineda gave an interesting response:

The thing is when you ride the motorcycle you can’t let the motorcycle get control of you…and at that moment the motorcycle took control of me and I just kept going faster and faster.

As one journalist put it, Pineda was suggesting that the machine made him do it.

It’s a not-altogether unfamiliar idea. Many is the time I’ve wondered whether I am wielding or yielding to the technology that suffuses my daily life. But would this ‘machine made me do it’ excuse ever hold up in practice? Could we avoid the long arm of the law by blaming it all on the machine?

That’s the question I’ve been asked to answer. I cannot hope to answer it definitively, but I can hope to shed some analytical clarity on the topic and make some contribution to the debate. I’ll try to make two arguments: (i) that in order to answer the question you need to disambiguate the complexity of human-machine relationships; and (ii) that this complexity casts doubt on many traditional (and not-so-traditional) theories of responsibility. In this way, studying the complexity of human-machine relationships may be a greater boon to the hard incompatibilist school of thought than fundamental discoveries in human neuroscience will ever be. This last bit is controversial, but I’ll try to motivate it towards the end of this talk.


1. The Four Fundamental Relationships between Man and Machine
When we say ‘the machine made me do it’, what exactly are we saying? Consider three short stories, each of them based on real-life events, each of them pointing to a different relationship between man and machine:

Story One - GPS Car Accident: In March 2015, a car driven by the Chicagoan Iftikhar Hussain plunged 37 feet off the derelict Cline Avenue Bridge in East Chicago, Indiana. Mr Hussain survived the accident but his wife Zohra died of burns after the car burst into flames. At the time of the accident, Mr Hussain had been fixated on following his GPS and had not noticed the warning signs indicating that the bridge was closed.

Story Two - DBS-induced Johnny Cash Fan: Mr B suffered from Obsessive-Compulsive Disorder. His doctors recommended that it be treated with an experimental form of deep brain stimulation therapy. This involved electrodes being implanted into his brain to modulate the activity in a particular sub-region. A control device could be used to switch the electrodes on and off. The treatment was generally successful, but one day while the device was switched on Mr B developed a strong urge to listen to the music of Johnny Cash. He bought all of Cash’s CDs and DVDs. When the device was switched off, his love of Johnny Cash dissipated.

Story Three - Robot Killer: In July 2015, at a Volkswagen production plant in Germany, a manufacturing robot killed a machine technician. The 22-year-old was helping to assemble the machine when it unexpectedly grabbed him and crushed him against a metal plate. The company claimed the accident was not the robot’s fault. It was due to human error. The story was reported around the world under the headline ‘Robot kills man’.

In the first story, one could argue that GPS made the man do it, but it sounds like a dodgy way of putting it. There was no coercion. Surely he shouldn’t have been so fixated? In the second story, one could argue that the DBS made the man like Johnny Cash, but there is something fuzzy about the relationship between the man and the machine. Is the machine part of the man? Can you separate the two? And in the third story, one could argue that the machine did something to the man, but again it feels like there is a complicated tale to be told about responsibility and blame. Was human error really the cause? Whose error? Can a machine ever ‘kill’ a man?

I think it’s important that we have an analytical framework for addressing some of this complexity. Human agency is itself a complex thing. It has mental and physical components. To exercise your agency you need to think about your actions: you need to translate desires into intentions by combining them with beliefs. You also need to have the physical capacity to carry out those intentions. It is the combination of these mental and physical components that is essential for responsibility. Indeed, this combination is embedded in the fundamental structure of a criminal offence. As every first-year law student learns (in the Anglo-American world at least), a crime consists of two fundamental elements: a guilty mind (mens rea) and a guilty act (actus reus). You need both for criminal liability.

This complexity demands nuance in any conversation about machines making us do things. It forces us to ask: how exactly does the machine interfere with our agency? Does it interfere with the mental components or the physical ones? What I want to suggest here is that there are four specific types of machine ‘interference’ that can arise, and that these four types of interference settle into two general types of man-machine relationship. This might sound a little confusing at first, so let’s unpack it in more detail.

The two general relationships are (I) the outsourcing relationship; and (II) the integration relationship. The outsourcing relationship involves human beings outsourcing some aspect of their agency to a machine. In other words, they get the machine to do something on their behalf. This outsourcing relationship divides into two major sub-types: (a) the outsourcing of action-recommendations, i.e. getting the machine to decide which course of action would be best for you and then implementing that action through your own physical capacities (possibly mediated through some machine like a car or a bike) - this is an indirect interference with mental aspects of agency; and (b) the outsourcing of action-performances, i.e. you decide what the best course of action is and get a machine to physically implement it - this is an interference with physical aspects of agency. From the three stories given above, story one would seem to involve the outsourcing of action-recommendations: the GPS told the man where to go and the man followed the instructions. And story three would seem to involve the outsourcing of action-performances: somebody decided that an industrial robot was the fastest and most efficient way to assemble a car and designed it to perform certain actions in a controlled environment.

This brings us to the integration-relationship. This involves human beings integrating a machine into their own biology. In other words, it involves the fusion of their biological wet-ware with technological hard-ware. The second story clearly involves some form of human-machine integration. The DBS device is directly incorporated into the patient’s brain. But again, there are different forms of machine integration. The brain itself is a complex organ. Some brain activities are explicit and conscious — i.e. they are directly involved in the traditional mental aspects of agency — while others are implicit and subconscious — they seem to operate on the periphery of the traditional mental aspects of agency. The changes made by the device could manifest in conscious reasoning and decision-making, or they could operate below the level of conscious reasoning and decision-making. This suggests to me that the integration-relationship divides into two major sub-types: (c) bypassing-integration, i.e. the machine integrates with the implicit, subconscious aspects of brain activity and so bypasses the traditional mental capacities of agency; and (d) enhancing-integration, i.e. the machine integrates with the explicit, conscious aspects of the brain and enhances the traditional mental capacities of agency.* I suspect the story of Mr B involves bypassing-integration as opposed to enhancing-integration: the device presented him with a new desire. Although he was consciously aware of it, it was not something that he could rationally reflect upon and decide to endorse: its incorporation into his life was immediate and overwhelming.

This gives us a hierarchically organised, somewhat complex taxonomy of human-machine relationships. Note that my initial description of the relationships doesn’t even do justice to their complexity. There are important questions to be asked about the different ways in which a machine might bypass explicit mental processing and the different degrees of performance outsourcing. Some of this complexity will be teased apart in the discussion below. For now, the following outline should give you a sense of the framework I am proposing:
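(I) The outsourcing relationship: getting a machine to do something on your behalf.
    (a) Outsourcing of action-recommendations: the machine decides which course of action is best; you implement it through your own physical capacities (Story One).
    (b) Outsourcing of action-performances: you decide which course of action is best; the machine physically implements it (Story Three).

(II) The integration relationship: fusing a machine with your own biology.
    (c) Bypassing-integration: the machine integrates with the implicit, subconscious aspects of brain activity, bypassing the traditional mental capacities of agency (Story Two).
    (d) Enhancing-integration: the machine integrates with the explicit, conscious aspects of the brain, enhancing the traditional mental capacities of agency.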


2. A Compatibilistic Analysis of the ‘Machine made me do it’ Excuse
I said at the outset that disambiguating the complexity of human-machine relationships would help us to analyse the ‘machine made me do it’ excuse. But how? I’ll make a modest proposal here. Each of the four relationships identified above involves machinic interference with human agency. Therefore, each of them falls — admittedly loosely — within the potential scope of the ‘machine made me do it’ label. By considering separately each type of interference, and the impact it may have on responsibility, we can begin to approach an answer to our opening question: will the excuse ever work?

To help us do that, we need to think about the conditions of responsibility, i.e. what exactly is it that makes a human being responsible for their actions? There are many different accounts of those conditions. The classic Aristotelian position holds that there are two fundamental conditions of responsibility: (i) the volitional condition, i.e. the action must be a voluntary performance by the agent; and (ii) the epistemic condition, i.e. the agent must know what they are doing. There has been much debate about the first of those conditions; relatively little about the second (though this is now beginning to change). In relation to the first, much of the debate has centred on the need for free will and the various different accounts of what it takes to have free will.

I cannot do justice to all the different accounts of responsibility that have been proposed on foot of this debate. For ease of analysis, I will limit myself to the standard compatibilistic theories of responsibility, i.e. those accounts that hold that it is possible for an agent to voluntarily and knowingly perform an act even if their actions are causally determined. Compatibilistic theories of responsibility reject the claim that in order to be responsible for what you do you must be able to do otherwise - thus they are consistent with the deterministic findings of contemporary neuroscience. They argue that having the right causal relationship to your actions is all that matters. In an earlier post, I suggested that these were the four most popular compatibilistic accounts of responsibility:

Character-based accounts: An agent is responsible for an action if it is caused by that agent, and not out of character for that agent. Suppose I am a well-known lover of fine wines. One day I see a desirable bottle of red. I decide to purchase it. The action emanated from me and was consistent with my character. Therefore I am responsible for it (irrespective of whether my character has been shaped by other factors). The most famous historical exponent of this view of responsibility was David Hume. Some feel it is a little bit too simplistic and suggest various modifications.

Second-order desire accounts: An agent is responsible for an action if it is caused by the first-order desire of that agent and that first-order desire is reflectively endorsed by a second order desire. I have a first order desire for wine (i.e. I want some wine). I buy a bottle. My first order desire is endorsed by a second order desire (I want to want the wine). Therefore, I am responsible.

Reasons-responsive accounts: An agent is responsible for an action if it is caused by their decision-making mechanism (mental/neurological) and that decision-making mechanism is responsive to reasons. In other words, if the mechanism was presented with different reasons for action it would produce different results in at least some possible worlds. I like wine. This gives me a reason for buying it. But suppose that in some shops the wine is prohibitively expensive and buying it would mean I lose out on other important foodstuffs. This gives me a reason not to buy it. If I purchase wine in one shop, I can be said to be responsible for this decision if, in some of the other shops where it is prohibitively expensive, I would not have purchased wine. This is a somewhat complicated theory, particularly when you add in the fact that there are different proposals regarding how sensitive the decision-making mechanism needs to be.

Moral reasons-sensitivity accounts: An agent is responsible for an action if it is caused by their decision-making mechanism and that mechanism is capable of grasping and making use of moral reasons for action. This is very similar to the previous account, but places a special emphasis on the ability to understand moral reasons.


Each of these accounts fleshes out the connection an agent must have to their acts in order to be held responsible for those acts. They each demand a causal connection between an agent and an act; they then differ on the precise mental constituents of agency that must be causally involved in producing the act. What I would say — and I won’t have time to defend this claim here — is that they each ultimately appeal to some causal role for explicit, consciously represented mental constituents of agency. In other words, they all say that in order to be responsible you must consciously endorse what you do and consciously respond to different reasons for action. The link can be direct and proximate, or indirect and distal, but it is still there.**

With that in mind we can now ask the question: how would these different accounts of responsibility make sense of the four different machinic interferences with agency outlined above? Here’s my initial take on this.

The first type of interference involves the outsourcing of action-recommendations to a machine — just as Iftikhar Hussain outsourced his route-planning to his GPS. Initially, it would appear that this does nothing to undermine the responsibility of the agent. The machine is just a tool of the agent. The agent can take the machine’s recommendation on board without any direct or immediate interference with character, rational reflection or reasons-responsivity. But things get a little bit more complicated when we think about the known psychological effects of such outsourcing. Psychologists tell us that automation bias and complacency are common. People get comfortable handing over authority to machines. They stop thinking. The effect has been demonstrated among pilots and other machine operators relying on automated control systems. The result is that actions are no longer the product of rational reflection or of a truly reasons-responsive decision-making mechanism. This might lend some support to the ‘machine made me do it’ line of thought.

The one wrinkle in this analysis comes from the fact that most compatibilistic theories accept that the link between our actions and our conscious capacities need not be direct. If you fall asleep while driving and kill someone as a result, your responsibility can be traced back in time to the moments before you fell asleep — when you continued to drive despite being drowsy. If we know that outsourcing action-recommendations can have these distorting effects on our behaviour, then our responsibility may be traced back in time to when we chose to make use of the machine. But what if such outsourcing is common from birth onwards? What if we grow up never knowing what it is like to not rely on a machine in this way? More on this later.

The second type of interference involves the outsourcing of action-performances to a machine. The effect of this on responsibility really depends on the degree of autonomy that is afforded to the machine. If the machine is a simple tool — like a car or a teleoperated drone — then using the machine provides no excuse. The actions performed by the machine are (presumably) direct and immediate manifestations of the agent’s character, rational reflection, or reasons-responsive decision-making mechanism. If the machine has a greater degree of autonomy — if it responds and adapts to its environment in unpredictable and intelligent ways — then we open up a potential responsibility gap. This has been much discussed in the debate about autonomous weapons systems. The arguments there tend to focus on whether doctrines of command responsibility could be used to fill the responsibility gap or, more fancifully, on whether machines themselves could be held responsible.

The third type of interference involves the bypassing-integration of a machine into the agent’s brain. This would seem to straightforwardly undermine responsibility. If the machine bypasses the conscious and reflective aspects of mental agency, then it would seem wrong to say that any resultant actions are causally linked to the agent’s character, rational reflection, reasons-responsivity and so on. So in the case of Mr B, I would suggest that he is not responsible for his Johnny Cash-loving behaviour. The only complication here is that once he knows that the machine has this effect on his agency — and he retains the ability to switch the machine on and off — one might be inclined to argue that he acquires responsibility for those behaviours through his continued use of the machine. But this argument should be treated with caution. If the patient needs the machine to treat some crippling mental or physical condition, then he is faced with a very stark choice. Indeed, one could argue that patients facing such a stark choice represent pure instances of the ‘machine made me do it’ excuse. Their choices are coerced by the benefits of the machine.

The fourth type of interference involves the enhancing-integration of a machine into the agent’s brain. This might be the most straightforward case. If the machine really does have an enhancing effect, then this would not seem to undermine responsibility. If anything, it might make the agent more responsible for their actions. This is a line of argument that has been made by Nicole Vincent and her colleagues on the Enhancing Responsibility project. The major wrinkle with this argument has to do with the ‘locus of control’ for the machine in question. In the case of DBS, the patient can control the operation of the device themselves. Thus they have responsibility both for the initial use of the device and for any downstream enhancing effects it may have (except in some extreme cases where they lack the capacities for responsibility prior to switching on the machine). In other words, in the DBS case the locus of control remains with the agent and so it seems fair to say they retain responsibility when the machine is being used. But what if the machine is not controlled directly by the agent? What if the switching on and off of the device is controlled by another agent, or by some automated, artificially intelligent computer program? In that case it would not seem fair to say that the agent retains responsibility, no matter what the enhancing effect might be. That seems like a classic case of manipulation.

Which brings me to my final argument…


3. The Machine-based Manipulation Argument
Everything I have just argued assumes the plausibility of the compatibilist take on responsibility. It assumes that human beings can be responsible for their actions, even if everything they do is causally determined. It suggests that machinic-interference in human agency can have complex effects on human responsibility, but it doesn’t challenge the underlying belief that humans can indeed be responsible for their actions.

This is something that hard incompatibilists dispute. They think that true moral responsibility is impossible. It is not compatible with causal determinism; nor is it compatible with indeterminism. One of the chief proponents of hard incompatibilism is Derk Pereboom. I covered his defence of the view on my blog in 2015. His main argument against the compatibilist take on responsibility is a variation on a manipulation argument. The starting premise of this argument is that if the action of one agent has been manipulated into existence by another agent, then there is no way that the first agent can be responsible for the action. So if I grab your hand, force you to take hold of a knife, and then use your arm to stab the knife into somebody else, there is no sense in which you are responsible for the stabbing.

Most people will accept that starting premise. Manipulation involves anomalous and unusual causation. In the example given, the manipulator bypasses the deliberative faculties of the agent. The agent is forced to do something without input from their agency-relevant capacities. How could they be responsible for that? (Even if they happen to like the outcome.)

Pereboom develops his argument by claiming that there is no difference between direct manipulation by another agent and other forms of external causation. He builds his argument slowly by setting out four thought experiments. The first thought experiment describes a case in which a neuroscientist implants a device into someone’s brain to manipulate their deliberation about some decision. This doesn’t bypass their deliberative faculties, but it nevertheless seems to undermine responsibility. The second thought experiment involves the same set-up, only this time the neuroscientist manipulates the deliberative faculties at birth. Again, Pereboom argues that this seems to undermine responsibility. The third thought experiment is similar to the second, except this time the agent’s deliberative faculties are manipulated (effectively brainwashed) by the agent’s peers as they are growing up. It seems iffy to ascribe responsibility there too. The fourth thought experiment involves generic social, biological and cultural determination of an agent’s deliberative faculties. Pereboom challenges proponents of compatibilism to explain the difference between the fourth case and the preceding three. If they cannot, then the compatibilist take on responsibility is wrong.

Pereboom’s argument has been challenged. One common response is that his jump from the third thought experiment to the fourth is much too quick. It is easy to see how responsibility is undermined in the first two cases because there is obvious manipulation by an outside agent. It is also pretty clear in the third case. All three involve anomalous, manipulative causation. But there is nothing equivalent in the fourth case. To suppose that it involves something anomalous or responsibility-undermining is to beg the question.

Now, I’m not saying that this defence of compatibilism is any good. It would take a much longer article to defend that point of view (if you’re interested, some of the relevant issues were thrashed out in my series of posts on Pereboom’s book). The point I want to make in the present context is relatively simple. If machinic interference with human agency becomes more and more common, then we are going to confront many more cases of anomalous causation. The lines between ordinary agency and manipulated agency will become much more blurry.

This could be a real threat to traditional theories of responsibility and agency. And it could be more of a threat than fundamental discoveries in the neuroscience of human behaviour. No matter how reductionistic or mechanistic explanations of human behaviour become, it will always be possible for philosophers to argue that they do nothing to unseat our commonsense intuitions about free will and responsibility, and that we can reconcile those intuitions with the findings of neuroscience. It is less easy to account for machinic interference in the same way. Machinic interference does not merely reveal something about the mechanisms of human behaviour; it actually intervenes in that behaviour.


* You may ask: why couldn’t the integration be disenhancing? It probably could be, but here I assume that most forms of integration are intended to be enhancing and that they succeed in this aim. If they were unsuccessful, that would add further complexity to the analysis, but the general line of argument would not change.

** I include the ‘indirect and distal’ comment in order to allow for the possibility of tracing responsibility for an unconscious act at T2 back in time to a conscious act at T1.
