Tuesday, February 4, 2014

Advanced Trolleyology and the Doctrine of Double Effect



Most people are familiar with the trolley problem and the influence it has had on contemporary applied ethics. Originally formulated by Philippa Foot in 1967, and subsequently analysed by virtually every major philosopher in the latter half of the 20th Century, the trolley problem has provoked debates about the merits of utilitarianism and deontology, and provided the basis for a whole sub-branch of moral psychology: trolleyology. Since the original formulation and its two variants, experimenters have created multiple variations on the basic trolley dilemma, each one tweaking and adjusting the conditions in order to provoke a different moral response.

But what has all this scrutiny really achieved? Joshua Greene thinks it has achieved quite a lot. Greene suggests that it can illuminate the underlying psychological mechanisms of moral choice, and cast doubt upon some traditional and much-beloved ethical principles. That, at least, is one of the central arguments in his recent book Moral Tribes.

I agree with Greene, in part. I think the results of the psychological experiments are fascinating, and I think some of the proposed psychological mechanisms of moral choice are informative, but I’m less sure about the broader philosophical implications. Nevertheless, in the spirit of educating myself in public, I thought I might do a couple of blog posts dealing with some of the themes and ideas from Greene’s work. The primary advantage of this is that it gives me an excuse to cover the experimental findings and theoretical models; but I’ll try not to avoid the deeper moral questions either.

The remainder of this post is divided into four parts. First, I consider (very briefly) the classic trolley problem and one of the proposed solutions to that problem. Second, I look at some variants of the classic problem that examine the role of personal/impersonal force in explaining people’s moral responses. Third, I look at some variants of the classic problem that examine the role of the means/side-effect distinction in explaining the different responses to the problem. Fourth, I turn to Greene’s analysis of these variants and try to reconstruct what I think his argument is.

(Note on sources: I take this from Chapter 9 of Moral Tribes; many of the experimental results are taken from Greene, Cushman et al 2009)


1. Classic Trolleyology and the Doctrine of Double Effect
Apart from trolleys and train tracks, the multiple variants of the trolley problem all have one thing in common: they each ask us to imagine a scenario in which we can (a) perform some action that will result in one person being killed and five people being saved; or (b) do nothing, which will result in five people being killed and one person remaining alive. They then ask us whether we would perform that action or not. The variations in the trolley problem relate to the causal connection between our actions and the end result.

The classic presentation involved two cases:

Switch: A trolley car is hurtling out of control down a train track. If it continues on its current course, it will collide with (and kill) five workers who are on the track. You are standing beside the track, next to a switch. If you flip the switch, the trolley will be diverted onto a sidetrack, where it will collide with (and kill) one worker. Do you flip the switch?
Footbridge: A trolley car is hurtling out of control down a train track. If it continues on its current course, it will collide with (and kill) five workers who are on the track. You are standing on a footbridge over the track, next to a very fat man. If you push him off the footbridge, he will collide with the trolley car, slowing it down sufficiently to save the five workers. He, however, will die in the process. Do you push the fatman?

These two scenarios have been presented to innumerable experimental subjects over the years, and the reactions are pretty consistent. In the set of experiments discussed by Greene, 87% of respondents said they would flip the switch, but only 31% said they would push the fatman. Why are the reactions to these two cases so different, especially given that the utilitarian calculus is similar in both?

One common suggestion is that these experiments attest to a non-consequentialist prohibition against causing harm as a means to an end (as opposed to causing harm as a side effect). This is the so-called doctrine of double effect, which has had many supporters over the years:

Doctrine of Double Effect (DDE): It is impermissible to cause harm as a means to a greater good; but it may be permissible to cause harm as a side effect of bringing about a greater good.

Experimental data from the two scenarios above suggests that the DDE is a robust, widely-shared moral intuition. And given the role of robust intuitions in moral argument, this is good enough for many people. But should it be? Greene argues that it shouldn’t. He does so by asking us to consider experiments dealing with other variants of the trolley problem. These experiments, it is argued, collectively point to the irrationality of our moral intuitions.


2. Trolley Problems involving Personal/Impersonal Force
The difficulty with simply accepting the DDE as the explanation of, and justification for, the different responses to Switch and Footbridge is that there are other potential explanations that seem less morally compelling. Consider the fact that Switch involves the impersonal administration of force (the flipping of the switch), whereas Footbridge involves the personal administration of force (pushing the fatman).

If this distinction accounted for the different responses, it might give us pause. After all, we generally don’t think that the personal/impersonal nature of the lethal force is all that relevant to our moral calculations. Greene has an illustration of this. He asks us to imagine that one of our friends has landed in a real-world version of the trolley problem. This friend then phones us up asking whether he should kill one to save five. We wouldn’t ask this friend whether he was administering the lethal force personally or not, would we? If we wouldn’t, it suggests that the impersonal/personal distinction is morally irrelevant. (This might beg the question ever so slightly.)

But if that’s right we have a problem. Some experimental results suggest that the personal or impersonal nature of the force does make a difference to how people react. Consider the following variants on the original cases (along with the percentage of experimental subjects who approved of killing in each case):

Remote Footbridge: The set-up is similar to the original footbridge case, only this time you are not standing alongside the fatman. Instead, you are standing next to the track, beside a switch which would release a trapdoor that the fatman is standing on. Do you flip the switch? 63% of experimental subjects said “Yes”.
Footbridge Switch: This was a control for the previous scenario, designed to test whether “remoteness” from the victim was the decisive factor. The set-up is the same, only this time you are standing next to a switch on the footbridge, i.e. in close proximity to the fatman. In this case, 59% of experimental subjects said they would flip the switch and release the trapdoor.
Footbridge Pole: This time you are standing at the far end of the footbridge, away from the fatman. You cannot reach him and push him off with your own hands. You can, however, use a long pole to knock him off. Should you use it? Only 33% of experimental subjects said “yes” in this case.

Taken together, these experimental results suggest that the personal application of force — even if it is done via a long pole — makes a difference to people’s intuitive reactions. If such a morally irrelevant distinction can make a difference like this, proponents of the DDE might be less sanguine about their beloved principle.


3. Trolley Problems and the Means/Side Effect Distinction
This is not to say that our intuitive judgments do not track the difference between harm as a means and harm as a side effect. The experimental evidence suggests that they do, but they do so in an odd manner. Careful manipulation of the variables within the trolley problem highlights this fact. Consider:

Obstacle Collide: You are standing on a narrow footbridge. The footbridge is over the sidetrack, not the main track. At the far end, there is a switch. If you get to it in time, you can flip it and divert the out-of-control trolley car onto the sidetrack. Doing so will save the lives of the five workers. The problem is that to get to the switch in time you will have to deal with an obstacle: a very fat man (whom, we assume, you cannot warn within the relevant timeframe). The only thing to do is to run into him and knock him off the bridge. This will lead to his death. Should you do it? 81% of experimental subjects approved.
Loop: The set-up is like the original Switch case, only this time the sidetrack loops back onto the main track. If there was nothing on the sidetrack, flipping the switch would not save the five workers (the trolley would collide with them eventually). Fortunately (or unfortunately), there is a single worker on the sidetrack. So if you flip the switch, the trolley will collide with (and kill) him and therefore stop before it loops back to the main track. Do you flip the switch? 81% of experimental subjects said “yes”.
Collision Alarm: This one is complicated as it involves two separate, parallel tracks. On the first track there is a trolley hurtling out of control, about to collide with five workers. On the second track, there is another trolley, not hurtling out of control into anything. But there is a sidetrack to this second track on which we find a single worker and an alarm sensor. You are standing next to a switch that can divert the trolley onto the sidetrack. If you do so, the trolley will collide with (and kill) the worker, but will also trigger the railway alarm system. This will automatically shut down the trolley on the first track (thereby saving the five). Do you flip the switch? 87% approved in this case.

These three cases all play around with the means/side-effect distinction. In Obstacle Collide, you need to push the fatman off in order to get to the switch. His death is a (foreseeable) side effect of your primary intention: you’d prefer it if he didn’t die. Contrariwise, in Loop, you need to kill the one worker: if he wasn’t on the sidetrack, there would be no point in flipping the switch. And in Collision Alarm you also need to kill the worker, although the mechanism of causation is the same as in the original Switch case.

The fact that there is widespread approval of killing one to save five in both the Loop and Collision Alarm cases, even though they involve killing as a means to an end, suggests that our intuitive commitment to the DDE may not be that consistent after all.


4. The Contamination-Debunking Argument
So what’s going on here? What accounts for the different responses to the different dilemmas? Greene argues for something like a contamination effect (the language is mine): our commitment to the DDE is contaminated by our intuitive response to the personal/impersonal distinction. This can be seen if we array the results of the various experiments on a two-by-two matrix.

                      Harm as side effect       Harm as means
  Personal force      Obstacle Collide (81%)    Footbridge (31%); Footbridge Pole (33%)
  Impersonal force    Switch (87%)              Remote Footbridge (63%); Footbridge Switch (59%); Loop (81%); Collision Alarm (87%)

What inferences can we draw from this diagram? Well, there seems to be some agreement that if you cause harm as a side effect of doing good, it is okay. That is consistent with the DDE. Furthermore, there is agreement that if you personally cause harm as a means of doing good, it is not okay. That too is consistent with the DDE. What is not consistent with the DDE is the lower right hand box. This suggests that if you impersonally cause death as a means to a positive end, it is okay. In fact it gets a very high approval rating.
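
To make the pattern concrete, here is a quick tabulation sketch (mine, not Greene’s): it groups the approval percentages quoted above into the four boxes of the matrix and averages them. The assignment of cases to boxes is my reading of the personal/impersonal and means/side-effect factors in each scenario.

```python
# A rough tabulation (mine, not Greene's) of the approval percentages quoted
# above, grouped into the four boxes of the two-by-two matrix.
cases = {
    ("personal", "side effect"): [("Obstacle Collide", 81)],
    ("personal", "means"): [("Footbridge", 31), ("Footbridge Pole", 33)],
    ("impersonal", "side effect"): [("Switch", 87)],
    ("impersonal", "means"): [("Remote Footbridge", 63), ("Footbridge Switch", 59),
                              ("Loop", 81), ("Collision Alarm", 87)],
}

for (force, harm), results in cases.items():
    avg = sum(pct for _, pct in results) / len(results)
    names = ", ".join(name for name, _ in results)
    print(f"{force} force, harm as {harm}: {names} -> mean approval {avg:.0f}%")
```

On these numbers, the personal/means box averages around 32% approval while the impersonal/means box averages around 72%, which is precisely the asymmetry the contamination claim trades on.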

The argument in all this is somewhat opaque. Greene never sets it out explicitly. It is clearly some species of debunking argument: Greene means to debunk our commitment to the DDE by revealing its psychological quirks. I’ve covered debunking arguments on the blog before. Indeed, I once discussed Guy Kahane’s template for understanding these arguments, which used Greene’s work as an exemplar of this style of argument. But it seems to me that this particular argument about the DDE is not easily subsumed within Kahane’s template.

The best I can do for now is suggest something like the following:


  • (1) If our only basis for endorsing a normative principle is our intuitive commitment to that principle, and if our intuitive commitment to that principle is sensitive to the presence of irrelevant factors (i.e. is contaminated by irrelevant factors), we should not endorse that principle.
  • (2) Our sole basis for endorsing the DDE is our intuitive commitment to it.
  • (3) But our intuitive commitment to the DDE is sensitive to morally irrelevant factors (viz. personal/impersonal force).
  • (4) Therefore, we should not endorse the DDE.

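For what it’s worth, the argument as reconstructed is formally valid. Here is a minimal sketch in Lean, with predicate names of my own choosing rather than Greene’s, just to make the inferential structure explicit:

```lean
-- A minimal sketch (my labels, not Greene's) showing that the reconstructed
-- argument is formally valid: (4) follows from (1)-(3) by universal
-- instantiation and modus ponens.
variable (Principle : Type) (DDE : Principle)
variable (OnlyIntuitiveBasis Contaminated ShouldEndorse : Principle → Prop)

example
    (p1 : ∀ p, OnlyIntuitiveBasis p → Contaminated p → ¬ ShouldEndorse p) -- premise (1)
    (p2 : OnlyIntuitiveBasis DDE)                                         -- premise (2)
    (p3 : Contaminated DDE)                                               -- premise (3)
    : ¬ ShouldEndorse DDE :=                                              -- conclusion (4)
  p1 DDE p2 p3
```

The inference itself is nothing more than universal instantiation plus modus ponens; all the contentious work lies in premises (1)-(3).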

This has an air of plausibility about it. I think the contamination of our intuitive responses should give us pause for thought. But whether that is enough to ditch the moral principle completely is another question. I certainly don’t think it gives us the right to embrace utilitarianism (which seems to be Greene’s argumentative goal), since each of these cases also suggests that our commitment to utilitarianism can be contaminated by irrelevant factors. Furthermore, I guess one could come back at Greene and argue that the personal/impersonal force distinction is not morally irrelevant (Greene himself concedes that it can be relevant when it comes to the assessment of moral character - that might be the wedge needed to pry open his argumentative enterprise).

Still, to be fair to the guy, he doesn’t rest everything on this one argument. In fact, his discussion of the DDE and these particular experiments is just a warm-up. His main argument develops a more detailed explanation of the psychological mechanisms underlying intuitive moral judgments. Once he reveals the details of those mechanisms, he thinks he has a more persuasive debunking argument. I’ll try to cover it in another post.
