
Thursday, February 6, 2014

Joshua Greene's Modular Myopia Hypothesis

Joshua Greene

(Previous Entry)

In this post I’m going to take a look at Joshua Greene’s modular myopia hypothesis (MMH), as detailed in his recent book Moral Tribes. The MMH is both an attempt to explain our anomalous responses to the multiple variants of the trolley problem, and an attempt to account for other aspects of our moral decision-making. To fully understand the significance of the MMH, you will need to read the previous entry on advanced trolleyology and the doctrine of double effect. I’m not going to restate all the details from that post here.

Nevertheless, one detail does need to be restated at the outset. As you’ll recall, an argument was presented at the end of the previous post. This argument purported to “debunk” our commitment to the intuitively compelling doctrine of double effect. It did so by showing how that commitment was contaminated by the presence of morally irrelevant factors.

One thing that the MMH is designed to do is to build upon this contamination argument. Only this time, instead of showing how one intuitively compelling moral principle is questionable, the goal is to show how several of the outputs of our moral decision-making faculties are questionable. This is a point that will be re-emphasised at the end of this post.

In the meantime, we will preoccupy ourselves with the following four topics. First, we’ll get a bird’s eye view of the MMH, paying particular attention to its reliance on the “dual process” theory of moral reasoning. Second, we’ll look at Greene’s evolutionary explanation for the existence of the MMH. Third, we’ll get into the details of the MMH by considering exactly why it gives rise to anomalous results in the trolley cases. Fourth and finally, we’ll see what the debunking potential of the MMH really is.


1. The Modular Myopia Hypothesis, in brief
One of the key inferences from the experimental analysis of trolley problems is that our intuitive moral responses are sensitive to two factors: (i) whether harm is used as a means to an end or whether it is a mere side effect; and (ii) whether the harm was administered personally or impersonally. If harm occurs as a side effect, or if it is brought about impersonally, then we tend to think little of it; if it occurs as a means to an end, and we personally cause the harm, then we tend to think a lot of it. Why is this?

That’s what the MMH tries to answer. The central plank of the hypothesis is that our moral reasoning is modular. In other words, we have different brain modules that are responsible for different styles of moral reasoning. Two such modules have emerged from Greene’s experimental work (this is the “dual process” aspect of the theory). The first is a “fast” or automatic module, which issues moral responses on essentially emotive grounds: “that feels wrong”, “that feels right” and so on. The second is a “slow” or manual module, which is much more dispassionate and rationalistic, focusing primarily on the costs and benefits of our actions.

The slow, manual module is stolidly consequentialist in nature. Across all versions of the trolley problem it simply weighs the costs and benefits and, ceteris paribus, comes down in favour of saving five by killing one. The fast, automatic module is much more erratic in nature — at least, erratic in the sense that the principles it adheres to are initially opaque. Analysis of the experimental results, however, reveals that this module is simply “myopic” in the principles it applies. It attaches strong negative moral emotions to acts that are personal and which use harm as a means to an end, but ignores other morally salient factors.

This then is the essence of the MMH: our automatic moral reasoning is myopic in nature. As Greene sees it, once we understand this, there are two questions to ask. Why do we have a myopic moral module? And why is it myopic in that particular way? The first question takes us into the evolutionary origins of the module. The second forces us to confront the mechanisms of the myopic module.


2. An Evolutionary Account of the MMH
The evolutionary account offered by Greene has all the hallmarks of a good “just so” story. Such stories are often criticised for their lack of empirical content. Nevertheless, Greene maintains that his story generates predictions that can be tested, which makes it more scientifically satisfying.

The just so story works something like this. At some point in our evolutionary history (probably at a pre-human point), we developed a brain that was capable of advance planning. It could take internal goals and develop action plans that could be used to realise those goals. With this capacity came a problem: our ancestors could now plan premeditated acts of violence in pursuit of their goals. While this capacity might be beneficial in many organisms (e.g. solitary predators), it created particular problems for human beings. Human beings are social animals, and any member of human society who repeatedly and wantonly used violence to get what they wanted would quickly find themselves on the receiving end of retaliatory attacks and the like. The result is that violence would exclude them from the benefits of social cooperation.

In order to overcome this problem evolution programmed our decision-making modules to be more discerning in their penchant for violence. To be precise, it evolved an internal monitoring system that sounded an “alarm” whenever our ancestors thought about performing a socially counter-productive act of violence. This is what our automatic moral module does. The problem is that it does so in a myopic way, by ignoring many features of our actions.

We’ll get back to those features in a minute. What is important for now is that this hypothesis, according to Greene, generates some predictions:

Predictions of the MMH 
(a) The system didn’t evolve to respond to artificial thought dilemmas like the trolley problem; so what should really get it going is real-world violence. 
(b) The system should respond to certain cues of violence, irrespective of whether those cues actually mean that someone is being violently harmed. In other words, it should respond to simulated violence. This is down to the “myopia” of the system. 
(c) Since the system evolved as an “internal” monitor of violence, it should respond less strongly to simulated acts performed by others than to simulated acts performed by oneself.


Greene argues that these predictions have been confirmed by a series of experiments performed by Fiery Cushman, Wendy Mendes and their colleagues. The experiments involved real-world simulations of violent acts. For example, in one experiment subjects were asked to strike someone’s leg with a fake (but real-looking) hammer; in another they were asked to smash a fake baby’s head against a table. The experimenters found that people had a very strong negative emotional reaction when they performed these simulated acts of violence themselves, but not when they watched others perform them. This was all in spite of the fact that the experimental subjects were fully aware that their actions would not really cause harm to anyone (it’s safe to say the experiment would never have received ethics approval if they had been kept in the dark about this!).

I guess my one difficulty with all this is that I don’t know how predictive those predictions really are. It’s possible that Greene is simply retrospectively cherry-picking the experimental data to find results that fit his hypothesis. This might be okay as a starting point, but further confirmation and experimentation is surely needed (perhaps this is being done). And since I don’t have mastery of the experimental literature myself, it’s possible that there is disconfirming evidence out there that is simply ignored by Greene. I’m not well-positioned to say. For the time being, I remain somewhat sceptical of the evolutionary story being told here.


3. Action Plans and Moral Myopia
Leaving the evolutionary bit to one side, the second question to ask of the MMH relates to the actual mechanisms underlying it. Why is it that our automatic module is sensitive to some features of our actions but not to others? To answer this, Greene tries to combine his own dual-process theory of moral reasoning with an alternative theory defended by John Mikhail.

I haven’t read Mikhail’s defence of this theory, though I did read some of his older papers. As I recall, Mikhail is a moral grammarian. He proposes that human brains have an innate moral grammar: from a few simple components they can morally evaluate an infinite range of actions and outcomes. This is similar to the way in which they have an innate linguistic grammar, from which they can evaluate an infinite set of sentences. The analogy to Chomsky’s theory of language is immediate and direct. Fortunately, we don’t need to worry about the nuances of Mikhail’s theory in this post. We just need to focus on one of his ideas.

The idea in question is that of the action plan. This is something originally developed by Alvin Goldman and Michael Bratman. The proposal is that human brains represent actions in terms of branching action plans. Each plan has a primary “trunk” that begins with some bodily movement and terminates in the agent’s goal. Every point along the primary trunk is an event that is necessary (in a weak, empirical sense) for the realisation of the goal. From the primary trunk a number of additional branches (secondary, tertiary and so on) emerge. Along these branches we find alternative routes to the same goal or foreseen side effects of the primary action. The action plans for the Switch and Footbridge trolley dilemmas (see previous entry) are illustrated below.



Greene argues that these action plan diagrams can be used to understand the myopia of our automatic moral module. In essence, his claim is that the module simply inspects the primary branches of our moral action plans. If it finds some morally troubling feature on that primary branch (such as the use of personal force, or the infliction of harm) it will sound the alarm. If it doesn’t find such features, it will give the action plan the all clear. It ignores all sub-branches and their outcomes (including the oft-neglected side effects in the Footbridge case).
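
Greene presents this mechanism only in prose and diagrams, but it is easy to picture as a small algorithm. Below is a minimal sketch in Python, purely for illustration: the class, attribute and function names are my own inventions, not Greene’s or Mikhail’s formalism. An action plan is modelled as a tree of events, and the automatic module walks only the primary trunk, sounding the alarm if it finds personal force being used to inflict harm as a means.

```python
# A toy model of the modular myopia hypothesis. All names and numbers here
# are illustrative choices of mine; Greene offers no formal model like this.
from dataclasses import dataclass, field
from typing import Iterator, List, Optional

@dataclass
class Node:
    """One event in a branching action plan."""
    description: str
    harms: int = 0                 # people harmed at this event
    saves: int = 0                 # people saved at this event
    personal_force: bool = False   # harm delivered with the agent's own muscles
    harm_as_means: bool = False    # the harm is needed to realise the goal
    side_branches: List["Node"] = field(default_factory=list)
    next: Optional["Node"] = None  # next event on the same branch

def trunk(plan: Node) -> Iterator[Node]:
    """Yield only the events on the primary branch (bodily movement -> goal)."""
    node: Optional[Node] = plan
    while node is not None:
        yield node
        node = node.next

def automatic_module(plan: Node) -> bool:
    """Myopic alarm: inspect the primary trunk only, and sound the alarm
    if personal force is used to inflict harm as a means to the goal."""
    return any(n.personal_force and n.harm_as_means for n in trunk(plan))
```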

Greene believes that this explains some of the particularly odd results we find in the trolley experiments. Take the results of the Loop experiment (discussed previously). In this case, subjects are asked whether they would divert a trolley onto a sidetrack that loops back onto the main track. The catch is that this diversion will only save the lives of the five people on the main track if the trolley collides with the one worker who happens to be on the sidetrack. Here, the death of one person is being used as a means to an end, and so is contrary to the doctrine of double effect. Nevertheless, experiments suggest that there is high approval for diverting the trolley onto the sidetrack.

Action Plan for the Loop Case

Why is this? Greene holds that the MMH has the answer. The Loop case is odd in that there is a secondary branch off the primary trunk.* It is along that secondary branch that harm is being used as a means to an end. The automatic module doesn’t see it though. It simply inspects the primary branch of the action plan, doesn’t find anything morally troubling along that branch, and so approves of the action. It ignores the secondary branch.

This isn’t the complete picture. There is still the role of the slower, manual moral module to factor in. This module is not so myopic: it can “see” the secondary branch. But remember that it is stolidly consequentialist in nature. So it inspects the secondary branch and gives it the thumbs up. This is why we get such high approval for diversion in the Loop case. And this is the key point: Greene’s theory is that our moral reactions in trolley cases result from the combined effect of both modules, the manual one, which weighs costs and benefits, and the automatic one, which is myopic in the various ways described.
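
Continuing the toy sketch from above (same caveats: an illustration of mine, not Greene’s own model), the manual module can be drawn as a cost-benefit pass over every branch, and the Loop case as a plan whose harm-as-means sits on a secondary branch where the automatic module never looks:

```python
# Continues the earlier sketch (reuses Node, trunk and automatic_module).

def all_nodes(plan: Node) -> Iterator[Node]:
    """Walk every branch of the plan, primary trunk and side branches alike."""
    for n in trunk(plan):
        yield n
        for branch in n.side_branches:
            yield from all_nodes(branch)

def manual_module(plan: Node) -> bool:
    """Dispassionate cost-benefit pass over the whole plan:
    approve iff more people are saved than harmed."""
    return sum(n.saves for n in all_nodes(plan)) > sum(n.harms for n in all_nodes(plan))

# Footbridge: the push sits on the primary trunk, is personal, and is a means.
footbridge = Node("push the man off the footbridge", harms=1,
                  personal_force=True, harm_as_means=True,
                  next=Node("trolley is stopped by his body", saves=5))

# Loop: the trunk is just "turn the trolley away from the five"; the collision
# with the one worker sits on a secondary branch, harm used as a means but
# out of the automatic module's sight.
loop = Node("hit the switch",
            next=Node("trolley is diverted away from the five", saves=5,
                      side_branches=[Node("trolley collides with the one worker",
                                          harms=1, harm_as_means=True)]))

for name, plan in [("Footbridge", footbridge), ("Loop", loop)]:
    print(name, "| alarm:", automatic_module(plan), "| manual approves:", manual_module(plan))
# Footbridge | alarm: True  | manual approves: True   -> intuitive "wrong"
# Loop       | alarm: False | manual approves: True   -> high approval
```

On this toy picture, the only difference between Footbridge and Loop is where the harmful event sits in the tree, which is exactly the point Greene wants to make.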

A final question arises: why is the automatic module so very myopic? Why does it only focus on the primary branch? Greene submits that there are sound evolutionary (and indeed a priori) reasons for this. Any action we pursue will have a huge number of side effects (both foreseen and unforeseen). To trace out every branch of an action plan would be massively costly in cognitive terms. It makes sense that evolution would try to minimise this cost by focusing solely on the primary branch.


4. The MMH and the Debunking Argument
So how does all this tie in to Greene’s debunking project? Well, with the MMH we get a more “evolutionary”-flavoured debunking argument. This is a type of argument I’ve addressed in detail before, with some comments on Greene’s own work. The basic idea is that our myopic moral module is the product of a causal and evolutionary history that we are not warranted in trusting.

Guy Kahane’s template for understanding causal/evolutionary debunking arguments is helpful in this regard. He suggests that all such arguments fit within the following mold:

Causal Premise: S’s evaluative belief that P is caused by process Y. 
Epistemic Premise: Process Y does not track the truth of evaluative propositions of type P. 
Conclusion: S’s belief that P is unjustified, or unwarranted.

Greene’s argument can be made to fit this mold too. Greene is claiming that some of our moral beliefs are products of an evolved psychological mechanism which may not give rise to warranted moral beliefs.


  • (1) Some of our moral beliefs (e.g. the belief that killing is wrong in the Footbridge case) are caused by the myopic automatic moral module in our brains.
  • (2) The myopic automatic moral module does not reliably track the truth of (all) moral propositions.
  • (3) Therefore, we should not trust at least some of our moral beliefs.


Premise (1) rests on the truth of Greene’s MMH. Moral psychologists might critique that hypothesis, but I’m willing to grant it for the time being. Premise (2) is more interesting to me. Greene never explicitly defends it in his book — indeed, he never explicitly outlines the argument he is defending at all — but there are implicit defences of it scattered throughout the text. And it must be conceded that it has a degree of plausibility. If Greene is right, then the myopic module evolved to solve a very particular kind of problem: the problem of premeditated violence in small social groups. A module that is designed to solve this problem may work well for some moral problems (Greene concedes as much) but may be inapplicable to a broad range of other moral problems (this is the major thesis of Greene’s book). Furthermore, simple rational reflection suggests that we shouldn’t always trust the module. If it is myopic in the way Greene describes, then it does ignore a lot of factors that might be relevant to moral decision-making, namely all the unexplored branches of our action plans.

The problem, however, is how we get from this argument to the endorsement of utilitarianism. That’s ultimately where Greene is trying to lead us, and by itself the debunking of some of our moral beliefs — however persuasive that debunking might be — wouldn’t seem to be sufficient for us to embrace utilitarianism. Alas, a fuller treatment of this issue is beyond the scope of this post. To see what Greene has to say, I recommend reading his book.


* There is a technical objection one could make to this, viz. why is it a secondary branch at all? Isn't there just a single causal pathway to the goal and hence isn't the claim that there is a branch simply arbitrary? Greene tries to address this objection in a footnote. He argues that the secondary branch pathway is parasitic on the primary trunk because "the turning of the trolley away from the five makes sense as a goal-directed action all by itself, without reference to the secondary causal chain, that is, to what happens after the trolley is turned. But the secondary chain cannot stand alone." 
