Our moral reasoning follows well-worn pathways. An argument is presented telling us that a particular action should be deemed good/bad or right/wrong; we assess its premises, locating its strengths and weaknesses in the process; and then, after exhaustive deliberation and evaluation, we reach an all-things-considered judgment as to the merits or demerits of the particular action.
Commonplace as that style of reasoning is, it has some obvious shortcomings. The most significant has to do with our own cognitive limitations. When assessing ethical arguments, we are often taken into abstruse and esoteric areas of metaphysics and morality. There is a very significant and live possibility of our making some mistake about these areas of inquiry. What difference does that risk of error make for our ethical decision-making?
The literature on moral risk tries to address this question, and I’m currently trying to get a handle on it. As part of my efforts, I’m writing this series of posts on Dan Moller’s article “Abortion and Moral Risk”. In part one, I looked at Moller’s attempt to highlight how moral risks arise in the abortion debate. Now, in this part, I’ll look at Moller’s attempt to develop some principles and rules of thumb for dealing with those risks.
The discussion will take two parts. First, it will consider the general question: once you have recognised the possibility of some kind of moral risk, how should you let it affect your moral decision-making, if at all? Broadly conceived, there are three possible answers, two extreme and one intermediate. Moller defends the intermediate position, according to which recognising the possibility of moral risk should make some kind of difference to your decision-making, though it's not clear how much. This leads to the second part of today's post, which looks at five different criteria Moller thinks you should keep in mind when determining how much of a difference it should make.
1. Dealing with Moral Risk: Three Approaches
When it comes to acknowledging the impact of moral risk on actual ethical decision-making, there are three possible positions one could take up. They are arranged along a spectrum, with the two ends representing the two extreme positions, and the full length of the spectrum between those extremes representing the blurry and indistinct intermediate position. The three positions are:
1. No-difference: Even though there is a real possibility that you could be making a moral error, you should not change your behaviour as a result.
2. Some difference: The possibility of moral error should make some difference to your decision-making, though the degree of difference it makes varies in accordance with a number of factors.
3. Paralysis: The possibility of moral error should cause you to refrain from making any decision.
As might be clear from its pejorative name, the third possibility is easily dismissed. If one refrained from performing an action or making a decision on the grounds that there was some moral risk associated with it, one would soon be unable to do anything. That's because there's some degree of moral risk associated with any decision. Risk-free decision-making is an ideal, not a reality.
So that leaves us with the other two positions. At first glance, the “no-difference”-position has some allure. Because risk-free decision-making is an ideal, we might be tempted to take a rather cavalier attitude toward risk, acknowledging its existence but ploughing ahead with our all-things-considered judgment nonetheless. This, however, would be a mistake. Or so, at least, Moller argues.
He argues this point by asking us to consider two thought experiments, both involving some level of moral risk, and asking us whether it is plausible to think we should "plough ahead" in both instances. I'll briefly describe both thought experiments now. As you read through the descriptions, keep in mind that the examples have some features that Moller thinks should alter our reaction to moral risk. These features may be pushing your intuitions in a particular direction and may cause you to question the generality of the lessons that can be learned from these examples. I certainly found this to be the case when I first read them. But I think they do make sense when you read them in light of Moller's criteria for determining the impact of moral risk. So I would suggest being patient and re-reading them after you know what those criteria are, which you will do after reading the second section in this post.
The first thought experiment tells the story of Frank, who is dean of a medical school, and must decide whether the school should pursue important research according to Plan A or Plan B. As far as he can tell, there is little separating the plans, apart from the fact that A has less paperwork associated with it. But Frank has little ethical expertise so he waits to hear the deliberations of his five-person ethics committee. They come back to him with the following conclusions. First, they all agree that Plan B is ethically acceptable. Second, three out of five think that Plan A is acceptable, but two out of five think it comes with a real risk of doing significant ethical harm. What should Frank do?
The suggestion is that Frank shouldn’t simply discount the views of the two members of the ethics committee. They have greater ethical insight and knowledge than he does, and their belief that there is risk of significant harm should be factored in. Certainly, the fact that A has less paperwork associated with it is unlikely now to be a decisive reason in its favour. Indeed, the situation seems to be reversed in terms of the desirability of the two plans. Since B doesn’t appear to have any significant harms associated with it, it looks to be the more desirable option.
Frank’s case is what Moller refers to as a “thinly-described” example. So consider a second one, this time involving a woman named Sally who has a temporary illness that will last for a month. The illness is such that if she conceives a child within the month, the child will be born with a severe handicap. The handicap will significantly reduce their quality of life (relative to a “normal” person) but not to the point that their life is not worth living. If she waits a month, the illness will pass, and so too will the possibility of any child she conceives having this handicap (though, of course, other problems may arise). Suppose Sally adopts a person-centred theory of wrongdoing, according to which she cannot do any harm to a child she conceives within the month since the child does not yet exist. But suppose she is aware that there are impersonal theories of wrongdoing according to which conceiving the child within the month is wrong (even though it doesn’t harm the child). What should Sally do?
Again, it seems wrong to say that Sally’s risk of moral error — in this case an error having to do with personal versus impersonal wrongs — is insignificant. Indeed, it seems like conceiving the child within the month would be the wrong thing to do in this instance. She should wait until her temporary illness passes. The moral risk in this case seems sufficient to warrant the extra degree of caution.
If all this is right, then the “No difference”-position is flawed. Moral risk makes some difference to our moral decision-making. (Note: if you think it’s ironic that the defence of this view relies on the very same methods — i.e. analysis of thought experiments — that were thought to be risk-laden in part one, then you’ll be glad to know I thought that was ironic too).
2. Weighing up the Moral Risks
But how much of a difference does it make? This is a very difficult question to answer, but it is the most important for anyone taking moral risk seriously. To this point, Moller has been dealing with the low-hanging fruit in the analysis of moral risk. Establishing the existence of genuine moral risks, and ruling out the extreme positions one can take up in relation to that risk is a relatively easy thing to do; the hard part is figuring out how to deal with the blurry and indistinct line between those two extremes.
Unfortunately, Moller has only a few pointers to offer on this crucial topic. Still, they are somewhat helpful and worthy of consideration. In total, he recognises five separate factors that will affect how much of a difference moral risks make to our moral decision-making. To illustrate, assume that our choice is whether or not to perform action A (if it helps, imagine that “A” is the act of getting an abortion, since that was the case study we looked at in part one). In that case, the following five factors should be borne in mind:
- 1. The likelihood that act A involves wrongdoing;
- 2. How wrong A would be if it were wrong;
- 3. The costs the agent faces if she omits A;
- 4. The agent's level of responsibility for facing the choice of doing A;
- 5. Whether not doing A would also involve moral risk.
Moller is clear that this is not an exhaustive list, and is at best a starting point for a complete theory of moral risk. Bearing that in mind, let’s briefly talk through each of these five factors. As we do, we’ll see how they may have influenced our judgment about the two thought experiments discussed earlier.
The first factor is relatively straightforward, conceptually, but more difficult practically. When making a decision about how much weight to place on the possibility of moral error, an obvious thing to consider is the probability that you are making that error. If the probability of error is 0.4, then it would seem to count for more than if the probability of error is 0.0001. The problem is figuring out how exactly to come up with a measure for the likelihood of moral error. I can imagine all sorts of measures being used — e.g. the opinion of ethical experts within the relevant field — but I don't know that any of them would be particularly good.
The second factor is also pretty straightforward conceptually, and perhaps a bit easier on a practical level. Obviously, the magnitude of the wrong done (if it is a wrong) would have a significant impact on decision-making. Abortion is a compelling example because if you do make an error (morally speaking) you might be doing something very bad (i.e. killing an innocent person). Contrast that with a case in which one of your choices comes with a risk of causing someone to feel pain equivalent to a light pin prick for less than one second. That’s morally bad, for sure, but much less so than killing an innocent person. Thus, even if there was a high likelihood of making the error in the latter case, it might be okay to run the risk.
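The interplay between the first two factors can be loosely modelled in expected-value terms. Here is a minimal sketch of that idea (my own illustration, not Moller's formalism, and the numbers are purely made up), assuming we can assign a rough probability of wrongdoing and a rough magnitude to the wrong:

```python
def expected_moral_cost(p_wrong: float, magnitude: float) -> float:
    """Toy expected moral cost: the probability that the act is wrong
    multiplied by how bad it would be if it were wrong.
    Units are arbitrary; the point is only the comparison."""
    return p_wrong * magnitude

# A grave-magnitude case: modest probability of error, enormous wrong
# if the error is real (e.g. killing an innocent person).
grave = expected_moral_cost(p_wrong=0.2, magnitude=1000.0)

# A trivial-magnitude case: high probability of error, but the wrong
# is minor (e.g. a momentary pin prick of pain).
trivial = expected_moral_cost(p_wrong=0.9, magnitude=0.1)

# Even with the higher probability, the trivial case carries far less
# expected moral cost, so running the risk may be acceptable there.
print(grave, trivial)
```

This captures why a low-probability but grave error can still dominate a high-probability but trivial one, though it obviously papers over the hard question of where the numbers come from.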
The third factor relates largely to the prudential costs of not performing the relevant action. I’m not entirely sure why Moller considers it significant, but that’s because I’m not entirely sure whether his argument is primarily about rational decision-making or moral decision-making. Certainly, personal costs are relevant to both, but the weight attaching to them might vary depending on whether one is concerned with making rational choices or moral choices.
The fourth factor is no doubt an important one, but Moller says very little about it in the article. I think the basic idea behind it is that the more responsibility one has for the relevant decision, the more important the possibility of moral error becomes. But that intuition would need to be worked out in greater detail.
The fifth factor is given pride of place in Moller's discussion, as he thinks it particularly important. The gist of it is that moral risk has its biggest impact on decision-making when the risks involved are asymmetrical, that is, when the potential errors (both in terms of probability and magnitude) fall predominantly on one side. This is probably why Frank and Sally's cases were compelling. In Sally's case, for instance, there doesn't seem to be any moral risk associated with waiting a month to conceive a child (except for maybe the risk posed by anti-natalist theories); all the risk attaches to conceiving within the month. When there are roughly equal risks on both sides, there will be little impact on decision-making. In between, there are any number of problematic cases.
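The asymmetry point can also be given a crude quantitative gloss. The sketch below (again my own illustration, with invented numbers) scores how lopsided the expected moral costs of doing versus refraining are: 0.5 means the risks are symmetric, and values near 1.0 mean almost all the risk sits on one side, which is when, on Moller's view, moral risk should weigh most heavily:

```python
def asymmetry_score(cost_of_doing: float, cost_of_refraining: float) -> float:
    """Fraction of the total expected moral cost borne by the riskier
    option. 0.5 = perfectly symmetric risks; near 1.0 = one-sided."""
    total = cost_of_doing + cost_of_refraining
    if total == 0:
        return 0.5  # no moral risk on either side
    return max(cost_of_doing, cost_of_refraining) / total

# Sally-style case: nearly all the risk attaches to conceiving now,
# almost none to waiting a month.
print(asymmetry_score(cost_of_doing=200.0, cost_of_refraining=1.0))

# A symmetric case: equal risk either way, so the possibility of
# moral error gives little guidance.
print(asymmetry_score(cost_of_doing=50.0, cost_of_refraining=50.0))
```

Nothing hangs on this particular formula; it is just one way of making vivid why one-sided risks push harder on our decisions than evenly balanced ones.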
So there we have it, a brief overview of Moller's paper "Abortion and Moral Risk". To quickly recap, in part one we looked at the types of moral risk that might arise in the abortion debate. As we saw, moral risks could arise from getting the theory of personhood wrong, or weighting one's reaction to different thought experiments incorrectly. In part two, we considered what we should do about such moral risks. Dismissing the extreme positions, we saw how Moller argued that moral risk should make some difference to our decision-making, though maybe not a decisive difference (this was called the "some difference"-position). We then looked at five factors that determine how significant moral risks really are. They varied from the likelihood of the risk to the asymmetrical nature of the risks involved in the relevant decision. It was suggested that the latter was particularly important insofar as many of the most compelling cases for the "some difference"-position involve highly asymmetrical risks.