Albrecht Dürer - The Four Horsemen of the Apocalypse
Here’s an interesting thought experiment:
The human brain is split into two cortical hemispheres. These hemispheres are joined together by the corpus callosum, a bundle of nerve fibres that allows the two hemispheres to communicate and coordinate with one another. The common assumption is that the corpus callosum unites the two hemispheres into a single conscious being, i.e. you. But there is some evidence to suggest that this might not be the case. In split-brain patients (i.e. patients whose corpus callosum has been severed) it is possible to perform experiments that result in the two halves of the body doing radically different things. In these experiments it is found that the left side of the brain weaves a narrative that explains away the discrepancies in behaviour between the two sides of the body. Some people interpret this as evidence that the left half of the cortex is primarily responsible for shaping our conscious identity. But what if that is not what is going on? What if there are, in fact, two distinct conscious identities trapped inside most ‘normal’ brains, but the left-side consciousness is the dominant one and it shuts down or prevents the right side from expressing itself? It’s only in rare patients and constrained experimental contexts that the right side gets to express itself. Suppose that in the future a ground-breaking series of experiments convincingly proves that this is indeed the case.
What ethical consequences would this have? Pretty dramatic ones. It is a common moral platitude that we should want to prevent the suffering and domination of conscious beings. But if what I just said is true, it would seem that each of us carries around a dominated and suffering conscious entity inside our own heads. This would represent a major ongoing moral tragedy and something ought to be done about it.
This fanciful thought experiment comes from Evan Williams’s paper ‘The Possibility of an Ongoing Moral Catastrophe’. It is tucked away in a footnote, offered up to the reader as an intellectual curio over which they can puzzle. It is, however, indicative of a much more pervasive problem that Williams thinks we need to take seriously.
The problem is this: there is a very good chance that those of us who are alive today are unknowingly complicit in an unspecified moral catastrophe. In other words, there is a very good chance that you and I are currently responsible for a huge amount of moral wrongdoing — wrongdoing that future generations will criticise us for, and that will be a great source of shame for our grandchildren and great-grandchildren.
How can we be so confident of this? Williams has two arguments to offer and two solutions. I want to cover each of them in what follows. In the process, I’ll offer my own critical reflections on Williams’s thesis. In the end, I’ll suggest that he has identified an important moral problem, but that he doesn’t fully embrace the radical consequences of this problem.
1. Two Arguments for an Ongoing Moral Catastrophe
Williams’s first argument for an ongoing moral catastrophe is inductive in nature. It looks to lessons from history to get a sense of what might happen in the future. If we look at past societies, one thing immediately strikes us: many of them committed significant acts of moral wrongdoing that the majority of us now view with disdain and regret. The two obvious examples of this are slavery and the Holocaust. There was a time when many people thought it was perfectly okay for one person to own another; and there was a time when millions of Europeans (most of them concentrated in Germany) were knowingly complicit in the mass extermination of Jews. It is not simply that people went along with these practices despite their misgivings; it’s that many people either didn’t care or actually thought the practices were morally justified.
This is just to fixate on two historical examples. Many more could be given. Most historical societies took a remarkably cavalier attitude towards what we now take to be profoundly immoral practices such as sexism, racism, torture, and animal cruelty. Given this historical pattern, it seems likely that there is something that we currently tolerate or encourage (factory farming, anyone?) that future generations will view as a moral catastrophe. To rephrase this in a more logical form:
- (1) We have reason to think that the present and the future will be like the past (general inductive presumption)
- (2) The members of most past societies were unknowingly complicit in ongoing moral catastrophes.
- (3) Therefore, it is quite likely that members of present societies are unknowingly complicit in ongoing moral catastrophes.
Premise (2) of this argument would seem to rest on a firm foundation. We have the writings and testimony of past generations to prove it. Extreme moral relativists or nihilists might call it into question. They might say it is impossible to sit in moral judgment on the past. Moral conservatives might also call it into question because they favour the moral views of the past. But neither of those views seems particularly plausible. Are we really going to deny the moral catastrophes of slavery or mass genocide? It would take a lot of special pleading and ignorance to make that sound credible.
That leaves premise (1). This is probably the more vulnerable premise in the argument. As an inductive assumption it is open to all the usual criticisms of induction. Perhaps the present is not like the past? Perhaps we have now arrived at a complete and final understanding of morality? Maybe this makes it highly unlikely that we could be unknowingly complicit in an ongoing catastrophe? Maybe. But it sounds like the height of moral and epistemic arrogance to assume that this is the case. There is no good reason to think that we have attained perfect knowledge of what morality demands. I suspect many of us encounter tensions or uncertainties in our moral views on a daily or, at least, ongoing basis. Should we give more money to charity? Should we be eating meat? Should we favour our family and friends over distant strangers? Each of these uncertainties casts doubt on the claim that we have perfect moral knowledge, and makes it more likely that future generations will know something about morality that we do not.
If you don’t like this argument, Williams has another. He calls it the disjunctive argument. It is based on the concept of disjunctive probability. You are probably familiar with conjunctive probability. This is the probability of two or more events all occurring. For example, what is the probability of rolling two sixes on a pair of dice? We know the independent probability of each of these events is 1/6. We can calculate the conjunctive probability by multiplying together the probability of each separate event (i.e. 1/6 x 1/6 = 1/36). Disjunctive probabilities are just the opposite of that. They are the probability of either one event or another (or another or another) occurring. For example, what is the probability of rolling either a 2 or a 3 on a single die? We can calculate the disjunctive probability by adding together the probability of each separate event (1/6 + 1/6 = 1/3). It should be noted, though, that calculating disjunctive probabilities can be a bit more complicated than simply adding together the probabilities of separate events. If there is some overlap between the events (e.g. if you are calculating the probability of drawing a spade or an ace from a deck of cards) you have to subtract away the probability of the overlapping event. But we can ignore this complication here.
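The dice and card examples can be checked with a few lines of arithmetic. The sketch below is just an illustration of the rules described above: multiplication for independent conjunctive events, addition for mutually exclusive disjunctive events, and the subtraction of the overlap (inclusion-exclusion) for the card case.

```python
# Conjunctive probability: both independent events occur.
p_two_sixes = (1/6) * (1/6)     # rolling two sixes on a pair of dice
# -> 1/36

# Disjunctive probability: either of two mutually exclusive events occurs.
p_two_or_three = 1/6 + 1/6      # rolling a 2 or a 3 on a single die
# -> 1/3

# With overlapping events, subtract the probability of the overlap:
# drawing a spade OR an ace from a standard 52-card deck.
p_spade = 13/52
p_ace = 4/52
p_ace_of_spades = 1/52          # the overlapping event
p_spade_or_ace = p_spade + p_ace - p_ace_of_spades
# -> 16/52, roughly 0.308
```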
Disjunctive probabilities are usually higher than you think. This is because while the probability of any particular improbable event occurring might be very low, the probability of at least one of those events occurring will necessarily be higher. This makes some intuitive sense. Consider your own death. The probability of you dying from any one specific cause (e.g. heart attack, bowel cancer, infectious disease, car accident or whatever) might be quite low, but the probability of you dying from at least one of those causes is pretty high.
Williams takes advantage of this property of disjunctive probabilities to make the case for ongoing moral catastrophe. He does so with two observations.
First, he points out that there are lots of ways in which we might be wrong about our current moral beliefs and practices. He lists some of them in his article: we might be wrong about who or what has moral standing (maybe animals or insects or foetuses have more moral standing than we currently think); we might be wrong about what is or is not conducive to human flourishing or health; we might be wrong about the extent of our duties to future generations; and so on. What’s more, for each of the possible sources of error there are multiple ways in which we could be wrong. For example, when it comes to errors of moral standing we could err in being over or under-inclusive. The opening thought experiment about the split-brain cases is just one fanciful illustration of this. Either one of these errors could result in an ongoing moral catastrophe.
Second, he uses the method for calculating disjunctive probabilities to show that even though the probability of us making any particular one of those errors might be low (for argument’s sake let’s say it is around 5%), the probability of us making at least one of those errors could be quite high. Let’s say there are fifteen possible errors we could be making, each with a probability of around 5%. In that case, the chances of us making at least one of those errors is going to be about 54%, which is greater than 1 in 2.
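Williams’s illustrative numbers can be reproduced directly. Assuming the possible errors are independent, the probability of making at least one is one minus the probability of avoiding them all. The 5% figure and the count of fifteen are the for-argument’s-sake assumptions from the article, not empirical estimates.

```python
def prob_at_least_one_error(p: float, n: int) -> float:
    """Probability of at least one of n independent errors,
    each with individual probability p."""
    return 1 - (1 - p) ** n

# Fifteen possible errors, each with a 5% probability.
p_catastrophe = prob_at_least_one_error(0.05, 15)
print(round(p_catastrophe, 2))  # 0.54 — a bit worse than a coin flip
```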
That’s a sobering realisation. Of course, you might try to resist this by claiming that the probability of us making such a dramatic moral error is much lower than 5%. Perhaps it is almost infinitesimal. But how confident are you really, given that we know that errors can be made? Also, even if the individual probabilities are quite low, with enough possible errors, the chance of at least one ongoing moral catastrophe is still going to be pretty high.
2. Two Responses to the Problem
Having identified the risk of ongoing moral catastrophe, Williams naturally turns to the question of what we ought to do about it.
The common responses to an ongoing or potential future risk are to hedge your bets against it or to take a precautionary approach to it. For example, if you are worried about the risk of crashing your new motorcycle and injuring yourself, you’ll either (a) take out insurance to protect against the expenses associated with such a crash or (b) simply avoid buying and using a motorcycle.
Williams argues that neither solution is available in the case of ongoing moral catastrophe. There are too many potential errors we could be making to hedge against them all. In hedging against one possible error you might commit yourself to another. And a precautionary approach won’t work either because failing to act could be just as big a moral catastrophe as acting, depending on the scenario. For example, failing to send more money to charity might be as big an error as sending money to the wrong kind of charity. You cannot just sit back, do nothing, and hope to avoid moral catastrophe.
So what can be done? Williams has two suggestions. The first is that we need to make it easier for us to recognise moral catastrophes. In other words, we need to make intellectual progress and advance the cause of moral knowledge: both knowledge of the consequential impact of our actions and of the plausibility/consistency of our moral norms. The idea here is that our complicity in an ongoing moral catastrophe is always (in part) due to a lack of moral knowledge. Future generations will learn where we went wrong. If we could somehow accelerate that learning process we could avert or at least lessen any ongoing moral catastrophe. So that’s what we need to do. We need to create a society in which the requisite moral knowledge is actively pursued and promoted, and in which there is a good ‘marketplace’ of moral ideas. Williams doesn’t offer specific proposals as to how this might be done. He just thinks this is the general strategy we should be following.
The second suggestion has to do with the flexibility of our social order. Williams argues that one reason why societies fail to minimise moral catastrophes is because they are conservative and set in their ways. Even if people recognise the ongoing moral catastrophe they struggle against institutional and normative inertia. They cannot bring about the moral reform that is necessary. Think about the ongoing moral catastrophe of climate change. Many people realise the problem but very few people know how to successfully change social behaviour to avert the worst of it. So Williams argues we need to create a social order that is more flexible and adaptive — one that can implement moral reform quickly, when the need is recognised. Again, there are no specific proposals as to how this might be done, though Williams does fire off some shots against hard-wiring values into a written and difficult-to-amend constitutional order, using the US as a particular example of this folly.
3. Is the problem more serious than Williams realises?
I follow Williams’s reasoning up until he outlines his potential solutions to the problem. But the two solutions strike me as being far too vague to be worthwhile. I appreciate that Williams couldn’t possibly give detailed policy recommendations in a short article; and I appreciate that his main goal is not to give those recommendations but to raise people’s consciousnesses as to the problem of ongoing moral catastrophe and to make very broad suggestions about the kind of thing that could be done in response. Still, I think in doing this he either underplays how radical the problem actually is, or overplays it and thus is unduly dismissive of one potential solution to the problem. Let me see if I can explain my thinking.
On the first point, let me say something about how I interpret Williams’s argument. I take it that the problem of ongoing moral catastrophe is a problem that arises from massive and multi-directional moral uncertainty. We are not sure if our current moral beliefs are correct; there are a lot of them; and they could be wrong in multiple different possible ways. They could be under-inclusive or over-inclusive; they could demand too much of us or too little; and so on. This massive and multi-directional moral uncertainty supports Williams’s claim that we cannot avoid moral catastrophe by doing nothing, since doing nothing could also be the cause of a catastrophe.
But if this interpretation is correct then I think Williams doesn’t appreciate the radical implications of this massive and multi-directional moral uncertainty. If moral uncertainty is that pervasive, then everything we do is fraught with moral risk. That includes following Williams’s recommendations. For example, trying to increase moral knowledge could very well lead to a moral catastrophe. After all, it’s not like there is an obvious and reliable way of doing this. A priori, we might think a relatively frictionless and transparent marketplace of moral ideas would be a good idea, but there is no guarantee that this will lead people to moral wisdom. If people are systematically biased towards making certain kinds of moral error (and they arguably are, although making this assessment itself depends on a kind of moral certainty that we have no right to claim), then following this strategy could very well hasten a moral catastrophe. At the same time, we know that censorship and friction often block necessary moral reform. So we have to calibrate the marketplace of moral ideas in just the right way to avoid catastrophe. This is extremely difficult (if not impossible) to do if moral uncertainty is as pervasive as Williams seems to suggest.
The same is true if we try to increase social flexibility. If we make it too easy for society to adapt and change to some new perceived moral wisdom, then we could hasten a moral catastrophe. This isn’t a hypothetical concern. History is replete with stories of moral revolutionaries who seized the reins of power only to lead their societies into moral desolation. Indeed, hard-wiring values into a constitution, and thus adding some inflexibility to the social moral order, was arguably adopted in order to provide an important bulwark against this kind of moral error.
The point is that if a potential moral catastrophe is lurking everywhere we look, then it is very difficult to say what we should be doing to avoid it. This pervasive and all-encompassing moral uncertainty is paralysing.
But maybe I am being ungenerous to Williams’s argument. Maybe he doesn’t embrace this radical form of moral uncertainty. Maybe he thinks there are some rock-solid bits of moral knowledge that are unlikely to change, and that we can use those to guide us towards what we ought to do to avert an ongoing catastrophe. But if that’s the case, then I suspect any solution to the problem of moral catastrophe will end up being much more conservative than Williams seems to suspect. We will cling to these moral certainties like life rafts in a sea of moral uncertainty. We will use them to evaluate and constrain any reform to our system.
One example of how this might work in practice would be to apply the wisdom of negative utilitarianism (something Williams is sceptical about). According to negative utilitarianism, it is better to try to minimise suffering than it is to try to maximise pleasure or joy. I find this to be a highly plausible principle. I also find it to be much easier to implement than the converse principle of positive utilitarianism. This is because I think we can be more confident about what the causes of suffering are than we can be about what induces joy. But if negative utilitarianism represents one of our moral life rafts, it also represents one of the best potential responses to the problem of ongoing moral catastrophe. It’s not clear to me that abiding by it would warrant the kinds of reforms that Williams seems to favour.
But, of course, that’s just my two cents on the idea. I think the problem Williams identifies is an important one and also a very difficult one. If he is right that we could be complicit in an ongoing moral catastrophe, then I am not sure that anyone has a good answer as to what we should be doing about it.