[This is the rough draft of a paper I presented at the RIP Moral Enhancement Conference at Exeter on the 7th July 2016]
Some people are frightened of the future. They think humanity is teetering on the brink. Something radical must be done to avoid falling over the edge. This is the message underlying Ingmar Persson and Julian Savulescu’s book Unfit for the Future. In it they argue that humanity faces several significant existential risks (e.g. anthropogenic climate change, weapons of mass destruction, loss of biodiversity etc.). They argue that in order to overcome these risks it is not enough to improve our technology and our political institutions. We will also need to improve ourselves. Specifically, they argue that we may need to morally enhance ourselves in order to deal with the impending crises. We need to be less myopic and self-centred in our policy-making. We need to be more impartial, wise and just.
It is a fascinating idea, albeit one that has been widely critiqued. But what I find most interesting about it is the structure of the argument Persson and Savulescu make. It rests on two important claims. The first is that the future is a scary place, full of uncertainty and risk. The second is that in order to avert the risk we must enhance ourselves. Thus, the argument draws an important link between concerns about uncertainty and risk, and the development of enhancement technologies. In this paper, I want to further explore that link.
I do so by making three main arguments. First, I argue that uncertainty about the future comes in two major forms: (i) factual uncertainty and (ii) moral uncertainty. This is not a novel claim. Several philosophers have argued that moral uncertainty is distinct from factual uncertainty and that we should take it more seriously in our practical reasoning. What is particularly interesting about those encouraging us to take moral uncertainty more seriously is their tendency to endorse asymmetry arguments. These arguments claim that in certain contexts (e.g. the decision to abort a foetus or to kill and eat an animal) the moral uncertainty is stacked decisively in favour of one course of action. The consequence is that even if we cannot precisely quantify the degree of uncertainty, we should favour one course of action if we wish to minimise the risk of doing bad things.
Second, I argue that some arguments against human enhancement can be reconceived as moral risk/uncertainty asymmetry arguments. I take the work of Nicholas Agar to be a particularly strong example of this style of argumentation. Agar objects to human enhancement on the grounds that it could cause us to lose the internal goods that are associated with the use of normal human capacities. I suggest that Agar’s concerns about losing internal goods can be reframed as an argument about the imbalance of moral uncertainty involved in the decision to enhance. In other words, Agar objects to enhancement because he thinks the benefits of doing so are likely to be limited and the (moral) risks great. Thus the uncertainty inherent in the choice is decisively stacked against enhancement.
Third, I close by arguing that this style of asymmetry argument is not particularly persuasive. This is because the argument only works if it ignores other competing moral risks that are inherent in the decision not to enhance. This is why Persson and Savulescu’s argument is so interesting (to me at any rate): they emphasise some of these other risks that weigh in favour of enhancement. When you add their argument to that of Agar you end up with a much more balanced equation: there is no decisive, uncertainty-based argument, either for or against human enhancement.
1. Moral Uncertainty and the Asymmetry Argument
Let’s start by explaining the distinction between moral and factual uncertainty. Suppose you are a farmer and one of your animals is sick. You go to the vet and she gives you some medication. She tells you that this medication is successful in 90% of cases, but that in 10% of cases it proves fatal to the animal. Should you give the animal the medication? The answer will probably depend on a number of factors, but in the description given you face an uncertain (technically risky)* future. The animal could die or it could live. The uncertainty involved here is factual in nature. There is no doubt in your mind about what the right thing to do is (you want the animal to live, not die). The doubt is simply to do with the efficacy of the medication.
Contrast that with another case. Suppose you are a farmer who has recently been reading up on the literature on the morality of killing and eating animals. You are not convinced that your life’s work has been morally wrong, but you do accept that it is possible for animals to have a moral status that makes killing and eating them wrong. In other words, you are now morally uncertain as to the right course of action. You might be 90% convinced that it is okay to kill and eat your livestock; but accept a 10% probability that this is a grave moral wrong (the numbers don’t matter; the imbalance does). This is very different from uncertainty as to the efficacy of the medication. The uncertainty is no longer about the means to the morally desired end; it is about the end itself.
Admittedly, the distinction between moral uncertainty and factual uncertainty is not perfect. Moral realists might want to argue that there are moral facts and hence there is no distinction to be drawn. But I suspect that even moral realists believe that moral facts (i.e. facts about what is right/wrong or good/bad) are distinct from other kinds of facts (e.g. facts about the weather or the state of one’s digestion). The ability to draw that distinction is all that is relevant here. Moral uncertainty involves uncertainty about facts relating to what is right/wrong and good/bad; factual uncertainty is uncertainty about anything else.
I’ll say no more about the distinction here. The key question is whether moral uncertainty is something that we should take seriously in our decision-making. Most people think that we should take factual uncertainty seriously. The classic example is in prescriptions about gambling or playing the lottery. It is uncertain whether you will win the lottery or not. But decision theorists will tell you that the odds are stacked against you and that this should guide your behaviour. They will tell you that even if the money you might win would be a good thing,** your probability of winning is so low as to make the decision to play irrational. Should we make similar prescriptions in cases where the moral rightness/wrongness of your action (or the goodness/badness of the outcome) is uncertain?
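To make the decision theorist’s prescription concrete, here is a minimal sketch of the kind of expected-value calculation involved. The ticket price, jackpot and odds are purely illustrative assumptions, not figures from any actual lottery.

```python
# Illustrative expected-value calculation for a lottery ticket.
# All numbers are hypothetical assumptions made for the sake of the example.

ticket_price = 2.00          # cost of playing
jackpot = 10_000_000.00      # prize if you win
p_win = 1 / 45_000_000       # assumed probability of winning the jackpot

# Expected monetary value of buying a ticket:
expected_value = p_win * jackpot - ticket_price

print(f"Expected value of a ticket: {expected_value:.2f}")
# Prints roughly -1.78: on average you lose money, so the decision
# theorist counsels against playing, even though winning would be good.
```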
The major impediment to doing so is our inability to precisely quantify the probabilities attached to our moral uncertainties. We could make subjective probability assessments (pick some range of prior probabilities and update) but this is likely to be unsatisfactory. Nevertheless, some philosophers insist that there are cases in which the moral uncertainties (whatever they may be) stack decisively in favour of one course of action over another. These are cases of what Weatherson (who is critical of the idea) calls risk asymmetry.
Here’s an example. You are out one night for dinner. There are two options on the menu. The first is a juicy ribeye steak; the second is a vegetarian chickpea curry. You are pretty sure that eating meat is morally acceptable, but think there is some chance that it is a grave moral wrong. You like chickpea curry, but think that steak is much tastier. But you also know that eating meat is not nutritionally necessary.
To put it more formally, you know that you have two options: (a) eat the steak or (b) eat the chickpea curry. And when deciding between them you know that you could be in one of two moral worlds:
W1: Eating meat is morally permissible; meat is tasty but not nutritionally essential.
W2: Eating meat is a grave moral wrong; meat is tasty but not nutritionally essential.
You think it is more likely that you are in W1, but accept a non-negligible risk that you are in W2. Which option should you pick?
Proponents of the asymmetry argument would claim that in this scenario you should eat the chickpea curry, not the steak. Why? Because if it turns out that you are in W2, and you eat the meat, you will have done something that is gravely morally wrong (perhaps on a par with killing and eating a human being). If, on the other hand, it turns out that you are in W1, and you eat the meat, then you have not done something that is particularly morally good. It’s permissible but no more. In other words, there is a sense in which eating the chickpea curry weakly dominates eating the steak across all the morally relevant possible worlds. You minimise your chances of doing something that is gravely morally wrong by going vegetarian. (Yes: this is effectively a moral version of Pascal’s wager.)
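The structure of this reasoning can be laid out as a small decision matrix. A minimal sketch follows; the ordinal ‘moral payoffs’ are my own illustrative assumptions, chosen only to mirror the verbal rankings just described (a minor gain in taste set against a grave moral wrong). Because the small taste gain technically blocks strict weak dominance, the sketch uses a worst-case (maximin-style) comparison to capture the sense in which the curry comes out ahead.

```python
# A minimal sketch of the moral-risk asymmetry as a decision matrix.
# The ordinal payoffs are illustrative assumptions: 0 = morally neutral,
# a small positive = the minor gain in taste, a large negative = a grave
# moral wrong. The exact numbers do not matter; the gross imbalance does.

payoffs = {
    "eat_steak": {"W1": 1, "W2": -1000},  # permissible (and tasty) vs gravely wrong
    "eat_curry": {"W1": 0, "W2": 0},      # permissible in both moral worlds
}

def worst_case(option):
    """The worst moral outcome an option exposes you to, across the moral worlds."""
    return min(payoffs[option].values())

# Asymmetry reasoning: the small gain the steak offers in W1 cannot offset
# the grave wrong it risks in W2, so choose the option whose worst case is
# least bad (a maximin-style rule).
print(max(payoffs, key=worst_case))  # eat_curry
```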
That’s the gist of the asymmetry argument. It can be applied in other moral contexts. Some people use asymmetry arguments to claim that you should avoid aborting a foetus; some people use them to argue that you should give significant proportions of your income to charities in the developing world. Across all these contexts, asymmetry arguments seem to share four features:
- (i) You have (at least) two options, A and B.
- (ii) You are not sure which moral world you are in (W1 or W2).
- (iii) One option (say A) delivers nothing particularly morally good in either W1 or W2: at best it is merely permissible.
- (iv) That same option is a serious moral wrong (or bad) if you are in W2, while B carries no comparable risk in either world.
As long as the risk of being in W2 is non-negligible, you should avoid A.
What I want to argue now is that these four features are also present in certain objections to human enhancement. Hence those objections can be reframed as moral asymmetry arguments.
2. Asymmetry and Human Enhancement: The Case of Nicholas Agar
The best (but not the only) example of this comes from the work of Nicholas Agar. His 2013 book Truly Human Enhancement is the most up-to-date expression of his views. The gist of the argument in the book is that we should refrain from radical forms of human enhancement because if we don’t we run the risk of losing touch with important moral values, and not gaining anything particularly wonderful in return. To explain how this fits within the asymmetry argument mould, I’ll have to spend some time outlining the concepts and ideas Agar uses to motivate his case.
Agar’s concern is with the prudential axiological value of human enhancement. He wants to know whether the use of enhancement technologies will make life better for the people who are enhanced. In this manner, he is concerned with an intersection between enhancement and moral value, but not with moral enhancement as that term has come to be used in the enhancement debate (the term ‘moral enhancement’ is usually used to refer to the effect of enhancement on right/wrong conduct, not to its axiological effect). I think it is interesting that the term ‘moral enhancement’ is limited in this way, but I won’t dwell on it here. I’ll come back to the intersections between Agar’s argument and the more typical moral enhancement debate later in this article.
Agar follows the traditional view that enhancement technologies are targeted at improving human capacities beyond functional norms. So when he asks the question ‘will enhancement make life better for the people who are enhanced?’, he is really asking the question ‘will the improvement of human capacities beyond the functional norm make life better for those whose capacities are improved?’. In this respect he is drawing a distinction between radical and non-radical forms of enhancement. Take any human capacity — e.g. the capacity for memory, numerical reasoning, problem-solving, empathy. For the normal human population, there will be some variation in the strength of those capacities. Some people will have better memories than others. Some will display more empathy. Typically, the distribution of these abilities follows a bell curve. This bell-curve defines the range of normal human capacities. For Agar, non-radical enhancement involves the improvement of capacities within this normal range. Radical enhancement involves the enhancement of capacities beyond this normal range. Agar’s argument is about the prudential axiology of radical enhancement, i.e. that which moves us beyond the normal range.
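To make the radical/non-radical distinction a little more concrete: if a capacity is roughly normally distributed in the population, an enhancement is ‘radical’ in Agar’s sense when it pushes the capacity beyond the range that any normal human occupies. The sketch below uses made-up numbers (an IQ-style scale with mean 100 and standard deviation 15, and an arbitrary percentile cut-off) purely for illustration; Agar does not quantify the boundary this precisely.

```python
from statistics import NormalDist

# Illustrative only: an IQ-like capacity with mean 100 and SD 15, and an
# arbitrary cut-off for "the normal human range". Agar does not quantify
# the boundary; the point is simply "within the bell curve" vs "beyond it".
capacity_distribution = NormalDist(mu=100, sigma=15)
NORMAL_RANGE_PERCENTILE = 0.9999   # assumed upper edge of the normal range

def is_radical_enhancement(enhanced_level: float) -> bool:
    """True if the enhanced capacity lies beyond the assumed normal human range."""
    return capacity_distribution.cdf(enhanced_level) > NORMAL_RANGE_PERCENTILE

print(is_radical_enhancement(130))   # False: strong, but within the normal range
print(is_radical_enhancement(170))   # True: beyond anything in the normal distribution
```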
When it comes to assessing the value of human capacities, Agar thinks that we must distinguish between two types of goods that are associated with the utilisation of those capacities. The first are the external goods. These are the goods that result from the successful deployment of the capacity. For example, the human capacity for intelligence or problem solving can produce certain goods: new technologies that make our lives safer, more enjoyable, and healthier. Enhancing human capacity beyond the normal range might be prudentially valuable because it helps us to get more of these external goods. These external goods are to be contrasted with internal goods. These are goods that are intrinsic to the deployment of the capacity: they are constituted or exemplified by it, rather than merely caused by it. For instance, the capacity for numerical reasoning might produce the intrinsic good of understanding a complex mathematical problem; or the capacity for empathy might produce the intrinsic good of sharing someone else’s perspective and understanding the world through their eyes.
The distinction between internal and external goods can be tricky. It derives from the work of Alasdair MacIntyre. He explains it by reference to chess. In playing the game of chess, there are certain external goods that I might be able to achieve. If I am good at it, I might be able to win tournaments, prize money, fame and adulation. These are all products of my chess-playing abilities. At the same time, there are certain internal goods associated with the practice of playing chess well. There is the strategic thought, the flash of insight when you see a brilliant move, the rational reflection on endgames and openings. These goods are not mere consequences of playing chess. They are intrinsic to the process.
Agar argues that what is true for chess is true for the deployment of human capacities more generally. Using a particular capacity can produce external goods and it can exemplify internal goods. I have noted on a previous occasion that Agar isn’t as clear as he could be about the relationship between human capacities and internal and external goods. The relationship between capacities and external goods is pretty clear: capacities are used to produce outcomes that can be good for us. The relationship between capacities and internal goods is less clear. I think the best way to understand the idea is that our capacities allow us to engage in certain activities or modes of being that instantiate the kinds of internal goods that MacIntyre appeals to.
The internal/external goods distinction is critical to Agar’s case against enhancement. He notes that although internal and external goods often go together, they can also pull apart. Getting more and more of an external good might require us to forgo or lose sight of an internal good. So, for example, using a calculator might make you better able to arrive at the correct mathematical result, but it also forces you to forgo the intrinsic good of understanding and solving a mathematical problem for yourself. Similarly, attaching rollerblades to the ends of your legs might make you go faster from A to B, but it prevents you from realising the intrinsic goods of running. Note how both of these examples involve forms of technological enhancement: the calculator in one instance and the rollerblades in the other. This is telling. Agar’s main argument is that if we engage in radical forms of human enhancement, we will forgo more and more of the internal goods associated with different kinds of activities and modes of being. He thinks this applies across the board: the kinds of relationships we currently find valuable will be sacrificed for something different; new sporting activities will have to be invented as old ones lose their value; new forms of music and art will be required along with new jobs and intellectual activities. Indeed, Agar also argues that our sense of self and personal identity (our story to ourselves about the things that are valuable to us now) will be disrupted by this process. In short, radical enhancement will force us to give up many (if not most) of the internal goods that currently make our lives valuable.
And for what? Why would we be even tempted to forgo all these internal goods? Two arguments are proffered by proponents of radical enhancement. The first is that enhancing human capacities beyond the normal range will allow us to secure the more important external goods. These external goods include things like more advanced scientific discoveries, and increased wisdom and capacity for making morally and existentially significant policy choices. In other words, the external goods include solving the problems of climate change and the proliferation of weapons of mass destruction — the very things that Persson and Savulescu highlight in their argument for moral enhancement. The second argument is that even if we do lose the old internal goods, we will find new (possibly better) ones to replace them. Thus if you read the work of, say, Nick Bostrom you’ll find him waxing lyrical about the radically new forms of art and music that will be possible in the posthuman future. In other words, the internal goods post-radical enhancement might be even better than those pre-radical enhancement.
For Agar, these arguments hold little water. The second argument rests on largely speculative post-enhancement internal goods. Even if those speculations turn out to be correct you would still have to accept the loss of the old pre-enhancement internal goods. Furthermore, these new goods wouldn’t be our goods, i.e. the ones that shape our current evaluative frameworks. They would be different goods — ones that only really make sense to posthumans, not to us right now. And the first argument rests on a false dichotomy. It’s not like failing to enhance ourselves means that we must forgo the appealed-to external goods. On the contrary, there are perfectly good methods of achieving those goods without radically enhancing our capacities. I discussed this aspect of Agar’s case against enhancement at length in an earlier post. I’ll offer a brief summary here.
Agar’s point is that radical enhancement of human capacities requires the integration of technology into human biology (be it through brain implants or neuroprosthetics or psychopharmacology or nanotech). That’s how you enhance capacities beyond the normal human range. But the integration of technology into human biology is risky. Why do it when we can just create external devices that either automate an activity or can be used as tools by human beings to achieve the desired outcomes? These external technologies can help us realise all the essential external goods, without requiring us to radically enhance ourselves and thereby forgo existing internal goods. Agar uses a thought experiment to illustrate his point:
The Pyramid Builders: Suppose you are a Pharaoh building a pyramid. This takes a huge amount of back-breaking labour from ordinary human workers (or slaves). Clearly some investment in worker enhancement would be desirable. But there are two ways of going about it. You could either invest in human enhancement technologies, looking into drugs or other supplements to increase the strength, stamina and endurance of workers, maybe even creating robotic limbs that graft onto their current limbs. Or you could invest in other enhancing technologies such as machines to sculpt and haul the stone blocks needed for construction.
Which investment strategy do you choose?
The question is rhetorical. Agar thinks it is obvious that we would choose the latter and that we have continually done so throughout human history. His argument then is that when it comes to securing external goods, we are like the pharaohs. We could risk going down the internal enhancement route, but why risk it when there is a perfectly good alternative? We can have the best of both worlds. We can avoid the calamities mentioned by Persson and Savulescu through better external technologies; and we can keep all the internal goods we currently value.
My claim is that in making this argument, Agar is deploying a kind of moral risk asymmetry argument. But it is difficult to see this because the argument is far more complex than the earlier example involving vegetarianism. It blends factual and moral uncertainty together to make an interesting case against enhancement. But at its core — I submit — it is not about factual uncertainty. It is about uncertainty as to what goods are important to the well-lived life, and how that uncertainty is decisively stacked in favour of one course of action. Agar is conceding that there could be new internal goods post-enhancement (there is a non-negligible probability of this). But there is also a non-negligible risk that they involve sacrificing our existing evaluative frameworks.
This does map onto the structure of the asymmetry argument that I outlined earlier:
(i) We can choose between two options: (a) pursue radical human enhancement (i.e. enhance human capacities beyond the normal range) or (b) do not pursue radical enhancement.
(ii) When choosing, we could be in one of two moral worlds:
W1: There are important irreplaceable internal goods associated with current levels of human capacity; there are important external goods that we could realise through enhancement; there are no compensating internal goods associated with enhanced levels of human capacity.
W2: There are important irreplaceable internal goods associated with current levels of human capacity; there are important external goods that we could realise through enhancement; there could be new compensating (better?) internal goods associated with enhanced levels of human capacity.
(iii) Option (a) is seriously axiologically flawed if we are in W1 and not particularly good if we are in W2. This is because in W1 radical enhancement entails losing all the important irreplaceable internal goods, for little obvious gain (we could have used external technologies to secure the external goods); and in W2 it is still unnecessary, causes us to lose existing goods, and merely substitutes a speculative (possibly better) set of internal goods for the ones we already have.
(iv) Option (b) is axiologically superior if we are in W1 and not obviously axiologically inferior if we are in W2. This is because in W1 it allows us to retain the existing internal goods without forgoing the external goods (thanks to external technologies), while in W2 it leaves us effectively as we were: we keep the goods we have instead of punting on an unknown set of alternatives.
The upshot is that radical enhancement doesn’t look like a good bet in any moral world. There is no doubt that factual uncertainty plays a significant role in this argument — there are uncertainties as to the likely effects of technology on human life — but there is little doubt that moral uncertainty is also playing a role — it is because we don’t know whether existing internal goods provide the optimum complement for a good life that we should be wary about losing them. Our future, radically enhanced selves might have very different evaluative frameworks (ways of assessing what is worthwhile and what is not) from our own. What we now deem important and valuable might be completely unimportant to them. So why sacrifice a current, reasonably well-known evaluative framework for a completely speculative one, when there is little gain?
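As before, this reconstructed decision problem can be rendered as a small matrix with a worst-case comparison. The ordinal payoffs are again my own illustrative assumptions, chosen only to mirror the rankings in (iii) and (iv): losing the irreplaceable internal goods is a large axiological cost, the external goods are obtainable either way (via external technologies), and new post-enhancement internal goods are speculative at best.

```python
# The same worst-case comparison applied to the reconstruction of Agar's
# argument. The ordinal payoffs are illustrative assumptions mirroring
# claims (iii) and (iv): in W1 radical enhancement loses irreplaceable
# internal goods for little gain; in W2 it merely swaps known goods for
# speculative ones; non-enhancement keeps existing goods in both worlds,
# with external technologies assumed to secure the external goods.

payoffs = {
    "radical_enhancement": {"W1": -100, "W2": -10},
    "no_enhancement":      {"W1": 0,    "W2": 0},
}

def worst_case(option):
    """The worst axiological outcome an option exposes us to, across worlds."""
    return min(payoffs[option].values())

print(max(payoffs, key=worst_case))  # no_enhancement
```

Notice that the payoffs assigned to option (b) assume that external technologies secure the external goods at no cost to existing internal goods; the next section questions exactly that assumption.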
3. Is this style of argument persuasive?
I hope I have convinced you that moral uncertainty — specifically in the form of axiological uncertainty — plays a role in Agar’s argument against the desirability of human enhancement. Let me close then by asking the obvious question: is the argument any good? I think not, and in explaining why I think this I hope to return to the original discussion of Persson and Savulescu. My contention is that Agar’s argument only succeeds if it involves an incomplete specification of the decision problem we face; in particular, it only succeeds if it takes an overly benign view of external technologies. In effect, this is exactly what Persson and Savulescu warn against in Unfit for the Future, but with a slight twist.
Let me build up to this argument slowly. As a general rule, one should always be somewhat sceptical of arguments that make use of decision-theoretical models. These models are always simplifications. They highlight some of the salient features of a decision problem while at the same time obscuring or downplaying others. This doesn’t make them worse than other modes of arguing for or against a particular course of action (all human reasoning involves abstraction and simplification), but there is a danger that people are fooled by the apparent rigour and formality of the model. At the same time, there is a significant advantage to dressing up an argument in the formal decision-theoretical garb: doing so makes it easier to identify what is being left out of the analysis. That’s one reason why I think my reconstruction of Agar’s argument has value. Agar doesn’t strictly couch his argument in decision-theoretical terms. But when you do so you see more clearly where he might have made a misstep.
I think his major misstep is in suggesting that the failure to radically enhance is relatively risk free: that externalising technologies can help us to achieve the desired external goods without forgoing the internal goods. I think there is a much more intimate and complex relationship between external technologies and existing internal goods. You can see this most clearly in the debate about automation. I think it is fair to say — and I think Agar would agree — that one’s job is often a source of both internal and external goods. You work to get an income; to gain social status; to provide goods and services that are valuable to society. These are all external goods. At the same time, your work can also be a source of internal goods: the mastery of some skill and the sense of satisfaction and accomplishment associated with the performance of the skill (the analogy to MacIntyre’s chess player is almost exact). Now suppose your job is becoming more competitive: to keep achieving the same level of external goods you will have to radically improve your productivity and performance. Agar’s claim is that you could do this by using external technologies (e.g. robot assistance or advanced tools). This would allow you to achieve the external goods without forgoing the internal goods. That is the logic underlying the Pharaoh thought experiment (though, to be clear, I doubt Agar thinks that the experience of the pyramid builders is replete with internal goods).
But this seems very wrong. The use of external technologies to achieve the desired external goods of work does not necessarily leave intact the current internal goods of work. In many instances, the technologies replace or take over from the human workers. The human workers then lose all the internal goods that were intrinsic to their work. They might find new jobs and there might be new internal goods associated with those jobs, but that’s beside the point. They lose touch with the internal goods of their previous jobs. That’s exactly the kind of thing Agar would like us to avoid by favouring external technologies over enhancement technologies. But he cannot always have his cake and eat it. Indeed, the modern trend is in favour of more and more externalising technologies — ones that sever the link between human activity and desired outcomes. Smart machines, machine learning, artificial intelligence, autonomous robots, predictive analytics, and on and on. All these technological developments tend to take over from humans in discrete domains of activity. They are often favoured on the grounds that they are more effective and more efficient than those humans in achieving the external goods associated with those activities. So soon there will be no need for skilled drivers, lawyers, surgeons, accountants, and teachers. Robots will do a better job than humans ever could.
This will involve a radical shift in existing evaluative frameworks. Currently, much of human self-worth and meaning is tied up in performing activities that make a genuine difference to the world. Sometimes those activities are directly linked to paid employment; sometimes they are not. Either way, they are under threat from the rise of smart machines. If the machines take over, we will have to change our priorities. We will have to take a different approach to finding meaning in our lives. This sounds like it might involve the kind of radical shift that Agar is concerned about.
What could help us to avoid this shift? Well, ironically, one thing that could help is radical human enhancement. By enhancing our capacities we could ‘run faster to stay in one place’. We could contend with the increasing demands of our jobs (or whatever), and that way retain the internal goods we currently cherish. Indeed, radical enhancement might be essential if we are to do so. In the end then, Agar’s argument fails because it ignores the negative impact of external technologies on internal goods. Once that negative impact is factored in, the asymmetry that does the heavy-lifting in Agar’s argument dissolves.
Let me close with some final thoughts on the relevance of this to Persson and Savulescu’s case for moral enhancement. As I noted at the outset, their argument is also very much premised on claims about risk and uncertainty. They think that current technological developments pose significant existential threats to humans and that the only way to resolve these problems is to pursue moral enhancement of those humans. But this argument is not as robust a defence of human enhancement as it might appear to be. The argument assumes that humans will maintain their relevance in political and social decision-making processes. But interestingly we might be able to address the problems they identify by removing humans from those processes. Better smart technologies might make better moral decisions. Why bother keeping humans in the loop?
So, somewhat ironically, it may be that it is only when you take Agar’s concerns about losing internal goods seriously, that you make a robust case for maintaining human participation in social decision-making. And it may be that it is only then that the case for moral enhancement of those humans can flourish.
*Throughout this paper I ignore the technical distinctions between risk and uncertainty. Most of the examples given in the paper involve uncertainty as opposed to risk because you cannot precisely quantify the degree of uncertainty involved in the decision. But many of the papers on moral uncertainty ignore this technicality and so I follow suit.
** Yes, I know some people will disagree with this. If so, then they are adding moral uncertainty into the decision-making problem. They are suggesting that the moral value of having lots of money (particularly if you get it suddenly and unexpectedly) is unclear.