A couple of weeks back, I looked at David Owens’s article “Disenchantment”. In this article, Owens argues that the ability to manipulate and control all aspects of human life — which is, arguably, what is promised to us by enhancement technologies — would lead to disenchantment. Those of you who read my analysis of Owens’s article will know that I wasn’t too impressed by his arguments. Since then I’ve been wondering whether there might be a better critique of enhancement, one which touches upon similar themes, but which is defended by more rigorous arguments.
I think I may have found one. Saskia Nagel’s article “Too Much of a Good Thing? Enhancement and the Burden of Self-Determination” presents arguments that are broadly similar to Owens’s, but in a slightly more persuasive way. Like Owens, her concern is with the increased choice that enhancement affords us. Like Owens, she thinks that this increased level of choice may have a disorienting and discomfiting effect on our well-being. But unlike Owens, Nagel supports her arguments with copious references to decision theory and psychology. The result is a more satisfying set of arguments.
As I read her, there are two distinct arguments underlying Nagel’s thesis. The first — which I shall call the Well-Being Argument Against Enhancement (WBA for short) — suggests that there might be an inverse relationship between choice and subjective well-being. This runs contrary to the presumptions of certain liberal theories. The second — which I shall call the Social Responsibility Argument Against Enhancement (SRA for short) — highlights the burdens and responsibilities that might be imposed on people as a result of enhancement, with further negative effects on individual well-being.
In this post, and the next, I want to look at both arguments, considering the structure and the defence of the key premises in some detail. I kick things off in this post by looking at the WBA.
1. The Well-Being Argument Against Enhancement
There is an assumption in liberal thought — perhaps with its roots in the work of John Stuart Mill — that says: the more choice the better. The reasoning behind this is straightforward: An individual’s well-being is determined largely by whether they satisfy their preferences (i.e. whether they get what they want); there is a huge diversity of preferences out there to satisfy; individuals are themselves the best judges of which actions (or combinations thereof) will satisfy those preferences; so by giving them more options, we make it more likely that they will satisfy their preferences, which will in turn increase the overall level of well-being.
Nagel’s first argument — the WBA — takes issue with this line of reasoning. The WBA holds that far from it being the case that more choice is better, more choice may actually be inimical to well-being. It may be true, up to a point, that more choice means increased well-being, but beyond that point there is an inverse relationship between choice and well-being. Indeed, beyond that point choice has a paralysing and disorienting effect on our lives. Enhancement technologies exacerbate this problem by dramatically increasing the number of choices we have to make about our lives.
That would give us the following argument:
- (1) Although more choice might lead to increased well-being up to a certain point, beyond that point more choice actually reduces the level of individual well-being.
- (2) Enhancement technologies will increase the number of choices we have to make about our lives beyond the optimal point.
- (3) Therefore, enhancement technologies will reduce the level of individual well-being.
Before assessing the two premises of this argument, I need to say a word or two about how it relates to what Nagel says in her article. First off, as per usual on this blog, nowhere in her article does Nagel actually present an argument in these formal terms. This is strictly my reconstruction of what she has to say. Second, it’s probably not entirely fair to suggest that Nagel really thinks that enhancement technologies “will reduce the level of individual well-being.” Parts of what she says could be read as supporting that conclusion, but I suspect it’s fairer to say that she thinks enhancement technologies might have this effect. My reconstruction of the argument uses the more definitive expression “will” because that just makes it more interesting and fun to engage with. A cautiously hedged argument is more philosophically and academically acceptable, but it risks setting the burden of proof too low since, of course, enhancement “might” have that effect. The possibilities here are vast. What’s important is what is likely to happen.
With that interpretive caveat out of the way, I can proceed to evaluate the key premises of the WBA, starting with (1).
2. Assessing the Argument
Nagel defends the first premise of her argument with a variety of observations, some based on decision theory and the psychology of decision-making, others based on the psychology of regret. Before looking at those observations, however, we need to be clear about what premise (1) does not say. Premise (1) does not deny the liberal contention that more choice increases well-being, at least up to a point. What premise (1) says is that there is an optimal level of choice, beyond which more choice actually has a negative impact on well-being. Think of an inverse-U shaped graph, with the number of choices available to an agent displayed along one axis, and the level of well-being experienced by that agent displayed along the other. Although initially the level of well-being increases along with the increase in choice, after a certain point the level of well-being plateaus and eventually decreases. The key practical question for the evaluation of the WBA is: where are we on this curve? Have we already reached the plateau, such that enhancement technologies would push us over the edge, or are we already past the peak, such that they would simply add to an ongoing decline?
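The shape of this claim can be made vivid with a toy model. To be clear, this is entirely my own illustration, not anything from Nagel or the literature she cites, and the function and parameters are made up: the benefits of extra options grow at a diminishing rate, while the costs of choosing grow faster than linearly, and the difference traces out an inverse-U.

```python
import math

def wellbeing(n_choices, benefit=10.0, cost=0.05):
    # Hypothetical toy model: benefits of additional options grow
    # at a diminishing (logarithmic) rate, while decision costs
    # (comparison effort, anxiety) grow faster than linearly.
    return benefit * math.log(1 + n_choices) - cost * n_choices ** 1.5

# The curve rises, peaks, then declines: an inverse-U.
peak = max(range(1, 200), key=wellbeing)
```

On these made-up parameters the peak falls somewhere in the mid-twenties, but the specific numbers mean nothing; the point is just the shape of the curve that premise (1) presupposes.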
Still, this is simply to clarify the nature of the claim being made by premise (1). Is there any reason to think that this claim is true? Here, Nagel relies heavily on the work of Barry Schwartz and his book The Paradox of Choice. Based largely on studies of consumer behaviour, though backed up by other cognitive psychology studies such as those by Kahneman and Tversky on decision heuristics, Schwartz’s work suggests that more choice increases the level of anxiety among decision-makers. If we imagine that every decision we make can be formally modelled on a decision tree, and that decisions are made by selecting the best branch of the tree, then adding more choices increases the number of branches and increases the complexity of the decision problem that needs to be solved. This leads to the decision-maker getting “stuck” in the decision problem, paralysed by the uncertainty of the various options.
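The complexity point can be made concrete. If a decision-maker faces a sequence of decisions, each with a fixed number of options, then an exhaustive search of the decision tree must compare every complete path through it, so adding options multiplies, rather than merely adds to, the comparisons required. A minimal sketch (the function and numbers are mine, purely illustrative):

```python
def plans_to_compare(options_per_decision, n_decisions):
    # Exhaustive decision-tree evaluation: every complete path
    # through the tree is a candidate plan, so the count grows
    # exponentially with the number of sequential decisions.
    return options_per_decision ** n_decisions

# Two options per decision over five decisions: 2**5 = 32 plans.
# Five options per decision over five decisions: 5**5 = 3125 plans.
```

Going from two options to five at each step turns 32 candidate plans into 3125, which gives some formal content to the idea that extra choice makes the decision problem dramatically harder to solve.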
Of course, in reality decision-makers don’t really follow the strict axioms of decision theory. Instead, they adopt heuristics for quick and dirty decision-making under uncertainty. But even then, Schwartz and Nagel argue, adding more choices upsets their usual heuristics, and increases their level of decision-making anxiety. This has a negative impact on well-being for the simple reason that they find it difficult to pick options that actually will have a positive impact.
Regret is an additional factor here. With more choices come more opportunities for regret. We often wonder about what might have been, and I know that in my own life I tend to regret the roads I have not taken, particularly if the roads I have taken don’t seem to be all that great. This can have an obvious negative impact on well-being. Regret arises after the decision is made, so it is not directly linked to the decision-paralysis problem; rather, it is a separate reason for thinking that more choice is not necessarily better.
So we have a two-pronged defence of premise one:
- (4) Increasing the number of choices makes the decision problems that need to be solved by each individual decision-maker more and more complex. This can lead to decision-paralysis.
- (5) Increasing the number of choices increases the opportunities for regret, and regret impacts negatively on well-being.
These look like plausible reasons to support premise (1), but a couple of cautionary words are in order. First, even my minimal Wikipedia-based research into Schwartz’s thesis suggests that the evidence for it is not clear-cut. Second, to reiterate the point made above, we need to recognise that even if premise (1) is true, the claim it makes may be pretty modest. I’m perfectly willing to accept that there may be a point at which we have far too many choices to make, but what matters is exactly where the threshold lies. Which brings us to…
…The defence of premise (2). Is there any reason to think that it’s true? Here, Nagel is considerably sketchier than she is in relation to premise (1). She seems to think it obvious that enhancement increases the number of choices we have, and in fact she adopts a very similar thought experiment to that adopted by David Owens when defending his claims about enhancement. She asks us to imagine that there are a variety of drugs on the market (“MoodUp, ProSocial, AntiGrief, LiftAnxiety, AgileDayandNight, Prothymos, 7-24-Flourishing”). If effective, such drugs would obviously increase the number of choices we have to make in our daily lives. We would be forced into new decision-problems: Do we want to be perked up or relaxed? Do we want to be outgoing and social, or more introverted and focused?
I enjoy imaginative exercises of this sort, but they fall short of making the case for premise (2). Even if we are willing to tolerate speculation about future enhancement technologies, we still need to know whether the increased number of choices will either (a) push us past the optimal point into the declining part of the well-being/choice curve; or (b) push us further down a decline we have already entered. The only thing Nagel says that might support either position is that the kinds of choices we make with enhancement technologies would be qualitatively different from everyday consumer-like choices. Thus, with enhancement we actively choose the kinds of emotional states we wish to have, rather than just the type of cake we want to eat, or car we want to drive.
But again, this doesn’t get us to premise (2). And even if it did, I’m not sure that it would be successful. I’ve heard people like Owens and Nagel suggest that the choices one could make with enhancement technologies would be qualitatively distinct from the kinds of choices we currently make, but I’m not convinced. It seems to me that we frequently make choices that regulate and affect our mood, or that change the type of person we want to be. For example, when I go running or cycling, I do so because I know it will improve my mood, and will strengthen certain character traits. The only possible difference with a cognitively altering drug is that the manipulation of character and mood might be more direct and less effortful, but I see no reason to think that this by itself would be enough to increase decision-anxiety and reduce well-being.
So I have to conclude that premise (2) is not defended. This obviously detracts from the force of the argument.
To sum up, one of Nagel’s arguments against enhancement is the well-being argument (WBA). According to this, there can be an inverse relationship between the number of choices we have and our level of well-being. More precisely, if we have lots of choice, we face more complex and difficult decision problems, and more opportunities for regret. Both of these things impact negatively on our well-being. But while this may well be true, there is no good reason (yet) to think that enhancement technologies in and of themselves will increase the level of choice to the point where the relationship between it and well-being is inverted.
That brings us to the end of my discussion of the WBA. In part two, I will look at Nagel’s second argument against enhancement: the social responsibility argument (SRA).