
Saturday, March 16, 2013

Nagel on the Burden of Enhancement (Part Two)



(Part One)

It has oft been observed that people are uneasy about the prospect of advanced enhancement technologies. But what is the cause of this unease? Is there any rational basis to it? I’m currently trying to work my way through a variety of arguments to this effect. At the moment, I’m looking at Saskia Nagel’s article “Too Much of a Good Thing? Enhancement and the Burden of Self-Determination”, which appeared a couple of years back in the journal Neuroethics.

In this article, Nagel presents two main arguments. The first — which I call the Well-Being Argument (WBA) — suggests that there might be an inverse relationship between the level of choice and the level of well-being, and since enhancement increases the amount of choice it could have a negative impact on well-being. The second — which I call the Social Responsibility Argument (SRA) — suggests that the growth of enhancement technologies may lead to increased social pressure to enhance, which has negative implications from the perspective of both social equality and well-being.

I looked at the WBA in part one and questioned some of its premises. In the remainder of this post, I do the same for the SRA.


2. The Social Responsibility Argument
Let me start with a story.

Jake is an epileptic. His condition is serious, but manageable. Although he is extremely prone to seizures, if he takes his prescribed medication he can usually avoid them. Suppose Jake is about to undertake a long journey by car. He knows that if he doesn’t take his medication in advance of the trip, there is a high probability that he will have a seizure and that this may lead to an accident. Despite this, he decides not to take his medication, thinking it will be too much hassle to get it out of his bag. So he drives off. Two hours into the journey he has a seizure, which causes him to swerve the car off the road, at which point he collides with a pedestrian. The pedestrian dies.

Is Jake responsible for the pedestrian’s death? I don’t know how you feel about it, but my feeling is that he is. His awareness of the risk posed by his condition, coupled with the fact that he deliberately ran that risk, seems sufficient to make him responsible for the pedestrian’s death (deterministic considerations being ignored for present purposes). But now contrast the judgment about this case with another case. Suppose this time Jake has the same condition, but there is no medication capable of controlling it. Suppose, further, that he is forced to take control of a car when the driver has a heart attack. Nevertheless, he still has a seizure and causes a fatal accident. Is he responsible this time? I don’t think so, but what’s the salient difference between the cases? Well, in the former, Jake has both the means and the opportunity to avoid (or, at least, minimise the risk of) the accident. Thus, he has the medication, and he has the decision-point prior to entering the car. In the latter case, he has neither of these things. There is no controlling medication, and he has to take control of the car in an emergency.

Nagel suggests that the growth of enhancement technologies creates more and more scenarios like the first Jake case, and she uses the difference in our attitudes towards the two cases to fashion an argument against enhancement. The argument has two stages, the first of which looks something like this:


  • (1) If we have both the means and the opportunity to avoid a particular kind of social risk, then we are likely to be held responsible for the realisation of that risk (if and when it arises). 
  • (2) The growth of enhancement technologies increases both the means and the opportunities we have for avoiding social risk. 
  • (3) Therefore, the growth of enhancement technologies increases the number of occasions on which we are likely to be held responsible for the realisation of social risk.


By itself this is a pretty interesting argument, and I will say more about its key premises and concepts below, but hopefully you can see why it needs a second stage. The problem is that this first stage is largely descriptive-predictive. It holds that increased enhancement leads to increased responsibility for social risk, but this in itself isn’t a bad thing. Additional premises are needed to turn this into an anti-enhancement argument.

What might those premises look like? Unfortunately, there is no clear indication from Nagel’s article. This is largely my fault, since I am the one insisting on the formal reconstruction of what she has to say. Still, I think Nagel would be keen to push this argument back in the direction of well-being related concerns. In other words, she would like to argue that the increased likelihood of being held responsible creates a social expectation that we will take responsibility for those risks, which in turn imposes burdens and anxieties on the individual. So:


  • (4) An increased likelihood of being held responsible for social risk will lead to an increased social expectation that we will take responsibility for those risks.
  • (5) An increased social expectation for taking responsibility creates anxieties and imposes burdens, both of which are inimical to individual well-being.
  • (6) Therefore, the growth of enhancement technologies is likely to be inimical to individual well-being.


Although this is certainly an argument worth making, I think the first stage of the SRA could be pushed in an alternative, equally worthy direction. That direction has to do with social equality. My thinking is that an increased social expectation to use enhancement may lead to the social risk burden being distributed in an unequal way. In other words, those who are most vulnerable and most in need of social protection may face a higher level of expectation. The argument would look like this (the first premise is the same as in the previous argument):


  • (4) An increased likelihood of being held responsible for social risk will lead to an increased social expectation to take responsibility for social risk.
  • (7) This increased social expectation is likely to lead to the social risk burden being distributed in an inegalitarian manner.
  • (8) Therefore, the growth of enhancement technologies is likely to lead to an inegalitarian distribution of the social risk burden.


To the extent that inegalitarian social distributions are deemed morally suspect, this would also count as an anti-enhancement argument.


3. Assessing the Social Responsibility Argument
The fact that the SRA can take different forms creates difficulties when it comes to its assessment. Ideally, one should cover all the possibilities, but in the interests of time and space I will only cover one. Thus, I will focus on the first stage, which is common to both forms, and the social equality version of the second stage. I justify this on the grounds that I already spoke about well-being in part one.

So let’s look at the first stage of the argument. Are the key premises of this stage supported? Well, premise (1) seems pretty reasonable to me. It can be supported in two ways. First, one could simply rely on thought experiments like the one I outlined involving Jake the epileptic. If there is widespread agreement about responsibility in such cases, then it seems that we are likely to be held responsible in such situations. But in addition to this intuition mongering, we can also point to some actual legal cases in which this seems to be the case. The current law on criminal responsibility (in England and Wales at least) suggests that you can indeed be responsible for the downstream consequences of failing to minimise some social risk. For example, if you continue to drive while feeling drowsy you can be held responsible for causing death by dangerous driving, even if you are actually asleep at the time of the offence. This leads me to believe that premise (1) is sound.

Premise (2) is a bit less clear to me. It may well be that certain forms of enhancement minimise social risk, but it’s not obviously true. The link between enhancement and the minimisation of social risk may be extremely indirect in many cases. Alternatively, enhancement may itself increase social risk. For example, an enhanced memory or imagination may have a debilitating effect on one’s ability to concentrate on a particular task. This could be risky. Still, if we stick with the driving while drowsy example for a minute, I think it is fair to say that a cognitive enhancer like modafinil might minimise the social risk in such a case. Since modafinil keeps you alert and awake for longer periods of time, it is one way of minimising the social risk associated with driving in such conditions.

But it’s not the only thing that will minimise the social risk. Choosing not to drive is another way of doing this. So, in this case, although enhancement may give us the means and opportunity to avoid social risk, it won’t be the only thing doing this, nor even the most effective. This has implications for the argument as a whole as it highlights a gap between the premises and the conclusion. Specifically, increasing the number of means and opportunities for minimising social risk does not, in and of itself, increase the likelihood of our being held responsible in cases where the risk is realised. Why not? Because the judgment of responsibility probably only requires one plausible and accessible way of minimising the risk to have been neglected. That said, an increased number of opportunities for avoiding the risk certainly won’t decrease the likelihood of being held responsible.

That brings us onto the second stage of the argument, the one focusing on the distribution of the social risk burden. Before I begin to assess this stage, it’s worth briefly defining what I mean by “social risk burden”. In every society, there is cooperation between multiple persons. This typically creates a social surplus. Thus, for example, if we cooperate to create a local police service, we improve the security and safety for most people in our community. That improved level of security and safety would be a social surplus. But to attain that social surplus, people have to accept certain duties and responsibilities (burdens). For instance, the police officers need to be paid, so people need to contribute to their payment. If they do not, they risk losing the social surplus. This gives us the definition: the social risk burden is simply the bundle of duties and responsibilities imposed upon people in order to secure the social surplus (or, equivalently, to minimise the risk of losing that surplus).

Social surpluses and burdens are distributed across the members of society. Oftentimes this distribution can be unequal. Some people get more of the surplus, some people get more of the burden. This might be morally problematic, depending upon one’s theory of distributive justice. The key claim in the second stage of the argument comes in premise (7). It says that the increased social expectation to take responsibility (that comes with the growth of enhancement technologies) is likely to be inegalitarian. For present purposes, I define “inegalitarian” in terms of strict egalitarianism. In other words, a distribution is inegalitarian if people do not have exactly equal shares of surplus/burden. I know that this is a problematic definition, but it will suffice for present purposes.

So now we come to the crux of it: is there any reason to think that enhancement would result in an inegalitarian distribution of the risk burden? Superficially, yes. Some people create more risks than others. If they are expected to take responsibility for avoiding those risks, and if enhancement does make available means and opportunities for minimising those risks, then we might expect the burden to fall on them. For example, if a surgeon could minimise the risks of surgery going wrong by taking cognitive enhancing drugs, we might impose a burden on them to take those drugs. But since surgeons are involved in more risky activities than most (or, at least, more activities that are themselves intrinsically high risk), a greater burden is likely to fall on them than on the rest of us.

But is this morally problematic? Maybe not. For example, one could argue that surgeons are well-compensated for their high risk work. Thus, when you work out the total distribution of surpluses and burdens, they aren’t really being unfairly treated. Their high remuneration and social status offsets the burden. Still, there are others who might be unfairly treated. Take Jake the epileptic once again. His physical condition makes him a high risk individual. If we impose a burden on him to avoid the risks associated with his condition, he can function as an ordinary citizen and we can get on with our day-to-day lives. But there is no benefit to him over and above the benefits to ordinary citizens. In other words, he gets the burden without the compensating reward. That might be morally problematic.
The saving grace here could be the fact that enhancement technologies create their own compensating rewards. In other words, in addition to minimising certain social risks, enhancement technologies might, to borrow John Harris’s phrase, make us “more of everything we want to be”.


4. Conclusion
To sum up, Nagel cautions us about the effects of human enhancement in two ways. First, she argues that by increasing both the level of choice and the number of opportunities for regret, enhancement may impact negatively on our well-being. This was the basis of the WBA, which I examined in part one. Second, she argues that enhancement might increase the social expectation to take responsibility for things. This could impact negatively on our individual well-being, but also on the distribution of social surpluses and burdens. This was the basis of the SRA, which I considered in this post.

I think the SRA, as it pertains to social equality, is an interesting argument. It might be worth pursuing its various implications in more depth. I have sketched some of the potential avenues of discussion above, but certainly not all of them. The problem being posed, as I see it, is that people might be forced to enhance in order to minimise social risks without getting any compensating reward. The question is whether enhancement always creates its own compensating reward. These are things worth thinking about.
