Tuesday, July 19, 2016

Moral Enhancement and Moral Freedom: A Critical Analysis




The debate about moral neuroenhancement has taken off in the past decade. Although the term admits of several definitions, the debate primarily focuses on the ways in which human enhancement technologies could be used to ensure greater moral conformity, i.e. the conformity of human behaviour with moral norms. Imagine you have just witnessed a road rage incident. An irate driver, stuck in a traffic jam, jumped out of his car and proceeded to abuse the driver in the car behind him. We could all agree that this contravenes a moral norm. And we may well agree that the proximate cause of his outburst was a particular pattern of activity in the rage circuit of his brain. What if we could intervene in that circuit and prevent him from abusing his fellow motorists? Should we do it?

Proponents of moral neuroenhancement think we should — though they typically focus on much higher stakes scenarios. A popular criticism of their project has emerged. This criticism holds that trying to ensure moral conformity comes at the price of moral freedom. If our brains are prodded, poked and tweaked so that we never do the wrong thing, then we lose the ‘freedom to fall’ — i.e. the freedom to do evil. That would be a great shame. The freedom to do the wrong thing is, in itself, an important human value. We would lose it in the pursuit of greater moral conformity.

I find this line of argument intriguing — not least because it shares so much with the arguments made by theists in response to the infamous problem of evil. In this post, I want to look at Michael Hauskeller’s analysis and defence of this ‘freedom to fall’ objection. I base my discussion on two of his papers. The first was published a few years ago in The Philosophers’ Magazine under the title ‘The Little Alex Problem’; the second is due to be published in the Cambridge Quarterly of Healthcare Ethics under the title 'Is it desirable to be able to do the undesirable?'. The second paper is largely an expanded and more up-to-date version of the first, and it presents very similar arguments. Although I read it before writing this post, I’ll still base most of my comments on the first paper (which I read more carefully).

I’ll break the remainder of the discussion down into four sections. First, I’ll introduce Hauskeller’s formulation of the freedom to fall objection. Second, I’ll talk about the value of freedom, drawing in particular on lessons from the theism-atheism debate. Third, I’ll ask the question: would moral neuroenhancement really undermine our freedom to fall? And fourth, I’ll examine Hauskeller’s retreat to a quasi-political account of freedom in his defence of the objection. I’ll explain why I’m less persuaded by this retreat than he appears to be.


1. The Freedom to Fall and the Little Alex Problem
Hauskeller uses a story to illustrate the freedom to fall objection. The story is fictional. It comes from Anthony Burgess’s (in)famous novel A Clockwork Orange. The novel tells us the story of “Little” Alex, a young man prone to exuberant acts of ultraviolence. Captured by the authorities, Alex undergoes a form of aversion therapy. He is given medication that makes him feel nauseous and then repeatedly exposed to violent imagery. His eyes are held open in order to force him to view the imagery (a still from the film version provides the opening image to this post). The therapy works. Once he leaves captivity, he still feels violent urges but these are quickly accompanied by feelings of nausea. As a result, he no longer acts out in violent ways. He has achieved moral conformity through a form of moral enhancement.


The novel takes an ambivalent attitude towards this conformity. One of the characters (a prison chaplain) suggests that in order to be truly good, Alex would have to choose to do the good. But due to the aversion therapy, this choice is taken away from him. The induced nausea effectively compels him to do the good. Indeed, the chaplain goes further and suggests that Alex’s induced goodness is not really good at all. It is better if a person can choose to do the bad than be forced to do the good. This is what Hauskeller calls the ‘Little Alex’ problem. He describes it like this:

This is what I call the “Little Alex” problem… it invites us to share a certain moral intuition (namely that it is in some unspecified way bad or wrong or inhuman to force people into goodness) and thus to accept the ensuing paradox that under certain conditions the bad is better than the good — because it is not only suggested that it is wrong to force people to be good (which is fairly uncontroversial) but also that the resulting goodness is somehow tainted and devaluated by the way it has been produced 
(Hauskeller 2013, 75)



To put the argument in more formal terms, we could say the following:


  • (1) It is morally better, all things considered, to have the freedom to do the bad (and actually act on that freedom) than to be forced to do the good.
  • (2) Moral neuroenhancement takes away the freedom to do the bad.
  • (3) Therefore, moral neuroenhancement is, in some sense, a morally inferior way of ensuring moral conformity.



This formulation is far from being logically watertight, but I think it captures the gist of the freedom to fall objection. Let’s now consider the first two premises in some detail.



2. Is it Good to be Free to do Evil?
The first premise of the argument makes a contentious value claim. It states that the freedom to do bad is such an important good that a world without it is worse than a world with it. In his 2013 article, Hauskeller suggests that the proponent of premise one must accept something like the following value hierarchy:

Best World: A world in which we are free to do bad but choose to do good (i.e. there is both moral conformity and moral freedom)
2nd Best World: A world in which we are free to do bad and (sometimes) choose to do bad (i.e. there is moral freedom but not, necessarily, moral conformity)
3rd Best World: A world in which we always do good but are not free to do bad (i.e. there is moral conformity but no moral freedom)
Worst World: A world in which we are not free and do bad (i.e. there is neither moral conformity nor moral freedom).




In his more recent paper, Hauskeller proposes a similar but more complex hierarchy featuring 6 different levels (the two extra levels capture differences between ‘sometimes’ and ‘always’ doing good/bad). In that paper he notes that although the proponent of the ‘freedom to fall’ argument must place a world in which there is moral freedom and some bad above a world in which there is no moral freedom, there is no watertight argument in favour of this hierarchy of value. It is really a matter of moral intuitions and weighing competing values.

This seems right to me and is one place where proponents of the ‘freedom to fall’ argument can learn from the debate about the problem of evil. As is well-known, the problem of evil is the most famous atheological argument. It claims that the existence of evil is incompatible (in varying degrees) with the existence of a perfectly good god. Theists have responded to this argument in a variety of ways. One of the most popular is to promote the so-called ‘free will’ theodicy. This is an argument claiming that moral freedom is a great good and that it is not possible for God to create a world in which there is both moral freedom and no evil. In other words, it promotes a similar value hierarchy to that suggested (but not defended) by Hauskeller.

There has been much back-and-forth between theists and atheists as to whether moral freedom is such a great good and whether it requires the existence of evil. Many of the points that have been made in that debate would seem to apply equally well here. I will mention just two.

First, I have always found myself attracted to a line of argument mooted by Derk Pereboom and Steve Maitzen. This may be because I am something of a free will sceptic. Pereboom and Maitzen argue that in many cases of moral evaluation, the freedom to do bad is a morally weightless consideration, not just a morally outweighed one. In other words, when we evaluate a violent criminal who has just savagely murdered ten people, we don’t think that the fact that he murdered them freely speaks in his favour. His act is very bad, pure and simple; it is not slightly good and very bad. Admittedly, this isn’t much of an argument. It is an appeal to the intuitive judgments we exercise when assessing another’s moral conduct. Proponents of moral freedom can respond with their own intuitive judgments. One way they might do this is by pointing to cases of positive moral responsibility and noting how in those cases we tend to think it does speak in someone’s favour if they acted freely. Indeed, the Little Alex case is possibly one such case. The only thing I would say about that is that it highlights a curious asymmetry in the moral value of freedom: it’s good when you do good, but weightless when you do bad. Either way, these considerations are much less persuasive if you don’t think there is any meaningful reconciliation of freedom with moral responsibility.

Second, and far more importantly, non-theists have pointed out that in many contexts the freedom to do bad is massively outweighed by the value of moral conformity. Take the case of a remorseless serial killer who tortures and rapes young innocent children. Are we to suppose that allowing the serial killer the freedom to do bad outweighs the child’s right to live a torture and rape-free life? Is the world in which the serial killer freely does bad really a better world than the one in which he is forced to conform? It seems pretty unlikely. This example highlights the fact that moral freedom might be valuable in a limited range of cases (and if it is exercised in a good way) but that in certain ‘high stakes’ cases its value is outweighed by the need for moral conformity. It is open to the defender of moral enhancement to argue that its application should be limited to those ‘high stakes’ cases. Then it will all depend on how high the stakes are and whether moral enhancement can be applied selectively to address those high stakes cases.* According to some proponents of moral enhancement — e.g. Savulescu and Persson — the stakes are very high indeed. They are unlikely to be persuaded by premise one.

(For more on the problems with viewing moral freedom as a great good, I highly recommend Wes Morriston's paper 'What's so good about moral freedom?')


3. Is Moral Enhancement Really Incompatible with Moral Freedom?
Even if we granted premise (1), we might not grant premise (2). This premise claims that moral freedom is incompatible with moral enhancement, i.e. that if we ensure someone’s conformity through a technological intervention, then they are not really free. But how persuasive is this? It all seems to depend on what you understand by moral freedom and how you think moral enhancement works.

Suppose we take moral freedom to be equivalent to the concept of ‘free will’ (I’ll consider an alternative possibility in the next section). There are many different accounts of free will. Libertarian accounts of free will hold that freedom is only possible in an indeterministic world. The ‘will’ is something that sits outside the causal order of the universe and only jumps into that causal order when the agent makes a decision to act. It’s difficult for me to see how a proponent of libertarian free will could accept premise (2). All forms of moral enhancement will, presumably, operate on the causal networks inside the human brain. If the will is something that sits outside those causal networks, then it’s not clear how it is compromised by interventions into them. That said, I accept that there are some sophisticated emergentist and event-causal theories of libertarianism that might be disturbed by neural interventions of this sort, but I think their reasons for disturbance can be addressed by considering other theories of free will.

Other theories of free will are compatibilist in nature. They claim that free will is something situated within the causal order. An agent acts freely when their actions are produced by the right kind of mental-neural mechanism. There are many different accounts of compatibilist free will. I have discussed most of them on this blog before. The leading ones argue that an agent can act freely if they are reasons-responsive and/or their actions are consistent with their character and higher order preferences.

Moral enhancement could undermine compatibilist free will so understood. But it all depends on the modality of the enhancement. In the Little Alex case, the aversion therapy causes him to feel nauseous whenever he entertains violent thoughts. This is inconsistent with some versions of compatibilism. From the description, it seems like Alex’s character is still a violent one and that he has higher-order preferences for doing bad things, it’s just that he is unable to express those aspects of his character thanks to his nausea. He is blocked from acting freely. But aversion therapy is hardly the only game in town. Other modalities of moral enhancement might work by altering the agent’s desires and preferences such that they no longer wish to act violently. Still others might work by changing their ability to appreciate and process different reasons for action, thus improving their reasons-responsivity. Although not written with moral enhancement in mind, Maslen, Pugh and Savulescu’s paper on using DBS to treat Anorexia Nervosa highlights some of these possibilities. Furthermore, there is no reason to think that moral enhancement would work perfectly or would remove an agent’s ability to think about doing bad things. It might fail to ensure moral conformity in some instances and people might continue to entertain horrendous thoughts.

Finally, what if an agent freely chooses to undergo moral enhancement? In that case we might argue that he has also freely chosen all his resulting good behaviour. He has pre-committed to being good. To use the classic example, he is like Odysseus tying himself to the mast of his ship: he is limiting his agency at future moments in time through an act of freedom at an earlier moment in time. The modality of enhancement doesn’t matter then: all that matters is that he isn’t forced into undergoing the enhancement. Hauskeller acknowledges this possibility in his papers, but goes on to suggest that it may involve a dubious form of self-enslavement. This is where the politics of freedom come into play.


4. Freedom, Domination and Self-Enslavement
Another way to defend premise (2) is to analyse it in terms of political, not metaphysical, freedom. Metaphysical freedom is about our moral agency and responsibility; political freedom is about how others relate to and express their wills over us. It is about protecting us from others so as to meet the conditions for a just and mutually prosperous political community — one that respects the fundamental moral equality of its citizens. Consequently, accounts of political freedom are not so much about free will as they are about ensuring that people can develop and exercise their agency without being manipulated and dominated by others. So, for example, I might argue that I am politically unfree in exercising my vote, if the law requires me to vote for a particular party. In that case, others have chosen for me. Their will dominates my own. I am subordinate to them.

This political version of freedom provides a promising basis for a defence of premise (2). One problem with moral enhancement technology might be that others decide whether it should be used on us. Our parents could genetically manipulate us to be kinder. Our governments may insist on us taking a course of moral enhancement drugs to become safer citizens. It may become a conditional requirement for accessing key legal rights and entitlements, and so on. The morally enhanced person would be in a politically different position from the naturally good person:

The most conspicuous difference between the naturally good and the morally enhanced is that the latter have been engineered to feel, think, and behave in a certain way. Someone else has decided for them what is evil and what is not, and has programmed them accordingly, which undermines, as Jürgen Habermas has argued, their ability to see themselves as moral agents, equal to those who decided how they were going to be. The point is not so much that they have lost control over how they feel and think (perhaps we never had such control in the first place), but rather that others have gained control over them. They have changed…from something that has grown and come to be by nature, unpredictably, uncontrolled, and behind, as it were a veil of ignorance, into something that has been deliberately made, even manufactured, that is, a product. 
(Hauskeller 2013, 78-79)

There is a lot going on in this quote. But the gist of it is clear. The problem with moral enhancement is that it creates an asymmetry of power. We are supposed to live together as moral equals: no one individual is supposed to be morally superior to another. But moral enhancement allows one individual or group to shape the moral will of another.

But what if there is no other individual or group making these decisions for you? What if you voluntarily undergo moral enhancement? Hauskeller argues that the same inequality of power argument applies to this case:

…we can easily extend [this] argument to cases where we voluntarily choose to submit to a moral enhancement procedure whose ultimate purpose is to deprive us of the very possibility to do wrong. The asymmetry would then persist between our present (and future) self and our previous self, which to our present self is another. The event would be similar to the case where someone voluntarily signed a contract that made them a slave for the rest of their lives. 
(Hauskeller 2013, 79)

What should we make of this argument? It privileges the belief that freedom from the yoke of others is what matters to moral agency — that we should be left to grow and develop into moral agents through natural processes — not manipulated and manufactured into moral saints (even by ourselves). But I’m not sure we should be swayed by these claims. Three points seem apposite to me.

First, a general problem I have with this line of argument is the assumption that it is better to be free from the manipulation of others than it is to be free from other sorts of manipulation. The reality is that our moral behaviour is the product of many things. Our genetic endowment, our social context, our education, our environment, various contingent accidents of personal history, all play an important part. It’s not obvious to me why we should single out causal influences that originate in other agents for particular ire. In other words, the presumption that it is better that we naturally grow and develop into moral agents seems problematic to me. Our natural development and growth — assuming there is a coherent concept of the ‘natural’ at play here — is not intrinsically good. It’s not something that is necessarily worth saving. At the very least, the benefits of moral conformity would weigh (perhaps heavily) against the desirability of natural growth and development.

Second, I’m not sure I buy the claim that induced moral enhancement involves problematic asymmetries of power. If anything, I think it could be used to correct for asymmetries of power. To some extent this will depend on the modality of enhancement and the benefits it reaps, but the point can be made more generally. Think about it this way: the entire educational system rests upon asymmetries of power, particularly the education of young children. This education often involves a moral component. Do we rail against it because of the asymmetries of power? Not really. Indeed, we often deem education necessary because it ultimately helps to correct for asymmetries of power. It allows children to develop the capacities they need to become the true moral equals of others. If moral enhancement works by enhancing our capacities to appreciate and respond to moral reasons, or by altering our desires to do good, then it might help to build the capacities that correct for asymmetries of power. It might actually enable effective self-control and autonomy. In other words, I’m not convinced that being morally enhanced means that you are problematically enslaved or beholden to the will of others.

Third, I’m not convinced that self-enslavement is a bad thing. Every decision we make enslaves our future selves in at least some minimal sense. Choosing to go to school in one place, rather than another, constrains the choices your future self can make about what courses to take and career paths to pursue. Is that a bad thing? If the choices ultimately shape our desires — if they result in us really wanting to pursue a particular future course of action — then I’m not sure that I see the problem. Steve Petersen has made this point in relation to robot slaves. If a robot is designed in such a way that it really really wants to do the ironing, then maybe getting it to do the ironing is not so bad from the perspective of the robot (this last bit is important — it might be bad from a societal perspective because of how it affects or expresses our attitudes towards others, but that’s not relevant here since we are talking about self-enslavement). Likewise, if by choosing to undergo moral enhancement at one point in time, I turn myself into someone who really really wants to do morally good things at a later moment in time, I’m not convinced that I’m living some inferior life as a result.

That’s all I have to say on the topic for now.

* Though note: if the stakes are sufficiently high, non-selective application might also be plausibly justified.
