(Part One, Part Two)
Suppose you are an athlete, training for the Olympic games. Your coach enters your changing room one morning and offers you a choice. You can either follow a rigorous training program for the next six months, or you can take a handful of magic pills and take the next six months off. Either way you'll be prepared for the Olympic games. Which should you choose?
I don't know what you think, but I reckon many people would say you should choose the rigorous training program. Why? Presumably because they think that the athletic achievement gained through the training program is more commendable, more praiseworthy, than that gained through the pill-taking.
Maybe that attitude is correct; maybe it isn't. It really doesn't matter because the question we are asking in this series is whether the same attitude should hold in the case of moral enhancement. Suppose there are two pathways to moral conformity (i.e. conduct that conforms with the demands of morality). The first involves traditional moral deliberation: learning about moral principles, applying them to cases, evaluating countervailing principles and so on. The second involves the use of moral enhancement technologies: biomedical interventions that directly manipulate our moral emotions, thereby making us more inclined to behave morally. Is it less commendable, or less praiseworthy, to follow the second path?
Thomas Douglas's article "Enhancing Moral Conformity and Enhancing Moral Worth" addresses this question. It does so by critiquing four different arguments, each of which supports the conclusion that moral enhancement undermines the moral worth of our conduct (when compared with deliberative methods of achieving moral conformity). To date in this series, we've examined two of those arguments and highlighted their shortcomings. In this post, we'll consider the last two arguments: the moral effort argument and the unreliability argument.
1. The Moral Effort Argument
Does the amount of effort it takes to achieve moral conformity increase its moral worth? Is moral enhancement too much of a quick fix? Before you answer those questions, consider the following two cases (taken from Douglas's article):
David's Case: David finds it easy to conform with the demands of morality. He was brought up in a loving nurturing family "where responsibility and moral sensitivity were encouraged and his role models seldom exhibited...objectionable moral attitudes." He also lives in a society that promotes moral virtues and moral reflection. To be clear, it is not that he automatically conforms with the demands of morality -- he still needs to engage in moral deliberation -- but the process is relatively easy for him.
Felix's Case: Felix was "raised in a dysfunctional family where violence was openly encouraged, bigoted attitudes were routinely expressed...and moral sensitivity was viewed as a sign of weakness". He also lives in a society that does not promote moral virtue and reflection, and which has adopted a questionable normative code. Nevertheless, Felix has managed to overcome these impediments and frequently engages in careful and sensitive moral deliberation. As a result, he is just as good as David at conforming his behaviour to the demands of morality.
Whose behaviour carries greater moral worth? David's or Felix's? Although both are equally good at conforming to the demands of morality, I suspect many people would say that Felix's behaviour is more morally worthy. What's more, the reason for saying this seems intimately linked to the amount of effort Felix has to put in when compared to David.
There is an argument against moral enhancement lurking here. If it is true that more effort means more worth, then arguably the use of moral enhancement technologies means less worthy behaviour when compared to the use of traditional moral deliberation. After all, such technologies are expected to directly manipulate our moral emotions which makes it easier to conform with the demands of morality (i.e. it removes some of the psychological barriers to moral conformity). In doing so, the technologies bypass the more arduous and effortful process of moral deliberation.
To state this formally:
- (12) The more effort that goes into a moral act, the more praise it deserves.
- (13) Morally conforming behaviour produced through the use of moral enhancement technologies requires less effort than morally conforming behaviour produced through deliberative methods.
- (14) Therefore, morally conforming behaviour produced through the use of moral enhancement technologies deserves less praise than morally conforming behaviour produced through deliberative methods.
Is this argument any good? Douglas highlights two problems. First, despite the intuitive appeal of the "more effort = more worth" principle, it does not seem likely that effort is necessary for moral worth. Second, going beyond this point, there are cases in which more effort is not even sufficient for more worth.
On the first of these points, it can be argued that our intuitions about moral worth and effort are somewhat inconsistent. Indeed, there is a case for the view that morally conforming behaviour produced with a minimum of effort is highly praiseworthy. Consider the moral saint, who selflessly dedicates themselves to improving the lot of others. One of the things that is so admirable about their behaviour is that they don't fall victim to the psychological barriers and weaknesses that affect the rest of us. Furthermore, it seems silly to suppose that morally conforming behaviour like David's could not attract a high degree of moral praise, even if other behaviour attracts more. So, in other words, even if Felix is better than David, David is not so bad that we wouldn't like to have more people like him around.
- (15) Effort does not seem necessary for high degrees of moral praise; indeed, sometimes it is the very effortlessness of morally conforming behaviour that makes it so admirable.
The second point -- that effort is not even always sufficient for praise -- is a trickier one to get across. Indeed, I'm not sure that Douglas does a great job of it, so I'm going to try to simplify it here. To put it bluntly, if it were always and everywhere true that effort increased praise, people could do the most arbitrary and bizarre things to increase the worth of their behaviour. David, for example, could deliberately expose himself to moral corruption by enlisting in the Ku Klux Klan, imbibing as much of their bigoted beliefs as possible, and then arduously ridding himself of them. But surely that would just be gratuitous? It wouldn't make his subsequent morally conforming behaviour any more praiseworthy (it might even make it less praiseworthy).
The point, which Douglas thinks is generalisable, is that only nongratuitous effort will increase moral worth. In the case of Felix, the effort was nongratuitous because he was a victim of circumstances beyond his control; in the second case involving David, the effort was gratuitous because he was voluntarily choosing to make things more difficult for himself. The claim, then, is that on at least some occasions the choice of a deliberative method of achieving moral conformity, over non-deliberative moral enhancement, will be gratuitous rather than nongratuitous. It will arbitrarily make things more difficult for ourselves, and won't increase the moral worth of our conduct.
- (16) Effort is not always sufficient for increased moral praise: only nongratuitous effort is sufficient, gratuitous effort is not.
This suggests that the moral effort argument fails.
2. The Unreliability Argument
The final argument against moral enhancement is, once more, distinctively Kantian in nature. In his Groundwork of the Metaphysics of Morals, Kant presents a famous thought experiment involving a shopkeeper. The shopkeeper, we are told, charges his customers a price that maximises his profit. As it also happens, the price is a fair one. Consequently, the shopkeeper's pricing policy conforms with the demands of morality.
But surely there is something unpraiseworthy about the way in which the shopkeeper achieves this conformity? After all, his moral conformity is brought about by accident; by the contingent coincidence of his profit motive with a fair price in one particular market at one particular moment in time. If he were to follow the profit motive across all markets, it is unlikely that this happy coincidence would always arise. To put it more succinctly: he is following an unreliable pathway to moral conformity, and this unreliability seems to detract from moral praiseworthiness.
Again, there is an argument against moral enhancement lurking here. One concern with enhancement vis-a-vis deliberation, is that by directly manipulating emotions and dispositions, enhancement simply makes it easier to conform with moral demands in particular cases and settings. It does not provide us with the moral knowledge needed to conform our conduct with moral reasons across different possible worlds. Take a simple example. Moral enhancement drugs might be able to reduce violent impulses. In many cases, this would ensure moral conformity. But, as we all know, there are certain circumstances in which violence is morally necessary. If the drugs simply reduce violent impulses across the board, they won't be able to produce moral conformity in those circumstances. They are too blunt and unreliable for that. Deliberation, on the other hand, doesn't suffer from the same shortcoming as its raison d'etre is to produce moral knowledge.*
To state the argument formally:
- (17) In order for an action to warrant moral praise, it must be produced by a causal-psychological pathway that produces reliable moral conformity (i.e. conformity across different possible worlds).
- (18) Morally conforming actions brought about by moral enhancement technologies are not produced by reliable causal-psychological pathways (certainly when compared to deliberative pathways).
- (19) Therefore, morally conforming actions brought about by moral enhancement do not warrant moral praise.
There are two responses to this argument, both targeting the claims made in premise (18).
The first is, as Douglas puts it, an Aristotelian point about the link between moral action and moral knowledge. It could be that by regularly conforming our behaviour with moral demands, we acquire the kind of generalisable moral knowledge that the Kantian is looking for. In other words, even if the initial use of moral enhancement technologies does not produce reliable moral conformity, it will kick-start a process that leads to this. We learn by doing, as the saying goes. The problem here (ignoring the speculative aspect of it) is that this response still acknowledges some force to the criticism: there is still the initial unreliable phase of conformity.
- (20) We may acquire moral knowledge by engaging in morally conforming behaviour; thus, enhancement technologies could kick-start our journey to a more reliable causal-psychological pathway to moral conformity.
This leads to the second response. This one points out that one of the ways in which moral enhancement might work is by removing the barriers to moral knowledge. It could be that one of the things that prevents people from acquiring generalisable moral knowledge is the presence, in their minds, of distorting biases and emotions. My anger and self-love might prevent me from seeing the value of charity. If an enhancement technology could work by reducing or eliminating those biases, it might help to lift the veil of moral ignorance. Thus, I could almost directly and immediately attain the necessary moral knowledge.
One concern about this response is that removing psychological barriers is not actually guaranteed to produce moral conformity. It may be that even after my anger is sated, and my self-love eliminated, I fail to see the virtue of charity. But as Douglas points out, this doesn't put moral enhancement in a worse position than plain old deliberation. After all, deliberative methods of attaining conformity are not guaranteed to work either. Reading The Life You Can Save doesn't always make people more generous and giving. So we'd have to call it a draw on this front.
- (21) Enhancement technologies could remove the psychological barriers to acquiring moral knowledge, and thereby bring about reliable moral conformity.
To sum up, the development of technologies that can immediately and directly manipulate our moral emotions is an intriguing one. It promises increased instances of moral conformity, and this seems like a good thing. Despite this promise, some have objected that the kind of moral conformity that could be achieved with these technologies is superficial and less worthy than the kind of conformity achieved through traditional deliberative methods of moral reasoning.
Nevertheless, as we have seen in this series, these objections have little to recommend them. Either they rely on faulty beliefs about what is needed for moral worth, or they rely on an impoverished conception of how enhancement technologies might actually work. In particular, they neglect the possibility of enhancement and deliberation being complements to one another, not alternatives. Enhancements may simply remove the barriers to proper deliberation.
I'd recommend checking out some of Thomas Douglas's other work if you are interested in this topic.
* Douglas presents a second reason for thinking moral enhancement may lead to unreliable conformity. He points out that the success of enhancement technologies may depend heavily on the degree to which a distorting emotion or trait is present. I won't get into that argument here.