Sunday, June 5, 2011

Citizen Cyborg: Agar on Democratic Transhumanism (Part Two)

(Series Index)

This is the second of two posts looking at chapter 8 of Nicholas Agar’s book Humanity’s End. The chapter challenges the sociologist James Hughes’s optimistic views about the fate of the ordinary human being in a posthuman society. At the heart of the chapter is the concern that in a society with radically enhanced beings, ordinary humans are likely to be oppressed, ignored, compelled to undergo enhancement or conscripted into a kind of servitude.

James Hughes thinks that this need not be the case provided there is a shift in the moral framework adopted in a posthuman society. The particular shift he envisions is one involving the substitution of a personhood-based ethic for a human-rights-based ethic, “personhood” being a broader normative concept which could grant rights, privileges and responsibilities to humans, posthumans and, indeed, great apes.

Agar is sceptical. He thinks it unlikely that Hughes’s proposed shift will have the desired effect, because the lessons of human history suggest that the dominant moral framework within a society - i.e. the one that actually shapes people’s behaviour - is usually disconnected from the ideals proposed by moral and political philosophers. And it is this dominant moral framework, not the ideal one, with which we are concerned: if it does not change in a posthuman society, then precautionary measures may be warranted.

As noted at the end of the last post, there is some reason to think that the lessons of human history would not apply to a posthuman society. In addition to their increased longevity and cognitive power, radically enhanced humans should also undergo moral enhancement. This could make them less prone to human errors in moral reasoning.

But there’s still cause for scepticism. Agar supports this point by looking at two leading contemporary ethical theories - contractualism and consequentialism - and seeing how they could be applied in a posthuman society. Let’s see what he has to say about each of these.

1. Posthuman Contractualism
A social contract theory of political morality usually takes as its starting point the belief that humans are, essentially, rational maximisers of their own well-being. It follows this with an obvious factual claim: if all individuals pursue their own well-being in an unconstrained, uncoordinated fashion, there will be chaos. Think Hobbesian-state-of-nature-style chaos. Because of this, it makes sense for rational actors to agree to be bound by a set of contractually agreed upon moral norms, enforced by social institutions if need be. Such a contract would be to their mutual advantage.

Self-interest may seem like an inauspicious place from which to begin to develop a moral code, but impressive things can be done if, in imagining the bargaining that might take place between rational actors, the philosopher is allowed to play with his dearest friend: assumption. Take Rawlsian contractualism as an example. Rawls argued that if the bargainers assume themselves to be behind a veil of ignorance (a device that blinds them from their current traits and privileges) they would agree to a set of institutions that guaranteed equal rights and liberties to all, along with generous redistributions of wealth in favour of the least well-off.

There’s much to admire in what Rawls has to say (if not always in the way he says it) but there’s a snag. The egalitarian outcomes envisaged by Rawls are largely a product of (a) the rough empirical equality that exists between humans in terms of their capacity to thwart each other’s interests and (b) the fact that the bargainers are blinded from some but not all of their current traits and privileges. That rough empirical equality is likely to disappear in a posthuman society, and so what gets included in the “some” but not the “all” of the veil of ignorance is going to be crucial.

Is there any reason to think that posthumans will place their enhancements behind the veil when deciding on the preferred distribution of rights and rewards? Perhaps, but at the same time perhaps not. There are plenty of beings with whom contemporary humans enter into cooperative enterprises without treating them equally or ignoring differences in capacity.

Agar draws the following analogy: humans coordinate their activities with dogs on many occasions - dogs help the blind to find their way round and help the police to sniff out drugs, and we provide them with shelter, food and attention in return. But in neither case does the mere fact of cooperation grant the dogs moral privileges on a par with those of human beings. Posthumans could well take a similar attitude toward humans: they could grant us some rewards, but only in proportion to our contributions to the pursuit of their desired ends. That hardly seems like a good outcome for humans.

Remember, Agar follows a precautionary principle. So he thinks that the mere possibility of this outcome is enough to warrant restrictions on radical enhancement.

2. Posthuman Consequentialism
Suppose that instead of adhering to a contractualist ethic, posthumans adopt a consequentialist one. Would there be greater cause for optimism about the fate of the unenhanced then?

At first glance it might seem that there is. Take, for instance, the views of one of the leading contemporary consequentialists, Peter Singer. Singer is a preference utilitarian. He thinks that moral decision-making should have as its goal the maximisation of preference satisfaction. He has used this principle to advocate for vegetarianism and animal rights. He believes that the human preferences satisfied by the ill-treatment of animals pale in comparison to the animal preferences that are thwarted by that ill-treatment. If Singer can use his principle to advocate for the ethical treatment of animals, then surely posthumans would do something similar for ordinary humans?

There’s a slight problem with this. One that even Singer - trenchant utilitarian and animal rights activist though he is - acknowledges. To see what it is, we need to do two things. First, we must accept that some creatures with moral standing (“persons” in Hughes’s terminology) will count for more on the preference utilitarian metric than others. For instance, Singer suggests that humans, due to their greater cognitive powers and imaginations, will have a greater number of preferences than, say, a donkey. Second, we need to define something I’m going to call a “tragic choice”. This is a decision-making situation in which the options are limited in such a way that anything you choose to do will make at least one “person” worse off.

One of the key features of a consequentialist ethic like preference utilitarianism is that when faced with a tragic choice, there is still clearly a right answer to the question: what ought one to do? The answer is that one should pick the option with the best overall consequences, even if some persons will suffer. So, for example, Singer acknowledges that some animal testing might be morally justified.

Could analogous choices arise for posthuman consequentialists? Could they be forced to choose between increasing aggregate satisfaction and protecting ordinary humans? Indeed, could ordinary humans be drafted into some kind of testing programme, much as animals are today? Well, we already noted the possibility of medical conscription when discussing Aubrey de Grey’s ideas. And there are other ways in which this could happen too. Agar mentions Kurzweil’s prediction that posthumans will seek to colonise the universe with their digitally uploaded minds. This would involve making use of every scrap of available physical matter, including the biological bodies of ordinary human beings. So a coercive programme of uploading (irrational though it may be) could easily be justified on consequentialist grounds.

This might be fanciful. Agar admits as much, noting that perhaps one reason why animal testing can be justified is that animals cannot speak up or fight back using the language of moral reasoning. Humans, on the other hand, can do both of these things. But even still Agar urges caution. Posthumans are likely to be far more sophisticated in their moral reasoning: they are unlikely to be swayed by the emotional appeals and fallacies that plague contemporary moral debate. This might make it more difficult for ordinary humans to make their case.

3. Closing Thoughts
In conclusion, Hughes’s arguments - if they are being fairly represented by Agar - seem naive. Even if posthumans are more morally sophisticated than contemporary humans, there is still some reason to think that the dominant moral codes in posthuman societies would be human-unfriendly.

Assuming that we think a human-unfriendly society would be a bad thing, we have two options: (i) force everyone to undergo radical enhancement or (ii) place restrictions on the development of radical enhancements. Agar finds his species-relative values drawing him toward the second option. He ends the chapter by noting that this would not amount to an outright ban on enhancement but, rather, a set of regulatory constraints on the enhancement agendas adopted by members of society.
