Monday, April 22, 2013

Is there a Case for Robot Slaves?



Right now it’s Sunday afternoon. There is a large pile of washed but as-yet-unironed clothes on a seat in my living room. I know the ironing needs to be done, and I’ve tried to motivate myself to do it. Honestly. The ironing board is out, as is the iron; I have lots of interesting things I could watch or listen to while I do the ironing; and I have plenty of free time in which to do it. But instead I’m in my office writing this blog post. Why?

The answer is that I simply don’t like doing it. I find it the most unpleasant, unrewarding, and ultimately frustrating of all household chores. Cooking I enjoy, and cleaning I can handle, but the pleasures of ironing (if there be any) continue to elude me. If only there were some kind of robot that could do the ironing for me! I would buy one in a heartbeat (money permitting). But then this raises an interesting question: would it be right to get a robot to do this for me? If I find the task so unpleasant, could I justifiably “enslave” a robot to do it on my behalf?

I guess it all depends on what we mean by the word “robot”. If we simply mean a machine designed to perform this particular function, with no sophisticated human-like capacities, then who cares? After all, I don’t get plagued by ethical doubts every time I switch on my washing machine. But if by “robot” we mean a sophisticated artificial intelligence which, to all outward appearances, has mental capacities equivalent to or greater than those of an ordinary human being, things might be very different. Unsurprisingly, it is this latter case that I want to consider.

In exploring this issue, I want to consider an argument put forward by Steve Petersen. In his paper, “Designing People to Serve” (which appears in the collection Robot Ethics: The Ethical and Social Implications of Robots), Petersen puts forward a rather provocative thesis. In contradistinction to an intuitively appealing view against robot servitude, Petersen defends the following view:

Petersen’s Thesis: It is possible that (i) robots (artificial intelligences) could be persons in the morally thick sense of that word; (ii) as persons, they could be designed to be our dedicated servants (i.e. to do the things we want them to do, like ironing); and (iii) they would not be wronged by being designed to serve us in this manner.

As I say, this is a provocative thesis, particularly in its third component. To claim that there could be persons who are nonetheless permissibly enslaved looks to be obviously false. If nothing else has been learned from the history of human slavery, surely it is that this kind of enslavement is wrong. This is a judgment that science-fictional representations of robots seem to accept. For example, Asimov’s story “The Bicentennial Man” plays upon this theme by depicting the rather tragic life of a robot who must ultimately obey human commands (there is a particularly evocative passage in the story that makes this point).

So what kind of argument can Petersen proffer in defence of his thesis? I map it out below, starting with a thought experiment, then looking at the specific argument he makes in favour of robot servitude, and finally considering his responses to some objections. Just to be clear from the outset, the focus here is only on the third prong of Petersen’s thesis. In other words, I’m assuming arguendo that artificial persons are a real possibility and that they could be designed to serve human goals. As it happens, I am inclined to agree with both of these claims, but there are others who do not, and their arguments would need to be considered separately.


1. The Person-o-Matic Thought Experiment
One of Petersen’s key points is that the defence of his thesis requires us to overcome some powerful moral intuitions. I can certainly see his point. Designing persons to serve our interests seems to clash with deeply held beliefs about autonomy, flourishing and the well-lived life. To overcome these intuitions, Petersen uses a thought experiment involving a machine he calls the “Person-o-Matic”. This is a machine that, with the pressing of a few buttons and the twiddling of a few knobs, can create any possible person (organic or artificial). The question is: which buttons can we press and which knobs can we twiddle?

Start with the simplest case. Could we press the buttons to create an organic person with unknown or uncertain desires and dispositions? The answer would seem to be “yes, of course”. If it’s permissible to create organic persons through the more traditional means of sexual reproduction, then surely it’s permissible to create an organic person using the machine. The means of production shouldn’t alter the permissibility of the act (certainly not when the machine has no known side effects). But if that’s right, then why couldn’t we create an artificial person with the same characteristics? Surely, the mere fact that the persons are made of different substances doesn’t alter the permissibility conditions for their creation.

To clarify, it may well be that it is impermissible to create persons in certain circumstances. For instance, if there are insufficient resources for them to survive, or if no one will look after them in their early development (this may not apply to artificial persons of course). But this context-specific impermissibility does not undermine the general conclusion that it is (oftentimes) permissible to create persons, organic or artificial.

Having established moral parity between organic and artificial persons, we move on to consider the different innate dispositions and desires we might give these people. First up, let’s ask ourselves: would it be permissible to create an “enhanced” person? Specifically, a person with enhanced desires, for example, the desire to do good in the world, to avoid cigarettes, to enjoy healthy food more than the average human being, and so on. We might feel a bit iffier about this one. The main reason for this is probably that the manipulation of desires in this manner seems to undermine autonomy. Since the person is hardwired to be strongly predisposed to avoid vice and pursue virtue, we might be inclined to say that they aren’t doing these things of their own volition, that they aren’t truly responsible for their actions.

This gets us into stormy philosophical waters. It might be the case that hard determinism is true and that all our intuitions about autonomy and responsibility are metaphysical nonsense anyway. In that event, there would almost certainly be nothing wrong with creating such people. Indeed, there might be a great deal to be said in its favour. But even if hard determinism is not true, and autonomy and responsibility can be meaningfully applied to the human condition, there is a problem. People are already created with innate sets of dispositions and desires, some stronger than others. Do they thereby have their autonomy undermined? If not, then there’s no reason to think that creating people with enhanced dispositions undermines autonomy. Indeed, if there’s no perfectly neutral starting point, why not bias people toward the good?

We arrive then at the last step in the thought experiment. Would it be permissible to fiddle with the dials on the Person-o-Matic so as to create an artificial person that served our needs? Petersen argues that it would be. In doing so, he appeals to one very simple idea: the contingency of our desires. Desire-fulfillment is a relational property: it arises when a person’s desires align with the state of the world. Thus, if I desire ice-cream, my desire is fulfilled whenever the world is such that I am given an ice-cream. Furthermore, desire-fulfillment is good, perhaps intrinsically so, according to many axiological theories. Thus, being in a state of desire-fulfillment is (ceteris paribus) a net positive. But the content of my desires is a contingent fact about me. In other words, I could desire pretty much anything, and still be fulfilled whenever my desires, whatever they happen to be, are satisfied. I could desire tea, or a bicycle ride, or a trip to the Moon. To be sure, our evolutionary history has probably predisposed us towards certain types of desire (food, sex, shelter, power etc.), but that doesn’t defeat the point: it is possible for us to desire anything and to be benefitted by having our desires fulfilled.

Here’s the key move: when it comes to designing artificial persons, we are not constrained by our evolutionary history in the creation of desires. We could endow an artificial person with any set of desires it is technically possible to instill. So why not endow a robot with an overwhelming, deep, second-order desire to do the ironing? Why not make it so that it is in the deepest state of satisfaction whenever it is in the midst of folding my clothes?

The argument, such as it is, boils down to this:


  • (1) It is not wrong (ceteris paribus) for a person to have their deepest desires satisfied. 
  • (2) An artificial person could be created whose deepest desire would be to serve our interests and needs. 
  • (3) We could make it so that the artificial person had the opportunity to serve our interests and needs. 
  • (4) Therefore, it is not wrong (ceteris paribus) to create an artificial person with the deepest desire to serve our interests and needs.


The ceteris paribus clause in the first premise is designed to avoid objections like: “Well, what if they were created with the desire to kill other people?” This would obviously be wrong, but that is because some outcomes are objectively wrong and hence it is wrong to bring them about, even if doing so satisfies your desires. That kind of objection is a red herring. There are, however, three more serious objections, each of which is considered by Petersen in his article.


2. Three Objections and Replies
The three main objections to the argument are as follows:

(5) Autonomy Objection: It is wrong to dictate a person’s life plan to them. In creating an artificial person to serve our interests and needs, we would be instrumentalising them, treating them as a means to our own ends, not as a true autonomous agent.
(6) Higher Goods Objection: There is a distinction between higher goods and lower goods such that a minimal quantity of the former is better than a high quantity of the latter (better to be Socrates dissatisfied than a pig satisfied). In creating robot slaves we would be creating people who are doomed to a life filled with lower goods.
(7) Slippery-Slope Objection: Even if it is not intrinsically wrong to create robot slaves, it does give rise to a morally worrying slippery slope. Specifically, it seems like it will desensitise us to the needs and interests of human persons, and will thus condition us to act callously toward them when they do not wish to do our dirty work.

Let’s deal with each of these objections now.

The autonomy objection directly targets premise one. It holds that a person’s satisfying their desires is only good if those desires are truly their own, not if they were instilled as a means of serving our interests and needs. To some extent, this simply replicates the autonomy-based concerns highlighted above, and so a similar set of replies would work once again. In addition, however, the objection raises Kantian concerns about the treatment of artificial agents. It argues, à la Kant, that instrumentalising an agent in this manner is to treat them as a mere means to our ends, not as an end in themselves. This breaches Kantian requirements for the ethical treatment of autonomous agents.

Petersen’s response is to argue that the Kantian objection works only if the agent is being treated as a “mere” means, not simply as a means to an end. The distinction is subtle but crucial. To be treated as a mere means is to be forced to do someone else’s bidding without, at the same time, being given the opportunity to pursue your own ends. This is morally problematic. But simply being treated as a means to an end is okay if you are, at the same time, being allowed to pursue your own ends. Thus, to use the classic example: two friends meet weekly to play a game of squash. Both desire the exercise and excitement of the game, and both use the other as a means to this end. But both get to pursue their own ends in the process, so where’s the problem? Petersen argues that the same is true for the robot slave: it pursues its own deepest desires in the process of serving our ends.


  • (8) There is nothing wrong with treating a person as a means to an end, provided they are not prevented from pursuing their own ends in the process. A robot slave would be pursuing its own ends by serving our interests. Thus, it would not be instrumentalised in a morally objectionable manner.


The Higher Goods objection is slightly more interesting, and also targets premise one. It adopts Mill’s famous dictum about higher and lower goods, holding that the kinds of desires robot slaves would be programmed to have — desires to do our laundry, cook our food, take out our trash and so forth — only allow for lower hedonic forms of pleasure. They do not allow for the higher intellectual and aesthetic goods beloved by Mill (among others). Thus, robot slaves would live an impoverished form of life, one that is excluded from the higher goods.

A number of responses suggest themselves here. The first is simply to deny Mill’s dichotomy and argue that pursuing one’s deepest desires (provided those desires are not immoral) constitutes the highest good for that person. But that might index well-being to individual perceptions to an undesirable degree. A second response would be to adopt a simple “less good, but not bad” line of attack. In other words, to argue that although a life filled with higher goods would be better, it does not follow that it is bad, or indeed impermissible, to bring into existence a being that experiences nothing but lower goods. This is especially so if we bear in mind that the robot slave is not wronged by being created with desires for lower goods. After all, it does not exist prior to being brought into this state of affairs. Hence, there is no subject that can be wronged by the act of creation (this is the non-identity problem).


  • (9) Creating a life filled with lower goods is not wrong. This is because, although it is less good than an alternative life filled with higher goods, it is not therefore a bad life.


Finally, we have the slippery slope objection, which is not targeted at the premises of the original argument; rather, it is targeted at the conclusion. It holds that there is a causal chain running from the acceptance and creation of robot slaves to desensitisation towards, and callous disregard for, our fellow human beings. An analogy might illustrate the point. One objection to the torture and maiming of animals is that, even if the animals themselves are not wronged in the process, the mindset that such acts encourage tends to be psychopathic in nature, and the psychopathic mind is more likely to do harm to humans. Thus, we should prevent the former in order to prevent the latter. This logic, it could be said, applies equally well to robot slaves and the enslavement of humans.

I like Petersen’s response to this. He says that it relies on the dubious assumption that the “general population is unable to make coarse-grained distinctions in what different people value”. This is dubious because, in our everyday lives, we don’t make the mistake of thinking that because one of our friends likes haggis, all of our friends must like haggis. We are able to distinguish between the desires of different people. Why couldn’t we do the same when comparing robot slaves with ordinary human beings? As Mill once said, any ethical standard “work[s] ill, if we suppose universal idiocy to be conjoined with it”.


  • (10) The slippery slope objection presumes that people will be unable to make coarse-grained distinctions between what different people value. There is no reason to grant this presumption, since we make such distinctions all the time.



3. Conclusion
In summary, Petersen’s article makes the case for robot slaves. I have tried to lay out his argument in as succinct and straightforward a manner as possible. This means that some interesting digressions and sub-arguments have been neglected. Still, I hope I have made his central claim pretty clear. It is that if we could program a robot to deeply desire to serve our needs and interests, we would do no wrong by bringing such a robot into existence. And if we would do no wrong, it follows that the creation of a robot slave is permissible (it might also be desirable, though additional argumentation would be needed to establish that).

I find Petersen’s argument to be provocative, iconoclastic, and somewhat appealing. He himself admits to being conflicted by it, noting that his intuitions still seem to rebel against the conclusion; he’s just not convinced that he should trust his intuitions in this instance. I feel somewhat similar, but I’m not ready to make the case for robot slaves just yet. At least, not if we assume such agents to be “persons” in the morally thick sense of that term. Still, I like the debate because it raises important issues about the nature of well-being and its connection to right and wrong. I think the idea that well-being is ultimately determined by the relationship between desires and the state of the world is a powerful one, and the case for robot slaves brings this idea to the fore in an interesting and practically important way. If AI technology continues to advance apace, the day is fast approaching when we will have to make our gamble about the propriety of this view.


Anyway, back to the ironing.
