[Image via Rochelle Don on Flickr]
People dispute the ontological status of robots. Some insist that they are tools: objects created by humans to perform certain tasks — little more than sophisticated hammers. Others insist that they are more than that: that they are agents with increasing levels of autonomy — now occupying some liminal space between object and subject. How can we resolve this dispute?
One way to do this is by making analogies. What is it that robots seem to be more like? One popular analogy is the animal-robot analogy: robots, it is claimed, are quite like animals and so we should model our relationships with robots along the lines of the relationships we have with animals.
In its abstract form, this analogy is not particularly helpful. ‘Animal’ denotes a broad class. When we say that a robot is like an animal, do we mean it is like a sea slug, like a chimpanzee, or like something else? Also, even if we agree that a robot is like a particular animal (or sub-group of animals), what significance does this actually have? People disagree about how we ought to treat animals. For example, we think it is acceptable to slaughter and experiment on some, but not others.
The most common animal-robot analogies in the literature tend to focus on the similarities between robots and household pets and domesticated animals. This makes sense. These are the kinds of animals with whom we have some kind of social relationships and upon whom we rely for certain tasks to be performed. Consider the sheep dog who is both a family pet and a farmyard helper. Are there not some similarities between it and a companion robot?
As seductive as this analogy might be, Deborah Johnson and Mario Verdicchio argue that we should resist it. In their paper “Why robots should not be treated like animals” they accept that there are some similarities between robots and animals (e.g. their ‘otherness’, their assistive capacity, the fact that we anthropomorphise and get attached to them etc.) but also argue that there are some crucial differences. In what follows I want to critically assess their arguments. I think some of their criticisms of the animal-robot analogy are valid, but others less so.
1. Using the analogy to establish moral status
Johnson and Verdicchio look at how the analogy applies to three main topics: the moral status of robots, the responsibility/liability of robots, and the effect of human-robot relationships on human relationships with other humans. Let’s start by looking at the first of those topics: moral status.
One thing people are very interested in when it comes to understanding robots is their moral status. Do they or could they have the status of moral patients? That is to say, could they be objects of moral concern? Might we owe them a duty of care? Could they have rights? And so on. Since we ask similar questions about animals, and have done for a long time, it is tempting to use the answers we have arrived at as a model for answering the questions about robots.
Of course, we have to be candid here. We have not always treated animals as though they are objects of moral concern. Historically, it has been normal to torture, murder and maim animals for both good reasons (e.g. food, biomedical experimentation) and bad (e.g. sport/leisure). Still, there is a growing awareness that animals might have some moral status, and that this means they are owed some moral duties, even if this doesn’t quite extend to the full suite of duties we owe to an adult human being. The growth in animal welfare laws around the world is testament to this. Given this, it is quite common for robot ethicists to argue that robots, due to their similarities with animals, might be owed some moral duties.
Johnson and Verdicchio argue that this style of argument overlooks the crucial difference between animals and robots. This difference is so crucial that they repeat it several times in the article, almost like a mantra:
Robots are machines. Animals are sentient organisms, that is, they are capable of perception and they feel, whereas robots do not, at least not in the important sense in which animals do [they acknowledge in a footnote that roboticists sometimes talk about robots sensing and feeling things but then argue that this language is being used in a metaphorical sense].
(Johnson and Verdicchio 2018, pg 4 of the pre-publication version).
The problem is that robots do not suffer and even those of the future will not suffer. Yes, future robots might have some states of being that could be equated with suffering [refs omitted] but, futuristic thinking leaves it unclear what—other than metaphorical representation—it could mean to say that a robot suffers. Thus, the animal–robot analogy doesn’t work here. Animals are sentient beings and robots are not.
(Johnson and Verdicchio 2018, 4-5)
Robots of today do not have sentience or consciousness and do not suffer. Robots of the future might have characteristics that are equated with sentience, suffering, and consciousness, but if these features are going to be independent of each other…they will be fundamentally different from what humans and (some) animals have. It is the capacity to suffer that drives a wedge between animals and robots when it comes to moral status.
(Johnson and Verdicchio 2018, 5)
I quote these passages at some length because they effectively summarise the argument the authors make. It is pretty clear what the reasoning is:
- (1) Animals do suffer/have sentience or consciousness.
- (2) Robots cannot and will not suffer or have sentience or consciousness (even if it is alleged that robots do have those capacities, the terms will be applied metaphorically to the case of robots).
- (3) The capacity to suffer or have sentience or consciousness is the reason why animals have moral status.
- (4) Therefore, the robot-animal analogy is misleading, at least when used to ground claims about robot moral status.
I find this argumentation relatively weak. Beyond the categorical assertion that animals are sentient and robots are not, we get little in the way of substantive reasoning. Johnson and Verdicchio seem to just have a very strong intuition or presumption against robot sentience. This sounds like a reasonable position since, in my experience, many people share this intuition. But I am sceptical of it. I’ve outlined my thinking at length in my paper ‘Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism’.
The gist of my position is this. A claim to the effect that another entity has moral status must be justified on the basis of publicly accessible evidence. If we grant that sentience/consciousness grounds moral status, we must then ask: what publicly accessible evidence warrants our belief that another entity is sentient/conscious? My view is that the best evidence — which trumps all other forms of evidence — is behavioural. The main reason for this is that sentience is inherently private. Our best window into this private realm (imperfect though it may be) is behavioural. So if sentience is going to be a rationally defensible basis for ascribing moral status to others, we have to work it out with behavioural evidence. This means that if an entity behaves as if it is conscious or sentient (and we have no countervailing behavioural evidence) then it should be treated as having moral status.
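To make the shape of that rule explicit, here is a minimal sketch in Python (purely illustrative and my own rendering, not anything from the paper; the boolean inputs stand in for what is, in reality, graded and contestable behavioural evidence):

```python
def should_ascribe_moral_status(behaves_as_if_sentient: bool,
                                countervailing_evidence: bool) -> bool:
    """Toy rendering of the behavioural decision rule sketched above.

    Only publicly accessible behavioural evidence is consulted; inner
    states are never checked directly, because they are not accessible.
    """
    return behaves_as_if_sentient and not countervailing_evidence


# An entity that behaves as if sentient, with no countervailing
# behavioural evidence, is to be treated as having moral status.
print(should_ascribe_moral_status(True, False))  # True
print(should_ascribe_moral_status(True, True))   # False
```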
This argument, if correct, undercuts the categorical assertion that robots are not and cannot be sentient (or suffer etc.), as well as the claim that any application of such terminology to a robot must be metaphorical. It suggests that this is not something that can be asserted in the abstract. You have to examine the behavioural evidence to see what the situation is: if robots behave like sentient animals (granting, for the moment, that animals are sentient) then there is no reason to deny them moral status or to claim that their sentience is purely metaphorical. Since we do not have direct epistemic access to the sentience of humans or other animals, we have no basis on which to distinguish between ‘metaphorical’ sentience and ‘actual’ sentience, apart from the behavioural.
This does not mean, of course, that robots as they currently exist have moral status equivalent to animals. That depends on the behavioural evidence. It does mean, however, that the chasm between animals and robots with respect to suffering and sentience is not, as Johnson and Verdicchio assert, unbridgeable.
It is worth adding that this is not the only reason to reject the argument. To this point, the assumption has been that sentience or consciousness is the basis of moral status. But some people dispute this. Immanuel Kant, for instance, might argue that it is the capacity for reason that grounds moral status. It is because humans can identify, respond to and act on the basis of moral reasons that they are owed moral duties. If robots could do the same, then perhaps they should be afforded moral status too.
To be fair, Johnson and Verdicchio accept this point and argue that it is not relevant to their focus since people generally do not rely on an analogy between animals and robots to make such an argument. I think this is correct. Despite the advances in thinking about animal rights, we do not generally accept that animals are moral agents capable of identifying and responding to moral reasons. If robots are to be granted moral status on this basis, then it is a separate argument.
2. Using the analogy to establish rules for robot responsibility/liability
A second way in which people use the animal-robot analogy is to develop rules for robot responsibility/liability. The focus here is usually on domesticated animals. So imagine you own a horse and you are guiding it through the village one day. Suddenly, you lose your grip and the horse runs wild through the farmers’ market, causing lots of damage and mayhem in its wake. Should you be legally liable for that damage? Legal systems around the world have grappled with this question for a long time. The common view is that the owner of an animal is responsible for the harm done by the animal. This is either because liability is assigned to the owner on a strict basis (i.e. they are liable even if they were not at fault) or on the basis of negligence (i.e. they failed to live up to some standard of care).
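To make the two bases of liability concrete, here is a toy sketch in Python (illustrative only; the regime labels and parameters are my own and are not drawn from any particular legal system):

```python
def owner_liable(regime: str, owner_was_negligent: bool) -> bool:
    """Toy model of the two standard bases for animal-owner liability."""
    if regime == "strict":
        # Strict liability: the owner is liable even if not at fault.
        return True
    if regime == "negligence":
        # Negligence: liable only if the owner failed a standard of care.
        return owner_was_negligent
    raise ValueError(f"unknown liability regime: {regime}")


# The runaway horse: liable regardless under strict liability, but only
# if careless under a negligence standard.
print(owner_liable("strict", owner_was_negligent=False))      # True
print(owner_liable("negligence", owner_was_negligent=False))  # False
```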
Some people argue that a similar approach should be applied to robots. The reason is that robots, like animals, can behave in semi-autonomous and unpredictable ways. The best horse-trainer in the world will not be able to control a horse’s behaviour at all times. This does not mean they should avoid legal liability. Likewise, for certain classes of autonomous robot, the best programmer or roboticist will not be able to perfectly predict and control what the robot will do. This does not mean they should be off the hook when it comes to legal liability. Schaerer et al. (2009) are the foremost proponents of this ‘Robots as Animals’ framework. As they put it:
The owner of a semi-autonomous machine should be held liable for the negligent supervision of that machine, much like the owner of a domesticated animal is held liable for the negligent supervision of that animal.
(2009, 75)
Johnson and Verdicchio reject this argument. Although they agree with the overall conclusion — i.e. that robot manufacturers/owners should not be ‘off the hook’ when it comes to liability — they argue that the analogy between robots and animals is unhelpful because there are crucial differences between the two:
no matter what autonomy is in robots, the robots will have been created entirely by humans. Differently from what happens in genetics, humans do have a complete knowledge of the workings of the electronic circuitry of which a robot’s hardware is comprised, and the instructions that constitute the robot’s software have been written by a team of human coders. Even the most sophisticated artefacts that are able to learn and perfect new tasks, thanks to the latest machine learning techniques, depend heavily on human designers for their initial set-up, and human trainers for their learning process.
(Johnson and Verdicchio 2018, 7)
They go on to argue that these differences mean we should take a different route to the conclusion that robot manufacturers ought to be liable:
The concepts of strict liability and negligence seem relevant to legal liability for robot behaviour but not because robots are like domesticated animals, but simply because they are manufactured products with some degree of unpredictability. The fundamental difference between animals and robots—that one is a living organism and the other a machine—makes analogies suspect…In the case of animals, owners exert their influence through training of a natural entity; in the case of robots, manufacturers exert their influence in the creation of robots and they or others (those who buy the robots) may also exert influence via training. For this, animals are not a good model.
(Johnson and Verdicchio 2018, 7)
I have mixed feelings about this argument. One minor point I would make is that I suspect the value of the animal-robot analogy will depend on the dialectical context. If you are talking to someone who thinks that robot manufacturers ought not to be liable because robots are autonomous (or semi-autonomous), then the analogy might be quite helpful. You can disarm their reasoning by highlighting the fact that we already hold the owners of autonomous/semi-autonomous animals liable. This might cause them to question their original judgment and lead them toward the conclusion preferred by Johnson and Verdicchio. So the claim that the analogy is unhelpful or obfuscatory does not strike me as always true.
More seriously, the argument Johnson and Verdicchio make rests on what are, for me, some dubious assumptions. Foremost among them are the claims that (a) there is an important difference between training a natural entity and designing, manufacturing and training an artificial entity, (b) we have complete knowledge of robot hardware (but do not have complete knowledge of animal hardware), and (c) this knowledge and its associated level of control make a crucial difference when it comes to assigning liability. Let’s consider each of these in more detail.
The claim that there is some crucial difference between a trained natural entity and a designed/manufactured/trained artificial entity is obscure to me. The suggestion elsewhere in the article is that an animal trainer is working with a system (the biological organism) that is a natural given: no human was responsible for evolving the complex web of biological tissues and organs (etc) that give the animal its capacities. This is very different from designing an artificial system from scratch.
But why is it so different? The techniques and materials needed to create a complex artificial system are also given to us: they are the product of generations of socio-technical development and not the responsibility of any one individual. Perhaps biological systems are more complex than these socio-technical systems (though I am not sure how to measure complexity in this regard), but I don’t see why that is a crucial difference. Similarly, I would add that it is misleading to suggest that domesticated animals are natural. They have been subject to artificial selection for many generations and will be subject to more artificial methods of breeding and genetic engineering in the future. Overall, this leads me to conclude that the distinction between the natural and the artificial is a red herring in this debate.
The more significant difference probably has to do with the level of knowledge and control we have over robots vis-à-vis animals. Prima facie, it is plausible to claim that the level of knowledge and control we have over an entity should affect the level of responsibility we have for that entity’s activities, given that both knowledge and control have been seen as central to responsibility since the time of Aristotle.
But there are some complexities to consider here. First, I would dispute the claim that people have complete knowledge of a robot’s hardware. Given that robots are not really manufactured by individuals but by teams, and given that these teams rely heavily on pre-existing hardware and software to assemble robots, I doubt whether the people involved in robot design and manufacture have complete knowledge of their mechanics. And this is to say nothing of the fact that some robotic software systems are inherently opaque to human understanding, which compounds this lack of complete knowledge. More importantly, however, I don’t think having extensive knowledge of another entity’s hardware automatically entails greater responsibility for its conduct. We have pretty extensive knowledge of some animal hardware — e.g. we have mapped the genomes and neural circuitry of some animals like C. elegans — but I would find it hard to say that because we have this knowledge we are somehow responsible for their conduct.
Second, when it comes to control, it is worth bearing in mind that we can have a lot of control over animals (and, indeed, other humans) if we wish to have it. The Spanish neuroscientist Jose Delgado is famous for his neurological experiments on bulls. In a dramatic demonstration, he implanted an electrode array in the brain of a bull and used a radio controller to stop it from charging at him in a bullring. Delgado’s techniques were quite crude and primitive, but he and others have shown that it is possible to use technology to exert a lot of control over the behaviour of animals (and indeed humans) if you so wish (at the limit, you can use technology to kill an animal and shut down any problematic behaviour).
At present, as far as I am aware, we don’t require the owners of domesticated animals to implant electrodes in their brains and then carry around remote controls that would enable them to shut down problematic behaviour. But why don’t we do this? It would be an easy way to address and prevent the harm caused by semi-autonomous animals. There could be several reasons, but the main one would probably be that we think it would be cruel. Animals don’t just have some autonomy from humans; they deserve some autonomy. We can train their ‘natural’ abilities in a particular direction, but we cannot intervene in such a crude and manipulative way.
If I am right, this illustrates something pretty important: the moral status of animals has some bearing on the level of control we both expect and demand of their owners. This means questions about the responsibility of manufacturers for robots cannot be disentangled from questions about their moral status. It is only if you assume that robots do not (and cannot) have moral status that you assume they are very different from animals in this respect. The very fact that the animal-robot analogy casts light on this important connection between responsibility and status strikes me as being useful.
3. Using the analogy to understand harm to others
A third way of using the animal-robot analogy is to think about the effect that our relationships with animals (or robots) have on our relationships with other humans. You have probably heard people argue that those who are cruel to animals are more likely to be cruel to humans. Indeed, it has been suggested that psychopathic killers train themselves, initially, on animals. So, if a child is fascinated by torturing and killing animals, there is an increased likelihood that they will transfer this behaviour over to humans. This is one reason why we might want to ban or prevent cruelty to animals (in addition to the intrinsic harm that such cruelty causes to the animals themselves).
If this is true in the case of animals then, by analogy, it might also be true in the case of robots. In other words, we might worry about human cruelty to robots because of how that cruelty might transfer over to other humans. Kate Darling, who studies human-robot interactions at MIT, has made this argument. She doesn’t think that robots themselves can be harmed by the interactions they have with humans, but she worries that human cruelty to robots (simulated though it may be) could encourage and reinforce cruelty more generally.
This style of argument is, of course, common to other debates about violent media. For example, there are many people who argue that violent movies and video games encourage and reinforce cruelty and violence toward real humans. Whatever the merits of those other arguments, Johnson and Verdicchio are sceptical about the argument as it applies to animals and robots. There are two main reasons for this. The first is that the evidence linking violence to animals and violence to humans may not be that strong. Johnson and Verdicchio certainly cast some doubt on it, highlighting the fact that there are many people (e.g. farmers, abattoir workers) whose jobs involve violence (of a sort) to animals but who do not transfer this over to humans. The second reason is that even if there were some evidence to suggest that cruelty to robots did transfer over to humans, there would be ways of solving this problem that do not involve being less cruel to robots. As they put it:
…if it were found to be true that the sight of cruelty to humanoid robots desensitized us to the sight of cruelty in humans or that engaging in cruelty to humanoid robots increased the likelihood that we would be cruel to one another, this would provide some justification for action. The justified action could but need not necessarily be to grant rights to robots. There are at least two different directions that might be taken. One would be to restrict what could be done to humanoid robots and the other would be to restrict the design of robots.
(Johnson and Verdicchio 2018, 8)
They clarify that the restrictive designs for robots could include ensuring that the robot does not appear too humanoid and does not display any signs of suffering. The crucial point then is that this second option is not available to us in the case of animals. To repeat the mantra from earlier: animals suffer and robots do not. We cannot redesign them to prevent this. Therefore there are independent reasons for banning cruelty to animals that do not apply to robots.
I have written about this style of argument ad nauseam in the past. My comments have focused primarily on whether sexual violence toward robots might transfer over to humans, and not on violence more generally, but I think the core philosophical issues are the same. So, if you want my full opinion on whether this kind of argument works, I would suggest reading some of my other papers on it (maybe start with this one and this one). I will, however, say a few things about it here.
First, I agree with Johnson and Verdicchio that the animal-robot analogy is probably superfluous when it comes to making this argument. One reason for this is that there are other analogies upon which to draw, such as the analogy with the violent video games debate. Another reason is that whether or not robot cruelty carries over to cruelty towards humans will presumably depend on its own evidence and not on analogies with animals or violent video games. How we treat robots could be sui generis. Until we have the evidence about robots, it will be difficult to know how seriously to take this argument.
Second, one point I have been keen to stress in my previous work is that it is probably going to be very difficult to get that evidence. There are several reasons for this. One is that it is difficult to do good scientific work on the link between human-robot interactions and human-human interactions. We know this from other debates about exposure to violent media. These debates tend to be highly contentious and the effect sizes are often weak. Researchers and funders have agendas and narratives they would like to support. This means we often end up in an epistemically uncertain position when it comes to understanding the effects of such exposure on real-world behaviour. This makes sense, since one thing we do know is that the causes of violence are multifactorial. There are many levers that can be pulled to both discourage and encourage violence. At any one time, different combinations of these levers will be activated. To think that one such lever — e.g. violence to robots — will have some outsized influence on violence more generally seems naive.
Third, it is worth noting, once again, that the persuasiveness of Johnson and Verdicchio’s argument hinges on whether you think robots have the capacity for genuine suffering or not. They do not think this is possible. And they are very clear in saying that all appearances of robot suffering must be simulative or deceptive, not real. This is something I disputed earlier on. I think ‘simulations’ (more correctly: outward behavioural signs) are the best evidence we have to go on when it comes to epistemically grounding our judgments about the suffering of others. Consequently, I do not think the gap between robots and animals is as definitive as they claim.
Fourth, the previous point notwithstanding, I agree with Johnson and Verdicchio that there are design choices that roboticists can make that might moderate any spillover effects of robot cruelty. This is something I discussed in my paper on ‘ethical behaviourism’. That said, I do think this is easier said than done. My sense from the literature is that humans tend to identify with and anthropomorphise anything that displays agency. But since agency is effectively the core of what it means for something to be a robot, this suggests that limiting the tendency to over-identify with robots is tantamount to saying that we should not create robots at all. At the very least, I think the suggestions made by proponents of Johnson and Verdicchio’s view — e.g. having robots periodically remind human users that they do not feel anything and are not suffering — need to be tested carefully. In addition, I suspect it will be hard to prevent roboticists from creating robots that do ‘simulate’ suffering. There is a strong desire to create human-like robots and I am not convinced that regulation or ethical argumentation will prevent this from happening.
Finally, and this is just a minor point, I’m not convinced by the claim that we will always have design options when it comes to robots that we do not have when it comes to animals. Sophisticated genetic and biological engineering might make it possible to create an animal that does not display any outward signs of suffering (Douglas Adams’s famous thought experiment about the cow that wants to be eaten springs to mind here). If we do that, would that make animal cruelty okay? Johnson and Verdicchio might argue that engineering away the outward signs of suffering doesn’t mean that the animal is not really suffering, but then we get back to the earlier argument: how can we know that?
4. Conclusion
I have probably said too much. To briefly recap, Johnson and Verdicchio argue that the animal-robot analogy is misleading and unhelpful when it comes to (a) understanding the moral status of robots, (b) attributing liability and responsibility to robots, and (c) assessing the likelihood of harm to robots translating into harm to humans. I have argued that this is not true, at least not always. The animal-robot analogy can be quite helpful in understanding at least some of the key issues. In particular, contrary to the authors, I think the epistemic basis on which we ascribe moral status to animals can carry over to the robot case, and this has important consequences for how we attribute liability for actions performed by semi-autonomous systems.
"...dispute the claim that people have complete knowledge of a robot’s hardware [and] software".
ReplyDeleteThis is why the interest in formal correctness proofs of code, along with the awareness that
Turing complete hardware allows undecidable outcomes (halting problem). A quick search suggests that stochastic optimisation (planning) of the kind a decently useful robot would be constantly doing (that is, with incomplete information) is often undecidable.