
Sunday, October 14, 2018

Robots and the Expanding Moral Circle




(I appear in this video from 15:49-25:51) 


[The following is, roughly, the text of a speech I delivered to the Trinity College, Dublin Historical Society on the 10th of October 2018 (which you can watch in the video above from 15:49 - 25:51). It was for a debate on the topic of AI personhood. The proposition that was up for debate was “That this House would recognize AI as legal persons”. I was supposed to speak in favour of the proposition but, as you’ll see below, I don’t quite do that, though I do argue for something not too far from this idea. I find that formal debates present an interesting challenge. They are hardly the best means of getting at the truth, but there is, I think, some value in distilling your arguments on a particular topic down into a short speech. It means you have to focus on what is most relevant to your case and skip some of the nuance and waffle that is common in academic talks. This is my way of saying that what you are about to read is hardly the most careful and sophisticated defence of my views on AI moral personhood, but it has the virtue of brevity.]


(1) Not going to talk about legal personhood
In every debate in which I have participated, I have disagreed with the proposition. Tonight is no different. Unfortunately, I am not going to argue that we should recognize AI as legal persons. I don’t think that is an interesting question, for at least three reasons. First, legal personhood is a social construct that can be manipulated and reshaped by us if we choose: it is not something with independent moral content or weight. Second, and this may shock you, it may already be the case that AIs can be recognized as legal persons. Shawn Bayern (a law professor at Florida State University in the US) has argued that there are loopholes in US corporate law that allow for an AI to legally control a limited liability company. If he is right, then since LLCs are legal persons, AIs can also be legal persons, at least in the US, and this could transfer to the EU due to mutual recognition provisions. Third, whether or not the recognition of AIs as legal persons is a good idea depends on something else. Specifically, I think it depends on whether AIs/robots (I will talk about both) have a moral status that deserves legal recognition and protection. That’s what I want to consider.


(2) The Ethical Behaviourist Approach
Now, I am not going to argue that AIs/robots currently have moral status. I am just going to argue that they very plausibly could have it in the not too distant future. The reason for this is that I am an ethical behaviourist. I believe that all claims about the moral status of a particular entity (e.g. a human being or an animal) depend on inferences we make from external behaviours and representations made by that entity. In debates about moral status, people will talk about things like sentience, the capacity to feel pain, the capacity to have interests, being a continuing subject of conscious experience, and so on, as if these properties are what matter to judgments of moral status. I don’t disagree with any of that: I think they are what matters. I just think all judgments about the existence of those properties depend on inferences we make from behavioural states.

This posture of ethical behaviourism leads me to endorse a ‘performative equivalency’ standard when it comes to making judgments about moral status. According to this standard, if a robot/AI is performatively equivalent to another entity to whom we afford moral status, then the robot/AI must be afforded the same moral status. This can then translate into legal recognition and protection. I think it is possible (likely?) that robots/AIs will meet this PE standard in the near future, and when they do, they should be granted moral status.


(3) An initial defence of Ethical Behaviourism
Why should we embrace this performative equivalency standard? I think this is ultimately a view that is best defended in the negative, but there are three initial reasons I would offer:

The first is the Kantian reason: we cannot know the thing-in-itself; we can only ever know it through its external representations. We do not have direct epistemic access to someone’s conscious experiences of this world (which are central to judgments of moral status); we only have access to their behaviours and performances. It follows from this that the PE standard is the only one we can apply in moral affairs.

The second reason is common sense: we all know this to be true in our day-to-day lives. It’s obvious that we do not know what is going on in someone else’s head and so must make judgments about how they experience the world through their external representations to us. In other words, we are all, already, following the PE standard in our day-to-day moral decision-making.

The third reason is that this chimes pretty well with scientific practice: psychologists who make inferences as to what is going on in a person’s mind do so through behavioural measures; and neuroscientists validate correlations between brain states and mental states through behavioural measures. I’m just advocating the same approach when it comes to ascriptions of moral status.


(4) Objections and Replies
So that’s the initial defence of my position. If you are like the other people with whom I have shared this view you will think it is completely ridiculous. So let me soften the blow by responding to some common objections:


Objection 1: Robots/AIs aren’t made out of the right biological stuff (or don’t have the right biological form) and this is what matters to ascriptions of moral status, not performative equivalency (I sometimes call this the ‘ontology matters’ or ‘matter matters’ objection).

Response: Now, I happen to think this view is ridiculous, as it amounts to an irrational form of biological mysterianism, but I would actually be willing to concede something to it just for the sake of argument. I would be willing to concede that being made of the right biological stuff is a sufficient condition for moral status, but that it is not a necessary one. In other words, if you have a human being or animal that doesn’t have a sophisticated behavioural repertoire, you might be within your rights to grant it moral status on the grounds of biological constitution alone; it just doesn’t follow from this that it would be right to deny moral status to a robot that does have a sophisticated behavioural repertoire because it isn’t made of the right stuff. Biological constitution and performative equivalency are each sufficient conditions for moral status.

Objection 2: Robots/AIs have different origins to human beings/animals. They have been programmed and designed into existence whereas we have evolved and developed. This undermines any inferences we might make from behaviour to moral status. To slightly paraphrase the philosopher Michael Hauskeller: “[A]s long as we have an alternative explanation for why [the robot/AI] behaves that way (namely, that it has been designed and programmed to do so), we have no good reason to believe that its actions are expressive of anything [morally significant] at all” (Hauskeller 2017).

Response: I find it hard to accept this view because I find it hard to accept that different origins matter more than behaviour in moral judgments of others. Indeed, I think this is a view with a deeply problematic history: it’s effectively the basis for all forms of racism and minority exclusion: that you are judged by racial and ethnic origin, not actual behaviour. Most importantly, however, it’s not clear that there are strong ‘in principle’ differences in origin between humans and AIs of the sort that Hauskeller and others suppose. Evolution is a kind of behavioural programming (and is often explained in these terms by scientists). So you could argue that humans are programmed just as AIs are. Also, with the advent of genetic engineering and other forms of human enhancement, the lines between humans and machines in terms of origin are likely to blur even more in the future. So this objection will become less sustainable.

Objection 3: Robots/AI will be owned and controlled by humans; this means they shouldn’t be granted moral status.

Response: I hesitate to include this objection, but it is something that Joanna Bryson – one of the main critics of AI moral status – made much of in her earlier work (she may have distanced herself from it since). My response is simple: the direction of moral justification is all wrong here. The mere fact that we might own and control robots/AI does not mean we should deny them moral status. We used to allow humans to own and control other humans. That doesn’t mean it was the right thing to do. Ownership and control are social facts that should be grounded in sound moral judgments, not the other way around.

Objection 4: If performative equivalency is the standard of moral status, then manufacturers of robots/AI are going to engage in various forms of deception or manipulation to get us to think they deserve moral status when they really don’t.

Response: I’m not convinced that the commercial motivations for doing this are that strong, but set that to one side. This is, probably, the main concern that people have about my view. I have three responses to it: (i) I don’t think people really know what they mean by ‘deception/manipulation’ in this context – if a robot consistently (and the emphasis is on consistently) behaves in a way that is equivalent to other entities to whom we afford moral status, then there is no deception/manipulation (those concepts have no moral purchase unless cashed out in terms of behavioural inconsistencies); (ii) if you are worried about this, then a lot of the worry can be avoided by setting the ‘performative equivalency’ standard relatively high, i.e. erring on the side of false negatives rather than false positives when it comes to expanding the moral circle (though this strategy does have its own risks); and (iii) deception and manipulation are rampant in human-to-human relationships, but this doesn’t mean that we deny humans moral status – why should we take a different approach with robots?




(5) Conclusion
Let me wrap up by making two final points. First, I want to emphasise that I am not making any claims about what the specific performative equivalency test for robots/AI should be – that’s something that needs to be determined. All I am saying is that if there is performative equivalency, then there should be a recognition of moral status. Second, my position does have serious implications for the designers of robots/AI. It means that their decisions to create such entities have a moral dimension that they may not fully appreciate and may like to disown. This might be one reason why there is such resistance to the idea. But we shouldn’t allow them to shirk responsibility if, as I believe, performative equivalency is the correct moral standard to apply in these cases. Okay, that’s it from me. Thank you for your attention.








3 comments:

  1. This might interest you: Robots, Slaves, and the Paradox of the Human Condition in Isaac Asimov’s Robot Stories
    http://cejsh.icm.edu.pl/cejsh/element/bwmeta1.element.desklight-5b79c396-921d-44f9-ab18-1e29ab9d6028

  2. John,

    Don't you worry about false negatives resulting from this ethical behaviorist approach?

    If consciousness is some complex form of information processing, then presumably this information processing could occur within an entity without the entity possessing actuators (arms, legs, etc.). But if the entity lacks actuators, then it cannot ever live up to our performative equivalency standard. So we would fail to ascribe moral status to all of these conscious beings, despite the fact that, as you say, consciousness is the thing we are fundamentally concerned about.

    You might think that behavior is the only way to determine consciousness, but this doesn't seem right on an information processing account. On this account, presumably we could get some sort of insight into the algorithm an entity is running. Then we could check to see whether this algorithm is doing the sort of stuff that we think amounts to/gives rise to consciousness.

    Best,
    Sean

    Replies
    1. I discuss this a little bit in one of my papers - 'Welcoming Robots into the Moral Circle' - maybe in a footnote somewhere (can't remember exactly). I guess I would say two things:

      (1) I intend ethical behaviourism as a theory regarding what is sufficient for moral status and not what is necessary for it. So it could well be that other entities have moral status for other reasons.

      (2) Notwithstanding (1), I suppose I do have a hunch that theories of moral status that cannot point to any obvious evidence or objective criteria for establishing moral status are problematic. I think this is one reason why I tend to avoid pure sentience-based theories. Generally speaking, I agree that sentience grounds moral status (ontologically speaking), but if we have no way of knowing whether an entity is sentient, then I'm not sure what to do with that. This is also something that Jeff Sebo discusses in his paper 'The Moral Problem of Other Minds' (I interviewed him about it on my podcast, if you are interested).
