The EU parliament attracted a good deal of notoriety in 2016 when its draft report on civil liability for robots suggested that at least some sophisticated robots should be granted the legal status of ‘electronic personhood’. British tabloids were quick to seize upon the idea — the report came out just before the Brexit vote — as part of their campaign to highlight the absurdity of the EU. But is the idea really that absurd? Could robots ever count as legal persons?
A recent article by Bryson, Diamantis and Grant (hereinafter ‘BDG’) takes up these questions. In ‘Of, for and by the people: the legal lacuna of synthetic persons’, they argue that the idea of electronic legal personhood is not at all absurd. It is a real but dangerous possibility — one that we should actively resist. Robots can, but should not, be given the legal status of personhood.
BDG’s article is the best thing I have read on the topic of legal personhood for robots. I believe it presents exactly the right framework for thinking about and understanding the debate. But I also think it is misleading on a couple of critical points. In what follows, I will set out BDG’s framework, explain their central argument, and present my own criticisms thereof.
1. How to Think about Legal Personhood
BDG’s framework for thinking about the legal personhood of robots consists of three main theses. They do not give them these names, but I will, for the sake of convenience:
The Fictionality Thesis: Legal personhood is a social fiction, i.e. an artifact of the legal system. It should not be confused with moral or metaphysical personhood.
The Divisibility Thesis: Legal personhood is not a binary property; it is, rather, a scalar property. Legal personhood consists of a bundle of rights and obligations, each of which can be separated from the other. To put it another way, legal personhood can come in degrees.
The Practicality Thesis: To be effective, the granting of legal personhood to a given entity must be practically enforceable or realisable. There is thus a distinction to be drawn between de jure legal personhood and de facto legal personhood.
Each of these three theses is, in my view, absolutely correct and will probably be familiar to lawyers and legal academics. Let’s expand on each.
First, let’s talk about fictionality. Philosophers often debate the concept of personhood. When they do so, they usually have moral or metaphysical personhood in mind. They are trying to ‘carve nature at its joints’ and figure out what separates true persons from everything else. In doing so, they typically fixate on certain properties like ‘rationality’, ‘understanding’, ‘consciousness’, ‘self-awareness’ and ‘continuing sense of identity’. They argue that these sorts of properties are what constitute true personhood. Their inquiry has moral significance because being a person (in this philosophical sense) is commonly held to be what makes an entity a legitimate object of moral concern, a bearer of moral duties, and a responsible moral agent.
Legal personhood is a very different beast. It is related to moral or metaphysical personhood — in the sense that moral persons are usually, though not always, legal persons. And it is perhaps true that in an ideal world the two concepts would be perfectly correlated. Nevertheless, they can and do pull apart. To be a legal person is simply to be an entity to whom the legal system ascribes legal rights and duties, e.g. the right to own property, the right to enter into contracts, the right to sue for damages, the duty to pay taxes, the duty to pay compensation and so on. Legal systems have, historically, conferred the status of personhood on entities — e.g. corporations and rivers — that no philosopher would ever claim to be a metaphysical or moral person. Likewise, legal systems have, historically, denied the status of personhood to entities we would clearly class as metaphysical or moral persons, e.g. women and slaves. The fictional nature of legal personhood has one important consequence for this debate: it means that it is, of course, possible to confer the status of personhood on robots. We could do it, if we wanted to. There is no impediment or bar to it. The real question is: should we?
The divisibility thesis really just follows from this characterisation of legal personhood. As defined, legal personhood consists in a bundle of rights and duties (such as the right to own property and the duty to pay compensation). The full bundle would be pretty hard to set down on paper (it would consist of a lot of rights and duties). You can, however, divide up this bundle however you like. You can grant an entity some of the rights and duties and not others. Indeed, this is effectively what was done to women and slaves historically. They often had at least some of the rights and duties associated with being a legal person, but were denied many others. This is important because it means the debate about the legal status of robots should not be framed in terms of a simple binary choice: should robots be legal persons or not? It should be framed in terms of the precise mix of rights and duties we propose to grant or deny.
This brings us, finally, to the practicality thesis. This also follows from the fictional nature of legal personhood, and, indeed, many other aspects of the law. Since the law is, fundamentally, a human construct (setting debates about natural vs. positive law to one side for now) it depends on human institutions and practices for its enforcement. It is possible for something to be legal ‘on the books’ (i.e. in statute or case law) and yet be practically unrealisable in the real world due to a lack of physical or institutional support. For example, equal status for African-Americans was ‘on the books’ for a long time before it was (if it even is) a practical reality. Similarly, in many countries homosexuality was illegal ‘on the books’ without its illegality being enforced in practice. Lawyers make this distinction between law on the books and law in reality by using the terms de jure and de facto.
The three theses should shape our attitude to the question: should robots be given the status of legal persons? We now know that this is possible, since legal personhood is fictional, but we also need to bear in mind which precise bundle of rights and obligations is being proposed for robots, and whether the enforcement of those rights and obligations is practicable.
2. The Basic Argument: Why we should not grant personhood to robots
Despite the nuance of their general framework, BDG go on to present a relatively straightforward argument against the idea of legal personhood for robots. Although they briefly allude to the practical difficulties of enforcing legal personhood for robots, and admit that a full discussion of the issue should consider the precise bundle of rights and obligations, their objection is couched in general terms.
That objection has a very simple structure. It can be set out like this:
- (1) We should only confer the legal status of personhood on an entity if doing so is consistent with the overarching purposes of the legal system.
- (2) Conferring the status of legal personhood on robots would not be (or is unlikely to be) consistent with the overarching purposes of the legal system.
- (3) Therefore, we ought not to confer the status of legal personhood on robots.
In relation to (1), the clue is in the title ‘Of, for and by the people’. BDG think that legal systems should serve the interests of the people. But, of course, who the people are (for the purposes of the law) is the very thing under dispute. Fortunately, they provide some more clarity. They say the following:
Every legal system must decide to which entities it will confer legal personhood. Legal systems should make this decision, like any other, with their ultimate objectives in mind…Those objectives may (and in many cases should) be served by giving legal recognition to the rights and obligations of entities that really are people. In many cases, though, the objectives will not track these metaphysical and ethical truths…[Sometimes] a legal system may grant legal personhood to entities that are not really people because conferring rights upon the entity will protect it or because subjecting the entity to obligations will protect those around it.
This passage suggests that the basic objective of the legal system is to protect those who really are (metaphysical and moral) people by giving them the status of legal personhood, but that granting legal personhood to other entities could also be beneficial on the grounds that it will ‘protect those around’ the entity in question. Later in the article, they further clarify that the basic objectives of the legal system are threefold: (i) to further the interests of the legal persons recognised; (ii) to enforce sufficiently weighty moral rights and obligations; and (iii) whenever the moral rights and obligations of two entities conflict, to prioritise human moral rights and obligations (BDG 2017, 283).
All of which inclines me to believe that, for BDG, legal systems should ultimately serve the interests of human people. The conferring of legal status on any other entity should never come at the expense of human priority. This leads me to reformulate premise (1) in the following manner (note: the ‘or’ and ‘and’ are important here):
- (1*) We should only confer the legal status of personhood on an entity if: (a) that entity is a moral/metaphysical person; or (b) doing so serves some sufficiently weighty moral purpose; and (c) human moral priority is respected.
This view might be anathema to some people. BDG admit that it is ‘speciesism’, but they think it is acceptable because it allows for the interests of non-humans to be factored in ‘via the mechanism of human investment in those entities’ (BDG 2017, 283).
Onwards to premise (2). We now have a clearer standard for evaluating the success or failure of that premise. We know that the case for robot legal personhood hinges on the moral status of the robots and the utility of legal personhood in serving the interests of humans. BDG present three main arguments for thinking that we should not confer the status of legal personhood on robots.
The first argument is simply that robots are unlikely to acquire a sufficiently weighty moral status in and of themselves. BDG admit that the conditions that an entity needs to satisfy in order to count as a moral patient (and thus worthy of having its rights protected) are contested and uncertain. They do not completely rule out the possibility, but they are sceptical about robots satisfying those conditions anytime soon. Furthermore, even if robots could satisfy those conditions, a larger issue remains: should we create robots that have a sufficiently weighty moral status? This is one of Bryson’s main contributions to the robot ethics debates. She thinks we have no strong reason to create robots with this status — that robots should always be tools/servants.
The second argument is that giving robots the status of legal personhood could allow them to serve as liability shields. That is to say, humans could use robots to perform actions on their behalf and then use the robot’s status as a legal person to shield themselves from having to pay out compensation or face responsibility for any misdeed of the robot. As noted earlier, corporations are legal persons, and humans often use the (limited liability) corporate form as a liability shield. Many famous legal cases illustrate this point. Most law students will be familiar with the case of Salomon v Salomon, in which the UK House of Lords confirmed the doctrine of separate legal personhood for corporations (or ‘companies’, to use the preferred British term). In essence, this doctrine holds that an individual owner or manager of a company does not have to pay the debts of that company (in the event that the company goes bankrupt) because the company is a separate legal person. BDG’s fear is that robot legal persons could be used to similar effect to avoid liability on a large scale.
The third argument follows on from this. It claims that robots are much worse than corporations, when it comes to avoiding legal responsibility, in one critical respect. At least with a corporation there is some group of humans in charge. It is thus possible — though legally difficult — to ‘pierce the corporate veil’ and ascribe responsibility to that group of humans. This may not be possible in the case of robots. They may be autonomous agents with no accountable humans in control. As BDG put it:
Advanced robots would not necessarily have further legal persons to instruct or control them. That is to say, there may be no human actor directing the robot after inception.
In sum, the fact that there are no strong moral reasons to confer the status of legal personhood on robots (or to create such robots), coupled with the fact that doing so could seriously undermine our ability to hold entities to account for their misdeeds, provides support for premise (2).
I have tried to illustrate this argument in the diagram below, adding in the extra premises covered in this description.
3. Some criticisms and concerns
Broadly speaking, I think there is much to be said in favour of this line of thinking, but I also have some concerns. Although BDG do a good job setting out a framework for thinking about robot legal personhood, I believe their specific critiques of the concept are not appropriately contextualised. I have two main concerns.
The first concern is slightly technical and rhetorical in nature. I don’t like the claim that legal personhood is ‘fictional’, and I don’t think the use of fictionalism is ideal in this context. I know this is a common turn of phrase, and so BDG are in good company in using it, but I still don't like it. Fictionalism, as BDG point out, describes a scenario in which ‘participants in a…discourse engage in a sort of pretense (whether wittingly or not) by assuming a stance according to which things said in the discourse, though literally false, refer to real entities and describe real properties of entities’ (BDG 2017, 278). So, in the case of legal personhood, the idea is that everyone in the legal system is pretending that corporations (or rivers or whatever) are persons when they are really not.
I don’t like this for two reasons. One reason is that I think it risks trivialising the debate. BDG try to avoid this by saying that calling something a fiction ‘does not mean that it lacks real effects’ (BDG 2017, 278), but I worry that saying that legal personhood is a pretense or game of make believe will denigrate its significance. After all, many legal institutions and statuses are fictional in this sense, e.g. property rights, money, and marriage. The other reason — and the more important one — is that I don’t think it is really correct to say that legal personhood is fictional. I think it is more correct to say that it is a social construction. Social constructions can be very real and important — again property rights, marriage and money are all constructed social facts about our world — and the kind of discourse we engage in when making claims about social constructs need not involve making claims that are ‘literally false’ (whatever the ‘literally’ modifier is intended to mean in this context). I think this view is more appropriate because legal personhood is constituted by a bundle of legal rights and obligations, and each of those rights and obligations is itself a social construct. Thus, legal personhood is a construct on a construct.
The second concern is that in making claims about robots and the avoidance of liability, it doesn’t seem to me that BDG engage in the appropriate comparative analysis. Lots of people who research the legal and social effects of sophisticated robots are worried about their potential use as liability shields, and about the prospect of ‘responsibility gaps’ opening up as a result of their use. This is probably the major objection to the creation of autonomous weapon systems and it crops up in debates about self-driving cars and other autonomous machines as well. People worry that existing legal doctrines about negligence or liability for harm could be used by companies to avoid liability. Clever and well-paid teams of lawyers could argue that injuries were not reasonably foreseeable or that the application of strict liability standards in these cases would be contrary to some fundamental legal right.* Some people think these concerns are overstated and that existing legal doctrines could be interpreted to cover these scenarios, but there is disagreement about this, and the general view is that some legal reform is desirable to address potential gaps.
Note that these objections are practically identical to the ones that BDG make and that they apply irrespective of whether we grant robots legal personhood. They form part of a general case against all autonomous robots, not a specific case against legal personhood for said robots. To make the specific case against legal personhood for robots, BDG would need to argue that granting this status will make things even worse. They do nod in the direction of this point when they observe that autonomous robots will inevitably infringe on the rights of humans and that legal personhood ‘would only make matters worse’ for those trying to impose accountability in those cases.
The problem is that they don’t make enough of this comparative point, and it’s not at all clear to me that they defend it adequately. Granting legal personhood to robots would, at least, require some active legislative effort by governments (i.e. it couldn’t be granted as a matter of course). In the course of preparing that legislation, issues associated with liability and accountability would have to be raised and addressed. Doing nothing — i.e. sticking with the existing legal status quo — could actually be much worse than this because it would enable lawyers to take advantage of uncertainty, vagueness and ambiguity in the existing legal doctrines. So, paradoxically, granting legal personhood might be a better way of addressing the very problems they raise.
To be absolutely clear, however, I am not claiming that conferring legal personhood on robots is the optimal solution to the responsibility gap problem. Far from it. I suspect that other legislative schemes would be more appropriate. I am just pointing out that doing nothing could be far worse than doing something, even if that something is conferring legal personhood on a robot. Furthermore, I quite agree that any case for robot legal personhood would have to turn on whether there are compelling reasons to create robots that have the status of moral patients. Bryson thinks that there are no such compelling reasons. I am less convinced of this, but that’s an argument that will have to be made at another time.
* Experience in Ireland suggests that this can happen. Famously, the offence of statutory rape, i.e. sex with a child under the age of 18 (which is a strict liability offence), was held to be unconstitutional in Ireland because it did not allow for a defence of reasonable belief as to the age of the victim. This was held to breach the right to a fair trial.