
Monday, November 24, 2014

The Legal Challenges of Robotics (2): Are robots exceptional?


[Image: Baxter robot]


(Previous Entry)

Will robots pose exceptional challenges for the law? That’s the question taken up in Ryan Calo’s recent article “Robotics and the Lessons of Cyberlaw”. As noted in the previous entry, Calo thinks that robots have three distinguishing features: (i) embodiment (i.e. they are mechanical agents operating in the real world); (ii) emergence (i.e. they don’t simply perform routine operations, but are programmed to acquire and develop new behaviours); and (iii) social meaning (i.e. we anthropomorphise and attach social meaning to them). So when Calo asks whether robots pose exceptional challenges for the legal system, he does so in light of those three distinguishing features.

And what does he conclude? In brief, he thinks that robots do pose moderately exceptional challenges for the legal system. That is to say, there will need to be systemic changes in the legal system in order to successfully regulate robots. In this post, I’m going to try to analyse the argument he offers in support of this view. I’ll do so in three parts. First, I’ll talk about the concept of “exceptionalism” and the moderate form that Calo defends. Second, I’ll formalise his main argument for moderate exceptionalism. And third, I’ll proffer some critical reflections on that argument.


1. The Exceptionalism Debate in Law
People throw the concept of exceptionalism around in many debates. The most tedious of these is probably the debate about American exceptionalism. Wherever it is used, the term captures the notion that some phenomenon is unique, unparalleled or qualitatively distinct from related phenomena. So, for example, in the debate about American exceptionalism, the exceptionalist claims that the United States of America is a unique or unparalleled nation.

How does exceptionalism apply to debates about technology and the law? It applies by capturing the notion that certain technologies pose unique challenges for the law, and hence require special or specific laws, and maybe even special legal institutions. To give an example, most legal systems have laws in place that deal with harassment and bullying. These laws have been around for a very long time. The types of harassment and bullying that legislators had in mind when such laws were first drafted were the sorts that took place in the Real World: the kind that involved some unwanted physical confrontation or intimidation, or pestering via old systems of post and print media. Many people now wonder whether we need special laws to deal with cyberbullying and cyberharassment. In fact, some legal systems (e.g. Nova Scotia) have created such laws. The motivation behind these special laws is that cyberbullying and cyberharassment are qualitatively different from the more traditional forms. Consequently, new laws are needed to deal with these forms of bullying and harassment. Others disagree with this push for new laws, arguing that the old laws are general enough to cover whatever is wrong with cyberbullying and cyberharassment.

We see here a debate between legal exceptionalists and what we might call legal universalists. The exceptionalists are pushing for more special laws to deal with what they perceive to be unique technological challenges. The universalists are more sceptical, thinking that most legal rules are general enough to cover most new technologies.

We might think of opinions being arrayed along a spectrum, with strong exceptionalists at one end and strong universalists at the other. The strong exceptionalists would argue that special laws are required for every new technology; the strong universalists would argue that general laws can be stretched to accommodate all new technologies. The correct view probably lies somewhere in between those two extremes.

That’s certainly what Calo tries to argue in his article. He thinks that moderate exceptionalism is appropriate when it comes to dealing with the legal implications of robotics. This moderate position is defined as follows:

Moderate Legal Exceptionalism: A new technology (X) warrants moderate legal exceptionalism if the mainstreaming of that technology would require systemic changes in the law or legal institutions in order to reproduce (i.e. preserve) or displace an existing balance of values.

This requires some further explanation. First, note the “mainstreaming” condition. For Calo, the need for moderate exceptionalism only kicks in once the use of a given technology becomes sufficiently widespread. How widespread is sufficiently widespread? It’s hard to say for sure. The internet is obviously in widespread use, and this has required some legal changes in order to deal with its distinctive capabilities (e.g. changes to copyright law and laws on jurisdiction). Where do we stand in relation to robotics? It’s not clear, but we are certainly approaching the threshold point (if we have not already passed it).

Second, note how Calo stipulates that moderate exceptionalism is determined by the need for systemic legal changes in order to preserve or displace an existing set of values. The idea here is that the current legal system embodies certain value judgments. To use a simple example, the law on murder embodies the value judgment that murder is wrong and deserves some kind of harsh response. We hope that most of those value judgments are sound, but it is possible that they are not (the law is not always just or fair). Calo’s point is that certain technologies may force legal changes in order to preserve the existing (good) value judgments or, conversely, may bring into clearer relief the problems with other (bad) value judgments. Cyberbullying may be an example of this. Laws on bullying and harassment incorporate the value judgment that people do not deserve to be repeatedly intimidated or insulted by others. The fact that the internet allows this to be done anonymously, persistently and from great distances may require some legal changes, at least if we are to preserve the right not to be repeatedly intimidated or insulted.


2. The Argument for Moderate Robolaw Exceptionalism
With these clarificatory comments out of the way, we can proceed to consider Calo’s argument for moderate robolaw exceptionalism. The argument works like this:


  • (1) If the mainstreaming of a given technology would force the law to adopt systemic changes in order to preserve or displace an existing balance of values, then that technology requires moderate legal exceptionalism.
  • (2) The mainstreaming of robots would force the law to adopt systemic changes in order to preserve or displace an existing balance of values.
  • (3) Therefore, the mainstreaming of robots requires moderate legal exceptionalism.
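
It is worth noting that the argument is valid purely in virtue of its form: it is a simple instance of modus ponens. As a minimal illustration (my own, not anything from Calo’s article), the inference can be checked mechanically in the Lean proof assistant, with Mainstreamed and Exceptional serving as placeholder propositions for the antecedent and consequent of premise (1):

```lean
-- Minimal sketch: Calo's argument has the form of modus ponens.
-- `Mainstreamed` abbreviates "the mainstreaming of robots would force
-- systemic legal changes to preserve or displace existing values";
-- `Exceptional` abbreviates "robots require moderate legal exceptionalism".
variable (Mainstreamed Exceptional : Prop)

example
    (p1 : Mainstreamed → Exceptional)  -- premise (1)
    (p2 : Mainstreamed)                -- premise (2)
    : Exceptional :=                   -- conclusion (3)
  p1 p2  -- apply premise (1) to premise (2)
```

This simply makes vivid that any dispute must be over the premises, not the inference.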


There’s nothing particularly interesting about this argument yet. The first premise simply incorporates Calo’s stipulative definition of moderate exceptionalism, and the second premise is unsupported. The devil is in the detail. How exactly might robots force systemic changes in order to preserve or displace existing values? Calo offers six suggestions. Each of these can be viewed as a reason to support the second premise:


  • (2.1) - Because robots blur the object/agent boundary they may belong to a unique ontological category that requires either a novel set of legal rules or a novel mix of old rules (e.g. some rules that typically apply to agents and some that apply to objects). For example, robots may be treated like persons when it comes to a legal action for damages, but not in other contexts (e.g. contract law).
  • (2.2) - Robots may force an increase in the use of strict liability rules. Strict liability arises whenever liability is imposed without the need to prove fault (e.g. intent or negligence). Calo’s claim is that traditional fault rules will be difficult to apply to manufacturers of robots (because robots will display emergent behaviours, not anticipated or programmed by their creators). Consequently, an expansion of strict liability rules will be needed to preserve the availability of compensation.
  • (2.3) - Robots may necessitate the existence of a new class of criminal offence, viz. the offence of putting technologies into play that are capable of causing specific harm and do actually cause specific harm. As Calo puts it, the rationale for such an offence might be “vindicating an injury in the eyes of society and providing a moral and pragmatic check on the overuse of dangerous technology without justification” (p. 141).
  • (2.4) - Robots may alter the relevance of the doctrine of foreseeability. In claims for personal injury it usually must be shown that the injury in question was “reasonably foreseeable”. But because robots will be designed to display emergent (not reasonably foreseeable) behaviours, there may be less role for such a doctrine when it comes to liability claims arising from robotic injury.
  • (2.5) - Robots may force a greater concern for risk mitigation within the law. Risk mitigation is the focus on identifying and developing strategies for minimising risks. Legal rules sometimes require this. In fact, this is already common in some areas of law — e.g. banking law. Calo’s point is that the risks associated with embodied quasi-agents may encourage a broader focus on risk mitigation across a range of legally-regulated industries.
  • (2.6) - Robots may require a new regulatory infrastructure. Speaking specifically about the US, Calo notes that at present robots are regulated by many different agencies (depending on what the robot is being used for). He argues that there may be some need for a single regulatory authority (indeed, he makes the case for a Federal Robotics Commission at length elsewhere).


If you add these six reasons to the previous statement of the argument, you end up with something that looks like this:

[Argument map: reasons (2.1)–(2.6) supporting premise (2), which combines with premise (1) to yield conclusion (3).]
3. What should we make of Calo’s Argument?
This leads us to the question: is Calo’s argument for moderate exceptionalism persuasive? On the whole, I’m inclined to say “yes”. I think the widespread use of robots will force certain changes in the legal system. There will be some stretching and adjusting of legal rules, and possibly some novel ones too. Nevertheless, I want to offer some other comments on the argument Calo makes. These are not really intended as direct criticisms, but rather as critical reflections on the issue he raises.

I find it interesting that most of Calo’s examples highlight the ways in which robots might create liability-gaps. If you think about it, the implicit assumption underlying the concern with foreseeability (or strict liability) is that robots will be introduced into the world, they will harm or injure people, and no one will be held liable for the injuries they cause. This disturbs the existing balance of values because victims deserve compensation for the injuries they suffer. Consequently, rules must be created or stretched to ensure that this liability-gap does not arise.

It strikes me that there are other interesting “gaps” that might be created by the widespread use of robots. For example, they might create a broader “accountability-gap” (one that extends beyond the mere imposition of liability). This could happen if it becomes difficult to hold social actors to account for their decision-making because the decisions are made by robots. This is something I have discussed before in relation to algorithmic decision-making.

In addition to this, they might have an interesting effect on the desire for retributive justice. I assume that if robots fall short of full personhood, but are capable of engaging in sophisticated, novel and harmful behaviours, it will be difficult to hold them responsible for what they do in the manner demanded by proponents of retributive justice (i.e. robots won’t be morally culpable wrongdoers). At the same time, the manufacturers of the robots will fail to meet the criteria for retributive justice because the robot is too independent of them (or, alternatively, they will only satisfy a much lesser form of culpability). The result could be a “retribution-gap”, in which people look for an appropriate target for retributive blame but fail to find one.

What implications might this have? If you are less inclined toward the retributive view, you might welcome it. You might hope that the presence of the retribution-gap will wean people away from the backward-looking retributivist view of justice, and draw them towards a more forward-looking, consequentialist type of justice. At the same time, you might worry about the studies suggesting that humans are compulsive, innate retributivists: people may struggle with the new system and end up directing their retributive blame at inappropriate targets (more scapegoats, etc.). Either way, I think the social effect is worth thinking about.

Anyway, those are my, no doubt ill-conceived, reflections on Calo’s argument for moderate robolaw exceptionalism. I don’t disagree with the claim, but I think there are other interesting shifts that could be inaugurated by the robotics revolution.
