Wednesday, November 26, 2014

The Epistemological Objection to Divine Command Theory




Regular readers will know that I have recently been working my way through Erik Wielenberg’s fascinating new book Robust Ethics. In the book, Wielenberg defends a robust non-natural, non-theistic moral realism. According to this view, moral facts exist as part of the basic metaphysical furniture of the universe. They are sui generis, not grounded in or constituted by other types of fact.

Although it is possible for a religious believer to embrace this view, many do not. One of the leading theistic theories holds that certain types of moral fact — specifically obligations — cannot exist without divine commands (Divine Command Theory or DCT). This is the view defended by the likes of Robert Adams, Stephen Evans, William Lane Craig, Glenn Peoples and many, many more.

In this post, I want to share one of Wielenberg’s objections to the DCT of moral obligations. This objection holds that DCT cannot provide a satisfactory account of obligations because it cannot account for the obligations of reasonable non-believers. This objection has been defended by others over the years, but Wielenberg’s discussion is the most up-to-date.

That said, I don’t think it is the most perspicuous discussion. So in what follows I’m going to try to clarify the argument in my usual fashion. In other words, you can expect lots of definitions, numbered premises and argument maps. This is going to be a long one.


1. Background: General Problems with Theological Stateism
Theological voluntarism is the name given to a general family of theistic moral theories. Each of these theories holds that a particular moral status (e.g. whether X is good/bad or whether X is permissible/obligatory) depends on one or more of God’s voluntary acts. The divine command theory belongs to this family. In its most popular contemporary form, it holds that the moral status “X is obligatory” depends on the existence of a divine command to do X (or refrain from doing X).

In his book, Wielenberg identifies a broader class of theistic moral theories, which he refers to under the label ‘theological stateism’:

Theological Stateism: The view that particular moral statuses (such as good, bad, right, wrong, permissible, obligatory etc) depend for their existence on one or more of God’s states (e.g. His beliefs, desires, intentions, commands etc).

Theological stateism is broader than voluntarism because the states appealed to may or may not be under voluntary control. For instance, it may be that God necessarily desires or intends that the torturing of innocent children be forbidden. It is not something that he voluntarily wills to be the case. Indeed, the involuntariness of the divine state is something that many theists find congenial because it helps them to avoid the horns of the Euthyphro dilemma (though it may lead to other theological problems). In any event, all voluntarist theories are subsumed within the class of theological stateism.

The foremost defender of the DCT is Robert M. Adams. As mentioned above, he and other DCT believers think that commands are necessary if moral obligations are to exist. The command must take the form of some sign that is communicated to a moral agent, expressing the view that X is obligatory.

Adams offers several interesting arguments in favour of this view. One of the main ones is that without commands we cannot tell the difference between an obligatory act (one that it is our duty to perform) and a supererogatory act (one that is above and beyond the call of duty). Here’s an analogy I have used to explain the gist of this argument:

Suppose you and I draw up a contract stating that you must supply me with a television in return for a sum of money. By signing our names to this contract we create certain obligations: I must supply the money; you must supply the TV. Now suppose that I would really like it if you delivered the TV to my house, rather than forcing me to pick it up. However, it was never stipulated in the contract that you must deliver it to my door. As it happens, you actually do deliver it to my door. What is the moral status of this? The argument here would be that it is supererogatory (above and beyond the call of duty), not obligatory. Without the express statement within the contract, the obligation does not exist.

Adams’s view is that what is true for you and me in the contract is also true when it comes to our relationship with God. He cannot create obligations unless he communicates the specific content of those obligations to us in the form of a command. This is why Adams critiques other stateist theories such as divine desire theory. He does so on the grounds that they allow for the existence of obligations that have not been clearly communicated to the obligated. He thinks this is not a sound basis for the existence of an obligation.


2. Reasonable Non-Believers and the Epistemological Objection
The fact that communication is essential to Adams’s DCT creates a problem. If there are no communications, or if the communications are unrecognisable (for at least some segment of the population), then moral obligations do not exist (for at least some segment of the population). The claim made by several authors is that this is true for reasonable non-believers, i.e. those who do not believe in God but who do not violate any epistemic duty in their non-belief.

This has sometimes been referred to as the epistemological problem for DCT, but that can be misleading. The problem isn’t simply that reasonable non-believers cannot know their moral obligations. The problem is that, for them, moral obligations simply don’t exist. Though this objection is at the heart of Wielenberg’s discussion, and though it has been discussed by others in the past, I have nowhere seen it formulated in a way that explains clearly how it works or why it is a problem for DCT. To correct for that defect, I offer the following, somewhat long-winded, formalisation:


  • (1) According to DCT, for any given moral agent (S), an obligation to X (or to refrain from X) exists if and only if God commands S to X (or refrain from X).

  • (2) A theological stateist theory of moral obligations fails to account for the existence of obligations unless the moral agents to whom the obligation applies have knowledge of the relevant theological state.

  • (3) DCT is a theological stateist theory of moral obligations.

  • (4) Therefore, DCT fails to account for the existence of an obligation to X (or to refrain from X) unless S has knowledge of God’s commands (from 1, 2 and 3).

  • (5) If there are reasonable non-believers (i.e. people who don’t believe in God and who do not violate any epistemic duties), then they cannot have knowledge of God’s commands.

  • (6) There are reasonable non-believers.

  • (7) Therefore, on DCT, moral obligations fail to exist for reasonable non-believers (from 4, 5 and 6).

  • (8) DCT cannot be a satisfactory theory of moral obligations if it entails that moral obligations do not exist for reasonable non-believers.

  • (9) Therefore, DCT cannot be a satisfactory theory of moral obligations.





A word or two on each of the premises. Premise (1) is simply intended to capture the central thesis of DCT. I don’t think a defender of DCT would object. Premise (2) is based on Adams’s objections to other stateist theories (and, indeed, his more general defence of DCT). As pointed out above, he thinks awareness of the contents of the command is essential if we are to distinguish obligations from other possible moral statuses, and to avoid the unwelcome possibility of people being obliged to do X without being aware of the obligation. Premise (3) follows from the definition of stateist theories, and (4) then follows as an initial conclusion.

That brings us to premise (5), which is the most controversial of the bunch and the one that defenders of the DCT have been most inclined to dispute. We will return to it below. Premise (6) is also controversial. Many religious believers assume that non-believers have unjustifiably rejected God. This is something that has been thrashed out at length in the debate over Schellenberg’s divine hiddenness argument (which also relies on the supposition of reasonable non-belief). I’m not going to get into the debate here. I simply ask that the premise be accepted for the sake of argument.

The combination of (4), (5) and (6) gives us the main conclusion of the argument, which is that DCT entails the non-existence of moral obligations for reasonable non-believers. I’ve tacked a little bit extra on (in the form of (8) and (9)) in order to show why this is such a big problem. I don’t have any real argument for this extra bit. It just seems right to say that if moral obligations exist at all, then they exist for everybody, not just theists. In any event, and as we are about to see, theists have been keen to defend this view, so they must see something in it.

That’s a first pass at the argument. Now let’s consider the views of three authors on the plausibility of premise (5): Wes Morriston, Stephen Evans and Erik Wielenberg.


3. Morriston on Why Reasonable Non-believers Cannot Know God’s Commands
We’ll start with Morriston who has, perhaps, offered the most sustained analysis of the argument. He tries to defend premise (5). To understand his defence, we need to step back for a moment and consider what it means for God to command someone to perform or refrain from performing some act. The obvious way would be for God to literally issue a verbal or written command, i.e. to state directly to us that we should do X or refrain from doing X. He could do this through some authoritative religious text or other unambiguous form of communication (just as I am unambiguously communicating with you right now). The problem is that it is not at all clear that we have such direct verbal or written commands. At the very least, this is something that reasonable non-believers reasonably deny.

As a result of this, most DCT defenders argue that we must take a broader view of what counts as a communication. According to this broader view, the urgings of conscience, or deep intuitive beliefs that doing X would be wrong, could count as communications of divine commands. It may be more difficult for the reasonable non-believer to deny that they have epistemic access to those communications.

Morriston thinks that there is a problem here. His view can be summed up by the following argument:


  • (10) To know that a sign (e.g. an urging of conscience) is an obligation-conferring command, one must know that the sign emanates from the right source (an agent with the ability to issue such a command).

  • (11) A reasonable non-believer does not know that a sign (e.g. an urging of conscience) emanates from the right source.

  • (12) Therefore, a reasonable non-believer cannot know whether a sign (e.g. an urging of conscience) is an obligation-conferring command (and therefore (5) is true).





Premise (10) is key here. Morriston derives support for it from Adams’s own DCT. According to Adams, God’s commands have obligation-conferring potential because God is the right sort of being. He has the right nature (lovingkindness and maximal goodness), he has the requisite authority, and we stand in the right kind of relationship to him (he is our creator, he loves us, we prize his friendship and love). It is only in virtue of those qualities that he can confer obligations upon us through his commands. Hence, Morriston is right to say that knowledge of the source is essential if the sign is to have obligation-conferring potential.

Morriston uses a thought experiment to support his point:

Imagine that you have received a note saying, “Let me borrow your car. Leave it unlocked with the key in the ignition, and I will pick it up soon.” If you know that the note is from your spouse, or that it is from a friend to whom you owe a favour, you may perhaps have an obligation to obey this instruction. But if the note is unsigned, the handwriting is unfamiliar, and you have no idea who the author might be, then it’s as clear as day that you have no such obligation. 
(Morriston 2009, pp. 5-6)

And, of course, the problem for reasonable non-believers is that they do not know where the allegedly obligation-conferring signs are coming from. They might think that our moral intuitions arise from our evolutionary origins, not from the diktats of a divine creator.

The upshot of this is that premise (5) looks to be pretty solid.


4. Evans’s Response to Morriston
C. Stephen Evans tries to respond to Morriston. He does so with a lengthy thought experiment:

Suppose I am hiking in a remote region on the border between Iraq and Iran. I become lost and I am not sure exactly what country I am in. I suddenly see a sign, which (translated) reads as follows: “You must not leave this path.” As I walk further, I see loudspeakers, and from them I hear further instructions: “Leaving the path is strictly forbidden”. In such a situation it would be reasonable for me to form a belief that I have an obligation to stay on the path, even if I do not know the source of the commands. For all I know the commands may come from the government of Iraq or the government of Iran, or perhaps from some regional arm of government, or even from a private landowner whose property I am on. In such a situation I might reasonably believe that the commands communicated to me create obligations for me, even if I do not know for sure who gave the commands. 
(Evans 2013, pp. 113-114)

Evans goes on to say that something similar could be true in the case of God’s commands. They may be communicated to people in a manner that makes it reasonable for them to believe that they have obligation-conferring potential, even if they don’t know for sure who the source of the command is.

Evans’s thought experiment is probably too elaborate for its own good. I’m not sure why it is necessary to set it on the border between Iraq and Iran, or to stipulate that the sign has to be translated. It’s probably best if we simplify its elements. What Evans really seems to be saying is that in any given scenario, if a sign with the general form of a command is communicated to an agent and if it is a live epistemic possibility for that agent that the sign comes from a source with the authority to create obligations (like the government or a landowner) then it is reasonable for that agent to believe that the sign creates an obligation. To express this in an argumentative form:


  • (13) In order for an agent to reasonably believe that a sign is an obligation-conferring command, two conditions must be met: (a) the agent must have epistemic access to the sign itself; and (b) it must be a live epistemic possibility for that agent that the sign emanates from a source with obligation-conferring potential.

  • (14) A reasonable non-believer can have epistemic access to signs that communicate commands and it is a live epistemic possibility for such agents that the signs emanate from God.

  • (15) Therefore, reasonable non-believers can reasonably believe in the existence of God’s obligation-conferring commands (and therefore (5) is false).




5. Wielenberg’s Criticisms of Evans
It is at this point that Wielenberg steps into the debate. And, somewhat disappointingly, he doesn’t have much to say. He makes two brief objections to Evans’s argument. The first is that Evans assumes (as did Morriston) that the sorts of signs available to reasonable non-believers will be understood by them to have a command-like structure. But it’s not clear that this will be the case.

Morriston and Evans both use thought experiments in which the communication to the moral agent takes the form of a sentence with a command-like structure (e.g. “You must not stray from the path”). This means that the agent knows they are being confronted with a command, even if they don’t know where it comes from. The same would not be true of something like a deep moral intuition or an urging of conscience. A reasonable non-believer might simply view that as a hard-wired or learned response to a particular scenario. Its imperative, command-like structure would be opaque to them.

The second point that Wielenberg makes is that Evans confuses reasonable belief in the existence of an obligation with reasonable belief in the existence of an obligation-conferring command. The distinction is subtle and obscured by the hiker thought experiment. In that thought experiment, the hiker comes to believe in the existence of an obligation to stay on the path because they recognise the possibility that the command-like signs they are hearing or seeing might come from a source with obligation-conferring powers. If you cut out the command-like signs — as Wielenberg says you must — you end up in a very different situation. Suppose that the landowner or government has mind control technology. Every time you walk down the path, you are sprayed with a mist of nanorobots that enter your brain and alter your beliefs in such a way that you think you have an obligation to stay on the path. In that case, there is no command-like communication, just a sudden belief in the existence of an obligation. Following Adams’s earlier arguments, that wouldn’t be enough to actually create an obligation: you would not have received the clear command. That’s more analogous to the situation of the reasonable non-believer.

At least, I think that’s how Wielenberg’s criticism works. Unfortunately, he isn’t too clear about it. Nevertheless, I think we can view it as a rebuttal to premise (13) of Evans’s argument.


  • (16) The reasonable non-believer cannot recognise the command-like structure of signs such as the urgings of conscience. At best, for them the urgings of conscience create strong beliefs in the existence of an obligation. Under Adams’s theory, strong belief is not enough for the existence of an obligation. There must be a clear command.




6. Concluding Thoughts
I think the epistemological objection to DCT is an interesting one. And I hope my summary of the debate is useful. Hopefully you can now see why the lack of knowledge of a command poses a problem for the existence of obligations under Adams’s modified DCT. And hopefully you can now see how proponents of the DCT try to rebut this objection.

What do I think about this? I’m not too sure. On the whole, the epistemological objection strikes me as something of a philosophical curio. It’s not the strongest or most rhetorically persuasive rebuttal of DCT. Furthermore, I’m unsure of Wielenberg’s contribution to the debate. I feel that his criticism misses one way of interpreting Evans’s response. I’ll try to explain.

To me, Evans is making a point about moral/philosophical risk and the effect it has on our belief in the existence of a command, not the contents of that command. I’ve discussed philosophical/moral risk in greater depth before. The main idea in discussions of philosophical/moral risk is that where you have a philosophically contentious proposition (like the possible existence of divine commands) there is usually some significant degree of uncertainty as to whether that proposition is true or false (i.e. there are decent arguments on either side). The claim then is that recognition of this uncertainty can lead to interesting conclusions. For instance, you might have no qualms about killing and eating sentient animals, but if you recognise the risk that this is morally wrong, you might nevertheless be obliged not to kill and eat an animal. The argument for this is that there is a considerable risk asymmetry when it comes to your respective options: eating meat might be perfectly innocuous, but the possibility that it might be highly immoral trumps this possible innocuousness and generates an obligation not to eat meat. Recognition of the risk generates this conclusion.

It might be that Evans’s argument makes similar claims about philosophical risks pertaining to God’s existence and God’s commands. Even if the reasonable non-believer does not believe in the existence of God or in the existence of divine commands, they might nevertheless recognise the philosophical risk (or possibility) that those things exist. And they might recognise it especially when it comes to interpreting the urgings of their own consciences. The result is that they recognise the philosophical risk that a particular sign is an obligation-conferring command, and this recognition is enough to generate the requisite level of knowledge. The fact that they do not really believe that a particular sign has a command-like structure is, contra Wielenberg, irrelevant. What matters is that they recognise the possibility that it has such a structure.

Just to be clear, I don’t think this improves things greatly for the defender of DCT. I think it would be very hard to defend the view that mere recognition of such philosophical risks/possibilities is sufficient to generate obligations for the reasonable non-believer (for one thing, there are far too many potential philosophical risks of this sort). Adams’s arguments seem to imply that a reasonable degree of certainty as to the nature of the command is necessary for any satisfactory theory of obligations. Recognition of mere possibilities seems to fall far short of this.

Monday, November 24, 2014

The Legal Challenges of Robotics (2): Are robots exceptional?


Baxter Robot


(Previous Entry)

Will robots pose exceptional challenges for the law? That’s the question taken up in Ryan Calo’s recent article “Robotics and the Lessons of Cyberlaw”. As noted in the previous entry, Calo thinks that robots have three distinguishing features: (i) embodiment (i.e. they are mechanical agents operating in the real world); (ii) emergence (i.e. they don’t simply perform routine operations, but are programmed to acquire and develop new behaviours); and (iii) social meaning (i.e. we anthropomorphise and attach social meaning to them). So when Calo asks whether robots pose exceptional challenges for the legal system, he asks in light of those three distinguishing features.

And what does he conclude? In brief, he thinks that robots do pose moderately exceptional challenges for the legal system. That is to say, there will need to be systemic changes in the legal system in order to successfully regulate robots. In this post, I’m going to try to analyse the argument he offers in support of this view. I’ll do so in three parts. First, I’ll talk about the concept of “exceptionalism” and the moderate form that Calo defends. Second, I’ll formalise his main argument for moderate exceptionalism. And third, I’ll proffer some critical reflections on that argument.


1. The Exceptionalism Debate in Law
People throw the concept of exceptionalism around in many debates. The most tedious is probably the debate about American exceptionalism. Wherever the term is used, it captures the notion that some phenomenon is unique, unparalleled or qualitatively distinct from related phenomena. So, for example, in the debate about American exceptionalism, the exceptionalist claims that the United States of America is a unique or unparalleled nation among other nations.

How does exceptionalism apply to debates about technology and the law? It applies by capturing the notion that certain technologies pose unique challenges for the law, and hence require special or specific laws, and maybe even special legal institutions. To give an example, most legal systems have laws in place that deal with harassment and bullying. These laws have been around for a very long time. The types of harassment and bullying that lawmakers had in mind when such laws were first drafted were the sorts that took place in the Real World. That is: the kind that involved some unwanted physical confrontation or intimidation, or pestering via old systems of post and print media. Many people now wonder whether we need special laws to deal with cyberbullying and cyberharassment. In fact, some legal systems (e.g. Nova Scotia) have created such laws. The motivation behind these special laws is that cyberbullying and cyberharassment are qualitatively different from the more traditional forms. Consequently, new laws are needed to deal with these forms of bullying and harassment. Others disagree with this push for new laws, arguing that the old laws are general enough to cover whatever is wrong with cyberbullying and cyberharassment.

We see here a debate between legal exceptionalists and, what we might call, legal universalists. The exceptionalists are pushing for more special laws to deal with what they perceive to be unique technological challenges. The universalists are more sceptical, thinking that most legal rules are general enough to cover most new technologies.

We might think of opinions being arrayed along a spectrum, with strong exceptionalists at one end and strong universalists at the other. The strong exceptionalists would argue that special laws are required for every new technology; the strong universalists would argue that general laws can be stretched to accommodate all new technologies. The correct view probably lies somewhere in between those two extremes.

That’s certainly what Calo tries to argue in his article. He thinks that moderate exceptionalism is appropriate when it comes to dealing with the legal implications of robotics. This moderate position is defined as follows:

Moderate Legal Exceptionalism: A new technology (X) warrants moderate legal exceptionalism if the mainstreaming of that technology would require systemic changes in the law or legal institutions in order to reproduce (i.e. preserve) or displace an existing balance of values.

This requires some further explanation. First, note the “mainstreaming” condition. For Calo, the need for moderate exceptionalism only kicks in once the use of a given technology becomes sufficiently widespread. How widespread is sufficiently widespread? It’s hard to say for sure. The internet is obviously in widespread use and this has required some legal changes in order to deal with its distinctive capabilities (e.g. changes to copyright law and laws on jurisdiction). Where do we stand in relation to robotics? It’s not clear, but we are definitely approaching the threshold point (if we have not already passed it).

Second, note how Calo stipulates that moderate exceptionalism is determined by the need for systemic legal changes in order to preserve or displace an existing set of values. The idea here is that the current legal system embodies certain value judgments. To use a simple example, the law on murder embodies the value judgment that murder is wrong and deserves some kind of harsh response. We hope that most of those value judgments are sound, but it is possible that they are not (the law is not always just or fair). Calo’s point is that certain technologies may force legal changes in order to preserve the existing (good) value judgments or, conversely, may bring into clearer relief the problems with other (bad) value judgments. Cyberbullying may be an example of this. Laws on bullying and harassment incorporate the value judgment that people do not deserve to be repeatedly intimidated or insulted by others. The fact that the internet allows for this to be done anonymously, persistently and from great distances, may require some legal changes. At least if we are to preserve the right not to be repeatedly intimidated or insulted.


2. The Argument for Moderate Robolaw Exceptionalism
With these clarificatory comments out of the way, we can proceed to consider Calo’s argument for moderate robolaw exceptionalism. The argument works like this:


  • (1) If the mainstreaming of a given technology would force the law to adopt systemic changes in order to preserve or displace an existing balance of values, then that technology requires moderate legal exceptionalism.
  • (2) The mainstreaming of robots would force the law to adopt systemic changes in order to preserve or displace an existing balance of values.
  • (3) Therefore, the mainstreaming of robots requires moderate legal exceptionalism.


There’s nothing particularly interesting about this argument yet. The first premise simply incorporates Calo’s stipulative definition of moderate exceptionalism and the second premise is unsupported. The devil is in the detail. How exactly might robots force systemic changes in order to preserve or displace existing values? Calo offers six suggestions. Each of these can be viewed as a reason to support the second premise:


  • (2.1) - Because robots blur the object/agent boundary they may belong to a unique ontological category that requires either a novel set of legal rules or a novel mix of old rules (e.g. some rules that typically apply to agents and some that apply to objects). For example, robots may be treated like persons when it comes to a legal action for damages, but not in other contexts (e.g. contract law).
  • (2.2) - Robots may force an increase in the use of strict liability rules. Strict liability arises whenever liability is imposed without the need to prove fault (e.g. intent or negligence). Calo’s claim is that traditional fault rules will be difficult to apply to manufacturers of robots (because robots will display emergent behaviours, not anticipated or programmed by their creators). Consequently, an expansion of strict liability rules will be needed to preserve the availability of compensation.
  • (2.3) - Robots may necessitate the existence of a new class of criminal offence, viz. the offence of putting technologies into play that are capable of causing specific harm and do actually cause specific harm. As Calo puts it, the rationale for such an offence might be “vindicating an injury in the eyes of society and providing a moral and pragmatic check on the overuse of dangerous technology without justification” (p. 141).
  • (2.4) - Robots may alter the relevance of the doctrine of foreseeability. In claims for personal injury it usually must be shown that the injury in question was “reasonably foreseeable”. But because robots will be designed to display emergent (not reasonably foreseeable) behaviours, there may be less role for such a doctrine when it comes to liability claims arising from robotic injury.
  • (2.5) - Robots may force a greater concern for risk mitigation within the law. Risk mitigation is the focus on identifying and developing strategies for minimising risks. Legal rules sometimes require this. In fact, this is already common in some areas of law — e.g. banking law. Calo’s point is that the risks associated with embodied quasi-agents may encourage a broader focus on risk mitigation across a range of legally-regulated industries.
  • (2.6) - Robots may require a new regulatory infrastructure. Speaking specifically about the US, Calo notes that at present robots are regulated by many different agencies (it depends on what the robot is being used for). He argues that there may be some need for a single regulatory authority (indeed, he makes the case for a Federal Robotics Commission at length elsewhere).


If you add these six reasons together with the previous statement of the argument, you end up with something that looks like this.





3. What should we make of Calo’s Argument?
This leads us to the question: is Calo’s argument for moderate exceptionalism persuasive? On the whole, I’m inclined to say “yes”. I think the widespread use of robots will force certain changes in the legal system. There will be some stretching and adjusting of legal rules, and possibly some novel ones too. Nevertheless, I want to offer some other comments on the argument Calo makes. These are not really intended as direct criticisms, but rather as critical reflections on the issue he raises.

I find it interesting that most of Calo’s examples highlight the ways in which robots might create liability-gaps. If you think about it, the implicit assumption underlying the concern with foreseeability (or strict liability) is that robots will be introduced into the world, they will harm or injure people, and no one will be held liable for the injuries they cause. This is upsetting to present values because the victims deserve compensation for the injuries suffered. Consequently, rules must be created or stretched to ensure that this liability gap does not arise.

It strikes me that there are other interesting “gaps” that might be created by the widespread use of robots. For example, they might create a broader “accountability-gap” (where this extends beyond mere impositions of liability). This could happen if it becomes difficult to hold social actors to account for their decision-making because the decisions are made by robots. This is something I have discussed in relation to algorithmic decision-making before.

In addition to this, they might have an interesting effect on the desire for retributive justice. I assume that if robots fall short of full personhood, but are capable of engaging in sophisticated, novel and harmful behaviours, it will be difficult to hold them responsible for what they do in the manner demanded by proponents of retributive justice (i.e. robots won’t be morally culpable wrongdoers). At the same time, the manufacturers of the robots will fail to meet the criteria for retributive justice because the robot is too independent of them (or, alternatively, they will only satisfy a much lesser form of culpability). The result could be a “retribution-gap” in which people look for an appropriate target for retributive blame, but fail to find one.

What implications might this have? If you are less inclined toward the retributive view, you might welcome it. You might hope that the presence of the retribution gap will wean people away from the backward-looking retributivist view of justice, and draw them towards a more forward-looking consequentialist type of justice. But at the same time, you might worry about the studies suggesting that humans are compulsive, innate retributivists. They may struggle with the new system and end up finding inappropriate targets for their retributive blame (more scapegoats etc.). Either way, I think the social effect is worth thinking about.

Anyway, those are my, no doubt ill-conceived, reflections on Calo’s argument for moderate robolaw exceptionalism. I don’t disagree with the claim, but I think there are other interesting shifts that could be inaugurated by the robotics revolution.

Sunday, November 23, 2014

Critiquing the Kalam Cosmological Argument (Series Index)





The Kalam Cosmological Argument is one of the most widely-discussed arguments for the existence of God. Though it can be traced back to the work of Islamic theologians and philosophers, its most famous modern proponent is William Lane Craig. The basic argument can be stated like this:


  • (1) Whatever begins to exist must have a cause of its existence.

  • (2) The universe began to exist.

  • (3) Therefore, the universe has a cause of its existence.


Additional argumentation is then introduced to show why the cause must be an immaterial, eternal and personal being (i.e. God).

Is the argument any good? I have looked at several critiques of the argument over the years. I thought it might be useful to collect all of those discussions in one place. So that's exactly what I have done.


1. Must the Beginning of the Universe have a Personal Cause?
This four-part series of posts looked at an article by Wes Morriston, who is probably the foremost critic of the Kalam. In the article, Morriston argues that the first premise of the argument is flawed and, more importantly, that there is no reason to think that a personal being is required to explain the beginning of the universe. This series appeared on the blog Common Sense Atheism (when it was still running), so the links given below will take you there:



2. Schieber's Objection to the Kalam Cosmological Argument
Justin Schieber is one of the co-hosts of the Reasonable Doubts podcast, and a prominent atheist debater. Back in 2011 he offered a novel and interesting critique of the Kalam argument. Briefly, he cast doubt on the claim that God could have brought the universe into existence with a timeless intention. I tried to analyse and formalise this critique in one blog post:



3. Hedrick on Hilbert's Hotel and the Actual Infinite
The second premise of the Kalam is often defended by claiming that the past cannot be an actual infinite because the existence of an actual infinite leads to certain contradictions and absurdities. This is probably the most philosophically interesting aspect of the Kalam argument. One of the thought experiments Craig uses to support the argument is Hilbert's Hotel. In this series of posts, I look at Landon Hedrick's criticisms of this thought experiment.


4. William Lane Craig and the Argument from Successive Addition
Even if the existence of an actual infinite is not completely absurd, Craig argues that it is impossible to form an actual infinite by successive addition. But this is exactly what would be required if the past is without beginning. In this post, I look at Wes Morriston's criticisms of this argument:

5. Puryear on Finitism and the Beginning of the Universe
This post was part of my journal club. It looked at Stephen Puryear's recent, novel, objection to the Kalam. It is difficult to explain in a summary format, but suffice to say it provides an interesting, and refreshing, perspective on the debate:


6. Beginning to Exist and the Kalam Cosmological Argument
The central concept in the Kalam argument is that of 'beginning to exist'. But what does it mean to say that something begins to exist? This post looks at Christopher Bobier's interpretation of the phrase, arguing that there is no plausible interpretation that retains the intuitive appeal of the argument.



Monday, November 17, 2014

Podcast Interview - Review the Future on the Threat of Algocracy




I was interviewed on the latest episode of the Review the Future podcast. The interview dealt with the topic of algocracy, which is something I have looked at repeatedly over the past year. An algocracy is a state in which we are ruled by algorithms rather than human beings. I had a great time talking to the two hosts (Jon Perry and Ted Kupper), and I think we managed to explore most of the important aspects of this issue. Please check it out and let me know what you think:




Review the Future is a podcast that takes an in-depth look at the impact of technology on culture. As I say in the interview, I'm a big fan and I encourage everyone to listen. Here are some of my favourite episodes so far:





Monday, November 10, 2014

Is there a defensible atheistic account of moral values?



There are two basic types of ethical fact: (i) values, i.e. facts about what is good, bad, or neutral; and (ii) duties, i.e. facts about what is permissible, obligatory and forbidden. In this post I want to consider whether or not there is a defensible non-theistic account of values. In other words, is it possible for values to exist in a godless universe?

Obviously, I think it is, and I have defended this view in the past. But today I’m going to look at Erik Wielenberg’s defence of this position, as outlined in his excellent little book Robust Ethics. The view he defends can be called robust ethical non-naturalism. According to it, moral facts are non-natural and metaphysically basic. Wielenberg holds that this is true of all moral facts (i.e. duties as well as values) but I’m only going to focus on values for the time being.

Robust ethical non-naturalism is difficult to support in a positive way — i.e. in terms of arguments for its specific conclusions. It tends to be defended in a negative way — i.e. by showing how no other argument succeeds in defeating it. This makes sense given that it holds that ethical facts are metaphysically basic. Such facts tend to be those that are left standing after all attempts to reduce them to other facts or to argue against their existence seem to fail.

So it is no surprise that Wielenberg’s defence of the view is largely negative in nature. But this negative structure allows him to do something important: it allows him to show how robust non-naturalism provides an account of moral value that is — at the very least — no worse (and possibly a good deal better) than the theistic accounts that are commonly used against it. In particular, he shows how the account of moral value supported by Robert M. Adams, William Lane Craig and J.P. Moreland is vulnerable to many of the same objections they level against robust ethical non-naturalism. I am going to try to show how he does that in the remainder of this post.


1. A Brief Sketch of Robust Ethical Non-Naturalism
We need to start with a slightly more detailed understanding of robust ethical non-naturalism. The view relies heavily on the distinction between intrinsic and extrinsic value. Something is intrinsically valuable if it is good in and of itself (i.e. irrespective of its consequences and other extrinsic properties). Robust ethical non-naturalism holds that all moral value is ultimately rooted in a set of metaphysically basic, but intrinsically valuable states of affairs.

In fact, it goes further than this and holds that those intrinsically valuable states of affairs are necessarily good or bad. To take two examples: the experience of pain is deemed to be intrinsically and necessarily bad; while the experience of pleasure is deemed to be intrinsically and necessarily good. But these are only the most obvious examples. There are others. For instance, Wielenberg thinks that being in a loving relationship with another person is necessarily and intrinsically good.

Why think that these things are intrinsically good? Wielenberg admits that this is difficult to prove, but he follows other philosophers (GE Moore and Scott Davison) in suggesting that two tests are apposite.

The Isolation Test: Imagine that the phenomenon of interest (e.g. pain, or being in a loving relationship) exists in a simple, isolated universe (i.e. a universe in which all of the usual extrinsic accoutrements are stripped away). Does it still seem to have the value you originally attached to it?
The Annihilation Test: Imagine that the phenomenon of interest is completely annihilated (i.e. no trace of it is left in the universe). Is the universe now shorn of the value it had (i.e. does the universe seem better or worse off)?

Wielenberg argues that things like pain, pleasure or being in a loving relationship pass both of these tests. For example, if you imagine a universe in which nothing except your loving relationship exists, then it still seems like you have something that is good; and conversely, if you imagine a universe in which that loving relationship is completely annihilated, it seems like the universe is slightly worse off as a result. Consequently, being in a loving relationship seems like it is intrinsically good. This isn’t a water-tight argument, to be sure, but there is nothing obviously wrong with it.

What then of the necessity of such facts? Wielenberg thinks that all ethical properties arise, necessarily, from an underlying set of non-moral properties, in such a way that the non-moral facts make (cause to be) the moral facts. (I discussed this view of moral supervenience in a previous post). But this doesn’t mean that he thinks that all ethical facts are groundless and metaphysically basic. Some ethical facts are grounded in others. For example, the wrongness of torture could be grounded in facts about the badness of pain and the moral status of sentient beings.

That said, Wielenberg does not think that ethical facts can be reduced to non-moral facts. Indeed, he thinks that there are several problems with the notion that ethical facts can be reduced in such a manner (problems discussed by the likes of David Hume, GE Moore and Mark Schroeder). So instead, he holds that there is a set of necessarily true, and metaphysically basic ethical facts from which all others proceed. These are likely to include things like the intrinsic badness of pain; the goodness of love; the badness of injustice; and so forth.

That, in a nutshell, is Wielenberg’s account of moral value. The question now is how it stacks up against theistic alternatives.


2. Robert Adams’s Theistic Account of Value
Not all theists think that God accounts for moral facts. For instance, Richard Swinburne has famously argued that certain foundational ethical truths are analytic in nature, and so do not depend on God for their existence. For those theists who deny the connection between God and moral value, Wielenberg’s account may seem pretty attractive.

But there are others who insist that God is the origin of all things, including moral facts. For them, Wielenberg’s account represents a challenge. To see whether they can fend off that challenge, we must first consider the view they themselves hold. There is, of course, no single view that garners universal approval, but the one that is typically trotted out is Robert M. Adams’s account from Finite and Infinite Goods. This is used by William Lane Craig and JP Moreland in their defence of the Christian worldview.

Adams tries to offer an account of three phenomena: (i) the Good, which is the transcendent and perfect form of goodness; (ii) finite goodness, which is the type of goodness we find in our world; and (iii) moral obligations. We’ll ignore the third for now and focus on the first two.

According to Adams, the Good is simply equivalent to God’s divine nature. In other words: Good = God. The divine nature simply is the transcendent and perfect paradigm of goodness. This is an identity claim, not an explanatory claim or a semantic claim. Adams is not saying that the divine nature explains goodness or that the term “Good” is semantically equivalent to the term “God”. In fact, Adams models his “Good = God” claim after another identity claim, the “Water = H2O” claim. We are all now familiar with this latter identity claim. It tells us that the substance we call water simply is the molecule captured by the chemical formula H2O. That molecule does not explain the existence of water, nor are references to H2O semantically equivalent to references to water. It is just that the latter is equivalent to the former. So it is with Good = God.

Adams’s account of finite goodness then builds upon this identity claim. In brief, Adams holds that all finite goods — like the goodness of a loving relationship — are such because of their resemblance to the divine nature. We can say that a relationship is good because it bears a resemblance to one of God’s key attributes. This is a particular account of moral supervenience — the resemblance account — that I outlined in a previous post. I offer the same diagram I offered there to illustrate how it works.



There are two important features of Adams’s account. First, like Wielenberg, Adams accepts the existence of certain metaphysically basic ethical facts. In Adams’s case those facts include things like “the Good exists, and that the Good is loving, that the Good is merciful and that the Good is just”. These facts are ethically basic because of the way in which Adams links God to the Good. Second, and related to this, Adams’s account does not provide a metaphysical foundation for the Good. Just as it would be nonsense to claim that H2O is the foundation of water; so too would it be nonsense to claim that God is the foundation of the Good. On the contrary, the Good has no foundation on Adams’s account because, like most theists, he thinks that God has no metaphysical foundation (He just is). Hence, facts about his nature are ethically basic facts.

As we shall now see, Wielenberg exploits these features in his defence of the atheistic view.


3. Does the Atheistic View make Sense?
While Robert Adams is himself open to the possibility of values in a non-theistic universe, other prominent Christian philosophers are more closed. William Lane Craig, for instance, argues that without God there can be no moral value. Furthermore, he explicitly relies on Adams’s account of goodness in defending his position. But what is it that the atheistic view lacks that Adams’s view has?
In a passage written with fellow Christian philosopher JP Moreland, Craig makes the case:

Atheistic moral realists affirm that objective moral values and duties do exist and are not dependent on evolution or human opinion, but they also insist that they are not grounded in God…They just exist. It is difficult, however, even to comprehend this view. What does it mean to say, for example, that the moral value justice just exists? It is hard to know what to make of this. It is clear what is meant when it is said that a person is just; but it is bewildering when it is said that in the absence of any people, justice itself exists. Moral values seem to exist as properties of persons, not as mere abstractions — or at any rate, it is hard to know what it is for a moral value to exist as a mere abstraction. Atheistic moral realists seem to lack any adequate foundation in reality for moral values but just leave them floating in an unintelligible way. 
(Craig and Moreland 2003, 492 - passage is repeated in many other writings by Craig).

We get from this that they are incredulous at the notion of robust ethical non-naturalism, but they don’t formulate their objections as an argument. For ease of analysis, I will try to rectify this. I think what they are saying can be re-interpreted in the following way:


  • (1) If an account of moral values entails that moral values (i) “just exist”, (ii) are not properties of persons, and (iii) float free of metaphysical foundation, then that account is false (or inadequate).
  • (2) Robust ethical non-naturalism entails (i), (ii) and (iii).
  • (3) Therefore, robust ethical non-naturalism does not provide an adequate account of moral value.


The obvious corollary to this is that a theistic account can provide an adequate account. But is that right?

Wielenberg argues that it isn’t. Let’s start with the claim that on robust ethical non-naturalism moral values “just exist”. Is this right? Sort of. As we saw above, Wielenberg thinks that there is a set of metaphysically basic ethical facts. These facts are necessarily true because they necessarily supervene on certain non-moral facts. There is nothing more to be said about them. But that doesn’t differentiate Wielenberg’s account from Adams’s. After all, on Adams’s account God just exists, and facts about His nature are equivalent to metaphysically basic facts. So if the “just exists” condition undermines robust ethical non-naturalism, it must also undermine the theistic view, since the divine nature just exists.

What then of the claim that ethical non-naturalism denies the fact that values are properties of persons? Wielenberg points out that this view has little to recommend it. For starters, Adams’s view also entails that values are not properties of persons. Adams says that the Good = God, not that goodness is a property of God. In other words, he is claiming that the Good is a person, not a property of a person (if it were then it would be a mere abstraction). So, again, if (ii) really is a criticism of robust ethical non-naturalism, it must also be a criticism of the theistic view. In any event, it seems silly to insist that values must be properties of persons. As environmental ethicists have long pointed out, it is arguable that values supervene on states of affairs concerning animals and the natural environment that have no persons involved in them.

Finally, what of the claim that on Wielenberg’s view values float free of a metaphysical foundation? This is true, but it is, once again, also true of Adams’s view. As I outlined above, Adams does not think that God provides a metaphysical foundation for the Good. God is the Good, like water is H2O. Furthermore, there is nothing deeply mysterious or unintelligible about the account that Wielenberg is proposing. His view rests on the notion that values necessarily supervene on states of affairs and the non-moral properties of those states of affairs. Consequently, his view is no more unintelligible than any metaphysical view that posits the existence of states of affairs and properties (which is pretty much all of them). As he puts it himself:

With respect to justice, my view is that there are various obtaining states of affairs concerning justice, and that when individual people have the property of being just, it is (in part) in virtue of the obtaining of some of these states of affairs. For instance, I hold that it is just to give people what they deserve, thus, anyone who gives others what they deserve thereby instantiates the property of justice. The state of affairs that it is just to give people what they deserve obtains whether or not any people actually exist, just as various states of affairs about dinosaurs obtain even though there are no longer any dinosaurs….This approach is perfectly intelligible and no more posits mysterious, floating entities than does any view committed to the existence of properties and states of affairs. 
(Wielenberg 2014, 46)

Craig and Moreland’s critique is, consequently, unpersuasive. It does nothing to support the Christian worldview over the atheistic one.





4. Conclusion
To briefly recap, I have tried in this post to answer the question “Is there a defensible atheistic account of moral value?”. I have used Erik Wielenberg’s work on robust ethical non-naturalism to answer that question. According to robust ethical non-naturalism, certain moral values necessarily supervene on certain states of affairs. Some of these values are metaphysically basic (e.g. the goodness of pleasure; the badness of pain etc.). They are not founded in a deeper set of ethical or non-ethical facts.

This account of moral value is intelligible and certainly no worse than Robert Adams’s beloved theistic account of moral value. Indeed, the criticisms levelled against the atheistic view by the likes of William Lane Craig and JP Moreland can easily be turned back on the theistic view they themselves defend. Both views hold that certain ethical values “just exist”, that the values are not always properties of persons, and that the values float free from a deeper metaphysical foundation.

Friday, November 7, 2014

Three Types of Moral Supervenience


This post will share some useful conceptual distinctions, specifically ones that help us to better understand the tricky notion of moral supervenience. I take the distinctions from Erik Wielenberg’s recent book Robust Ethics, which should be read by anyone with an interest in metaethics.

As you know, metaethics is about the ontology and epistemology of morality. Take a moral claim like “torturing innocent children for fun is wrong”. A metaethicist wants to know what, if anything, entitles us to make such a claim. On the ontological side, they want to know what it is that makes the torturing of innocent children wrong (what grounds or explains the ascription of that moral property to that event?). On the epistemological side, they wonder how it is that we come to know that the torturing of innocent children is wrong (how do we acquire moral knowledge?). Both questions are interesting — and vital to ask if you wish to develop a sensible worldview — but in discussing moral supervenience we are focused primarily on the ontological one.

I’ll break the remainder of this discussion into two parts. First, I’ll give a general overview of the problem of moral supervenience. Second, I’ll share Wielenberg’s taxonomy of supervenience relations.


1. The Supervenience Problem in Brief
Supervenience is a type of metaphysical relationship that exists between different sets of properties. It is defined by the Stanford Encyclopedia of Philosophy as follows:

Supervenience: A set of properties A supervenes upon another set B just in case no two things can differ with respect to A-properties without also differing with respect to their B-properties. In slogan form, “there cannot be an A-difference without a B-difference”.
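The slogan also has a standard formalization in the supervenience literature. The notation below is my own gloss on the SEP definition (a common textbook rendering, not a quotation):

```latex
% A-properties supervene on B-properties: any two things that are
% B-indiscernible are also A-indiscernible.
\forall x \, \forall y
  \Big[ \big( \forall B \in \mathcal{B} : Bx \leftrightarrow By \big)
  \rightarrow
  \big( \forall A \in \mathcal{A} : Ax \leftrightarrow Ay \big) \Big]
% Contrapositive reading: an A-difference between x and y entails
% some B-difference -- "no A-difference without a B-difference".
```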

This is difficult to understand in the abstract. So let’s consider a concrete example (in fact, let’s consider the most common example given in the literature). Suppose you have two paintings. One is Van Gogh’s original Starry Night and the other is a forgery. Suppose that the forgery is a perfect replica of the original. Every physical detail on the canvas is identical, down to the individual brushstrokes and splotches of paint. The only difference is that the canvases were painted by different people.

In such a case, there are certain properties of the paintings (their form; their colour; and what they represent) that are supervenient on the physical properties of the canvases (the brushstrokes, the splotches of paint etc.). The first set of properties are supervenient because they are invariant across both canvases. In other words, it is because the physical properties of the canvases are the same that the more abstract properties must be the same. If the abstract properties of form, colour and representation were different, then it would have to be because there is some difference in the physical properties. This is what is captured in the slogan “there cannot be an A-difference without a B-difference”.

(Just so we are clear, this does not mean that the paintings share all of their abstract properties. They were painted by different people, and this would have an effect on an abstract property like “value”. It just means that some of the abstract properties are invariant because they supervene on the physical properties of the canvas.)

Why is this relevant to metaethics? It is relevant because moral properties are typically said to supervene on non-moral properties (typically, physical or natural properties). Take the earlier example of “Torturing an innocent child for fun is wrong”. Most people take that claim to mean that the moral property of wrongness supervenes on a certain set of natural properties (specifically, the act of inflicting pain on a child for the purposes of amusement). So imagine there were two such acts of torture. If the natural properties varied in some way — for example, if the child did not experience pain and it was not done for amusement — then there may be some variation in the moral property (I say “may be” because wrongness could be ascribed to many different natural events). But if the natural properties are the same in both cases, then they are both wrong. This is thought to hold for all moral properties, like goodness, badness, rightness and wrongness.



The supervenience of the moral on the non-moral is generally thought to give rise to a philosophical puzzle. JL Mackie famously argued that if the moral truly did supervene on the non-moral, then this was metaphysically “queer”. We are owed some plausible account of why this happens. He didn’t think we had such an account, which is one reason why he was a moral error theorist. Others are less pessimistic. They think there are ways in which to account for moral supervenience.


2. Three Accounts of the Supervenience Relation
This is where Wielenberg’s taxonomy comes into play. He suggests that there are three main ways in which to account for the supervenience relation between moral and non-moral properties (what he calls “base” properties).

The first is to adopt a reductive account of supervenience. Here, you simply argue that the moral properties are completely constituted by the base properties, to such an extent that they reduce to the base properties. Certain naturalist metaethical theories take this form. For example, Frank Jackson argues that all moral statements ultimately reduce to descriptive statements about natural entities, events and states of affairs. That is: moral statements are nothing but particular kinds of descriptive statements. Wielenberg refers to this as R-supervenience.


The second account is slightly more complicated, but can be referred to as the exemplar or resemblance account. According to this, there is a tripartite relationship between the moral properties, the base properties and another set of necessarily existent properties. The relationship is such that the base properties resemble or exemplify the other set of necessarily existent properties. This account is commonly adopted by theists, with Robert M Adams’s theory of finite goodness being a prime example. Adams argues that the moral goodness of, say, a person is attributable to the fact that the person resembles God’s nature (where God is a necessarily existent being). So in this case, the characteristics of the person are the base properties; God’s nature is the set of necessarily existent properties; and goodness is the moral property. The supervenience of the moral property on the base properties arises because of the resemblance between the base properties and God’s nature. Wielenberg calls this A-supervenience (in honour of Adams).





The third account is the making account. According to this, moral properties supervene on base properties because the base properties make the moral properties. That is to say: the base properties explain the existence of the moral properties. This is not a reductive account because the two sets of properties are still deemed to be distinct; but it is an explanatory account because the former explain the latter. Wielenberg identifies two types of moral explanation: grounding explanations (which ground, but do not reduce, one set of properties in another) and causal explanations (which explain how one set of properties causes the existence of another). Wielenberg himself prefers causal explanations, mainly because there is much ambiguity as to the precise nature of grounding explanations. But causal explanations of moral supervenience have their problems too. Since moral supervenience is thought to be a necessary relationship, causal explanations are deemed inapt. This is because causation is generally thought to cover contingent relationships, not necessary ones. Wielenberg says this is wrong, and that a causal explanation of morality may be possible. I won’t get into the intricacies of that argument now. You can read the book if you want the details.



Some people may argue that the making account is not really about supervenience at all. This is correct, to some degree. The making account goes beyond supervenience by trying to provide an explanation for the existence of moral properties. It is a more robust type of metaphysical relationship, one that is often confused with pure supervenience. Think about it like this: pure supervenience is about modal co-variation, i.e. how properties remain the same across different entities and/or possible worlds. There could be some properties that co-vary across possible worlds without standing in an explanatory relationship to one another (i.e. not all necessary relationships have explanations). But Wielenberg thinks that moral properties probably do, and that the challenge in metaethics is to provide a making-account of moral supervenience. This is interesting to me because I have written an entire paper arguing that explanations of certain moral properties may not be required.

Anyway, Wielenberg calls the making account D-supervenience in honour of another philosopher, Michael DePaul. You can ignore the name if you like.


3. Conclusion
That’s all I wanted to say in this post. To briefly recap, moral properties are commonly believed to supervene on non-moral properties. The existence of this supervenience relation is thought to be puzzling, and so many philosophers think we are owed some account of how it comes to be. Wielenberg suggests that there are three accounts that we could give: (i) a reductive one, according to which moral properties are nothing but non-moral properties; (ii) a resemblance one, according to which moral properties supervene on non-moral properties because the latter resemble some third set of necessarily existent properties; and (iii) a making one, according to which non-moral properties explain the existence of moral ones.

Wednesday, November 5, 2014

The Legal Challenges of Robotics (1)


Baxter robot


We are entering the age of robotics. Robots will soon be assisting us in our homes; stacking our warehouses; driving our cars; delivering our Amazon purchases; providing emergency medical care; and generally taking our jobs. There’s lots to ponder as they do so. One obvious question — obvious at least to lawyers — is whether the age of robotics poses any unique challenges to our legal system.

That’s a question Ryan Calo tries to answer in his article “Robotics and the Lessons of Cyberlaw”. He does so by considering the lessons learned from the last major disruptive technology: the internet. When it was originally introduced in the late 80s and early 90s, the ultimate fate of the internet was uncertain (and still is, to an extent). Nevertheless, it clearly created new opportunities and new challenges for the law. Some of those challenges have been dealt with; some have not.

Robots are distinct from the internet. Although they may be integrated into it — and thus form part of the ever-expanding internet-of-things — they have a number of unique technological properties. Still, Calo thinks there is something to be learned from the internet era. Over the next couple of posts, I want to see what he has to say.

I start today by looking at his take on the distinctive properties of robots vis-a-vis the distinctive properties of the internet. This takes the form of a compare-and-contrast exercise. I begin by considering Calo’s take on the three key features of the internet, and the challenges and opportunities created by those three features. I then follow up by looking at his take on the three key features of robotics, and the challenges and opportunities they pose. I won’t offer much in the way of evaluation and criticism, except to say that I think there is much to mull over in what Calo has to say. Anyone with an interest in the social implications of robotics should be interested in this.


1. Three Key Features of the Internet and the Challenges they Pose(d)
There are a number of technical and not-so-technical definitions of the “internet”. A technical definition might say that “the internet switches ‘packets’ of data between nodes; it leverages a set of protocols to divide digital information up into separate containers and to route those containers between end points for reassembly and delivery” (Calo 2014, 106). A not-so-technical definition might talk in terms of “information superhighways” or the creation of “cyberspaces” in which information is exchanged.

Whichever definition you use, the internet (according to Calo) has three distinctive features:

Connection: The internet allows for “promiscuous and interactive flows of information” (Calo 2014, 107). Anyone, anywhere can access the same sorts of information as anyone else. What’s more, this can be done at low cost (much lower than old systems for information exchange), and the system enables people to be information producers, as well as consumers. For example, the internet allows me to produce this blog and for you to read it.

Collaboration: The internet allows for the creation of shared virtual meeting places. Within these virtual spaces people can collaborate on various projects, e.g. producing text, video, software and so on. These meeting places also serve as salons for debate, discussion and other kinds of collaborative conversation. For example, this blog creates a virtual salon, though the volume of debate and discussion is relatively minimal in comparison to other forums (e.g. more popular blogs; discussion boards; reddit).

Control: The internet allows for either new forms of control and manipulation, or more exquisite versions of existing forms of control and manipulation. In other words, people now have a medium for controlling certain aspects of their lives with more precision or in a manner that wasn’t previously available to them. A simple example of this would be the way in which the internet facilitates shopping. With online shopping I am given much more freedom and control over my shopping experience (time, product, place etc) than is the case with traditional high-street shops. Another example would be how virtual learning environments (like Blackboard and Moodle) allow me to create and share information about the courses I am teaching with the students I teach in a much more user-friendly and expansive form.

These three features bring with them a set of opportunities and challenges. The challenges are particularly important from a legal perspective because they tend to stretch traditional legal rules to breaking point. That may be a good thing, if the rules protect interests that don’t deserve to be protected; but it might also be a bad thing, if legitimate interests are protected by the rules but the rule is ill-equipped for the characteristics of the internet. There’s no point talking about this in the abstract though. Let’s go through each of the challenges and opportunities.

First, with regard to connection, it’s clear that this has tremendous potential for the sharing, copying and production (“democratisation”) of information. I, for one, am very glad to have all the knowledge of the world at my fingertips. It makes research, writing and dissemination of my own work so much easier to do. Likewise, in the commercial context, it allows for nimble, internet-savvy startups to take over from the lumbering behemoths of the corporate world. But it is clearly not good news for all. The internet makes it easy for artists to create and promote their work, but difficult to protect their property rights in that work. This is because the traditional intellectual property rules were not designed to deal with a world in which information is so readily copied and shared. Indeed, it is not clear that any set of legal rules can effectively deal with that problem (though there are some models, e.g. Creative Commons, DMCA). Likewise, the promiscuous flow of information makes it much harder to protect rights to privacy. We all now leave digital “trails” through cyberspace that can be followed, stored and manipulated. This is something that is subject to increasing scrutiny, and some laws are in place to deal with it, but again the technology stretches the traditional regimes to breaking point.

Moving on to collaboration, it is pretty obvious how this could be positive. Creating communities that allow for collaborative work and conversations can benefit individuals and society. But it also creates problems. Legally, the sorts of collaborative work done online can create issues when it comes to responsibility and liability. For example, who is responsible for creating defamatory publications (videos/text) when they are produced through some online (often anonymous) collaborative endeavour? Or who is responsible for defective non-commercial software? To some extent, we follow traditional legal rules in relation to authorship and control, but it’s not clear that they are always appropriate. Another obvious problem with collaboration is that the internet allows groups to work together for good and ill. Criminals and terrorists can create sub-regions within cyberspace in which they can promote nefarious ideologies and plan coordinated attacks.

Finally, in relation to control, there are obvious benefits to be had here in terms of autonomy and individual choice. We can now do more things and access more goods than we ever could before. But at the same time, technological elites (including both corporate and governmental entities) can use the same technology to monitor and control our activities. This creates problems when it comes to individual and collective rights (e.g. tradeoffs between individual choice and state security). These are issues that have surfaced repeatedly in recent years.


2. The Three Key Features of Robotics and the Challenges they Pose
Calo argues that robotics has three key features too and that identifying them can help to illuminate the challenges and opportunities of the robotics era. I’ll talk about those three features in a moment. First, I must note some of the restrictions Calo imposes on his own analysis. It is common in philosophical and futurist circles to discuss the classic science fiction questions of whether a robot could be conscious, whether it could possess human-level intelligence, whether it could qualify for personhood and so on. These are fascinating issues, no doubt about it. But Calo avoids them. As he likes to put it, he is a conservative about the technology and a radical about its social implications. In other words, he thinks that robotics technology doesn’t have to reach the level of sophistication required for potential personhood (or whatever) to have major social implications. Much more mundane robots can pose challenges for the legal system. He wants to focus on those more mundane examples.

With that in mind, we can look at the three key features of (more mundane forms of) robotics technology:

Embodiment: Robots will be mechanical agents that perform actions in the real world. Unlike artificially intelligent software programs that send outputs to some screen or digital signalling device, robots will have a more diverse set of actuators that allow them to do things in the real world. For example, a military drone can actually fly and deliver a payload to a target; a robot vacuum cleaner can move around your house, sucking up dirt; a robot worker like Baxter can lift, sort and otherwise manipulate physical objects. The list goes on and on. You get the basic idea.

Emergence: Robots will not simply perform routine, predictable actions. The gold-standard from now on will be to create robots that can learn and adapt to circumstances. This will result in “emergent” behaviour. Emergent in the sense that the behaviour will not always be predicted or anticipated by the original creators. Calo prefers the term “emergent” to the more commonly-used “autonomous” because the latter is too closely associated with human concepts such as intent, desire and free will.

Social Meaning: This is a little more obscure than the other two. Calo points out that humans will have a tendency to anthropomorphise robots and imbue them with greater social meaning, perhaps more than we do with various software programs. He cites Julie Carpenter’s work on attachment to bomb disposal robots in the military as an example of this. Carpenter found that operators developed relationships with robots that were somewhat akin to the relationships between humans and beloved pets. More generally, robots threaten to blur the object-agent distinction and may belong in a whole new ontological category.


We can easily imagine ways in which these three features could be used to good effect. Embodiment allows robots to act in ways that humans cannot. For example, robo-surgeons could perform surgery with a level of precision and reliability that is not available to human beings. Likewise, emergence creates exciting possibilities for robots to adapt to challenges and engage in creative problem-solving. Finally, with social meaning, robots can be used to substitute not simply for physical labour but also for emotional and affective labour (e.g. robot carers).

These three features also pose challenges. I’ve discussed some non-legal ones before, such as the threat to employment. Here, I’ll focus on the legal ones.

First, in relation to embodiment, Calo points out that the law has, traditionally, been much more concerned when activities result in physical (“tangible”) effects than intangible ones. This is something that has shielded internet companies from many forms of liability. Because internet companies trade in intangible information, they are exempt from many product liability laws (Calo cites some specific US statutes in support of this point). This shielding will no longer be possible with robots. Robots can act in the real world and their actions can have real physical effects. They are much more likely to rub up against traditional product liability rules. (Calo makes a more esoteric point as well about how robots blur the boundary between information and products — i.e. that they are information embodied. I’m ignoring that point here because it gets into an analogy with 3-D printing that would take too long to flesh out).

Second, with emergence, certain challenges are posed when it comes to those traditional product liability rules. If a robot’s code is the result of collaborative effort, and if its behaviour involves some degree of learning and emergence, questions can rightly be asked about who is liable for the harm that results from the robot’s actions. It is not like the case of a faulty toaster: there is much more disconnect between the human creator(s) and the “faulty” robot. Indeed, there are already cases that test traditional liability rules. Calo gives the example of a tweetbot created by Stephen Colbert that uses a simple algorithm to produce tweets about Fox News anchors. If written by a human being, the tweets could give rise to claims in defamation. What will happen when robots do things which, if performed by a human, would clearly give rise to liability? This is, perhaps, the classic question in robolaw, one that people have talked about for decades but which is fast becoming a practical problem. (It should also be noted that emergence presents challenges for IP law and ownership rights over products. If you damage a robot, are you liable to someone for the damage caused?)

Finally, with social meaning, and the associated blurring of the object-agent distinction, we get other interesting challenges to existing legal regimes. If robots are imbued with human-like meaning, it will become much more common to blame them and praise them for what they do, which may in turn affect liability rules. But it will raise other issues too. For example, robot care workers in the home could create a greater sense of comfort, but also of intrusion and surveillance: it will be like we are being watched and scrutinised by another human being. Another example has to do with the way in which human contact has traditionally affected the operation of the law. For instance, it has been found that patients are less likely to sue for malpractice if they meet with their doctor for longer periods of time and get a sense that he/she is competent. What will happen if patient care is delivered by robots? Will patients be less likely to sue if they meet with a robo-surgeon prior to surgery? Should such meetings be factored in by hospitals?

These are all interesting questions, worth pursuing in more detail.


3. Conclusion
That brings us to the end of this post. To quickly recap, the distinctive features and challenges of robotics are not the same as the distinctive features and challenges of the internet. The internet was characterised by connection, collaboration and control; robotics is characterised by embodiment, emergence and social meaning. Despite this, they both pose similar kinds of challenges for the law. Where the internet stretched and threatened pre-existing legal regimes of ownership, privacy and liability, robotics is likely to do the same, albeit in a different way. Because of their physical embodiment and social meaning, robots may initially seem to “fit” within traditional legal rules and categories. But because of their distinct ontological status, they will force us to confront some of the assumptions and limitations underlying those rules and categories.

All this raises the question: is there something about the legal challenges posed by robotics that demand novel or exceptional legal analysis? That's a question I'll take up in part two.

Sunday, November 2, 2014

The Philosophy of Intelligence Explosions and Advanced Robotics (Series Index)


Hal, from 2001: A Space Odyssey


Advances in robotics and artificial intelligence are going to play an increasingly important role in human society. Over the past two years, I've written several posts about this topic. The majority of them focus on machine ethics and the potential risks of an intelligence explosion; others look at how we might interact with and have duties toward robots.

Anyway, for your benefit (and for my own), I thought it might be worth providing links to all of these posts. I will keep this updated as I write more.


  • The Singularity: Overview and Framework: This was my first attempt to provide a general overview and framework for understanding the debate about the technological singularity. I suggested that the debate could be organised around three main theses: (i) the explosion thesis -- which claims that there will be an intelligence explosion; (ii) the unfriendliness thesis -- which claims that an advanced artificial intelligence is likely to be "unfriendly"; and (iii) the inevitability thesis -- which claims that the creation of an unfriendly AI will be difficult to avoid, if not inevitable.

  • The Singularity: Overview and Framework Redux: This was my second attempt to provide a general overview and framework for understanding the debate about the technological singularity. I tried to reduce the framework down to two main theses: (i) the explosion thesis and (ii) the unfriendliness thesis.


  • AIs and the Decisive Advantage Thesis: Many people claim that an advanced artificial intelligence would have decisive advantages over human intelligences. Is this right? In this post, I look at Kaj Sotala's argument to that effect.

  • Is there a case for robot slaves? - If robots can be persons -- in the morally thick sense of "person" -- then surely it would be wrong to make them cater to our every whim? Or would it? Steve Petersen argues that the creation of robot slaves might be morally permissible. In this post, I look at what he has to say.

  • The Ethics of Robot Sex: A reasonably self-explanatory title. This post looks at the ethical issues that might arise from the creation of sex robots.



  • Bostrom on Superintelligence (2) The Instrumental Convergence Thesis: The second part in my series on Bostrom's book. This one examines the instrumental convergence thesis, according to which an intelligent agent, no matter what its final goals may be, is likely to converge upon certain instrumental goals that are unfriendly to human beings.








  • Is anyone competent to regulate AI? - Second post looking at Matt Scherer's work. This one looks at the three main regulatory bodies in any state (the legislature; specific regulatory agencies; and the courts) and examines their competencies. It ends with a brief evaluation of Scherer's proposed regulatory model.