Tuesday, July 21, 2015

Epistemology, Communication and Divine Command Theory


I have written about the epistemological objection to divine command theory (DCT) on a previous occasion. It goes a little something like this: According to proponents of the DCT, at least some moral statuses (like the fact that X is forbidden, or that X is bad) depend for their existence on God’s commands. In other words, without God’s commands those moral statuses would not exist. It would seem to follow that in order for anyone to know whether X is forbidden/bad (or whatever), they would need to have epistemic access to God’s commands. That is to say, they would need to know that God has commanded X to be forbidden/bad. The problem is that there is a certain class of non-believers — so-called ‘reasonable non-believers’ — who don’t violate any epistemic duties in their non-belief. Consequently, they lack epistemic access to God’s commands without being blameworthy for lacking this access. For them, X cannot be forbidden or bad.

This has been termed the ‘epistemological objection’ to DCT, and I will stick with that name throughout, but it may be a bit of a misnomer. This objection is not just about moral epistemology; it is also about moral ontology. It highlights the fact that at least some DCTs include a (seemingly) epistemic condition in their account of moral ontology. Consequently, if that condition is violated it implies that certain moral facts cease to exist (for at least some people). This is a subtle but important point: the epistemological objection does have ontological implications.

Anyway, in this post I want to take another look at this so-called epistemological objection. I do so through the lens of Glenn Peoples’s article, simply entitled ‘The Epistemological Objection to Divine Command Ethics’. Peoples is a theist and a proponent of DCT (or so I believe). He thinks that the epistemological objection fails. His paper focuses on two versions of the objection and two versions of DCT. The first version of the objection he views as being ‘crude’; the second is slightly more sophisticated and comes from work done by Wes Morriston.

I’m going to ignore what Peoples says about the ‘crude’ versions. I tend to agree that they are crude and, frankly, uninteresting. So I’ll focus on Morriston’s version instead. As will become clear, I am much more favourably disposed to Morriston’s line of argument than Peoples seems to be. I will try to explain why as I go along.

I’ll do so in three parts. First, I’ll try to explain the differences between the two versions of DCT mentioned in Peoples’s article. Second, I’ll outline and analyse Peoples’s argument for thinking that the epistemological objection fails in the case of the first version of the DCT. And third, I’ll outline and analyse his argument for thinking that it fails in the case of the second version of DCT. I’ll offer my own responses in each section.


1. Two Versions of Divine Command Theory
Sloppy terminology is abundant in philosophy. This is a real shame since it often means that participants in philosophical debates end up talking past each other. This is particularly true in debates about DCTs, where several of the theories that are grouped under that heading are not really properly called ‘command’ theories at all.

Obviously, DCTs all share the claim that certain (perhaps all) moral statuses depend on God in some way. On a previous occasion I followed Erik Wielenberg’s suggestion and drew a distinction between two classes of these divine-dependency theories. The first, and more general, class is that of ‘theological stateism’. All theories in this class claim that certain moral statuses depend for their existence on one or more of God’s states of being (e.g. his nature, his beliefs, his desires etc). The second, and more narrowly circumscribed class, is that of ‘theological voluntarism’. Theories in this class claim that certain moral statuses depend for their existence on one or more of God’s voluntary acts (e.g. his willing or intending X; his commanding X). Voluntarist theories are a subset of stateist theories, and DCTs are a further subset of voluntarist theories. I have tried to illustrate this below.




Hopefully that is reasonably clear. Within the class of command theories, Morriston and Peoples introduce two further distinctions. They are:


Causal Divine Will Theories: These theories hold that some moral statuses (most commonly the status of being obligatory) are dependent for their existence on God’s willing that they be so. This sort of view was defended by Philip Quinn, and was referred to as a ‘command’ theory, but Morriston argues that it is not really about commands per se since on Quinn’s view the commands need not be communicated. Whether that is sufficient to disqualify it from being a ‘command’ theory is debatable. For now, I’ll view it as such.

Modified Divine Command Theories: These theories hold that some moral statuses (most commonly the status of being obligatory) are dependent for their existence on God’s commanding and communicating that they be so. This is the sort of view defended by Robert Adams and is, according to Morriston, properly called a ‘command’ theory since communication is essential to the creation of the particular moral status.


Adams’s view is worthy of further consideration here since it is quite popular among contemporary DCTers. I have discussed it on a few previous occasions. In essence, Adams thinks that axiological moral statuses (i.e. the status of being good or bad) do not depend for their existence on God’s commands. But he thinks that God’s commands are necessary for the creation of certain deontic moral statuses, in particular the status of being obligatory. Indeed, Adams argues that without commands from an authoritative agent we cannot know the difference between something’s being morally supererogatory (i.e. above and beyond our moral obligations) and morally obligatory. For instance, it might be a morally excellent thing for me to send half my income to charitable organisations in the developing world, but without an authoritative command we cannot say that it is obligatory.

Communication of commands is consequently essential to Adams’s theory since without being told (in some way) that X is obligatory we cannot know that it really is. This need for communication turns out to be important when assessing the strength of Morriston’s critique. I will return to it later.


2. The Epistemological Objection and Causalist Theories
Now that we have distinguished between these two versions of theological voluntarism, we can proceed to assess the strength of the epistemological objection in relation to each. We start with the causalist theory propounded by Quinn. Peoples argues that the epistemological objection has no real impact on this theory. I am less convinced of this.

We have to understand what he argues first. Peoples, following Quinn, argues that divine will theories are pure ontological theories. In other words, they do not incorporate an epistemic condition into their account of moral ontology. He doesn’t put it in these terms, but that’s the gist of it. To illustrate, he offers the following quote from Quinn on the epistemological objection:


Our theory asserts that divine commands are conditions causally necessary and sufficient for moral obligations and prohibitions to be in force. It makes no claims at all about how we might come to know just what God has commanded. For all the theory says, it might be that we can come to know what God has commanded by first coming to know what is obligatory and forbidden. After all, it is a philosophical truism that the causal order and the order of learning need not be the same. 
(Quinn 2006, 44-45)


Quinn is clear in this passage that his theory (unlike Adams’s) makes ‘no claims at all’ about moral epistemology. It only claims that an act of the divine will is necessary to bring moral obligations into existence. How people come to learn of those obligations is irrelevant. I have tried to illustrate this in the diagram below. The bit in the shaded box represents Quinn’s account of moral ontology; ordinary moral agents sit outside this box. They may come to know what the moral truths are, or they may not. This does not upset the plausibility of the underlying ontological theory.



Peoples seems to think that this is right. He thinks that if Quinn says his theory contains no epistemic conditions, then his theory contains no epistemic conditions. The epistemological objection has no foothold against such a theory. In saying this, Peoples is assisted by the fact that Morriston himself concedes that the objection has no impact on Quinn’s theory. I’m less convinced about this. For one thing, I don’t believe that the proponent of a theory is always the final arbiter of what that theory does or does not entail. For another, I believe that any plausible account of moral ontology probably has to include some implicit epistemic condition.

I am not alone in this belief. It seems to be pervasive in contemporary metaethics. I wrote a series of posts on this topic a few years back. In them, I looked at typical methodological approaches in metaethics. Oftentimes, proponents of a particular metaethical theory will assess that theory relative to a number of plausibility conditions, i.e. things that they think any good metaethical theory should account for. Included in those conditions there is usually something about how moral facts ‘join up’ with the reasoning capacities of moral agents. This typically requires some plausible account of how a moral agent comes to know what its relevant moral obligations are. A failure to account for this renders a theory less plausible. This is why there is so much discussion of debunking arguments in the literature. It is also why I wrote so much about those debunking arguments. For instance, in the debate between moral realists and moral anti-realists, some anti-realists argue that realism is implausible because it doesn’t explain how evolved beings like us could come to have knowledge of moral reality.

It could be that this approach to metaethics is fundamentally misconceived. But if it is not, then it seems like epistemic conditions must be folded into any plausible account of moral ontology. Thus, we should not be so eager to embrace Quinn’s statement that his theory ‘makes no claims at all’ about moral epistemology. It probably has to, if it is to be plausible.


3. The Epistemological Objection to Modified Command Theories
Let’s move on to Adams’s theory. As I mentioned above, Adams seems to concede that his account of moral ontology includes an epistemic condition. For him, moral obligations do not exist unless they are commanded and communicated to a moral agent by God. Remember that communication is necessary in order for the moral agent to be able to distinguish between what is supererogatory and what is obligatory. I’ve tried to illustrate this in the diagram below. You should be able to see from this how different Adams’s theory is from Quinn’s. Whereas Quinn leaves the agent’s awareness of the command out of his account of moral ontology, Adams incorporates it into his.




Morriston seizes upon this in presenting his version of the epistemological objection. It goes a little something like this:



  • (1) According to Adams, in order for X (or not-X) to be a moral obligation it must be commanded by God and communicated to the moral agent to whom it applies.
  • (2) In order for a command to X (or not-X) to be communicated to a moral agent it must be communicated via a sign that the agent is capable of identifying and understanding.
  • (3) A reasonable non-believer has no epistemic vices, but cannot identify and/or understand divine commands.
  • (4) Therefore, a reasonable non-believer cannot have moral obligations (under the terms of Adams’s theory).



We need to clarify certain aspects of this argument before we can evaluate it. First, we need to clarify the concept of a reasonable non-believer. A reasonable non-believer is someone who honestly searches for proof of God’s existence, but cannot find any evidence that brings them to believe. In doing this, the reasonable non-believer does not violate any epistemic duties. They are not bitter or biased or closed to potential sources of evidence. They simply cannot find any. The reasonableness of these non-believers is crucial to Morriston’s argument. We can safely assume that Adams’s theory does not require that commands be understood by the insane or the morally evil. It is only those who are epistemically open that are affected. Another point of clarification is that the conclusion of the argument can be taken in a number of different ways. I like to use it to argue that the modified DCT fails to provide a fully plausible account of moral ontology. Others like to use it as something akin to a reductio of the modified DCT. In other words, they say things like ‘but of course reasonable non-believers have knowledge of moral obligations; therefore, the DCT is absurd’. Maybe there is no practical difference between these two positions. Just a difference in style.

Moving on to the evaluation of the argument, there is really only one premise that is at issue. That is premise (3). A proponent of the DCT could target the first part of premise (3) and argue that there is no such thing as a reasonable non-believer. Since I like to think of myself as a reasonable non-believer, I’m not inclined to accept that line of argument. But Peoples thinks there may be something to it, though he doesn’t discuss it at any great length. That leaves the second part of premise (3) as the other potential target. They could argue that a reasonable non-believer does in fact have the ability to identify and understand the relevant divine commands. To make this argument credible, they would need to offer a fuller account of what it means for an obligation to be communicated to a moral agent. This means they need to go back into premise (2) and flesh out the standard of communication that is being implied by that premise.

Now, in his discussion of the argument, Morriston seems to have a very narrow conception of the possible forms of divine communication. He seems to think that (on Adams’s theory) God must communicate his commands in the form of a speech act. Peoples, rightly in my opinion, argues that no proponent of the DCT has such a narrow conception of divine communication. Instead, they all talk about multiple possible forms of divine communication (e.g. via moral intuition, general revelation, special revelation, and natural law). So to make the epistemological objection compelling, you must show that communication fails across these multiple possible forms.

And this is where Peoples thinks the argument falls down. Morriston argues that in order to have the requisite knowledge of the divine command, the moral agent must know the source of the command. That is to say, they must know that the command emanated from God. But of course this is exactly what a reasonable non-believer cannot know. Peoples thinks this is wrong. He says they only need to have knowledge of the content of the command. To underscore his point, he relies on Adams’s brief sketch of what it takes for God to communicate a command to an agent:

Adams’s Communicative Standard: “In my opinion, a satisfactory account of [this standard] will have three main points: (1) A divine command will always involve a sign, as we may call it, that is intentionally caused by God; (2) In causing the sign God must intend to issue a command, and what is commanded is what God intends to command thereby; (3) The sign must be such that the intended audience could understand it as conveying the intended command.” (Adams, Finite and Infinite Goods).

Peoples makes much of condition (3). He points out that this condition says nothing about the agent needing to understand the source of the command:

“Adams did not say that a sign needs to be such that a person can understand that it conveys a divine command, but only that he can understand it as conveying “the intended command”. He does not even need to know that it is a command….In slogan form: People need knowledge of the command, not knowledge about the command.” 
(Peoples 2011)

He then goes on to give an example of how someone might know the content of a command without knowing its source:

“Consider for example the possibility that God conveys the ‘sign’ to people regarding some act (let’s pick murder) via a proper function of the human conscience. Nobody needs to know what conscience is, how we got one, or that God uses it to ensure that we have some true beliefs in order for them to know, via conscience, that murder is wrong.” 
(Peoples 2011)

What he is imagining here is a case in which someone has a really strong innate feeling that murder is forbidden, without knowing how or why they came to have it. Even still, God has successfully communicated his command to them. This is why Peoples thinks that Morriston’s argument fails. He goes on to point out that in such a case a reasonable non-believer might have incomplete moral knowledge, or might fail to appreciate how bad the violation of that command is, but that this is irrelevant to whether they satisfy the epistemic condition in Adams’s argument.

I have some problems with this. To repeat something I said earlier, I don’t think we can merely take Adams’s word for it regarding the communicative standard implied by his theory. He might think that knowledge of content is all that is required; but that doesn’t mean he is right. Remember the importance of the supererogation/obligation distinction. In his original work, Adams seems pretty clear that a command from a being with the right kind of authority is needed in order for an agent to be able to distinguish an obligation from an act of supererogation. As best I can tell, this implies that the agent must have knowledge of the source of the command as well as knowledge of its content. It is not enough that the agent knows that killing is really bad, or that giving money to charity is really good. They must know that these things are morally required of them. And under Adams’s theory knowing that these things were commanded by the right kind of entity is critical to drawing the distinction between what is great and what is obliged.

Admittedly, this is merely the sketch of an argument. But it seems to be truer to the communicative demands of Adams’s theory. If so, the epistemological objection still has some bite because reasonable non-believers will be incapable of knowing that a command (be it communicated via speech or conscience or whatever) emanates from the right kind of source. This is something I discussed at much greater length in my previous post on this topic.


Right, I'm exhausted with this topic now. That's it for this post.

Monday, July 20, 2015

The Philosophical Importance of Algorithms


IBM's Watson (Image from Clockready via Wikipedia)

In the future, every decision that mankind makes is going to be informed by a cognitive system like Watson…and our lives will be better for it. 
(Ginni Rometty commenting on IBM’s Watson)

I’ve written a few posts now about the social and ethical implications of algorithmic governance (algocracy). Today, I want to take a slightly more general perspective on the same topic. To be precise, I want to do two things. First, I want to discuss the process of algorithm-construction and the two translation problems that are inherent to this process. Second, I want to consider the philosophical importance of this process.

In writing about these two things, I’ll be drawing heavily from the work done by Rob Kitchin, and in particular from the ideas set out in his paper ‘Thinking critically about and researching algorithms’. Kitchin is currently in charge of The Programmable City research project at Maynooth University in Ireland. This project looks closely at the role of algorithms in the design and function of ‘smart’ cities. The paper in question explains why it is important to think about algorithms and how we might go about researching them. I’ll be ignoring the latter topic in this post, though I may come back to it at a later stage.


1. Algorithm-Construction and the Two Translation Problems
The term ‘algorithm’ can have an unnecessarily mystifying character. If you tell someone that a decision affecting them was made ‘by an algorithm’, or if, like me, you talk about the rise of ‘algocracy’, there is a danger that you present an overly alarmist and mysterious picture. The reality is that algorithms themselves are relatively benign and easy to understand (at least conceptually). It is really only the systems through which they are created and implemented that give rise to problems.

An algorithm can be defined in the following manner:

Algorithm: A set of specific, step-by-step instructions for taking an input and converting it into an output.

So defined, algorithms are things that we use every day to perform a variety of tasks. We don’t run these algorithms on computers; we run them on our brains. A simple example might be the sorting algorithm you use for stacking books onto the shelves in your home. The inputs in this case are the books (and more particularly the book titles and authors). The output is the ordered sequence of books that ends up on your shelves. The algorithm is the set of rules you use to end up with that sequence. If you’re like me, this algorithm has two simple steps: (i) first you group books according to genre or subject matter; and (ii) you then sequence books within those genres or subject areas in alphabetical order (following the author’s surname). You then stack the shelves according to the sequence.
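The two-step shelving procedure can be sketched in a few lines of code. This is just an illustration of the abstract point; the particular books and genres here are invented for the example.

```python
# A sketch of the two-step book-sorting algorithm described above:
# (i) group books by genre, (ii) alphabetise by author surname
# within each genre. Each book is a (genre, surname, title) tuple.

def shelve(books):
    """Return books ordered first by genre, then by author surname."""
    return sorted(books, key=lambda b: (b[0], b[1]))

books = [
    ("philosophy", "Quinn", "Divine Commands and Moral Requirements"),
    ("fiction", "Atwood", "Oryx and Crake"),
    ("philosophy", "Adams", "Finite and Infinite Goods"),
    ("fiction", "Borges", "Ficciones"),
]

for genre, surname, title in shelve(books):
    print(genre, surname, title)
```

Note that both steps collapse into a single sort on a composite key: sorting by the pair (genre, surname) achieves the grouping and the within-group alphabetical ordering at once.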

But that’s just what an algorithm is in the abstract. In the modern digital and information age, algorithms have a very particular character. They lie at the heart of the digital network created by the internet of things, and the associated revolutions in AI and robotics. Algorithms are used to collect and process information from surveillance equipment, to organise that information and use it to form recommendations and action plans, to implement those action plans, and to learn from this process.

Every day we are exposed to the ways in which websites use algorithms to perform searches, personalise advertising, match us with potential romantic partners, and recommend a variety of products and services. We are perhaps less exposed to the ways in which algorithms are (and can be) used to trade stocks, identify terrorist suspects, assist in medical diagnostics, match organ donors to potential donees, and facilitate public school admissions. The multiplication of such uses is what gives rise to the phenomenon of ‘algocracy’, i.e. rule by algorithms.

All these algorithms are instantiated in computer code. As such, the contemporary reality of algorithm construction gives rise to two distinct translation problems:


First Translation Problem: How do you convert a given task into a human-language series of defined steps?

Second Translation Problem: How do you convert that human-language series of defined steps into code?


We use algorithms in particular domains in order to perform particular tasks. To do this effectively we need to break those tasks down into a logical sequence of steps. That’s what gives rise to the first translation problem. But then to implement the algorithm on some computerised or automated system we need to translate the human-language series of defined steps into code. That’s what gives rise to the second translation problem. I call these ‘problems’ because in many cases there is no simple or obvious way in which to translate from one language to the next. Algorithm-designers need to exercise judgment, and those judgments can have important implications.

Kitchin uses a nice example to illustrate the sorts of issues that arise. He discusses an algorithm which he had a role in designing. The algorithm was supposed to calculate the number of ‘ghost estates’ in Ireland. Ghost estates are a phenomenon that arose in the aftermath of the Irish property bubble. When developers went bankrupt, a number of housing developments (‘estates’) were left unfinished and under-occupied. For example, a developer might have planned to build 50 houses in a particular estate, but could have run into trouble after only fully completing 25 units, and selling 10. That would result in a so-called ghost estate.

But this is where things get tricky for the algorithm designer. Given a national property database with details on the ownership and construction status of all housing developments, you could construct an algorithm that sorts through the database and calculates the number of ghost estates. But what rules should the algorithm use? Is less than 50% occupancy and completion required for a ghost estate? Or is less than 75% sufficient? Which coding language do you want to use to implement the algorithm? Do you want to add bells and whistles to the programme, e.g. by combining it with another set of algorithms to plot the locations of these ghost estates on a digital map? Answering these questions requires some discernment and judgment. Poorly thought-out answers can give rise to an array of problems.
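To make the designer’s dilemma concrete, here is a minimal sketch of what such a counting algorithm might look like. The field names, the sample data and the 50% threshold are all my own assumptions, not Kitchin’s actual implementation; the point is precisely that the threshold is a judgment call baked into the code.

```python
# Hypothetical sketch of a ghost-estate counting algorithm.
# The dataset, field names and the 0.5 threshold are assumptions
# made for illustration; choosing the threshold is exactly where
# the designer's discernment enters.

def is_ghost_estate(estate, threshold=0.5):
    """An estate counts as 'ghost' if the completed or occupied
    fraction of its planned units falls below the threshold."""
    completed = estate["completed"] / estate["planned"]
    occupied = estate["occupied"] / estate["planned"]
    return completed < threshold or occupied < threshold

estates = [
    {"name": "Estate A", "planned": 50, "completed": 25, "occupied": 10},
    {"name": "Estate B", "planned": 40, "completed": 40, "occupied": 38},
]

ghost_count = sum(is_ghost_estate(e) for e in estates)
print(ghost_count)
```

Change `threshold` from 0.5 to 0.75 and the national count of ghost estates changes with it, which is why the seemingly technical choice of a cut-off is really a substantive judgment.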


2. The Philosophical Importance of Algorithms
Once we appreciate the increasing ubiquity of algorithms, and once we understand the two translation problems, the need to think critically about algorithms becomes much more apparent. If algorithms are going to be the lifeblood of modern technological infrastructures, if those infrastructures are going to shape and influence more and more aspects of our lives, and if the discernment and judgment of algorithm-designers is key to how they do this, then it is important that we make sure we understand how that discernment and judgment operates.

More generally than this, if algorithms are going to sit at the heart of contemporary life, it seems like they should be of interest to philosophers. Philosophy is divided into three main branches of inquiry: (i) epistemology (how do we know?); (ii) ontology (what exists?); and (iii) ethics/morality (what ought we do?). The growth of algorithmic governance would seem to have important repercussions for all three branches of inquiry. I’ll briefly illustrate some of those repercussions here though it should be noted that what I am about to say is by no means exhaustive (Note: Floridi discusses similar ideas under his concept of information philosophy).

Looking first to epistemology, it is pretty clear that algorithms have an important impact on how we acquire knowledge and on what can be known. We witness this in our everyday lives. The internet and the attendant growth in data-acquisition have resulted in the compilation of vast databases of information. This allows us to collect more potential sources of knowledge. But it is impossible for humans to process and sort through those databases without algorithmic assistance. Google’s Pagerank algorithm and Facebook’s Edgerank algorithm effectively determine a good proportion of the information with which we are presented on a day-to-day basis. In addition to this, algorithms are now pervasive in scientific inquiry and can be used to generate new forms of knowledge. A good example of this is the C-Path cancer prognosis algorithm. This is a machine-learning algorithm that was used to discover new ways in which to better assess the progression of certain forms of cancer. IBM hope that their AI system Watson will provide similar assistance to medical practitioners. And if we believe Ginni Rometty (from the quote at the top of this post), the use of such systems will effectively become the norm. Algorithms will shape what can be known and will generate new forms of knowledge.

Turning to ontology, it might be a little trickier to see how algorithms can actually change our understanding of what kinds of stuff exists in the world, but there are some possibilities. I certainly don’t believe that algorithms have an effect on the foundational questions of ontology (e.g. whether reality is purely physical or purely mental), though they may change how we think about those questions. But I do think that algorithms can have a pretty profound effect on social reality. In particular, I think that algorithms can reshape social structures and create new forms of social object. Two examples can be used to illustrate this. The first example draws from Rob Kitchin’s own work on the Programmable City. He argues that the growth in so-called ‘smart’ cities gives rise to a translation-transduction cycle. On the one hand, various facets of city life are translated into software so that data can be collected and analysed. On the other hand, this new information then transduces the social reality. That is to say, it reshapes and reorganises the social landscape. For example, traffic modeling software might collect and organise data from the real world and then planners will use that data to reshape traffic flows around a city.

The second example of ontological impact is in the slightly more esoteric field of social ontology. As Searle points out in his work on this topic, many facets of social life have a subjectivist ontology. Objects and institutions are fashioned into existence out of our collective imagination. Thus, for instance, the state of being ‘married’ is a product of a subjectivist ontology. We collectively believe in and ascribe that status to particular individuals. The classic example of a subjectivist ontology in action is money. Modern fiat currencies have no intrinsic value: they only have value in virtue of the collective system of belief and trust. But those collective systems of belief and trust often work best when the underlying physical reality of our currency systems is hard to corrupt. As I noted before, the algorithmic systems used by cryptocurrencies like Bitcoin might provide the ideal basis for a system of collective belief and trust. Thus, algorithmic systems can be used to add to or alter our social ontology.

Finally, if we look to ethics and morality we see the most obvious philosophical impacts of algorithms. I have discussed examples on many previous occasions. Algorithmic systems are sometimes presented to people as being apolitical, technocratic and value-free. They are anything but. Because judgment and discernment must be exercised in translating tasks into algorithms, there is much opportunity for values to affect how they function. There are both positive and negative aspects to this. If well-designed, algorithms can be used to solve important moral problems in a fair and efficient manner. I haven’t studied the example in depth, but it seems like the matching algorithms used to facilitate kidney exchanges might be a good illustration of this. I have also noted, on a previous occasion, Tal Zarsky’s argument that well-designed algorithms could be used to eliminate implicit bias from social decision-making. Nevertheless, one must also be aware that implicit biases can feed into the design of algorithmic systems, and that once those systems are up and running, they may have unanticipated and unexpected outcomes. A good recent example of this is the controversy created by Google’s photo app, which used a facial recognition algorithm to label photographs of some African-American people as ‘gorillas’.

Anyway, that’s all for this post. Hopefully the challenges of algorithm construction and the philosophical importance of algorithmic systems are now a little clearer.


Wednesday, July 15, 2015

How should you title an academic article?




I have two guiding presumptions about the nature of academic publishing. The first is that academics want their work to be read. Academia is, for better or worse, a popularity contest. Academics want their work to be popular among other academics, and among policy-makers and the general public (depending on their goals and the nature of their research). ‘Popular’ doesn’t necessarily mean respected or admired. It is, of course, better to be popular and right, or popular and interesting, or popular and thought-provoking. But if you can’t be any of these things, then being debated and discussed is probably better than being ignored (within reason: if you are so controversial or stupid that you are constantly ridiculed, harassed or threatened, it is unlikely to be pleasant; anonymity might be better in that case).

In saying this, I don’t mean to downplay the intrinsic merits or rewards of writing and research. There is a lot to be said for the process of thinking and puzzling out an issue; of gaining private insight into some important concept or truth. But if you are only in it for these intrinsic rewards, then you don’t need to publish at all. If you are publishing your work, then popularity must matter at some level. This is true even if you only care about publishing in terms of the material rewards it brings. In the modern academy, career advancement depends, to a large extent, on how popular your work is. Universities love popularity metrics (e.g. reputational rankings). And the importance of all this is reflected in the fact that most academic publishers now provide you with a variety of popularity metrics whenever you publish your work with them. These include things like the number of downloads, shares on social networking sites, and citation rates. Academics often reference these things when looking for promotion or employment (I know I do).

My second presumption about the nature of academic publishing is that attention spans are incredibly short, and probably getting shorter all the time. This is certainly true for me. The internet is a rich cornucopia of information, and academic papers are published at an alarming rate. Deciding which papers to read is like trying to drink from a firehose. This means that if you want your work to be read, you really need to grab the potential reader’s attention. But how can you do this? I have a tendency to use my own experience as a guide — based on the assumption that there is nothing abnormal or non-average about me. A more data-driven approach would be useful but I’m quite lazy on that front. In any event, based on my own experience, two things determine whether or not I will read an article: the first is the article title; the second is the article abstract.

Now, I have a pretty rigid set of views about what an article abstract should look like. I think it should provide a very clear summary of the argument (or arguments) that will be defended in the article. The reader should be left in no doubt about the position(s) you will end up with at the end of the article. I also have a preferred template or test I use when writing an abstract. I wrote about this on a previous occasion. But despite my well-ordered approach to writing article abstracts, my approach to article titles is completely haphazard. I come up with something that feels or looks intuitively adequate, and then I think about it no more.

But if the goal is to be read, then this is a pretty odd approach to take. In many ways, the title is likely to be more important than the abstract. The title is the first thing the reader sees. It will determine whether or not they even look at the abstract. So I really should be thinking about article titles in a more systematic manner. This post is a first step in this direction. I want to use it to catalogue some of my previous article-titling strategies, and to offer some reflections or thoughts on these strategies. And I also want to use it as a springboard for debate and discussion. It would be great if people could share their own thoughts and reflections on how to come up with article titles in the comments section.

I’ll start the ball rolling by describing my own approaches. As I just said, I’m pretty haphazard on this front. Nevertheless, there are some patterns and rules to what I do. The main rule is that I don’t like overly ‘clever’ or ‘funny’ titles. When I first started reading academic journal articles, I was enamoured with what I took to be funny or clever titles. I won’t name or shame anybody but you can imagine the kind of thing. Articles with titles like: ‘Bitch Better Have My Money: On the wisdom of debt forgiveness’. Over time I grew tired and suspicious of these titles. Maybe this is irrational, but I think titles of this sort have a tendency to obscure. My own preferred titling-strategies settle into four categories:


The Question Title: A title which contains a provocative question of some sort. Some people hate question-titles. There is a long-standing trope in journalism that any headline in the form of a question can always be answered ‘no’. But this isn’t true and I think question-titles have great merit. Questions can raise intriguing issues that pique a reader’s curiosity, and I think they can convey the subtle implication that the approach taken in the article will be inquisitive and non-ideological in nature (even if concrete conclusions are reached). I have only used a question in two of my article titles in the past — “Robotic Rape and Robotic Child Sexual Abuse: Should they be criminalised?” and “Hyperagency and the Good Life: Does Extreme Enhancement Threaten Meaning?” — but I think I will do it more often in the future.

The Propositional Title: A title which contains (implicitly or explicitly) a clear statement of the main proposition(s) that will be defended in the article. I think this is a good approach to take, provided that the propositions being defended are interesting and capable of being stated succinctly. Many of my article titles are implicitly propositional, but I think only one or two have been successful on this front. My article on AI risk was titled “Why AI Doomsayers are like Sceptical Theists and Why it Matters”, which sets out pretty clearly what I will attempt to argue in the main body of the article. And my article on the death penalty was titled “Kramer’s Purgative Rationale for Capital Punishment: A Critique”, which just about manages to imply what will be argued, though it doesn’t explain exactly what the problem with Kramer’s rationale is. I would like to experiment with more explicit propositional titles in the future.

The Descriptive-Triplet or -Doublet Title: A title which mentions the two or three key concepts or topics that will be addressed in the article. Descriptive titles definitely have their merits. I like them because they can be effective ways of conveying to the reader what the article is about, and they can allow readers to easily identify whether the concepts or topics covered are relevant to their own areas of research. I also think that doublets and triplets can be succinct, memorable and pleasing to the ear. Nevertheless, this is definitely a format that I tend to overuse, and I often fall back on it when desperate. For example, my last two articles have adopted the descriptive-triplet format — “Human Enhancement, Social Solidarity and the Distribution of Responsibility” and “Common Knowledge, Pragmatic Enrichment and Thin Originalism”. These seem dull and uninspiring to me now. I’m not sure I would have any interest in reading an article with a similar-sounding title.

The Ridiculous Title: A title which attempts to be provocative, descriptive or propositional but which fails due to length or obscurity. This is really just a catch-all category for the article titles I have come up with which seem — to my eyes — to fail miserably to provide an interesting hook for a reader. My favourite example of this from my own work is the article I published earlier this year on brain-based lie detection. For some unknown reason I thought the following would be a good title: “The Comparative Advantages of Brain-Based Lie Detection: The P300 Concealed Information Test and Pre-Trial Bargaining”. I think the idea was to provide a title that covered the main concepts and ideas, and then gave a sense of what the argument would be (something about the ‘comparative advantages’ of brain-based lie detection tests, whatever they are). But I think it fails miserably because it is replete with jargon (what is a “P300 Concealed Information Test”?) and is overly long. If I were given the chance, I would definitely re-title it to something like “Stopping the Innocent from Pleading Guilty: How Brain-Based Lie Detection Might Help” — which would give a clearer sense of what is being argued in the piece and why it is important.

So those are my strategies and thoughts. Do you have any thoughts on this topic? Do you know of any good data-based studies of academic article titles? (Someone must have looked into this in a systematic way). If so, please share in the comments section.

Tuesday, July 14, 2015

New Paper - Human Enhancement, Social Solidarity and the Distribution of Responsibility




I have a new paper coming out in the journal Ethical Theory and Moral Practice. This one deals with two objections to human enhancement. In both cases I first try to strengthen and clarify the objections before arguing why I think they ultimately fail. Fuller details are below. The official version of the paper won't be published for a couple of months, but you can access the final pre-publication drafts at the links I provide (philpapers is open access; academia.edu may require free sign-up in order to access).

Title: Human Enhancement, Social Solidarity and the Distribution of Responsibility
Journal: Ethical Theory and Moral Practice
Links: Philpapers; Academia; Official
Abstract: This paper tries to clarify, strengthen and respond to two prominent objections to the development and use of human enhancement technologies. Both objections express concerns about the link between enhancement and the drive for hyperagency (i.e. the ability to control and manipulate all aspects of one’s agency). The first derives from the work of Sandel and Hauskeller and is concerned with the negative impact of hyperagency on social solidarity. In responding to their objection, I argue that although social solidarity is valuable, there is a danger in overestimating its value and in neglecting some obvious ways in which the enhancement project can be planned so as to avoid its degradation. The second objection, though common to several writers, has been most directly asserted by Saskia Nagel, and is concerned with the impact of hyperagency on the burden and distribution of responsibility. Though this is an intriguing objection, I argue that not enough has been done to explain why such alterations are morally problematic. I try to correct for this flaw before offering a variety of strategies for dealing with the problems raised. 

Friday, July 10, 2015

The Case for a Marriage-Free State





The last couple of months have seen major victories for marriage equality. In May, Ireland voted to legalise same-sex marriage in a national referendum — the first country in the world to do so by popular vote. In June, the US Supreme Court issued a landmark 5-4 decision legalising same-sex marriage throughout the United States. These were important steps toward building a fairer and more just society. If marriage is to continue to exist as a legally-recognised relationship status, then it is important that it do so in an egalitarian and inclusive manner. I don’t think anyone should doubt this.

But there is something worth doubting in the midst of all these victories. Should marriage continue to exist as a legally-recognised relationship status? Think about what this means. We enter into relationships with other human beings all the time. These relationships tend to support a number of different functions or roles. Some are purely commercial or business oriented; some are concerned with friendship and sociality; some are sexually intimate; some are directed towards property-sharing; some are about rearing children; some are about mutual caregiving and support. Legal recognition of these relationship functions usually results in the parties to them gaining a number of legal rights and duties. The distinctive feature of marriage-recognition (in modern liberal societies) is that it focuses on a particular kind of relationship — viz. a monogamous relationship — which fulfils a number of these functions — typically caregiving, sexual intimacy and child-rearing — and bundles together a bunch of rights and duties that then attach to the members of that relationship. The question is whether this special status and bundling of rights should continue.

This is a question that has long exercised certain feminist theorists. They view marriage as a problematic institution with a number of troubling properties. Some are inclined to support alternative kinds of relationship-recognition. Clare Chambers’s article ‘The Marriage Free State’ offers an interesting perspective on this debate. She argues that the state should stop legally recognising marriage and should not replace marriage-recognition with some alternative type of relationship-recognition (e.g. civil unions). Instead, she argues that the state should regulate relationship functions on a piecemeal basis.

In the remainder of this post, I want to take a look at Chambers’s argument for the marriage-free state. Two caveats before I do so. First, as I understand it, the defence of the marriage-free state contained in the paper I read is an incomplete and imperfect overview of the case that she will present in a forthcoming book. Second, this is not a topic with which I am overly familiar. Reading and writing about Chambers’s paper is a way for me to feel my way into this debate. I’ll offer some of my own thoughts along the way, but these are very much preliminary and, no doubt, naive.


1. The Feminist Critique of Marriage
If you’re going to make the case for a marriage-free state, then it probably makes sense to first ask whether marriage-recognition is a bad thing. After all, societies have been affording married couples a special legal status for centuries (millennia even). If we are going to move away from this societal status quo, we’ll need some convincing (not that loyalty to the status quo is always a good thing; just that it usually takes something dramatic to push people away from it). Fortunately, many leading feminist theorists have already obliged on this front by providing a number of reasons to doubt the value of marriage-recognition.

The problem, as Chambers notes, is that there is a faintly paradoxical air to two of the standard critiques:

Critique One: Marriage is a deeply patriarchal and sexist institution that oppresses and harms women.

Critique Two: Marriage is an inegalitarian institution because it is (traditionally anyway) heterosexist and so excludes homosexual couples from the benefits enjoyed by heterosexual couples.

The second critique has obviously found favour, and marriage-equality is now in the ascendant in many Western countries. But the second critique appears to be in tension with the first. The problem is this: the first critique seems to suggest that marriage is bad for some of the people who enter into it (specifically women); the second seems to suggest that marriage is good for the people who enter into it and should therefore be expanded to others. How can marriage be both of these things? We need to think in a little bit more detail about the alleged harms of marriage.

These harms can be divided into two main categories:

Practical Harms of Marriage: These are harms to the material or legal condition of the people who enter into the marriage. They result directly from being in this particular kind of relationship.

Symbolic Harms of Marriage: These are harms that result from the social meanings that attach to the institution of marriage. These harms need not result from being in this particular kind of relationship. Indeed, they are often felt by people who are not party to a marital relationship.

Historically, there were many practical harms to women who entered into marriage. The most obvious of these were legal. Married women lost legal status and effectively became the chattel of their husbands. This then exposed them to a number of potential material harms such as domestic abuse, marital rape, loss of opportunity, and increased burdens of carework and child-rearing. Obviously, the status of non-married women in such societies wasn’t exactly stellar either (though it did improve over time) so it may be difficult to say whether women were worse off when married, but this doesn’t detract from the fact that, historically, there were a number of practical harms clearly associated with the institution.

What’s the position nowadays? In most Western societies, the legal disbenefits of marriage have disappeared. Women are no longer their husband’s chattel. They retain their independent legal status and associated legal rights. They also gain legal rights associated with inheritance, tax, and property-sharing (though this varies from jurisdiction to jurisdiction). Still, these changes have been costly: many women had to suffer in the process. Furthermore, many material harms of marriage persist, including in particular the disproportionate share of home/care based work that is taken on by women. On the whole, though, it is probably very difficult to determine whether being married is, on net, practically bad for women. The effects are likely to vary greatly, depending on the woman, the partner, and the relevant social and cultural norms. (Also, though it is not mentioned in Chambers’s article, there are some studies suggesting various health benefits of marriage. They would presumably need to be factored into any overall assessment — though cause and effect are difficult to disentangle.)

The symbolic harms of marriage are rather different. Symbolically, marriage tends to reinforce a certain view of women and their role or status in society. This is clear in the symbolism of the traditional ‘white wedding’. Chambers describes it aptly:

The white wedding is replete with sexist imagery: the father ‘giving away’ the bride; the white dress symbolising the bride’s virginity (and emphasising the importance of her appearance); the vows to obey the husband; the minister telling the husband ‘you may now kiss the bride’ (rather than the bride herself giving permission, or indeed initiating or at least equally participating in the act of kissing); the reception at which, traditionally, all the speeches are given by men; the wife surrendering her own name and taking her husband’s. 
(Chambers 2012)

In addition to this, the social meaning that attaches to the institution of marriage has a number of indirect effects on women. Chambers does quite a good job outlining these, and I am loath to cut out all the details and examples she gives, but I don’t want to repeat everything she says so I’ll just cut to the bottom line: the social meaning tends to reinforce the view that being married is what women should ultimately aspire to; that not being married is to be in an inferior state of existence; and this has the effect of narrowing women’s aspirations and opportunities.

Now, it might be argued in reply that these symbolic effects have improved over time. Women don’t have to take on their husband’s names, they don’t have to have traditional ‘white weddings’ (though the pressures are still there) and so on. But whether marriage can symbolically break with the negative features of its past is unclear. One of the reasons why marriage is socially valued and protected is because it is traditional. This means that the symbolic meaning is directly linked with the history of the institution. Consequently, it is much more likely that marriage is symbolically tainted by its past meanings; and hence much more difficult to drain it of those meanings.

This brings us back then to the second critique of marriage: its heterosexist and inegalitarian nature. Why is it, if marriage is so bad, that many feminist and homosexual theorists and activists support marriage equality? The apparent paradox is easily resolved. What these theorists recognise is that (a) marriage-recognition does bring with it some practical (primarily legal) benefits and (b) the symbolic value of marriage would, if extended to homosexual couples, help foster a greater sense of social belonging and acceptance. Nevertheless, accepting these two things is consistent with believing in the practically and symbolically negative aspects of marriage too. In other words, the position can be that it would be better if marriage-recognition were extended to include homosexual couples, but it would be even better again if the state stopped legally recognising marriage.


2. The Case for Piecemeal Relationship-Recognition
But if the state is going to stop recognising marriage, what is it going to do instead? Presumably, relationships will still happen and will still need to be regulated. This is where Chambers’s paper gets really interesting. To think about what happens in a marriage-free state, we need to recall what it means, legally speaking, to recognise marriage as a special relationship status. It means that you single out a particular kind of relationship (monogamous unions) for special recognition, you presume that this relationship brings together a number of important relationship functions, and you bundle together a bunch of rights and duties and apply them to the members of these relationships. When it comes to non-marital forms of relationship-recognition, this implies two choices: either (i) we continue to bundle or (ii) we don’t.

The difference is between holistic relationship recognition and piecemeal relationship recognition. In the former case, we establish a new unique relationship status that replaces marriage (e.g. a civil union) and assign a bundle of rights and duties to that relationship. In the latter case, we don’t establish a new unique relationship status. Instead, we look to the different relationship functions, and regulate those functions individually (i.e. on a piecemeal basis).

The case for alternative holistic regulation has been set out by others. Chambers points in particular to the work of Elizabeth Brake and Tamara Metz who both call for the state to provide special recognition for caregiving relationships in lieu of marriage. Metz argues for recognising intimate caregiving unions (ICGUs); Brake argues for minimal marriage, which is a relationship based on caregiving. In Brake’s case, it is argued that there should be no upper limit on the number of parties to such a relationship, nor any restriction on entry on the basis of sex/gender.

Chambers argues that there are two problems with these holistic approaches to relationship-recognition:

The Bundling Problem: The holistic approach assumes that most of the important functions of life can be satisfied in one core relationship. In other words, that we can get what we need in terms of property-sharing, intimacy, caregiving and child-rearing (among other things) in one special relationship. It also assumes that the state is well-placed to determine and regulate the nature and extent of that relationship. Bundling also has an exclusionary effect insofar as the rights and duties are only obtainable by those who are in such relationships.

The Opt-In Problem: Proposals for holistic regulation invariably assume that the special relationship status is one that people will opt into. On the one hand, this makes sense: people should be free to determine whether they want the bundle of rights and duties associated with that relationship status. On the other hand, the opt-in approach often works against weaker and more vulnerable relationship partners. People can be involved in factually equivalent relationships and yet not have the associated legal rights because they have not opted-in or because the status quo favours one of the relationship partners not opting in. This used to be a particular problem for non-married co-habiting couples, and still is in some jurisdictions, though more favourable rules are now in place.

In light of these problems, Chambers argues for a piecemeal approach to relationship recognition. This approach rejects bundling. It focuses instead on the different relationship-functions and regulates those individually. Thus, for example, there would be one set of regulations for the child-rearing function, another for the property-sharing function, another for the sexual-intimacy function and so on. There would be no particular ex ante restrictions on who could share these functions. The regulations for each function would have to be developed and argued for independently. More controversially, Chambers argues that the regulation of these functions should not be conducted on an opt-in basis. Instead, the rules should apply simply by virtue of the fact that people share those functions with others.

I have some resistance to this prima facie compulsory system of regulation, but there are three points worth bearing in mind. First, it is possible in many cases that people will consent to sharing those relationship functions with others and will be aware, in advance, of the rights and duties associated with doing so. Second, we already impose some relationship regulations on people without their explicit consent. For instance, many of the rights and duties associated with parenting and child-rearing now apply irrespective of the parents’ official marital status (though there are still many that depend on that status). This is usually justified on the grounds that the child’s interests take precedence (i.e. that there is a greater good at stake). And third, Chambers suggests that in some cases the rules and regulations could apply on an opt-out basis. This would preserve liberty by an alternative means.



3. Conclusion
Anyway, that’s it for this post. To briefly recap, there is a standard feminist critique of the institution of marriage. This critique holds that marriage is heterosexist and oppressive to women for both practical and symbolic reasons. Finding some alternative form of relationship-recognition would, therefore, be welcome. When looking for an alternative, we have two options with which to contend. We can adopt a holistic approach and look for some alternative relationship status into which we bundle rights and duties, e.g. civil unions or intimate caregiving unions. The problem with this holistic approach is that it assumes most of the important life functions can be satisfied within one relationship status and that people should be free to opt into a bundle of rights and duties. This is often factually inaccurate, exclusionary and problematic for the more vulnerable members of a relationship. Consequently, Chambers thinks that we should regulate relationships on a piecemeal basis, focusing on the different relationship functions instead of one particular relationship status. I think this proposal is interesting and I look forward to seeing her flesh it out in more detail in her forthcoming book.

Tuesday, July 7, 2015

Is effective regulation of AI possible? Eight potential regulatory problems




The halcyon days of the mid-20th century, when researchers at the (in?)famous Dartmouth summer school on AI dreamed of creating the first intelligent machine, seem so far away. Worries about the societal impacts of artificial intelligence (AI) are on the rise. Recent pronouncements from tech gurus like Elon Musk and Bill Gates have taken on a dramatically dystopian edge. They suggest that the proliferation and advance of AI could pose an existential threat to the human race.

Despite these worries, debates about the proper role of government regulation of AI have generally been lacking. There are a number of explanations for this: law is nearly always playing catch-up when it comes to technological advances; there is a decidedly anti-government libertarian bent to some of the leading thinkers and developers of AI; and the technology itself would seem to elude traditional regulatory structures.

Fortunately, the gap in the existing literature is starting to be filled. One recent addition to it comes in the shape of Matthew Scherer’s article ‘Regulating Artificial Intelligence Systems’. Among the many things that this article does well is that it develops the case for thinking that AI is (and will be) exceptionally difficult to regulate, whilst at the same time trying to develop a concrete proposal for some form of appropriate regulation.

In this post, I want to consider Scherer’s case for thinking that AI is (and will be) exceptionally difficult to regulate. That case consists of three main arguments: (i) the definitional argument; (ii) the ex post argument and (iii) the ex ante argument. These arguments give rise to eight specific regulatory problems (illustrated below). Let’s address each in turn.

(Note: I won’t be considering whether the risks from AI are worth taking seriously in this post, nor will I be considering the general philosophical-political question of whether regulation is a good thing or a bad thing; I’ll be assuming that it has some value, however minimal that may be)





1. The Definitional Argument
Scherer’s first argument focuses on the difficulty of defining AI. Scherer argues that an effective regulatory system needs to have some clear definition of what is being regulated. The problem is that the term ‘artificial intelligence’ admits of no easy definition. Consequently, and although Scherer does not express it in this manner, it seems like the following argument is compelling:


  • (1) If we cannot adequately define what it is that we are regulating, then the construction of an effective regulatory system will be difficult.
  • (2) We cannot adequately define ‘artificial intelligence’.
  • (3) Therefore, the construction of an effective regulatory system for AI will be difficult.


Scherer spends most of his time looking at premise (2). He argues that there is no widely-accepted definition of an artificially intelligent system, and that the definitions that have been offered would be unhelpful in practice. To illustrate the point, he appeals to the definitions offered in Russell and Norvig’s leading textbook on artificial intelligence. These authors note that definitions of AI tend to fit into one of four major categories: (i) thinking like a human, i.e. AI systems are ones that adopt similar thought processes to human beings; (ii) acting like a human, i.e. AI systems are ones that are behaviourally equivalent to human beings; (iii) thinking rationally, i.e. AI systems are ones that have goals and reason their way toward achieving those goals; (iv) acting rationally, i.e. AI systems are ones that act in a manner that can be described as goal-directed and goal-achieving. There are further distinctions then depending on whether the AI system is narrow/weak (i.e. focused on one task) or broad/strong (i.e. focused on many). Scherer argues that none of these definitions is satisfactory from a regulatory standpoint.

Thinking and acting like a human was a popular way of defining AI in the early days. Indeed, the pioneering paper in the field — Alan Turing’s ‘Computing Machinery and Intelligence’ — adopts an ‘acting like a human’ definition of AI. But that popularity has now waned. This is for several reasons, chief among them being the fact that designing systems that try to mimic human cognitive processes, or that are behaviourally indistinguishable from humans, is not very productive when it comes to building actual systems. The classic example is the development of chess-playing computers. These systems do not play chess, or think about chess, in a human-like way; but they are now better at chess than any human being. If we adopted a thinking/acting like a human definition for regulatory purposes, we would miss many of these AI systems. Since these systems are the ones that could pose the largest public risk, this wouldn’t be very useful.

Thinking and acting rationally is a more popular approach to AI definition nowadays. These definitions focus on whether the system can achieve a goal in narrow or broad domains (i.e. is the system capable of optimising a value function). But they too have their problems. Scherer argues that thinking rationally definitions are problematic because thinking in a goal-directed manner often assumes, colloquially, that the system doing the thinking has mental states like desires and intentions. It is very difficult to say whether an AI system has such mental states. At the very least, this seems like a philosophical question that legal regulators would be ill-equipped to address (not that philosophers are much better equipped). Acting rationally definitions might seem more promising, but they tend to be both under- and over-inclusive. They tend to be over-inclusive insofar as virtually any machine can be said to act in a goal-directed manner (Scherer gives the example of a simple stamping machine). They tend to be under-inclusive insofar as systems that act irrationally may pose an even greater risk to the public and hence warrant much closer regulatory scrutiny.

I think Scherer is right to highlight these definitional problems, but I wonder how serious they are. Regulatory architectures are made possible by law, and law is expressed in the vague and imprecise medium of language, but problems of vagueness and imprecision are everywhere in law and they have not proved an insuperable bar to regulation. We regulate ‘energy’ and ‘medicine’ and ‘transport’, even though all these things are, to a greater or lesser extent, vague.

This brings us back to premise (1). Everything hinges on what we deem to be an ‘adequate’ definition. If we are looking for a definition that gives us necessary and sufficient conditions for category membership, then we are probably looking for the wrong thing. If we are looking for something that covers most phenomena of interest and can be used to address the public risks associated with the technology, then there may be reason for more optimism. I tend to think we should offer vague and over-inclusive definitions in the legislation that establishes the regulatory system, and then leave it to the regulators to figure out what exactly deserves their scrutiny.

In fairness to him, Scherer admits that this argument is not a complete bar to regulation, and goes so far as to offer his own, admittedly circular, definition of an AI as any system that performs a task that, if it were performed by a human, would be said to require intelligence. I think that might be under-inclusive, but it is a start.


2. The Ex Post Argument: Liability Gaps and Control Problems
The terms ‘ex post’ and ‘ex ante’ are used frequently in legal scholarship. Their meanings will be apparent to anyone who has studied Latin or is familiar with the meanings of ‘p.m.’ and ‘a.m.’. They mean, roughly and respectively, ‘after the fact’ and ‘before the fact’. In this case, the ‘fact’ in question relates to the construction and implementation of an AI system. Scherer argues that regulatory problems arise both during the research and development of an AI (the ex ante phase) and once the AI is ‘unleashed’ into the world (the ex post phase). This might seem banal, but it is worth dividing up the regulatory challenges into these distinct phases just so as to get a clearer sense of the problems that might be out there.

We can start by looking at problems that arise once the AI is ‘unleashed’ into the world. It is, of course, very difficult to predict what these problems will be before the fact, but there are two general problems that putative regulators would need to be aware of.

The first is something we can call the ‘foreseeability problem’. It highlights the problem that AI could pose for traditional standards of legal liability. Those traditional standards hold that if some harm is done to another person, somebody else may be held liable for that harm provided that the harm in question was reasonably foreseeable (there’s more to the legal standard than that, but that’s all we need to know for now). For most industrial products, this legal standard is more than adequate: the manufacturer can be held responsible for all injuries that are reasonably foreseeable from use of the product. With AI things might be trickier. AI systems are often designed to be autonomous and to act in creative ways (i.e. ways that are not always reasonably foreseeable by the original designers and engineers).

Scherer gives the example of C-Path, a cancer pathology machine learning algorithm. C-Path found that certain characteristics of the stroma (supportive tissue) around cancerous cells were better prognostic indicators of disease progression than the cancerous cells themselves. This surprised many cancer researchers. If autonomous creativity of this sort becomes common, then what the AI does may not be reasonably foreseeable and people may not have ready access to legal compensation if an AI program causes some injury or harm.

While it is worth thinking about this problem, I suspect that it is not particularly serious. The main reason for this is that ‘reasonable foreseeability’ standards of liability are not the only game in town. The law already provides for strict liability standards (i.e. liability in the absence of fault) and for vicarious liability (i.e. liability for actions performed by another agent). These forms of liability could be expanded to cover the ‘liability gaps’ that might arise from autonomous and creative AI.

The second ex post problem is the ‘control problem’. This is the one that worries the likes of Elon Musk, Bill Gates and Nick Bostrom. It arises when an AI program acts in such a way that it is no longer capable of being controlled by its human makers. This can happen for a number of reasons. The most extreme reason would be that the AI is smarter and faster than the humans; less extreme reasons could include flawed programming and design. The loss of control can be particularly problematic when the interests of the AI and the programmers no longer align with one another. Scherer argues that there are two distinct control problems:

Local Control Problem: Arises when a particular AI system can no longer be controlled by the humans who have been assigned legal responsibility for controlling that system.
Global Control Problem: Arises when an AI can no longer be controlled by any humans.

Both of these control problems would present regulatory difficulties, but the latter would obviously be much more worrying than the former (assuming the AI is capable of doing serious harm).

I don’t have too much to say about this since I agree that this is a problem. I also like this particular framing of the control problem insofar as it doesn’t place too heavy an emphasis on the intelligence of an AI. The current furore about artificial superintelligence is philosophically interesting, but it can serve to obscure the fact that AI systems with much lower levels of ability could pose serious problems if they act outside the control of human beings (be that locally or globally).


3. The Ex Ante Argument: Discreetness, Diffuseness, Discreteness and Opacity
So much for the regulatory problems that arise after the creation and implementation of an AI system. What about the problems that arise during the research and development phase? Scherer argues that there are four such problems, each associated with the way in which AI research and development could leverage the infrastructure that has been created during the information technology age. In this sense, the regulatory problems posed by AI are not intrinsically different from the regulatory problems created by other systems of software development, but the stakes might be much higher.

The four problems are:

The Discreetness Problem: AI research and development could take place using infrastructures that are not readily visible to the regulators. The idea here is that an AI program could be assembled online, using equipment that is readily available to most people, and using small teams of programmers and developers that are located in different areas. Many regulatory institutions are designed to deal with large-scale industrial manufacturers and energy producers. These entities required huge capital investments and were often highly visible; creating institutions that can deal with less visible operators could prove tricky.

The Diffuseness Problem: This is related to the preceding problem. It is the problem that arises when AI systems are developed using teams of researchers that are organisationally, geographically, and perhaps more importantly, jurisdictionally separate. Thus, for example, I could compile an AI program using researchers located in America, Europe, Asia and Africa. We need not form any coherent, legally recognisable organisation, and we could take advantage of our jurisdictional diffusion to evade regulation.

The Discreteness Problem: AI projects could leverage many discrete, pre-existing hardware and software components, some of which will be proprietary (so-called ‘off the shelf’ components). The effects of bringing all these components together may not be fully appreciated until after the fact. (Not to be confused with the discreetness problem).

The Opacity Problem: The way in which AI systems work may be much more opaque than previous technologies. This could be for a number of reasons. It could be because the systems are compiled from different components that are themselves subject to proprietary protection. Or it could be because the systems themselves are creative and autonomous, thus rendering them more difficult to reverse engineer. Again, this poses problems for regulators as there is a lack of clarity concerning the problems that may be posed by such systems and how those problems can be addressed.

Each of these problems looks to be serious and any regulatory system would need to deal with them. To my mind, the diffuseness and opacity problems are likely to be the most serious. The diffuseness problem suggests that there is a need for global coordination in relation to AI regulation, but past efforts at global coordination do not inspire confidence (e.g. climate change; nuclear proliferation). The opacity problem is also serious and likely to be compounded by the growing use of (and need for) AI in regulatory decision-making. I have written about this before.

Scherer, for his part, thinks that some of these problems may not be as serious as they first appear. For instance, he suggests that although discreetness is a possibility, it is still likely that AI research and development will be undertaken by large-scale corporations or government bodies that are much more visible to potential regulators. Thus, from a regulatory standpoint, we should be thankful that big corporations like Google, Apple and Facebook are buying up smaller-scale AI developers. These bigger corporations are easier to regulate given existing regulatory institutional structures, though this must be balanced against the considerable lobbying power of such organisations.

Okay, that’s it for this post. Hopefully, this gives you some sense of the problems that might arise with AI regulation. Scherer says much more about this topic in his paper, and develops his own preferred regulatory proposal. I hope to cover that in another post.

Saturday, July 4, 2015

Humanism, Transhumanism, and Speculative Posthumanism




I have recently been working my way through David Roden’s book Posthuman Life: Philosophy at the Edge of the Human. It is a unique and fascinating work. I am not sure that I have ever read anything quite like it. In the book, Roden defends a position which he refers to as speculative posthumanism. This holds, roughly, that the future we are creating through technological change could give rise to truly weird and alien forms of posthuman life.

In defending this position, Roden takes the reader on a philosophical romp through contemporary debates about transhumanism and artificial intelligence, suffusing this with discussions of Kantianism, pragmatism, phenomenology and postmodernism. It is this fusion of literatures, combined with Roden’s engaging use of sci-fi examples and illustrations, that makes the work so unique and interesting (in my opinion).

Anyway, there’s lots of good stuff in the book, and I hope to cover some of its meatier elements in future posts. Today, I just want to cover something relatively straightforward — but critical if you want to understand the significance of the thesis being defended in Roden’s book. Like many who debate the ethics of transhumanism, I’m sometimes confused by the terminology that is thrown around by the participants. In particular, I find myself confused by the distinction between terms like humanism, transhumanism and posthumanism. I know that others have tried to identify these distinctions in the past — including Kevin LaGrandeur in his article ‘What is the difference between posthumanism and transhumanism?’ — but I have rarely found those discussions illuminating.

This is one place where Roden’s work is particularly useful. He helps the reader to understand the distinctions between these different concepts by paying close attention to how they have been used in the literature, and he also further clarifies the existing literature by distinguishing between two major forms of posthumanism. This, I think, is very helpful since it is that term ‘posthumanism’ and the overlap/disjunction between it and ‘transhumanism’ that is the source of most confusion.

You can read Roden’s book for the full analysis; I’m just going to share the results of that analysis here, which focuses on four discrete concepts (he discusses several more in the book). They are: humanism, transhumanism, critical posthumanism and speculative posthumanism.


Humanism: This is any view that singles out humans from other forms of life. This is generally based on the notion that humans possess some special faculty or attribute (reason; intelligence; consciousness; rationality; autonomy; humour; similarity to God etc) that differentiates them from all other forms of life. These special faculties are then typically taken to warrant special ethical treatment. Humanism also usually encompasses the protection, celebration and glorification of these unique attributes.

Transhumanism: This is a socio-ethical view holding that advanced forms of technology can be used to transcend certain limitations of the human condition. The appealed-to forms of technology are referred to as NBIC technologies by Roden (nanotech; biotech; information technology; and cognitive science). The forms of transcendence that these technologies make possible are various. I like the summary adopted by David Pearce, who argues that transhumanists are committed to the three ‘supers’, i.e. super-longevity, super-intelligence and super well-being. In other words, transhumanists are committed to using NBIC technologies to live radically longer lives, increase their cognitive abilities, and achieve higher states of conscious bliss and satisfaction. The interesting thing about transhumanism, from Roden’s perspective, is that it works very much within the humanist ideology. That is to say, transhumanists are often committed to enhancing and improving the kinds of attributes that humanists single out as being unique and special markers of humanity (rationality, intelligence, autonomy etc). They just want to do so through technology.

Critical Posthumanism: This is the view, common in the critical humanities, that takes issue with humanism. In other words, it challenges the view of the human subject as something that is unique and worthy of glorification. One of the most widely-challenged humanist views is the one associated with Descartes. The Cartesian view is that the human is a single, unified, rational, self-governing entity that sits apart from the external world in which it operates. Critical posthumanists argue that this Cartesian view of the human subject is mistaken, highlighting various fluid relationships between the mind, body and external world, noting how those relationships are made even more fluid by modern technologies, and attempting to deconstruct the notion of a single unified self. Critical posthumanists often scoff at certain transhumanist projects, like mind-uploading, on the grounds that such projects implicitly assume the false Cartesian view.

Speculative Posthumanism: This is a view that shares certain elements of transhumanism and critical posthumanism. It shares the transhumanist fascination with the ways in which technology can be used to modify and enhance human attributes. But it also shares some of the critical posthumanist belief that the single, unified, rational human subject may be wiped out by these technological changes. Thus, speculative posthumanism is committed to the notion that future technological successors of the human race could be radically alien and different. Indeed, speculative posthumanists hold that these beings could completely cease to be human by virtue of technological change. This could lead to a radical restructuring of the values inherent in present social orders. For example, the construction of a hivemind or Borg-like society could negate many of the supposedly valuable and humanistic features of contemporary societies.


Anyway, that’s all I wanted to share in this post. The majority of Roden’s book is spent defending the possibility of speculative posthumanism, encouraging us to take it seriously, and mapping out some of the weirder possible contents of the posthuman future. This strikes me as being a valuable project.