Saturday, December 27, 2014

Review of 2014: Favourite and Most Viewed Posts


As the year now winds to a close, I thought it might be appropriate to briefly review what has happened on this blog in the past 12 months. Not that I want this to be deeply introspective or anything like that. I just want to give you a bunch of lists, reviewing my favourite posts, some readers' picks, and the most popular posts by page views. Obviously there is some overlap between the lists -- it would be disappointing if my favourite posts never overlapped with the most viewed -- but there are plenty of differences too.

Let's start with favourites. I've tried to pick one "favourite" from each month, substituting some readers' picks in November and December. It was pretty difficult to do this since some months were clearly better than others, and there were several posts that I wanted to include but didn't. But I published over 110 posts this year, so something had to be left on the cutting room floor. If you're curious about all those others, why don't you check them out for yourself? They are available in the archives in the right-hand column of this page. And if you have alternative recommendations, feel free to add them in the comments.



Favourite Posts


Moving on then to page views. I know some people like to think of Bitcoin as the currency of the internet, but as we all know the real currency of the internet is page views. How did the blog do on that front? Pretty well, all things considered. I crossed the one million page view mark back in October (according to Google stats anyway) and I ended the year averaging about 40,000 hits per month. That's not amazing, but it is up about 15,000 per month on last year. I also had far more posts this year that scored over 1,000 hits, which is nice. Anyway, here are the top ten posts by number of views (how many actually read them is another matter entirely):



Top Ten Posts on Philosophical Disquisitions by page views


Finally, over the past two years my work has been republished several times on various other popular weblogs. Just this past year, I've had posts published on Humanity Plus, Disinfo and Practical Ethics. But my main other outlet is the Institute for Ethics and Emerging Technologies (IEET) blog. They republish nearly everything I publish on here (a big thank you to Kris Notaro for supporting my work). Here are the top ten posts (by page views) over on IEET.



Top Ten Posts on IEET by page views



So that's it for 2014. Let's see what happens next year.






Monday, December 22, 2014

Academic Papers 2014




End-of-year navel-gazing exercises seem to be the norm on blogs. Here's the first of mine. It's a list of all the peer-reviewed papers I have had accepted for publication in the past year. Not as many as in 2013, but, hey, I couldn't keep that pace up forever. Two of these have already been published. The other two won't be published until 2015. You can follow the links to copies of all four (if you are so inclined):

  • The Normativity of Linguistic Originalism: A Speech Act Analysis (2015) Law and Philosophy, forthcoming - Originalism is a theory of constitutional interpretation, according to which a constitution ought to be interpreted in light of its original meaning. This is my attempt to critique a certain type of originalism: specifically, the linguistic originalism associated with the likes of Lawrence Solum and Jeffrey Goldsworthy. Both claim that the meaning of a constitution simply is its original meaning, not something else that we morally desire or wish it to be. I argue that this is wrong: even if we agree with the versions of originalism espoused by Solum and Goldsworthy, working out the communicated content of a constitution is not a purely factual/empirical affair; it is also a deeply normative and moral affair. (Official; Academia; Philpapers)
  • Robotic Rape and Robotic Child Sexual Abuse: Should they be criminalised? (2015) Criminal Law and Philosophy, DOI 10.1007/s11572-014-9362-x - With sophisticated sex robots likely to become a reality in the not-too-distant future, this paper asks what happens when they are used to provide realistic facsimiles of rape and child sexual abuse. Should this be outlawed? It provides an extremely tentative argument for criminalisation, based on some leading theories of criminalisation. The argument is not intended to be conclusive, but rather to provide a framework for future debate. (Official; Academia; Philpapers)
  • The Comparative Advantages of Brain-Based Lie Detection: the P300 Concealed Information Test and Pre-trial Bargaining (2015) 19(1) International Journal of Evidence and Proof, DOI: 10.1177/1365712714561189 - This paper looks at the possible forensic uses of the P300 Concealed Information Test. It argues that this technology could be used to empower innocent defendants during pre-trial plea bargaining in criminal cases. This is because it would offer a better solution to the "innocence problem" (the phenomenon whereby innocent defendants are incentivised to plead guilty) than any other currently proposed solution. (Official; Academia; Philpapers)
  • Sex Work, Technological Unemployment and the Basic Income Guarantee (2014) 24(1) Journal of Evolution and Technology 113-130 - This paper looks at the possible impacts of sex robots on the sex work industry. It considers the arguments for two competing hypotheses: the Displacement Hypothesis, which claims that human sex workers will eventually be displaced by robots, and the Resiliency Hypothesis, which claims that human sex work will remain resilient to technological unemployment. It also looks at how these possibilities affect the case for the basic income guarantee. (Official; Academia; Philpapers)

Sunday, December 21, 2014

Stopping the innocent from pleading guilty: Can brain-based recognition detection tests help?

P300 Concealed Information Test


So I have another paper coming out. It’s about plea bargaining, brain-based lie detection and the innocence problem. I wasn’t going to write about it on the blog, but then somebody sent me a link to a recent article by Jed Rakoff entitled “Why Innocent People Plead Guilty”. Rakoff’s article is an indictment of the plea-bargaining system currently in operation in the US. Since my article touches on the same issues, I thought it might be worth offering a summary of its core argument.

The gist of it is that I think it may be possible to use a certain type of brain-based lie detection — the P300 Concealed Information Test (P300 CIT) — to rectify some of the problems inherent in current systems of plea bargaining. The word “possible” is important here. I don’t believe that the technology is currently ready to be used in this way. I think further field testing needs to take place. But I don’t think the technology is as far away as some people might believe either.

What I find interesting is that, despite this, there is considerable resistance to the use of the P300 CIT in academic and legal circles. Some of that resistance stems from unwarranted fealty to the status quo, and some stems from legitimate concerns about potential abuses of the technology (miscarriages of justice etc.). In the article, I try to overcome some of this resistance by suggesting that the P300 CIT might be better than other proposed methods for resolving existing abuses of power within the system. Hence my focus on plea-bargaining and the innocence problem.

Anyway, in what follows I’ll try to give a basic outline of my argument. As ever, for the detail, you’ll have to read the original paper.


1. Plea Bargaining and the Innocence Problem
Plea bargaining is the common practice whereby a defendant charged with a particular offence pleads guilty to a lesser offence, in an effort to reduce their expected punishment. Rakoff’s article describes how the practice currently operates in the US. Similar practices operate in other countries, though they are possibly less extreme than the US version.

Plea-bargaining is attractive to both prosecutors and defendants. It is attractive to prosecutors because they are incentivised to achieve the maximum amount of punishment for the minimum expenditure of prosecutorial resources. Plea bargaining enables them to do this by eliminating the costs associated with lengthy trials. It is attractive to defendants because they are incentivised to minimise their expected amount of punishment. Going to trial is risky from their perspective because it carries with it a higher possible sentence. If they are being prudent, pleading guilty to a lesser offence is often going to be the safer bet.
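
To make the incentive structure concrete, here is a minimal expected-value sketch of the plea decision. The numbers are made up for illustration; they come from neither Rakoff’s article nor my paper.

```python
# Minimal expected-value sketch of the plea decision. All numbers are
# made up for illustration; they come from neither Rakoff's article
# nor my paper.

def expected_sentence(p_conviction: float, trial_sentence: float,
                      plea_sentence: float) -> dict:
    """Expected punishment (in years) of going to trial vs pleading guilty."""
    return {
        "trial": p_conviction * trial_sentence,  # risk-neutral expectation
        "plea": plea_sentence,                   # certain, but lesser, sentence
    }

# A defendant facing a 30% conviction risk at trial, a 10-year sentence
# if convicted, and a 2-year plea offer:
print(expected_sentence(p_conviction=0.3, trial_sentence=10, plea_sentence=2))
# {'trial': 3.0, 'plea': 2} -- even an innocent defendant who is merely
# risk-averse may see the plea as the "safer bet".
```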

In fact, things are probably skewed more heavily in favour of entering a guilty plea than I am letting on. As Rakoff points out in his article, certain changes to sentencing law (mandatory minimums), coupled with the differential in power between prosecutors and (most) defence lawyers, mean that entering a guilty plea is nearly always the sensible thing to do. For example, most defence lawyers are at a considerable informational disadvantage when they first meet with the prosecutors. They will have had limited opportunities to meet with their clients, whereas the prosecutor will have a full police report, witness testimony and forensic evidence (assuming there is any):

Against this background, the information-deprived defense lawyer, typically within a few days after the arrest, meets with the overconfident prosecutor, who makes clear that, unless the case can be promptly resolved by a plea bargain, he intends to charge the defendant with the most severe offenses he can prove…If, however, the defendant wants to plead guilty, the prosecutor will offer him a considerably reduced charge—but only if the plea is agreed to promptly (thus saving the prosecutor valuable resources). 
(Rakoff 2014)

Under these conditions, who wouldn’t be inclined to plead guilty?

“The truly innocent”, you might respond. But that is not the case. Again, as Rakoff points out in his article, studies have shown that a number of innocent defendants have opted to plead guilty in order to avoid more serious charges. The Innocence Project, which seeks to exonerate innocent defendants on the basis of DNA evidence, has identified 30 people (approx. 10% of their total) who pleaded guilty despite later turning out to be innocent. Similarly, the National Registry of Exonerations (at Michigan Law School) has found that 10% (or 151 cases) of legally acknowledged exonerations since 1989 have involved false guilty pleas. Of course, the real number is difficult to know, since many of those who plead guilty despite their innocence will never be uncovered. But the 10% figure from these sources looks worrisome.

This is plea bargaining’s innocence problem: the incentives are such that innocent defendants are persuaded to plead guilty more often than we would like.


2. The Innocence Problem as a Signalling Problem
There are many possible causes of the innocence problem. Long-standing structural and political issues are part of the problem, as are the idiosyncrasies of particular cases and personalities. But at the heart of them all is a basic signalling problem. Innocent defendants feel the pull of the guilty plea because they know they have no way in which to credibly signal their innocence to the prosecutors.

The classic signalling problem can be found in the biblical tale of King Solomon and the two women. I think I’ve shared this many times on the blog, so you’ll have to forgive me if I do it one more time. According to the traditional version of the story, two women came to King Solomon with a dispute as to parental rights. Each woman had recently had a child. One woman had rolled over on her child while sleeping, and the child suffocated and died. She then stole the other woman’s child and claimed it as her own. This is what led to the dispute coming before King Solomon.

The problem for Solomon was that the signals sent to him by the women were the same. They both claimed to be the mother, with equal vigour, and in the absence of further evidence there was no reason to believe one over the other. Economists sometimes refer to this as a pooling equilibrium: both the fake mother and the real mother are incentivised to adopt the same signalling strategy. The claim I’m making here — and I’m certainly not the only one to make it — is that a similar sort of pooling takes place in the typical criminal case. It doesn’t matter how much the truly innocent defendant protests their innocence. Their signals will tend to be pooled with the signals of guilty defendants who also protest their innocence. In the absence of overwhelming evidence to the contrary, there is no reason for the prosecutors to believe either.

How can the signalling problem be resolved? Well, speaking in very abstract terms, you need to change the incentives so as to avoid the pooling equilibrium. That’s exactly what King Solomon did in the case of the two women. He decreed that the child be cut in half and shared equally between them. The false mother was happy to go along with this (she had already lost her child and wished to punish the true mother), but the true mother was not (she didn’t want her child to die). Consequently, she was incentivised to concede the dispute to the false mother, which allowed Solomon to work out her real identity. The signals were suddenly separated.
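
The logic of Solomon’s decree can be captured in a toy payoff table. The utilities below are stylised numbers of my own; only their ordering matters.

```python
# A toy payoff table for Solomon's decree. The utilities are stylised
# numbers of my own; only their ordering matters.

payoffs = {
    # (mother_type, action): utility
    ("true", "insist"):   -10,  # child is cut in half: catastrophic for her
    ("true", "concede"):   -5,  # child lives, albeit with the other woman
    ("false", "insist"):    1,  # she has already lost her own child
    ("false", "concede"):   0,  # status quo
}

def best_response(mother_type: str) -> str:
    """Each type picks the action that maximises her own utility."""
    return max(["insist", "concede"], key=lambda a: payoffs[(mother_type, a)])

# The decree turns a pooling equilibrium into a separating one: the two
# types now choose different actions, so the action reveals the type.
print(best_response("true"))   # 'concede'
print(best_response("false"))  # 'insist'
```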

Can something similar be done in the case of the innocence problem? Can we change the incentives so that there is some signal that innocent defendants are more likely to send to the prosecutors than guilty ones?


3. The P300 CIT as a device for Credible Signalling
In a lengthy article, which I covered last year, Russell Covey has argued that the signalling problem can be solved by introducing a “subwager” into the pre-trial bargaining game that is being played between prosecutors and defendants. He explains the idea by analogy to a simple card game. Since I went through the details of that card game before, I’ll just skip to the conclusion here. The subwager is akin to a bet that a truly innocent defendant would be willing to take, while a guilty one would not. In other words, it is a bet with asymmetrical risks: it is high risk to the guilty defendant but low risk to the innocent.

My claim is that the willingness to undergo a voluntary P300 CIT could count as such a subwager. Now, you may be wondering, what exactly is a P300 CIT and how can it count as a subwager? In brief, a P300 CIT is a type of brain-based lie detection. Actually, no, scrap that: it’s not really a form of lie detection. Rather, it is a type of memory or recognition detection test. It provides evidence for whether or not a suspect recognises information that was present at a crime scene. Thus, it can be used (as part of an appropriate inductive inference) either to link a guilty defendant to a crime scene or to separate an innocent defendant from one. It does so by detecting the presence or absence of a particular brainwave known as the P300. Hence the name. The assumption underlying the test — and backed up by experimental tests thereof — is that this brainwave is detected when a suspect — or, perhaps more correctly, a suspect’s brain — recognises information.

I don’t want to get into the evidence supporting the reliability of the P300 test here. I cover that at some length in my article, and there is an excellent review paper covering all the experimental evidence for (and against) the version of the test that I think stands the best chance of actual forensic use. Suffice to say, I think the evidence for the test is more impressive than you might think (though certainly not without its flaws). It can be used to distinguish those who recognise crime-relevant information from those who do not at a rate that is far better than chance (with several experimental tests reporting accuracy levels above 90%). To be sure, there have been dubious uses of the test in the past — for instance, Lawrence Farwell’s use of a P300 “brain fingerprinting” test has been criticised — and we should guard against dubious uses in the future, but I nevertheless believe that, with more extensive field testing, this technology could be used in forensic contexts.
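
As a quick illustration of what accuracy figures like these would (and would not) license, here is a back-of-the-envelope Bayesian calculation. The sensitivity, specificity and priors are illustrative stand-ins, not figures from the P300 literature.

```python
# A quick Bayesian gloss on what "above 90%" accuracy would buy. The
# sensitivity, specificity and priors below are illustrative stand-ins,
# not figures from the P300 literature.

def posterior_recognition(prior: float, sensitivity: float = 0.9,
                          specificity: float = 0.9) -> float:
    """P(subject recognises the information | test says 'recognised')."""
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

print(round(posterior_recognition(prior=0.5), 2))  # 0.9
print(round(posterior_recognition(prior=0.1), 2))  # 0.5: with a weak prior,
# a positive result still leaves the inference far from certain.
```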

But, as I say, I don’t want to dwell on the evidence in favour of the P300. Instead, I want to highlight how it could be used to resolve the innocence problem. In brief, I think the test provides a way for innocent defendants to credibly signal their innocence to investigators and prosecutors of crimes. Why so? Because the test has the asymmetrical risk profile needed for a successful subwager. It presents a low risk to innocent defendants (indeed, one of the nice things about the P300 test is its low rate of false positives, particularly when compared with classic forms of lie detection), but a high risk to guilty defendants. Innocent defendants could thus voluntarily submit to such a test and credibly signal their innocence to prosecutors. In the article, I develop this argument in more detail, explaining why it is important that the use of the test be truly voluntary and why it is important not to simply infer guilt from an unwillingness to undergo such a test. To summarise the argument (a toy sketch of the risk asymmetry follows the summary):


  • (1) The innocence problem arises from a signalling problem: signals sent by innocent defendants are indistinguishable from the signals sent by guilty defendants.

  • (2) Introducing a subwager into the pre-trial bargaining game can help solve this signalling problem by giving those with private knowledge of innocence a credible way to distinguish themselves from others.

  • (3) Giving defendants the option of voluntarily submitting to a P300 CIT provides them with just such a subwager.

  • (4) Therefore, giving defendants the option of voluntarily submitting to a P300 CIT can help solve the innocence problem.
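
Here is the toy numerical sketch of premise (3)’s risk asymmetry. The error rates (10% false positive, 90% true positive) are illustrative assumptions in the rough ballpark of the lab studies; the utilities are simply mine.

```python
# Stylised payoffs for the decision to volunteer for a P300 CIT. The
# error rates (10% false positive, 90% true positive) are illustrative
# assumptions in the rough ballpark of the lab studies; the utilities
# are mine.

def ev_of_taking_test(guilty: bool, fail_cost: float = -10.0,
                      pass_benefit: float = 5.0) -> float:
    """Expected value of volunteering, given one's private knowledge."""
    p_fail = 0.9 if guilty else 0.1  # chance the test flags recognition
    return p_fail * fail_cost + (1 - p_fail) * pass_benefit

print(ev_of_taking_test(guilty=False))  # 3.5: positive EV for the innocent
print(ev_of_taking_test(guilty=True))   # -8.5: strongly negative for the guilty
# This asymmetry is what makes volunteering a credible signal of innocence.
```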



Don’t read too much into the wording of this conclusion. I don’t think that my proposal will fully “solve” the innocence problem. At best it will provide a partial solution, applicable in a certain range of cases. But I think this is not to be sniffed at, and the proposal could be considered seriously in the not-too-distant future.



4. Criticisms of my proposal?
This initial argument for my proposal will probably seem unpersuasive in and of itself. That’s why I insist upon developing the argument within a comparative advantage framework. In other words, within a framework that explicitly compares the proposal to other possible solutions to the innocence problem. When considered in this light, I believe it becomes a good deal more persuasive. I’ll try to explain by considering three other possible solutions to the innocence problem (there are more — for example other types of forensic evidence like DNA testing can be used and have been used by the Innocence Project — but in my analysis I’m limiting my focus to cases in which these other forms of evidence are not available).

The first solution is the one proposed by Russell Covey, from whom I got the idea of the subwager. He thinks that voluntary submission to interrogation functions as a credible signalling device for innocent defendants. In other words, if I am truly innocent, I should forego my right to silence and submit myself to robust questioning by the authorities. Since I am innocent, I am more likely to “pass” the interrogation test than a guilty defendant. The asymmetry of risks needed for the subwager is present in this decision. To be fair, Covey adduces some empirical evidence to suggest that innocent defendants really are better off if they voluntarily submit to interrogation. But I think we should be cautious about this proposal. Interrogation, particularly if the methods become more robust, is open to abuse and comes with no known error rates. The P300 CIT has an advantage over interrogation in that it is a scientifically based test, with known error rates, that has to be administered in accordance with strict protocols.

Another possible solution would be to use other methods of lie detection — e.g. fMRI lie detection. The reasoning would be similar: they represent a low risk to innocent defendants and a high risk to guilty defendants. But, again, I think we should be cautious about such a proposal. Other methods of lie detection tend to follow a control question test (CQT) format, which is open to abuse and has been used, in the past, as little more than an interrogation prop. Also, I think we should be much more suspicious of the evidence claimed on behalf of fMRI-based tests: the signals can be overinterpreted, and it is much more difficult to test whether someone is lying in a lab setting than it is to test whether they recognise certain information. I think the P300 CIT has the advantage once more.

Finally, there is what I call the “sousveillance” solution. This isn’t a subwager-like proposal. This is something far more radical. The idea behind it is that everybody wears veillance technologies at every moment in their lives. This technology will allow them to record and detail everything they have ever done. This will provide them with reliable and credible documentary evidence of their movements and, if they are truly innocent, it should provide them with a way to document their innocence. I accept that this may resolve the innocence problem. And I accept that the evidence produced by such veillance technologies may be more reliable than that produced by a P300 CIT. But, again, I think the P300 has some advantages over the sousveillance solution. For one thing, the sousveillance solution would require prospective implementation, i.e. everyone would need to be using such technologies before any crime is committed. The P300 CIT can be implemented retrospectively, i.e. to investigate crimes after they have taken place. Since we may not be able to guarantee the widespread use of sousveillance technologies, the P300 CIT seems like it could be more useful. For another thing, the widespread use of sousveillance would have a range of other social costs (and benefits) associated with it. It should not be adopted as a targeted solution to the innocence problem. Still, I accept that certain technological trends may be pushing us in this direction. (Note: The sousveillance solution is something I wanted to discuss in the article, but the editor asked me to remove the discussion of it before publication. I am grateful to have the opportunity to add it in here.)

When considered in light of these other possible solutions, the P300 CIT “solution” to the innocence problem looks more promising. There are other objections to the proposal too, but I’ll leave you to read about those in the article itself.


5. Conclusion
To briefly sum up, there is an innocence problem inherent in existing systems of plea-bargaining. The incentives of the system are such that innocent defendants are sometimes persuaded to plead guilty. Ideally, we should avoid this problem. Although there are many possible causes, one of the chief ones is the inability of innocent defendants to credibly signal their innocence to prosecutors. I have argued that a brain-based recognition detection test — specifically the P300 CIT — may help to correct for that inability. The technology is not ready for this use just yet, but may be in the near future.

Wednesday, December 17, 2014

Meaning, Value and the Collective Afterlife: Must others survive for our lives to have meaning?



Samuel Scheffler made quite a splash last year with his book Death and the Afterlife. It received impressive recommendations and reviews from numerous commentators, and was featured in a variety of popular outlets, including the Boston Review and the New York Review of Books. I’m a bit late to the party, having only got around to reading it in the past week, but I think I can see what all the fuss was about.

The book really does offer some interesting, and novel, insights into what it takes to live a meaningful life. The most interesting of those insights comes from Scheffler’s defence of the collective afterlife dependency thesis. According to this thesis, much of what makes our lives valuable is dependent on the existence of a collective afterlife. This collective afterlife is not, according to Scheffler, to be understood in supernatural or religious terms; it is to be understood in secular and naturalistic terms. It is the continued existence of beings like us in an environment which is roughly equivalent to the one in which we now live.

Scheffler is quite careful in his development of this thesis. He distinguishes three different versions of it, and clarifies (to some extent) exactly what needs to be preserved in this collective afterlife. I’m going to skip over some of this nuance in what follows. I’m just going to look at Scheffler’s defence of the unrefined version of the dependency thesis, as well as some criticisms of that idea. In particular, I’m going to look at Mark Johnston’s criticism, which claims that if Scheffler is right, then life is nothing more than a Ponzi scheme: it needs an infinite stream of future generations to “pay in” in order to make life meaningful for the current generation.


1. What is this “collective afterlife” you speak of?
Before looking at the argument proper, we need to clarify the central thesis. As I just said, it all hinges on the notion of a collective afterlife. Scheffler alludes to this idea several times in the book. He knows that his use of that term is contentious — “afterlife” brings with it a rich set of religious connotations — but that’s part of the fun. Here is a quick definition, based on my own reading between the lines:

Collective Afterlife: The continued existence of human-like beings in conditions roughly equivalent to those in which you now live, after your death.

A couple of points about this definition. First, note how it refers to “human-like beings”, not humans. This is my addition. Throughout the book Scheffler talks (or implies) that his imagined collective afterlife involves the existence of human beings, but I take it that it is not absolutely essential for the beings that exist in the collective afterlife to be human (i.e. genetic members of homo sapiens). Human-like beings, with similar properties of personhood and similar goals and aspirations, would be sufficient. That brings us to the other part of the definition, which is also mine, and which claims that they must live in conditions roughly equivalent to those in which we now live. It turns out that the precise conditions in which future generations must live are somewhat contentious as between Scheffler and his critics. It’s pretty clear that, in order to confer meaning on our lives, the lives of future generations must share at least some of our values, aspirations and needs, and that they must not live in a state of abject immiseration and deprivation, but they probably don’t need to have lives that are exactly the same as ours. I’ll return to this later when looking at Johnston’s criticism. Finally, note how the definition makes no appeal to the continued existence of humans that are particularly close to us (i.e. friends and family). This is important because one of the things that Scheffler points out in his book is that, in order to confer value on our lives, the lives of future beings need not bear a close relation to us.

So much for that. What role does the collective afterlife play in our lives? Scheffler claims that it plays quite a big role. He claims that much of what we value in life (our plans, hopes, projects, activities and so on) depends for its value on the existence of a collective afterlife:

…our conception of a human life…relies on an implicit understanding of such a life as itself occupying a place in an ongoing human history, in a temporally extended chain of lives and generations. 
(Scheffler 2013, p. 43)

This is the dependency thesis:

The Collective Afterlife Dependency Thesis (CADT): The existence of a collective afterlife is an important condition for living a valuable life; without a collective afterlife our present lives would be denuded of much of their value.

To be clear, this is my definition of the thesis, not Scheffler’s. He is much more careful in his discussion. He distinguishes between attitudinal, evaluative and justificatory versions of the thesis. These distinctions turn on whether the collective afterlife is something that merely affects our attitudes to our lives, whether it actually affects what is valuable about our lives, and whether the actual (as opposed to believed) existence of the afterlife is essential. I’m going to ignore these distinctions for now. You’ll also note that my definition refers to the collective afterlife as an “important” condition for value in life. I use that term because I don’t think Scheffler intends for it to be understood as either a necessary or a sufficient condition; but he does clearly think it has a significant impact on the amount of value in our lives. Hence “important” seems like the most appropriate descriptor.


2. Scheffler’s argument for the CADT
Scheffler doesn’t present a formal argument for the CADT in his book. Instead, he presents a series of thought experiments and reflections upon those thought experiments. As always, I would like to recover as much formal structure from these reflections as possible. So in what follows I’ll try to show how those thought experiments can be used as part of a semi-formal defence of the CADT. There are two thought experiments that are particularly important for this purpose.

The first thought experiment is:

Doomsday Thought Experiment: Suppose that you will live a long, normal human life, but that 30 days after your death, all human life will be destroyed in some catastrophic event (for example, an asteroid collision). Suppose, further, that you know this catastrophic event will take place as you are living your life. What effect would this have?

Scheffler suggests, in a long and thoughtful analysis, that it would have a pretty devastating effect on your life. It would rob many of your projects and activities of their value, and would probably induce a significant amount of despair, grief and existential hand-wringing. He further contends that it is not really plausible to react to the scenario with indifference. As he puts it:

[F]ew of us would be likely to say… “So what? Since it won’t happen until thirty days after my death, it isn’t of any importance to me. I won’t be around to experience it, and so it doesn’t matter to me in the slightest.” 
(Scheffler 2013, p. 19)

Of course, it’s always dangerous when philosophers play these intuition-mongering games. There may be some people who do react with utter indifference (think Kirsten Dunst in Melancholia - if you think life is pretty pointless anyway you might not be too bothered). But I still sympathise with what Scheffler is saying. I certainly don’t think that I would react with utter indifference. The possibility of the doomsday scenario after my death would probably change my attitude to life.

Scheffler thinks these likely reactions tell us something interesting about what it takes to live a valuable life. In particular, he thinks they suggest that there is a strong nonexperiential aspect to what makes life worth living. In the doomsday scenario, your life and experiences are unaffected — you do not die prematurely — but nevertheless the value of your life is, somehow, affected. He also thinks that these reactions suggest that there is a significant conservatism to what makes our lives valuable. In other words, we want the things we currently value and care about to continue to exist after we die. Combined, these two implications provide some support for the CADT. They point to the need for the continued existence of beings like us, living lives like ours, in order for our lives to have as much value as we seem to think they do.



One problem with the doomsday thought experiment, however, is that it conflates the continued existence of beings who are close to us with the continued existence of beings whose lives are like our own. What do I mean by this? I mean it could be, for all the doomsday thought experiment suggests, that what induces all the despair and existential angst is the fact that our children, friends and family, or any other beings close to us, will die. Although Scheffler thinks the continued existence of such beings is an important part of what confers value on our lives, he thinks that their existence alone does not do justice to the CADT. This leads to the second thought experiment:

Collective Infertility Thought Experiment: Suppose that the entire human race is infertile. In other words, the current generation of humans is the last generation of humans that will ever live. (A situation depicted in the novel and film The Children of Men). What effect would that have on our lives?

Again, Scheffler suggests that it would have a pretty devastating effect. It would induce a significant amount of despair and existential angst. Indeed, this is something that The Children of Men tries to illustrate in some rich, imaginative detail. We are shown a world in which anarchy and anomie reign supreme, and in which only an extremely authoritarian government can keep control. In the book, it is said that the predicament gives rise to ennui universel, and that only those who “lack imagination” or who are in the grip of an extreme egotism are immune from the negative effects.

In these respects, the collective infertility scenario is similar to the doomsday one. But there are some crucial differences. As Scheffler points out, the despair in the collective infertility scenario is not just caused by the prospective deaths of ourselves and people we care about. In fact, we already know that everyone we know and love will someday die and yet this, in and of itself, does not induce the same degree of existential angst. The despair in the collective infertility scenario is caused by the fact that everyone — including those with whom we have no special or personal connection — is gradually going extinct. The fact that we feel despair at this generalised extinction tells us something interesting. It tells us that there is a strong altruistic element to the role of the collective afterlife in our own lives. We care about the general fate of humankind, not just the fate of people we know and love. Once again, this seems to support the CADT.



To summarise all this in a simple formal argument, we could construct the following:


  • (1) If our intuitive reaction to certain thought experiments suggests that the continued existence of human-like beings in conditions roughly equivalent to those in which we now live is an important condition for meaning and value in our lives, then we are warranted in accepting the CADT.

  • (2) Our intuitive reactions to the Doomsday Thought Experiment and the Collective Infertility Thought Experiment suggest that the continued existence of human-like beings in conditions roughly equivalent to those in which we now live is an important condition for meaning and value in our lives.

  • (3) Therefore, we are warranted in accepting the CADT.



You might think it’s silly to spell out the argument in this level of detail. But one thing I like about this semi-formal reconstruction is that it renders transparent the type of inference that is taking place. Scheffler is defending the CADT on the basis of our reactions to certain thought experiments. Though this is a common methodology in philosophy, there are no doubt people who will worry about inferring such a significant thesis from such a limited set of reflections. All I can say to such people is that Scheffler’s reflections are much more detailed than I am making them out to be in this post, and even if his argument is ultimately lacking, it provides much food for thought.


3. The Ponzi Scheme Problem
There are several criticisms and commentaries on Scheffler’s argument. Some of them are modest in nature. For example, Susan Wolf — in a response contained within the original book — argues that much of what we value (e.g. certain intellectual and artistic pursuits) could still retain value in the face of the Doomsday scenario. This is modest insofar as it doesn’t completely deny that the collective afterlife plays a role in conferring value on our present lives. But there are also critics who take issue with the CADT as a whole. One of them is Mark Johnston who, in his review of the book, argues that if we take the CADT seriously, life ends up being akin to a Ponzi Scheme. And since he feels that this is implausible, he rejects the CADT.

Let’s try to make sense of this criticism. As best I can tell, it works as a reductio of the CADT:


  • (4) If the CADT is true, then the possibility of our lives being full of value and meaning is dependent on the existence of future generations living lives full of value and meaning.

  • (5) If the possibility of our lives being full of value and meaning depends on the existence of future generations living lives full of value and meaning, then life turns out to be a Ponzi scheme: we need an infinite stream of future generations to pay into the system in order to make our lives meaningful.

  • (6) But we are not going to have an infinite stream of future generations paying into the system.

  • (7) Therefore, our current lives are denuded of much of their value and meaning.

  • (8) It is implausible to think that our current lives are denuded of much of their value and meaning.

  • (9) Therefore, the CADT is implausible.


Johnston’s argument appeals to the “rough equivalence” concept that I introduced earlier on. As you’ll recall, I said that in order for the collective afterlife to confer value on our present lives, it cannot be the case that future generations live in a state of abject immiseration and deprivation, and that they must live lives that are roughly equivalent to those that we now live. Johnston is taking this a step further and arguing that their future lives must be very similar to our own, at least with respect to the amount of value and meaning in them. He then combines this with a transference principle for the conferral of value:

Transference Principle: If human generation n (Gn) lacks value and meaning in their lives, then so too does Gn-1, and Gn-2, all the way back to G1.

In other words, the lack of meaning and value in one future generation transfers back to the present generation. As Antti Kauppinen puts it, Johnston here seems to be endorsing a kind of Recursive Afterlifism. The question is whether this is itself a plausible construal of the CADT.

Kauppinen thinks that it is not, and I have similar feelings. While I appreciate the metaphor of the Ponzi scheme, I have a hard time accepting the transference principle upon which Johnston’s criticism is based. Kauppinen suggests in his commentary that future generations need not match us in terms of value and meaning in order for our activities and projects to have value conferred upon them by the existence of those future generations. For example, finding a cure for cancer in the present generation would be a valuable activity if it benefitted some future generations (e.g. 10 future generations). It would not be robbed of its value simply because there won’t be an infinite stream of happy future generations.

What we end up with is a modified version of the transference principle. Instead of the amount of value and meaning in Gn being entirely determined by the amount of value in Gn+1, we have a situation in which the amount of value and meaning in Gn is partly determined by the amount of value and meaning in Gn+1. This more modest form of collective afterlifism has some disturbing implications. It suggests that life for the final generation of humans will indeed be devoid of much meaning and value, and that things won’t be much better for the second-to-last generation. But this is entirely consistent with the CADT. It simply suggests that the impact of the eventual demise of the human race attenuates as we go back in time. I find that to be a plausible construal of the CADT.
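
The difference between the two construals can be made vivid with a small recursion over generations. The intrinsic values, the attenuation weight and the five-generation horizon below are all illustrative choices of mine, not anything in Johnston or Kauppinen.

```python
# Two construals of the transference principle, run as recursions over
# five generations. The intrinsic values, the attenuation weight and the
# number of generations are all illustrative choices of mine.

def johnston_value(n: int) -> list:
    """Full transference: a valueless later generation zeroes out every
    earlier one (each generation's value is capped by its successor's)."""
    values = [1.0] * n
    values[-1] = 0.0  # the final generation's life lacks value
    for i in range(n - 2, -1, -1):
        values[i] = min(values[i], values[i + 1])
    return values

def attenuated_value(n: int, weight: float = 0.5) -> list:
    """Kauppinen-style: a generation's value is only partly determined by
    its successor's, so the loss attenuates going back in time."""
    intrinsic = [1.0] * n
    intrinsic[-1] = 0.0
    values = intrinsic[:]
    for i in range(n - 2, -1, -1):
        values[i] = (1 - weight) * intrinsic[i] + weight * values[i + 1]
    return values

print(johnston_value(5))    # [0.0, 0.0, 0.0, 0.0, 0.0] -- the Ponzi collapse
print(attenuated_value(5))  # [0.9375, 0.875, 0.75, 0.5, 0.0] -- attenuation
```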


4. Conclusion
I’m going to leave it there. To quickly recap, Scheffler’s book argues that the amount of value and meaning in our lives is highly dependent upon the existence of a collective afterlife. He defends this by analysing two thought experiments, in one of which the human race goes extinct 30 days after your death, and in the other of which the human race is collectively infertile and dying out. One thing I have not covered in this post is the role of our deaths in conferring meaning on our lives. This is another, probably more controversial, aspect of Scheffler’s book. He thinks that our deaths are important for conferring meaning on our lives, and that the collective afterlife is more significant than our (individual) continued existence. I hope to cover that argument in more detail another time.

Tuesday, December 16, 2014

Should we criminalise robotic rape and robotic child sexual abuse?


I recently published an unusual article. At least, I think it is unusual. It imagines a future in which sophisticated sex robots are used to replicate acts of rape and child sexual abuse, and then asks whether such acts should be criminalised. In the article, I try to provide a framework for evaluating the issue, but I do so in what I think is a provocative fashion. I present an argument for thinking that such acts should be criminalised, even if they have no extrinsically harmful effects on others. I know the argument is going to be unpalatable to some, and I myself balk at its seemingly anti-liberal/anti-libertarian dimensions, but I thought it was sufficiently interesting to be worth spelling out in some detail. Hence the article.

For the detail, you’ll have to read the original paper (available here, here, and here). But in an effort to entice you to do that, I thought I would use this post to provide a brief overview.


1. What is robotic rape and robotic child sexual abuse?
First things first, it is worth clarifying the phenomena of interest. I’m sure people have a general sense of what a sex robot is, and maybe some vaguer sense of what an act of robotic rape or child sexual abuse might be, but it’s worth being as clear as possible at the outset in order to head-off potential sources of confusion. So let’s start with the notion of a sex robot. In the article, I define a sex robot as any artifact that is used for the purposes of sexual stimulation and/or release with the following three properties: (i) a human-like form; (ii) the ability to move; and (iii) some degree of artificial intelligence (i.e. an ability to interpret, process and act upon information from its environment).

As you can see from this definition, my focus is on human-like robots not on robots with more exotic properties, although I briefly allude to those possibilities in the article. This is because my argument appeals to the social meaning that might attach to the performance of sexual acts with human-like representations. For me, the degree of human-likeness is a function of the three properties included in my definition, i.e. the more human-like in appearance, movement and intelligence, the more human-like the robot is deemed to be. For my argument to work (if it works at all) the robots in question must cross some minimum threshold of human-likeness, but I don’t know where that threshold lies.
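
Just to make that idea tangible, here is a toy operationalisation. The paper offers no formula; the equal weighting, the property scores and the threshold below are all hypothetical stand-ins of mine.

```python
# A toy operationalisation of the human-likeness threshold. The paper
# offers no formula: the equal weighting, the scores and the threshold
# below are all hypothetical stand-ins of mine.

def human_likeness(appearance: float, movement: float,
                   intelligence: float) -> float:
    """Each property scored in [0, 1]; overall likeness as a simple mean."""
    return (appearance + movement + intelligence) / 3

MIN_THRESHOLD = 0.6  # hypothetical: where this lies is an open question

robot = {"appearance": 0.9, "movement": 0.7, "intelligence": 0.4}
score = human_likeness(**robot)
print(round(score, 2), score >= MIN_THRESHOLD)  # 0.67 True: the argument
# would get a grip on a robot like this one.
```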

So much for sex robots. What about acts of robotic rape and robotic child sexual abuse? Acts of robotic rape are tricky to define, given that legal definitions of rape differ across jurisdictions. I follow the definition used in England and Wales. Thus, I view rape as non-consensual sexual intercourse performed in the absence of a reasonable belief in consent. I then define robotic rape as sexual intercourse performed with a robot that mimics signals of non-consent, where it would be unreasonable for the performer of those acts to deny that the robot was mimicking signals of non-consent. I know there is some debate as to what counts as a signal of non-consent. I try to sidestep this debate in the article by focusing on what I call “paradigmatic signals of non-consent”. I accept that the notion of a paradigmatic signal of non-consent might be controversial. Acts of robotic child sexual abuse are easier to define. They arise whenever sexual acts are performed with robots that look and act like children.

Throughout the article, I distinguish robotic acts from virtual acts. The former are performed by a human actor with a real, physical robot partner. The latter are performed in a virtual world via an avatar or virtual character. There are, however, borderline cases, e.g. virtual acts performed using immersive VR technology with haptic sensors (such as those created by the Dutch company Kiiroo). I am unsure about the criminalisation argument in such cases, for reasons that will become clearer in a moment.


2. What is the prima facie argument for criminalisation?
With that definitional work out of the way, I can develop the main argument. That argument proceeds in a particular order. It starts by focusing on the purely robotic case, i.e. the case in which the robotic acts have no extrinsic effects on others. It argues that even in such a case, there may be grounds for criminalisation. That gives me a prima facie argument for criminalisation. After that, I focus on extrinsic effects, and suggest that they are unlikely to defeat this prima facie argument. Let’s see how all this goes.

The prima facie argument works like this:


  • (1) It can be a proper object of the criminal law to regulate conduct that is morally wrong, even if such conduct has no extrinsically harmful effects on others (the moralistic premise).

  • (2) Purely robotic acts of rape and child sexual abuse fall within the class of morally wrong but extrinsically harmless conduct that it can be a proper object of the criminal law to regulate (the wrongness premise).

  • (3) Therefore, it can be a proper object of the criminal law to regulate purely robotic acts of rape and child sexual abuse.


I don’t really defend the first premise of the argument in the article. Instead, I appeal to the work of others who have. For example, Steven Wall has defended a version of legal moralism that argues that actions involving harm to the performer’s moral character can, sometimes, be criminalised; likewise, Antony Duff has argued that certain public wrongs are apt for criminalisation even when they do not involve harm to others. I use both accounts in my article and suggest that if I can show that purely robotic acts of rape and child sexual abuse involve harm to moral character or fall within Duff’s class of public wrongs, then I can make the prima facie case for criminalisation.

This first premise is likely to prove difficult for many, particularly those with a classic liberal or Millian approach to criminalisation. They will argue that only harm to others renders something apt for criminalisation. I sympathise with this view (which is why I am cagey about the argument as a whole) but, again, I appeal to others who have tried to argue against it by showing that a more expansive form of legal moralism need not constitute a severe limitation of individual liberty, and that it may be very difficult to hold consistently to the liberal view. I also try to soften the blow by highlighting different possible forms of criminalisation at the end of the article (e.g. incarceration need not be the penalty). Still, even then, I accept that my argument may simply lead some to question the moralistic principles of criminalisation upon which I rely.

Premise two is where I focus most of my attention in the article. I defend it in two ways, each way corresponding to a different version of legal moralism. First, I argue that purely robotic acts of rape and child sexual abuse may involve harm to moral character. This is either on the grounds that the performance of such acts encourages/requires the expression of a desire for the real-world equivalents, or on the grounds that the performance requires a troubling insensitivity to the social meaning of those acts. This is consistent with Wall’s version of moralism. Second, I build upon this by arguing that the insensitivity to social meaning involved in such acts (particularly acts of robotic rape) would allow for them to fall within Duff’s class of public wrongs. The idea being that in a culture that has condoned or belittled the problem of sexual assault, an insensitivity to the meaning of those acts demands some degree of public accountability.

In defending premise (2) I rely heavily on work that has been done on the ethics of virtual acts and fictional representations, particularly the work of Stephanie Patridge. This reliance raises an obvious objection. There are those — like Gert Gooskens — who argue that our moral characters are not directly implicated in the performance of virtual acts because there is some distance between our true self and our virtual self. I respond to Gooskens by pointing out that the distance is lessened in the case of robotic acts. I rely on some work in moral psychology to support this view.

That is my defence of the prima facie argument.




3. Can the prima facie argument be defeated?
But it is important to realise how modest that argument really is. It only claims that robotic rape and robotic child sexual abuse are apt for criminalisation all else being equal. It does not claim that they are apt for criminalisation all things considered. The argument is vulnerable to defeaters. I consider two general classes of defeaters in the final sections of the paper.

The first class of defeaters is concerned with the possible effects of robotic rape and robotic child sexual abuse on the real-world equivalents of those acts. What if having sex with a child-bot greatly reduced the real-world incidence of child sexual abuse? Surely then we would be better off permitting or facilitating such acts, even if they do satisfy the requirements of Duff’s or Wall’s versions of moralism? This sounds right to me, but of course it is an empirical question and we have no real evidence as of yet. All we can do for now is speculate. In the article, I speculate about three possibilities. Robotic rape and robotic child sexual abuse may: (a) significantly increase the incidence of the real-world equivalents; (b) significantly reduce the incidence of the real-world equivalents; or (c) have an ambiguous effect. I argue that if (a) is true, the prima facie argument is strengthened (not defeated); if (b) is true, the prima facie argument is defeated; and if (c) is true, then it is either unaffected or possibly strengthened (if we accept a recent argument from Leslie Green about how we should use the criminal law to improve social morality).

The second class of defeaters is concerned with the costs of an actual criminalisation policy. How would it be policed and enforced? Would this not involve wasteful expenditure and serious encroachments on individual liberty and privacy? Would it not be overkill to throw the perpetrators of such acts in jail or subject them to other forms of criminal punishment? I consider all these possibilities in the article and suggest various ways in which the costs may not be as significant as we first think.




So that’s it. That is my argument. There is much more detail and qualification in the full version. Just to be clear, once again, I am not advocating criminalisation. I am genuinely unsure about how we should approach this phenomenon. But I think it is an issue worth debating and I wanted to provide a (provocative) starting point for that debate.

Sunday, December 7, 2014

Brain-based Lie Detection and the Mereological Fallacy




Some people think that neuroscience will have a significant impact on the law. Some people are more sceptical. A recent book by Michael Pardo and Dennis Patterson — Minds, Brains and Law: The Conceptual Foundations of Law and Neuroscience — belongs to the sceptical camp. In the book, Pardo and Patterson make a passionate plea for conceptual clarity when it comes to the interpretation of neuroscientific evidence and its potential application in the law. They suggest that most neurolaw hype stems from conceptual confusion. They want to throw some philosophical cold water on the proponents of this hype.

In many ways, I am sympathetic to their aims. I too am keen to downplay the neurolaw hype. Once upon a time, I wrote a thesis about criminal responsibility and advances in neuroscience. Half-way through that thesis, I realised that few, if any, of the supposedly revolutionary impacts of neuroscience on the law were all that revolutionary. Most were simply rehashed arguments about free will and responsibility, dressed up in neuroscientific garb, but which had been around for millennia. I also agree with the authors that there has been much misunderstanding and philosophical naivety on display.

That said, one area of neurolaw that I’m slightly more bullish about is the potential use of brain-based lie detection. But let me clarify. I’m not bullish about the use of “lie detection” per se, but rather EEG-based recognition detection tests or concealed information tests. I’ve written about them many times. Initially I doubted their practical importance and worried about their possibly mystifying effects on legal practice. But, more recently, I’ve come around to the possibility that they may not be all that bad.

Anyway, I’m participating in a conference next week about Pardo and Patterson’s book and so I thought I should really take a look at what they have to say about brain-based lie detection. That’s what this post is about. It’s going to be critical and will argue that Pardo and Patterson’s scepticism about EEG-based tests is misplaced. There are several reasons for this. One of the main ones is that they focus too much on fMRI lie detection, and not enough on the EEG-based alternatives; another is that they fail to really engage with the best current scientific work being done on the EEG tests. The result is that their central philosophical critique of these methods seems to lack purchase, at least when it comes to this particular class of tests.

But I’m getting ahead of myself. To develop this critique, I first need to review the basic techniques of brain-based lie detection and to summarise Pardo and Patterson’s main argument (the “Mereological Fallacy”-argument). Only then can I move on to my own critical musings.


1. What kinds of technologies are we talking about?
When debating the merits of brain-based lie detection techniques, it’s important to distinguish between two distinct phenomena: (i) the scanning technology and (ii) the testing protocol. The scanning technology is what provides us with data about brain activity. There are currently two main technologies in use in this field. Functional magnetic resonance imaging (fMRI) is used to track variations in the flow of oxygenated blood across different brain regions. This is typically thought to be a good proxy measure for underlying brain activity. Electro-encephalographic imaging (EEG) tracks variations in electrical activity across the scalp. This too is thought to be a good measure of underlying brain activity, though the measure is cruder than that provided by fMRI (by “cruder” I mean less capable of being localised to a specific sub-region of the brain).

The testing protocol is how the data provided by the scanning technology is used to find out something interesting about the test subject. In the classic control question test (CQT) the data is used to make inferences as to whether a test subject is lying or being deceitful. This testing protocol involves asking the test subject a series of questions, some of which are relevant to a particular incident (e.g. a hypothetical or real crime), some of which are irrelevant, and some of which are emotionally salient and similar to the relevant questions. The latter are known as “control” questions. The idea behind the CQT is that the pattern of brain activity recorded from those who lie in response to relevant questions will be different from the pattern of activity recorded from those who do not. In this way, the test can help us to separate the deceptive from the honest.

This is to be contrasted with the concealed information test (CIT), which doesn’t try to assess whether a test subject is being deceptive or not. Instead, it tries to assess whether they, or more correctly their brain, recognises certain information. The typical CIT involves presenting a test subject with various stimuli (e.g. pictures or words). These stimuli are either connected to a particular incident (“probes”), not connected to a particular incident but similar to those that are (“targets”), or irrelevant to the incident (“irrelevants”). The subject will usually be asked to perform some task to ensure that they are paying attention to the stimuli (e.g. pressing a button or answering a question). The idea behind the CIT is that certain recorded patterns of activity (data signals) are reliably correlated with the recognition of the probe stimuli. In this way, the test can be used to separate those who recognise certain information from those who do not.

Since the information in question will usually be tied to a crime scene, the test is sometimes referred to as the guilty knowledge test. But this name is unfortunate and should be avoided. The test does not prove guilt or even knowledge. At best, it proves recognition of information. Further inferences must be made in order to prove that a suspect has guilty knowledge. Indeed, calling it the “concealed” information test is not great either, since the suspect may or may not be “concealing” the information in question. For these reasons, I tend to prefer calling it something like a memory detection test or, better, a recognition detection test, but concealed information test is the norm within the literature so I’ll stick with that.
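
To make the contrast with the CQT vivid, here is a toy simulation of the inference a CIT licenses. Everything quantitative in it (signal sizes, trial counts, the decision threshold) is invented for illustration; real protocols rely on far more careful statistics, e.g. bootstrapped amplitude comparisons.

```python
import random

# A toy simulation of the inference a CIT licenses: does the averaged
# response to the probe stand out against responses to irrelevant items?
# The amplitudes, trial counts and threshold are invented; real protocols
# use far more careful statistics (e.g. bootstrapped amplitude comparisons).

def simulate_subject(recognises_probe: bool, trials: int = 100) -> dict:
    """Average a noisy response over many presentations of each stimulus."""
    probe_boost = 2.0 if recognises_probe else 0.0  # P300 only if recognised
    avg = lambda boost: sum(random.gauss(boost, 1.0) for _ in range(trials)) / trials
    return {"probe": avg(probe_boost), "irrelevant": avg(0.0)}

def cit_verdict(amplitudes: dict, threshold: float = 1.0) -> str:
    """Infer recognition when the probe response clearly exceeds baseline."""
    diff = amplitudes["probe"] - amplitudes["irrelevant"]
    return "information recognised" if diff > threshold else "no recognition detected"

random.seed(1)
print(cit_verdict(simulate_subject(recognises_probe=True)))   # information recognised
print(cit_verdict(simulate_subject(recognises_probe=False)))  # no recognition detected
```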

As I said above, scanning technologies and testing protocols are distinct. At present, it happens to be the case that EEGs are used to provide the basis for a CIT, and that fMRIs are used to provide the basis for a CQT. But this is only because of present limitations in what we can infer from the data provided by those scans. It is possible that fMRI data could provide the basis for a CIT; and it is possible that EEG data could provide the basis for a CQT. In fact, there are already people investigating the possibility of an fMRI-based CIT.

All that said, the technology I am most interested in, and the one that I will focus on for the remainder of this post, is the P300 CIT. This is an EEG-based technology. The P300 is a particular kind of brainwave (an “event-related potential”) that can be detected by the EEG. The P300 is typically elicited when a subject views a rare and meaningful (i.e. recognised) stimulus in a sequence of other stimuli. As such, it is thought to provide a promising basis for a CIT. I won’t go into any great depth about the empirical evidence for this technique, though you can read about it in some of my papers, as well as in this review article from Rosenfeld et al (2013). I’m avoiding this because Pardo and Patterson’s criticisms of these technologies are largely conceptual in nature.
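
Here is a toy sketch of the inference the P300 CIT embodies: average the stimulus-locked EEG epochs for probe and irrelevant items, then compare amplitudes in the P300 window. Everything in it (the simulated data, the sample window, the function names, the decision threshold) is an illustrative assumption; real analyses use artifact rejection and bootstrap statistics, as discussed in the Rosenfeld et al review.

```python
# A toy sketch of the P300 CIT's decision logic, using simulated EEG
# epochs. All numbers and thresholds are illustrative only.

import random
from statistics import mean

random.seed(1)

def simulate_epoch(recognised):
    """Simulate one stimulus-locked EEG epoch (100 samples, arbitrary
    units). For a recognised stimulus we add a positive deflection in
    samples 30-59, a crude stand-in for the 300-600 ms P300 window."""
    epoch = [random.gauss(0.0, 1.0) for _ in range(100)]
    if recognised:
        for i in range(30, 60):
            epoch[i] += 2.0
    return epoch

def p300_amplitude(epoch):
    """Mean amplitude in the stand-in P300 window."""
    return mean(epoch[30:60])

# Average amplitude across many probe and irrelevant trials.
n_trials = 40
probe_amps = [p300_amplitude(simulate_epoch(True)) for _ in range(n_trials)]
irrel_amps = [p300_amplitude(simulate_epoch(False)) for _ in range(n_trials)]

# The CIT inference: a reliably larger probe response suggests that the
# brain treats the probe as rare and meaningful, i.e. recognises it.
difference = mean(probe_amps) - mean(irrel_amps)
print(f"probe - irrelevant amplitude: {difference:.2f}")
print("recognition inferred" if difference > 1.0 else "no recognition inferred")
```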

Let’s turn to that critique now.


2. The P300 and the Mereological Fallacy
Before I get into the meat of Pardo and Patterson’s argument, I need to offer some warnings to the reader. Although the authors do, in the relevant chapter of their book, lump the P300 CIT together with the fMRI CQT, it is pretty clear that their major focus is on the latter, not the former. This is clear both from the time they spend covering the evidence in relation to the fMRI tests, and from their focus on the concept of lying in their critique of those tests. It is also clear from the fact that they assume that the P300 CIT is itself a type of lie detection. In this they are not entirely correct. It is true that one may be inclined to infer deceptiveness from the results of a P300 CIT, either because the testing protocol forces subjects to lie when responding to the stimuli, or because the subjects themselves may deny recognising the target information. But inferring deceptiveness is neither the primary goal nor the primary forensic use of this test; inferring recognition is.

Pardo and Patterson’s preoccupation with lie detection should blunt some of the force of my critique. After all, my focus is on recognition detection, and so it may fairly be said that my defence of that technology fails to engage their larger point. Nevertheless, I think there is some value in what I am about to say. Pardo and Patterson do still discuss the P300 CIT, and they do still argue that the mereological fallacy (which I’ll explain in a moment) could infect the interpretation of evidence drawn from that test. The fact that they spend less time fleshing this out doesn’t make the topic irrelevant. Indeed, it may bolster my critique, since it suggests that their application of the mereological fallacy to the P300 CIT is not as well thought out, nor as respectful of the current state of the art in research and scholarship, as it should be.

But what is this mereological fallacy and how does it affect their argument? Mereology is the study of part-whole relations, so as you might gather the mereological fallacy arises from a failure to appreciate the difference between a whole and one of its parts. For the sake of clarity, let’s distinguish between two versions of the mereological fallacy. The first, more general one, can be defined like this:

The General Mereological Fallacy: Arises whenever you ascribe to a part or sub-part of a whole a property that is rightly ascribed only to the whole. For example, applying the predicate “fast” to a runner’s leg, rather than to the runner themselves.

The second, more specific one, can be defined like this:

The Neurolaw Mereological Fallacy: Arises whenever a neurolaw proponent ascribes behavioural or person-level properties to a state of the brain. (In other words, whenever they assume that a brain state is constitutive of or equivalent to a behavioural or person-level state). For example, applying the predicate “wise” to a state of the brain, as opposed to the person whose brain state it is.



This more specific version of the fallacy is the centrepiece of Pardo and Patterson’s book. Indeed, their book is effectively one long elaboration of how the neurolaw mereological fallacy arises in various aspects of the neurolaw literature. In basing their criticism on this fallacy, they are following the work of others. As far as I am aware, the mereological fallacy was first introduced into debates about the philosophy of mind by Bennett and Hacker. Pardo and Patterson are simply adapting and updating Bennett and Hacker’s critique, and applying it to the neurolaw debate. This is not a criticism of their work since they do that job with great care and aplomb; it is simply an attempt to recognise the origins of the critique.

Anyway, the neurolaw mereological fallacy provides the basis for Pardo and Patterson’s main critique of brain-based lie detection. Though they do not set this critique out with any formality, I think it can be plausibly interpreted as taking the following form (see pp. 99-105 for the details):


  • (1) If it is likely that the use of brain-based lie detection evidence would lead legal actors (lawyers, judges, juries etc) to commit the neurolaw mereological fallacy, then we should be (very) cautious about its forensic uses.

  • (2) The use of brain-based lie detection evidence is likely to lead legal actors to commit the neurolaw mereological fallacy.

  • (3) Therefore, we should be (very) cautious about the forensic uses of brain-based lie detection evidence.


Let’s go through the main premises of this argument in some detail.

The first premise is the guiding normative assumption. I am not going to challenge it here. I will simply accept it arguendo (“for the sake of argument”). Nevertheless, one might wonder why we should endorse it. Why is the mereological fallacy so normatively problematic? There are several reasons. The main one is that the law cares about certain concepts. These include the intentions of the murder suspect, the knowledge of the thief, and the credibility or potential deceptiveness of the witness. The application of these concepts to real people is what carries the normative weight in a legal trial. The content of one’s intentions and the state of one’s knowledge are what separate the criminal from the innocent. The deceptiveness of one’s testimony is what renders it probative (or not) in legal decision-making. Pardo and Patterson maintain that all these concepts, properly understood, apply at the behavioural or personal level of analysis. For example, they argue that deceptiveness depends on a complex relationship between a person’s behaviour and the context in which that behaviour is performed. To be precise, being deceptive means saying or implying something that one believes to be false, in a social context in which truth-telling is expected or demanded. These behavioural-contextual criteria are what ultimately determine the correct application of the predicate “deceptive” to an individual.

If we make a mistake in the application of those predicates, it has significant normative implications. If we deem someone deceptive when, by rights, they are not, then we risk a miscarriage of justice (or something less severe but still malign). The concern that Pardo and Patterson have is that neurolaw will encourage people to make such mistakes. If they start using neurological criteria as the gold-standard in the application of normative, behavioural-level predicates like “intention” and “knowledge”, then they risk making normative errors. This is why we should be cautious about the use of neuroscientific evidence in the law.

But how cautious should we be? That’s something I’m not entirely clear about from my reading of Pardo and Patterson’s book. They are not completely opposed to the use of brain-based lie detection in the law. Far from it. They think it could, one day, be used to assist legal decision-making. But they do urge some level of caution. My sense from their discussion, and from their book as a whole, is that they favour a lot of caution. This is why I have put “very” in brackets in my statement of premise (1).

Moving on then to premise (2), this is the key factual claim about the use of brain-based lie detection evidence. In its current form it does not discriminate between the P300 CIT and the fMRI CQT. Pardo and Patterson’s concern is that evidence drawn from these tests will lead legal actors to confuse the presence of brain signal X with the subject’s meeting the criteria for the application of a behavioural predicate like “knowing” or “intending” or “deceiving”. In the case of the P300 CIT, the fallacy arises if the detection of the P300 is taken to be equivalent to the detection of a “knowledge”-state within the subject’s brain, rather than merely evidence from which we can infer that the subject is in the relevant behavioural knowledge-state.

But do proponents of this technology commit the fallacy? Pardo and Patterson argue that they do. They offer support for this by quoting from an infamous proponent of the P300 CIT: Lawrence Farwell. When describing how the technology worked, Farwell once said that the “brain of the criminal is always there, recording events, in some ways like a video camera”. Hence, he argued that the P300 CIT reveals whether or not crime-relevant information is present in the brain’s recording. Farwell is committing the fallacy here because he thinks that the state of knowing crime-relevant information is equivalent to a brain state. But it is not:

This characterization depends on a confused conception of knowledge. Neither knowing something nor what is known — a detail about a crime, for example — is stored in the brain…Suppose, for example, a defendant has brain activity that is purported to be knowledge of a particular fact about a crime. But, suppose further, this defendant sincerely could not engage in any behavior that would count as manifestation of knowledge. On what basis could one claim and prove that the defendant truly had knowledge of this fact? We suggest that there is none; rather, as with a discrepancy regarding lies and deception, the defendant’s failure to satisfy any criteria for knowing would override claims that depend on the neuroscientific evidence.
(Pardo and Patterson 2013, pp. 101-102)

Or as they put it again later, behavioural evidence is “criterial” evidence for someone knowing a particular fact (satisfying the behavioural criteria simply is equivalent to being in a state of knowledge); neuroscientific evidence is merely inductive evidence that can be used to infer what someone knows. People like Farwell are wont to confuse the latter with the former, and hence to commit the mereological fallacy.

That, at any rate, would appear to be their argument. Is it any good?


3. Should we take the mereological fallacy seriously?
I want to make three criticisms of Pardo and Patterson’s argument. First, I want to suggest that the risk of proponents of the P300 CIT committing the mereological fallacy is, in reality, slight. At least, it is when one takes into account the most up-to-date work being done on the topic. Second, I want to push back against Pardo and Patterson’s characterisation of the mereological fallacy in the case of the P300 CIT. And third — and perhaps most significantly — I want to argue that in emphasising the risk of a neurolaw mereological fallacy, Pardo and Patterson ignore other possible — and arguably more serious — evidential errors in the legal system.

(Note: these criticisms are hastily constructed. They are my preliminary take on the matter. I hope to revise them after next week’s conference.)

Turning to the first criticism, my worry is that in their defence of premise (2), Pardo and Patterson are constructing something of a straw man. For instance, they cite Lawrence Farwell as an example of someone who might confuse inductive neuroscientific evidence of knowledge with criterial behavioural evidence of knowledge. But this is a misleading example. Farwell’s characterisation of the brain as something that simply records and stores information has been criticised by leading proponents of the P300 CIT. For example, J. Peter Rosenfeld, himself a leading psychophysiologist and P300 researcher, wrote a lengthy critical appraisal of Farwell back in 2005. In it, he identified the problems with Farwell’s analogy, noting that the act of remembering or recollecting information is highly fallible and reconstructive. Other P300 CIT researchers have also tested the vulnerability of the technique to false memories. Beyond this, Farwell has been more generally criticised by experts in the field. In a recent commentary on a review article written by Farwell, the authors (a group of leading P300 researchers) said this:

By selectively dismissing relevant data, presenting conference abstracts as published data, and most worrisome, deliberately duplicating participants and studies, he misrepresents the scientific status of brain fingerprinting. Thus, [Farwell] violates some of the cherished canons of science and if [he] is, as he claims to be, a ‘brain fingerprinting scientist’ he should feel obligated to retract the article. 
(Meijer et al, 2013)

Of course, Farwell isn’t a straw man: he really exists and he really has pushed for the use of this technology in the courtroom. So I’m not claiming that there is no danger here, or that Pardo and Patterson are completely wrong to warn us about it. My only point is that Farwell isn’t particularly representative of the work being done in this field, and that there are others who are alive to the dangers of assuming that the P300 signal does anything more than provide inductive evidence of knowledge. To be fair, I have a dog in this fight, since I have written positively about this technology. But I would never claim that the detection of a P300 is criterial evidence of guilty knowledge; I would always point out that further inferential steps are needed to reach such a conclusion. I am also keen to point out that this technology is not yet ready for forensic use. Along with other proponents, I think widespread field-testing — in which the results of a P300 test are measured against other, more conclusive forms of evidence (including behavioural evidence) in actual criminal/legal cases — would be needed before we seriously consider it.

This leads me to the second criticism, which is that I am not entirely sure about Pardo and Patterson’s characterisation of the mereological fallacy, at least as it pertains to the P300 CIT. They claim that there is an important distinction between a person knowing something and the neurological states of that person. Knowledge is a state pertaining to the whole, whereas neurological states are states of the whole’s sub-parts. Fair enough. But as I see it, the P300 CIT is not a test of a subject’s knowledge at all. It is a recognition test. In fact, it is not even a test of whether a person recognises information; rather, it is a test of whether the person’s brain recognises information. A person’s brain could recognise a stimulus without the person themselves recognising it. Why? Because large parts of what the brain does are sub-conscious (sub-personal — if we assume the personal is defined by continuing streams of consciousness). Figuring out whether a subject’s brain recognises a stimulus seems forensically useful to me, and it need not be confused with assuming that the person recognises the stimulus.

The final criticism is probably the most important. A major problem I have with Pardo and Patterson’s discussion of brain-based lie detection is how isolated it feels. They highlight the empirical and conceptual problems with this form of evidence without considering it in its proper context. I will grant that there is a slight risk that proponents of the P300 CIT will commit the mereological fallacy. But how important is that risk? Should it really lead us to be (very) cautious about the use of this technology? That can only be assessed in context. What other methods do we currently use for determining whether a witness or suspect recognises certain crime-relevant information? There are several. The most common are robust questioning, cross-examination and interrogation. Verbal or behavioural responses elicited by these methods are then used to make inferences about what someone knows or does not know. But these methods are not particularly reliable. Even if behavioural criteria determine what it means for a subject to know something, there are all sorts of behavioural signals that can mislead us. Is someone hiding something if they are fidgety? Or if they look nervous and blink too often? What if they change their story? We routinely make inferences from these behavioural signals without knowing how reliable they are or how likely they are to mislead us (though we may have some intuitive sense of this).

And this matters. One of the points that I, and others, have been making in relation to the P300 CIT is that it provides a neurological signal from which we can make certain inferences, and that it comes with known error rates and precise protocols for its administration. In this respect it seems to have a comparative advantage over many of the other methods we use for making similar inferences. This is why we should take it seriously. In other words, even if it does carry the risk that legal actors will commit the mereological fallacy, that risk has to be weighed against the risks associated with other, similar evidential methods. If the latter outweigh the former, Pardo and Patterson’s argument seems a good deal less significant.
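
A quick, back-of-the-envelope illustration of that weighing exercise. The sensitivities, false-positive rates and prior below are entirely hypothetical, invented only to show that a method with known error rates can be weighed explicitly via Bayes’ theorem, which is far harder to do for informal behavioural cues.

```python
# Comparing evidential methods with entirely hypothetical error rates.
# The point is only that known sensitivity and false-positive rates can
# be weighed explicitly; none of these figures come from real studies.

def posterior(prior, sensitivity, false_positive_rate):
    """P(subject recognises the information | test says 'recognised'),
    computed via Bayes' theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

prior = 0.5  # suppose we are otherwise agnostic about recognition

# Hypothetical figures: a lab-validated P300 CIT versus an informal
# judgement of demeanour during interrogation.
print("P300 CIT:  ", round(posterior(prior, 0.85, 0.05), 2))
print("demeanour: ", round(posterior(prior, 0.60, 0.40), 2))
```

On these made-up numbers, a positive P300 result would move us much further than a judgement based on demeanour. The real question is what the field-tested error rates of each method turn out to be.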




4. Conclusion
To briefly sum up, Pardo and Patterson offer an interesting and philosophically sophisticated critique of brain-based lie detection. They argue that one of the dangers of this technology is that the legal actors who make use of it will be prone to commit the neurolaw mereological fallacy. This fallacy arises when they ascribe behavioural-level properties to brain states. Though I agree that this is a fallacy, I argue that it is not that dangerous, at least in the case of evidence drawn from the P300 CIT. This is for three reasons. First, the risk of actual proponents of the technology committing this fallacy is slight. With the exception of Lawrence Farwell — whom Pardo and Patterson cite — most proponents of the technology are sensitive to its various shortcomings. Second, Pardo and Patterson’s characterisation of the mereological fallacy — at least when it comes to this type of evidence — seems misleading. The P300 CIT provides a signal of brain-recognition of certain information, not person-recognition of that information. And third, and most important, the risk of committing the mereological fallacy must be weighed against the risk of making faulty inferences from other types of evidence. I suggest that the latter is likely to be greater than the former.

Friday, December 5, 2014

The Philosophy of Sex (Series Index)




Once you've written nearly 700 posts, you begin to see patterns you never really appreciated. For example, I just realised that I've written quite a bit about the philosophy of sex (broadly construed). In doing so, I've covered a number of controversial debates and issues. These include: the permissibility of pornography; the criminalisation of prostitution; the punishment of rape and sexual assault; and the ethics of sex in virtual and robotic worlds.

Anyway, I thought it might be useful to group together everything I've written on the topic in this one post. I think it makes for some interesting reading. I've divided this up by theme, starting with the basic views on the ethics of sex, and then moving into more specialised debates. I haven't included the numerous posts I have written on the ethics of same-sex relations. There's another index-post that will give you links to them.


1. Introduction: General Issues in the Ethics of Sex


  • On Benatar's Two Views of Sexual Ethics - A look at David Benatar's classic paper, which argued that a casual attitude toward sex implies that there is nothing particularly wrong with rape and child sexual abuse. I tried to resist Benatar's conclusions.




2. The Ethics of Pornography







3. Prostitution and the Ethics of Commercial Sex






4. Criminal Law: Rape, Sexual Assault and Incest



  • On Rubenfeld and the Riddle of Rape by Deception - My analysis and critique of Jed Rubenfeld's controversial article on rape by deception. Rubenfeld argued that rape law should not be premised on consent and the right to sexual autonomy. Instead, it should be based on the right to self-possession and bodily autonomy. 




5. Robotic and Virtual Sex


  • Will sex workers be replaced by robots? (A Precis) - A brief summary of my paper on the topic of sex work and technological unemployment. I try to argue -- contra others -- that sex work may be one of the few areas that is resistant to technological unemployment.