This is the second part in a short series of posts looking at Frederick Schauer’s article “Lie Detection, Neuroscience and the Law of Evidence”. In this article, Schauer examines the debate surrounding the legal admissibility of fMRI lie detection evidence, and argues that there are good reasons to allow such evidence in a court of law. This is interesting in that it runs contrary to the prevailing view about fMRI lie detection.
In part one, I reviewed some of the background issues in Schauer’s article. This included a brief discussion of the problem of false testimony within the law — a problem that makes a reliable lie detector particularly alluring. It also included an overview of the legal history of the lie detector test, noting that from its earliest days it has struggled to win acceptance in the courts. This trend has continued despite the advent of newer versions of the test using fMRI imaging techniques.
Schauer questions the tenability of this trend. He does so by defending one overarching claim, which we may call “Schauer’s Thesis”:
Schauer’s Thesis: Whether fMRI lie detection evidence should be admitted to court is not simply a question of its scientific validity and reliability; it is also (perhaps primarily) a question of the normative and ethical function of the law. That is to say, questions of evidential admissibility are fundamentally determined by legal-ethical standards, not purely scientific ones.
This claim is significant in that current tests for the admissibility of scientific evidence, such as DNA fingerprinting and other forensic techniques, are heavily reliant on scientific standards of validity and reliability. For instance, the Daubert test, whose introduction is now advocated in the UK, states that judges should assess scientific evidence by referring to the various indicia of reliability that are common in the scientific world. These indicia include things like “known error rates”, “general acceptance within the relevant scientific community”, “testability” and “passing peer review”.
This approach yields significant legal territory to the norms of scientific inquiry, and while this may often be appropriate, Schauer’s Thesis urges lawyers and legal theorists to regain at least part of this territory. What scientists rightfully deem “good evidence” and what legal theorists rightfully deem “good evidence” may be two different things. It’s important not to lose sight of this.
Schauer supports his thesis with two arguments. For cognitive convenience, I have labelled them the probative context argument and the epistemic progress argument. In the remainder of this post, I examine each argument in some detail.
1. The Probative Context Argument
As mentioned in part one, in every legal case there is some set of facts that need to be proved (or disproved) in order for the case to succeed. If I am to be convicted of murder, it must be proved that I intentionally killed another person. One step on the way towards proving this would be to establish that I was present at the scene of the crime. Lie detectors or other forensic evidence might be used to do this. But the value of any such evidence depends largely on three factors:
Probability: Does the evidence raise or lower the probability of the factum probandum and if so, by how much does it raise or lower its probability?
Standard of Proof: What confidence threshold must the probability of the factum probandum cross in order for it to count as being proved or not proved?
Legal Purpose: Is the evidence being submitted in order to prove or disprove the factum probandum?
These three factors determine the probative context in which the evidence is presented. This context varies relative to the legal issue at stake, and the party for whom the evidence is proffered.
For example, in criminal cases, the standard of proof for the prosecution is beyond a reasonable doubt. This is a notoriously fuzzy standard, but let’s put a figure on it and say that it corresponds to a 95% (0.95) probability of the factum probandum being true. To return to my murder trial, it would follow then that, in order to secure a conviction, the prosecution would need to introduce a body of evidence that (in its totality) raises the probability of my intentionally killing the victim to the 95% threshold. Contrariwise, it would also follow that if I could introduce any evidence that lowered the probability back down below the 95% threshold, then I would succeed in my defence. Thus, the probative value of the evidence varies depending on the context.
This is important because it feeds into the assessment of lie detection evidence. Reviewing the available literature, Schauer notes that fMRI lie detector tests have reported reliability rates that vary from 70-90%. This means they are better than chance at identifying deceptive individuals, but far from perfect. Unfortunately, in his discussion, Schauer doesn’t break down the data into false positives and false negatives. Consequently, I’m unsure whether the 10-30% of failures covers truth-tellers who were falsely identified as liars, liars who were never spotted, or some combination of both. This could make a big difference to the legal utility of the evidence from the prosecutorial side in a criminal trial, but Schauer doesn’t look at the issue from their perspective.
Instead, Schauer looks at the issue from the perspective of the defence and notes that although a 70% reliability rate might not suffice to prove that someone is guilty, it might suffice to prove reasonable doubt. So, for instance, if I’m being tried for murder and I have an alibi which, following the administration of an fMRI lie detection test, is 70% likely to be true (Bayesian considerations to one side), it would be highly useful for the court to be made aware of this fact.
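Setting those Bayesian considerations back on the table for a moment, it may help to see how a test of this sort actually moves the probabilities around. The sketch below is purely illustrative (the 50/50 prior and the symmetric 70% accuracy are my assumptions, not figures from Schauer or the fMRI literature), but it shows why a 70%-accurate test can matter to the defence without coming anywhere near the prosecution’s 95% threshold:

```python
def posterior(prior, sensitivity, specificity):
    """Bayesian update: probability the alibi is true, given that the
    test says 'truthful'. Sensitivity = P(test says truthful | truthful);
    specificity = P(test says lying | lying)."""
    true_pos = sensitivity * prior
    false_pos = (1 - specificity) * (1 - prior)
    return true_pos / (true_pos + false_pos)

# Illustrative numbers only: a 50/50 prior on the alibi, and a
# hypothetical test that is 70% accurate in both directions.
p = posterior(0.5, 0.7, 0.7)
print(round(p, 3))  # 0.7
```

On these (assumed) numbers, a favourable test result lifts the alibi from a coin-flip to 70% likely: nowhere near proof of innocence, but easily enough to sustain a reasonable doubt.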
Breaking it down, the argument Schauer’s making looks something like this:
- (1) In its present form(s), the accuracy rate of fMRI lie detection is somewhere between 70% and 90%.
- (2) In some probative contexts, a 70% likelihood that X is telling the truth/lying is highly probative.
- (3) Therefore, fMRI lie detection could be useful (even in its present form) in some probative contexts.
Thus we have the probative context argument. It should be pointed out that premise (2) can be defended with a number of examples. I used the criminal example since it’s possibly the most straightforward, but in civil trials the standard of proof is much lower (balance of probabilities) and hence the lie detector test could be highly probative in those contexts too.
2. Challenges to the Probative Context Argument
I have to say, Schauer’s basic point strikes me as being a good one. Nevertheless, there are some lingering concerns. Personally, I think the second premise needs to show some greater sophistication in its use of probabilities and accuracy rates. Thus, as mentioned previously, greater appreciation should be shown for rates of false positives and false negatives, not simply overall accuracy rates. A test that is 46% accurate might actually be highly probative, depending on whether the 54% of inaccuracies refers to false positives or false negatives. If the 54% refers solely to false negatives, then the test might actually be incredibly useful to the prosecution in a criminal trial. For in that case, the test would accurately identify guilty people to the exclusion of innocents. Thus, any concern about punishing the innocent would be allayed.
But this observation is a relatively minor one. The second premise of the argument could easily be reformulated and defended in such a way that the importance of false positives and false negatives is brought to the fore. A more pressing concern, and one that Schauer does actually address, arises in relation to the first premise. Critics will be keen to point out that the 70-90% accuracy rate is derived from experimental studies of the tests, not from real world applications. There are serious doubts as to the merits of extrapolating from such experimental studies to the real world. What might be 90% accurate in the laboratory setting, could be only 30% accurate in the field, or even less. We simply don’t know.
This is the ecological validity challenge. If it succeeds, it would undermine the probative context argument since that argument depends on us having some reasonable estimate of the accuracy of the test in question. If we have no such reasonable estimate — if the probabilities in question are, to put it bluntly, inscrutable — then Schauer’s argument won’t work. But are things really this bad?
Schauer thinks not. As he sees it, the ecological validity objection breaks down into two distinct parts. The first claims that we cannot extrapolate because the experimental subjects are not representative of the wider population. The second claims that the incentives under which people lie in an experimental setting are artificial, and quite distinct from the high stakes incentives in civil or criminal litigation.
Responding to the first claim, Schauer notes that this is a general problem with many kinds of evidence proffered for forensic use. For example, studies about the unreliability of eyewitness identification and memory are typically performed on undergraduate psychology students who may not be representative of the wider population. And because this is such a general problem, psychologists and other scientists have frequently sought to address it in their studies. They have done so by recruiting more representative samples and trying to match real-world conditions more closely. Furthermore, they have tried to see whether results derived from the low-stakes, unrepresentative-sample tests hold up in the more high-stakes, representative-sample settings. Citing a slew of general reviews done on this topic, Schauer notes that the general trend seems to be that the results do hold up. Although similar studies have not yet been done on fMRI lie detection, the trend may well remain the same unless there are particular difficulties with the extrapolability of fMRI results.
In relation to the second part of the objection, Schauer accepts that there are significant problems here. It is very difficult to recreate in the lab the pressure to lie that might be felt in a real-world setting. But some fMRI researchers have tried to do this (Greene and Paxton, 2009) and their results are consistent with the premise underlying fMRI lie detectors. Future studies should address this in more depth, so that a more reliable picture of extrapolability can emerge.
A final related point emerges from the individual-population divide. Most fMRI studies, as well as most scientific studies, generate their statistical output by averaging over the population of experimental subjects. This leads to a classic problem in the legal context: how can this population-level data be probative in the individual case? After all, just because a test is 70% accurate across a population does not mean it is accurate for a particular individual in a particular case. So should the information be used at all?
Although this has been a surprisingly popular critique in legal circles, particularly when it comes to the use of epidemiological studies in tort law, it is flawed. As Schauer points out, the fact that, for any random person plucked from the population, a particular test is accurate 7 times out of 10 is probatively valuable given the right probative context. So this does not defeat the probative context argument.
The only problem with all this is that it might suggest a certain weakness in the argument. After all, given the right context, a test with an exceptionally low probability of being correct (say 5%) might be probatively relevant. Is this a reductio of the argument, or just a necessary truth about the nature of evidence and proof? I won’t answer that question here.
3. The Epistemic Progress Argument
On its own, the probative context argument has some value. But when coupled with the second argument, the argument from epistemic progress, it makes a good overall case for Schauer’s thesis. To explain the epistemic progress argument, I’m going to rely on some concepts from epistemic systems theory, which I’ve covered before on this blog.
To review, an epistemic system is any social system that (at least sometimes) generates judgments of truth or falsity. The legal trial is a classic example since it generates judgments of truth or falsity concerning the factum probandum. Following Koppl’s schema, the epistemic efficiency of an epistemic system can be defined as follows:
Epistemic Efficiency: A measure of the likelihood of the system reaching a true judgment. Either 1 minus the error rate of the system; or the ratio of true judgments to total judgments.
And epistemic progress can be defined in this way:
Epistemic Progress: A system can be said to undergo epistemic progress whenever its epistemic efficiency is increased.
The basic idea is that epistemic progress is a good thing, and that any reform to the system that allows it to progress would be welcome. The key, however, is that epistemic progress is always assessed relative to the existing level of epistemic efficiency. Thus, if we wished to argue in favour of a particular reform, we would have to do so by directly referencing the current level of efficiency. This relativistic property of epistemic progress has one interesting effect: if the current level of epistemic efficiency is low, then a particular reform with an unimpressive level of overall accuracy may nevertheless be warranted on the grounds that it still raises the efficiency of the system.
Unsurprisingly, Schauer argues that this is true in the case of fMRI lie detection. This gives him the following argument:
- (4) If a particular reform to an epistemic system leads to epistemic progress, then it ought to be (all else being equal) welcomed.
- (5) The admissibility of fMRI lie detection evidence would lead to epistemic progress in the law.
- (6) Therefore, (all else being equal) fMRI lie detection evidence ought to be welcomed.
Schauer argues in favour of premise (5) by highlighting how existing methods of solving the false testimony problem are rather lacking. Historically, the administration of the religious oath was thought to incentivise truth-telling. In a culture in thrall to the fear of hell, this may have had some sway, but in its modern secular form the oath relies on the desire to be honest and the threat of perjury to do its work. Arguably, neither of these is particularly effective, and certainly the oath has no known accuracy rate associated with it.
Robust cross examination is also often singled out as an excellent method for solving the false testimony problem. But this is highly suspect. As Schauer notes, cross examination may expose inconsistencies in certain cases, but is unlikely to do so in the case of the seasoned or practiced liar (movie depictions of the practice notwithstanding). In these cases we may be left with contradictory testimonies, which can be very difficult for a jury to assess. Furthermore, as with the oath, there are no known accuracy rates associated with cross-examination.
In light of these comparators, the admission of fMRI lie detection would seem to represent an improvement. Since it does have known accuracy rates, and since it can do something to break the deadlock between contradictory testimonies, it could lead to epistemic progress. Thus, the argument goes through.
Two caveats are in order here. First, in his defence of premise (5) Schauer may have missed out on other methods of solving the false testimony problem, ones which, although not currently used, would be more progressive than fMRI lie detection. This wouldn’t defeat the argument, but it might lessen its appeal since those alternatives would be the better bet. Second, the conclusion to the argument includes an “all else being equal”-clause. It might be possible for someone to argue that, in the case of fMRI evidence, all else is not equal. For example, they could argue that judges and juries are known to overvalue the results of fMRI studies, hence the admission of fMRI lie detection might do more harm than good. Schauer actually looks at this objection in the article, suggesting that it is ineffective, but I won’t cover that discussion here. I think this issue actually deserves a more detailed consideration, which I may (if the mood takes me) cover in a future post.
To sum up, Schauer’s thesis is that the admissibility of fMRI lie detection evidence cannot be determined solely on scientific grounds. He makes his case for this thesis with two arguments. The first — the probative context argument — claims that techniques with (scientifically) unimpressive accuracy rates might still be desirable in the legal setting. This is because the value of evidence varies with the probative context. The second — the epistemic progress argument — claims that even if fMRI evidence is not particularly reliable, its use in the law might nevertheless be desirable if it can raise the epistemic efficiency of the legal system. This, he argues, is something it could well do given that existing methods for solving the false testimony problem are rather weak.