Over the last several years, researchers have been trying to develop brain-based methods of detecting lies. The overwhelming consensus among neuroscientists is that such techniques are unreliable, particularly in real-world settings. That hasn't stopped at least a couple of companies from marketing functional magnetic resonance imaging ("fMRI") lie detection services. See here and here.
While the consensus view is probably right that the technology is not yet ready for primetime, there is a perhaps unwarranted pessimism that pervades many of these discussions. I've been to numerous conferences where academics love to identify problems with the reliability of fMRI-based lie detection and emphasize that jurors are likely to be unduly swayed by neuroscience evidence. They often seem to forget that our most well-known method of lie detection--the jury--has numerous reliability problems. Most importantly, despite hundreds of years of data, we can't say very well just how reliable juries are. Surely juror lie detection leads to false positives and false negatives, but this particular "technology" is widely used in the United States.
Of course, litigants have constitutional rights to jury trials that they don't have to modern lie detection techniques. But from a policy perspective, we want to know whether a new approach to lie detection, when combined with whatever other truth-generating techniques we already use, leads to cost-effective improvements in the judicial system. That policy question, at least on the surface, looks rather different from the legal issue of whether the technology satisfies the standards for admission of scientific evidence set out in the Daubert or Frye test.
With these considerations in mind, I was pleased to read Fred Schauer's (Law, UVA) new draft article, Can Bad Science Be Good Evidence? Lie Detection, Neuroscience, and the Mistaken Conflation of Legal and Scientific Norms. Schauer points out, quite appropriately, that the standards for the admission of legal evidence are determined as a matter of policy, not scientific inquiry. Neuroscientists can tell us whether a particular technology satisfies an existing legal standard but have no special expertise, as a general matter, in selecting the standard itself.
Schauer states, "Some examples of good science may still not be good enough for some legal purposes, and, conversely, some examples of bad science m[a]y, in some contexts, still be good enough for some legal purposes." His draft, therefore, seems to cut to the heart of the Daubert and Frye standards. Perhaps, given the ways that scientific evidence may unduly sway juries, we need something like the Daubert and/or Frye standards, so that, at the end of the day, we reach the right policy outcomes. But Schauer is not so pessimistic about juror abilities. He suggests, at least implicitly, that in the lie detection context, the traditional tests for admitting scientific evidence may not set the bar at the right place.
(Hat tip for the SSRN link: Kevin Cole); (Cross-posted.)
Hey Adam,
Thanks for the hat-tip and the perspective. I agree with you that when discussing neuro-evidence, one ought to keep uppermost the idea that while we cannot specify baseline reliability of juror lie detection, the bulk of the evidence suggests that it is not particularly good, which is particularly important where the consequences of errors are so large.
But I don't think this fundamentally alters the significance of the neuro-evidence skeptics' concerns. Whatever the problems with juror lie detection, it seems to me to be worse to introduce a form of evidence that is very likely to exercise enormous influence over a jury (given the power of pretty brain pictures) while offering nothing that suggests greater reliability and accuracy than, say, juror lie detection.
If we have a significant problem with juror lie detection, and I certainly think that we do, then we ought to reflect on the significance of these problems and whether there might be ways of addressing them. Importing a form of evidence that proffers no greater assurances of reliability and accuracy, but which will be much more likely to sway a jury than other forms of evidence offered for the truth of the matter asserted, seems to be enough of a problem to generate the concern typically raised in the discourse.
No?
Posted by: Daniel S. Goldberg | 09/11/2009 at 12:17 PM
Hi Daniel,
I agree with the thrust of your comments above. If brain-based lie detection is unreliable and confuses jurors more than it helps them, there's no point in admitting a technology that will make fact-finding worse!
My impression (in our CrimProf comment exchange) was that Pardo and Patterson (and Bennett and Hacker) have a particular critique about how we understand and speak about the mind and the brain. That particular critique is not relevant to the use of brain-based lie detection provided one relies only on correlations between brain phenomena and lying phenomena.
Posted by: Adam Kolber | 09/12/2009 at 02:31 PM
Hi Adam,
That was a different "Daniel"!
(I almost always sign in with my full name . . . )
Posted by: Daniel S. Goldberg | 09/12/2009 at 10:08 PM
Oh, okay! Sorry about that!
Posted by: Adam Kolber | 09/12/2009 at 10:46 PM