Margaret Talbot has written an article for the New Yorker on efforts to develop fMRI-based methods of lie detection (link to abstract). This is another version of the sort of article you've probably read many times by now in mainstream publications, but I think this one has somewhat more depth than most. Here are some highlights of recent events that you may not know about:
- Joel Huizenga, who started No Lie MRI, "has charged about a dozen clients approximately ten thousand dollars apiece for an examination";
- In response to the ACLU's FOIA request about C.I.A. use of fMRI-based lie detection, "the C.I.A. would neither 'confirm nor deny' that it is investigating fMRI applications";
- Huizenga told Talbot "that he was trying to get fMRI evidence admitted into a California court for a capital case that he was working on. (He would not go into the case's details.)"
The article also claims that "[s]ome bioethicists have been particularly credulous, assuming that MRI mind reading is virtually a done deal, and arguing that there is a need for a whole new field: 'neuroethics.'" I'm skeptical of both parts of this claim. I'm sure there are some bioethicists who think that this is a done deal, but I suspect that's less common than the article seems to suggest; admittedly, many more think that it will someday be a done deal (and they may well be right!). As to the second claim, I'm not sure what it means to "need a . . . whole new field," but I think it's pretty obvious that whatever "neuroethics" is, it is surely closely tied to many existing, well-established fields of inquiry. And, whatever "neuroethics" is, it is surely about more than just fMRI scanning.
Of course, neuroethics is about more than brain scanning and its usual attendant oddities (e.g., incidental findings; Illes 2005) or other associated problems (scanning as a tool for therapy, and thus for preventing disease development, with its attendant ethical implications, or the invasion of privacy [cognitive liberty]).
In relation to the use of fMRI as a lie detector, I have read in various volumes (including those published by the Dana Foundation) that this issue raises what is called the "problem of memory." Perhaps one can detect when someone lies intentionally, but if, for example, a person is confident about something that turns out to be wrong, that is a problem of memory (the subject believes something with extreme confidence that turns out to be false), not deception.
On the other hand, some individuals can deceive themselves and pass undetected; others may simply have anomalous neural circuitry that resembles the pattern associated with lying when they are in fact not lying; and there are always statistical errors...
Posted by: Anibal | 06/30/2007 at 01:56 PM