Margaret Talbot has written an article for the New Yorker on efforts to develop fMRI-based methods of lie detection (link to abstract). This is another version of the sort of article you've probably read many times by now in mainstream publications, but I think this one has somewhat more depth than most. Here are some highlights of recent events that you may not know about:
- Joel Huizenga, who started No Lie MRI, "has charged about a dozen clients approximately ten thousand dollars apiece for an examination."
- In response to the ACLU's FOIA request about C.I.A. use of fMRI-based lie detection, "the C.I.A. would neither 'confirm nor deny' that it is investigating fMRI applications."
- Huizenga told Talbot "that he was trying to get fMRI evidence admitted into a California court for a capital case that he was working on. (He would not go into the case's details.)"
The article also claims that "[s]ome bioethicists have been particularly credulous, assuming that MRI mind reading is virtually a done deal, and arguing that there is a need for a whole new field: 'neuroethics.'" I'm skeptical of both parts of this claim. No doubt some bioethicists think this is a done deal, but I suspect that view is less common than the article suggests; admittedly, many more think it will someday be a done deal (and they may well be right!). As to the second part, I'm not sure what it means to "need a . . . whole new field," but it seems pretty clear that whatever "neuroethics" is, it is closely tied to many existing, well-established fields of inquiry. And, whatever "neuroethics" is, it is surely about more than just fMRI scanning.