Greg Miller has written an interesting one-page article in Science (limited access here) describing the recent conference at Stanford Law School on the use of brain imaging to assess pain. You can access some more commentary about the conference, along with the audio recordings here.
I will also use this opportunity to address a couple of claims in the article attributed to me. The article states that I argued "that pain detection is more likely to be the first fMRI application to find widespread use in the courtroom, in part because the neuroscience of pain is better understood."
Let me clarify/expand: I don't know if pain detection will be the first fMRI application to find widespread use in the courtroom, but compared to brain-based lie detection technologies, I do think that brain-based methods of pain detection (including both functional and structural neuroimaging) have certain distinct advantages. First, unlike lies, which can be told in a fraction of a second, the kind of pain likely to be relevant in the courtroom extends over long periods of time. That may make pain a bit easier to detect reliably, since we would be observing a phenomenon that doesn't require precise time resolution. Second, we already have some evidence of correlations between chronic pain conditions and structural changes in the brain. If these correlations can reliably be detected and if these structural changes are not observed in subjects lacking pain, then we may have a way of detecting certain kinds of malingering. Assuming that it is difficult to deliberately fake the pertinent structural changes in the brain, countermeasures to the technology will be harder to come by. By contrast, it seems likely that there will be a number of countermeasures one can use against functional MRI.
As for whether the neuroscience of pain is better understood than the neuroscience of lies, I'm not sure if that's true. I'm happy to defer to neuroscientists on the matter. I do know that I have asked many neuroscientists about the relative plausibility of brain-based lie detection relative to brain-based pain detection. I would say that most, but not all, seem to concur with my general sentiments. More interestingly, though, few neuroscientists so far seem sufficiently versed in both technologies to make the comparison.
The article also says that "Kolber estimates that pain is an issue in about half of all tort cases, which include personal injury cases." What I think I said (or at least meant to convey) is the claim that I mention in this article about pain imaging that "pain and suffering awards may represent about half of personal injury damage awards" (sources on p.434). This figure is just an estimate, but it does reinforce the central point that lots of money changes hands in the legal system over hard-to-verify claims about pain and other subjective experiences.
Adam,
Obviously, I did not have the benefit of attending the conference and engaging in the discussion. I did, however, read the write-up, and I think you know we disagree on neuroimaging and pain. You write:
"Second, we already have some evidence of correlations between chronic pain conditions and structural changes in the brain."
Yes, indeed, and these are intriguing findings. But these are correlations, and as far as I am aware we have very little evidence of cause and effect at this point.
In any case, moving from evidence of correlations to conviction that these correlations identify the actual phenomenon itself is a big leap. Given the incredible interdependence of brain function, picking out the single function (i.e., pain) from the myriad correlates that we know are linked to any observable change is difficult, to say the least.
If I were judging such a case, saw such evidence, and had a 403 objection before me, I'd kick it out immediately, especially given how readily persons are likely to engage in neurofallacies with brain imaging.
"If these correlations can reliably be detected and if these structural changes are not observed in subjects lacking pain, then we may have a way of detecting certain kinds of malingering."
These are enormous ifs, and my understanding is that given the state of the art, belief that these conditions can be fulfilled is serious optimism if not downright hubris. And I also think it is worthwhile to note the abundant historical evidence of the latter in neuroscience extending back all the way to its inception on both continents (Gall, Charcot, Hammond, Mitchell, etc.).
And I still fail to understand why discussion on imaging and pain should focus on malingering. With lie detection, we are talking about lies. With pain, we are talking about pain. Sure, some people lie about pain, but the best evidence on this subject strongly suggests that the proportion of undertreated chronic pain sufferers dwarfs those of malingerers. Given this, surely there is more that matters ethically about pain than the fact that some people lie about it.
If neuroimaging is to have any use in the context of pain -- and I'm not remotely convinced that it does at this point, or even that it will in the near future -- I would argue that ethical use of the techniques commends, if not obligates, its purveyors to use them to ameliorate human suffering, rather than deploying them in the service of stigmatizing an already stigmatized population. The two are not mutually exclusive, but I think the ethical priority is reasonably clear.
Given that so much of the discussion as to neuroimaging and pain focuses on whether we can objectively prove pain complaints, I have no problem saying such discourse is ethically suboptimal. That we are so desperate to objectify pain suggests some other important aspects about the meaning of pain in American society, but that is a different discussion altogether.
Nevertheless, thanks for the clarification.
Posted by: Daniel Goldberg | 01/15/2009 at 12:19 AM
Daniel,
I disagree that we disagree. (Well, I know that we disagree to some extent but not to the extent you think.) I didn't say anything about causation. The issue is whether we can identify a brain-based correlate of pain that will reliably pick out those people who have genuine pain from those people who do not.
I absolutely agree with you that we should be far more concerned with alleviating the very real pain that chronic sufferers have than we should be concerned with identifying malingerers. But I am not writing a post about the allocation of health care resources throughout society. I am writing about a particular technology and how it might someday be useful. I discuss malingering, not because it's the most desirable use for a pain detection technology, but rather because it may be easier to detect a person who has no pain but purports to have substantial pain than it would be to detect the precise extent to which a person is in pain.
I make no claim that this sort of technology is ready for use today, tomorrow, or in the immediate future. I do think it holds promise someday for use in certain contexts. Whether that is realism, optimism, or "downright hubris" remains to be seen.
Posted by: Adam Kolber | 01/15/2009 at 12:35 AM
Adam,
"I discuss malingering, not because it's the most desirable use for a pain detection technology, but rather because it may be easier to detect a person who has no pain but purports to have substantial pain than it would be to detect the precise extent to which a person is in pain."
This is fair, but if it is not the ethically preferable use of the technology, I don't think you can evade the question of allocation. Note that this is not a question of absolute priority (which should we do), but relative priority (how should we prioritize each of them). Thus, the fact that it may be easier to detect malingering than to use the technology to actually ameliorate pain is a terrible reason for investing significant time and resources in the technique itself, given that the magnitude of the latter problem dwarfs the former.
Moreover, I think it's also relevant to note the increasing utilization and waste associated with imaging techniques in general. Many imaging techniques seem to do much more harm than good with chronic pain, which by and large resists the vast majority of them -- and I am dubious it will be drastically different with fMRI -- with devastating consequences for chronic pain sufferers, whose pain is delegitimized and invalidated in part because it is invisible to diagnostic imaging.
In short, I guess my point is that a significant portion of the problems we have with health expenditures in general and pain in particular are deeply connected to our use of imaging techniques. Thus, unless we want to say that concerns of justice are irrelevant to our development and use of imaging techniques -- which is not a compelling claim, to me -- I don't think it's a good idea to separate our discussion of neuroimaging from these larger ethical issues.
Of course, it is fair to zero in on any particular aspect of this in a blog post, but I do think that critically assessing why we develop imaging technologies and what we use them for is a matter of serious ethical concern and ought to be front and center for neuroethicists.
Posted by: Daniel S. Goldberg | 01/16/2009 at 01:33 PM
Adam & Daniel:
I'd like to weigh in from the neurobiology side of the equation. Let me say at the outset that my expertise is as a cellular neurophysiologist rather than an imager, which may explain my overall skepticism about fMRI. From my perspective, the important things happen at a subcellular level, and fMRI just is not able to resolve events that I suspect are of the greatest interest. That being said, I have been surprised at the insights that have emerged in the past year or two, and hold considerable optimism that clever scientists will come up with ever more insightful ways to use this technology. Frankly, I think it is pointless to speculate on whether the resolution will ever be sufficient to satisfy skeptics like me: either it will or it won't, and only time will tell. [If I had to predict, it would be that some other technology with better spatial and even temporal resolution displaces fMRI.]
This would also seem to be the right time to bring the controversy du jour to the Neuroethics & Law Blog. As most readers probably know, there is a paper in press that has raised substantial questions about the validity of a reasonably large number of highly regarded fMRI studies. I had considered writing about it separately, but there is so much thoughtful blog material out there that more words are not required. Except for this: when interpreting fMRI, it is probably prudent to be prudent.
Posted by: Peter B. Reiner | 01/18/2009 at 11:47 PM
Daniel:
I'm not trying to "evade the question of allocation". But I am free to pick up on particular issues about which I believe I can contribute. After all, I don't expect you to explain why we are spending as much money as we do to treat the chronic pain of Americans when that money could save many more lives, as well as considerable pain and suffering, if used to treat starving, malnourished children in other parts of the world. Of course there are interesting and important allocation issues to deal with all the time in bioethics and neuroethics. I happen not to be taking them up in the original blog post.
Peter:
Agreed that the really big breakthroughs are likely to come with new technologies. And yes, we absolutely should put more up on the blog about the study you describe.
Posted by: Adam Kolber | 01/20/2009 at 12:28 AM
Hey Adam,
As I indicated, it is fair to write in a blog post about what interests you. But it does not follow that what interests one is identical to what is most ethically important. I'm not saying this on an individual level as much as I am thinking about the larger community of neuroethicists. I have trouble seeing the justification for spending the majority of neuroethical time and resources discussing technologies that will have no impact on 90% of the world's health. Is it interesting? Sure. Is it less important, IMO? Yes.
To be clear, I don't fault you for one moment for, in a blog post, addressing the aspects of pain and fMRI that most interest you. However, I am prepared to say that malingering is simply not as important from an ethical perspective as assessing the role of neuroimaging in the undertreatment of pain. As such, I believe that the neuroethics community does itself a disservice if it spends undue resources and energy on thinking about pain in the context of malingering. If the point is not to improve things, so to speak, why should society allocate resources to it?
And actually, I do think it would be fair to ask me to justify allocation decisions as to chronic pain. That's the point of a concern with justice -- it forces us to think deeply about how we utilize resources in the way that we do. Though I am not so much talking about your blog post, I am perfectly happy to defend the notion that a neuroethics disconnected from larger questions of justice is an ethically suboptimal practice.
Posted by: Daniel Goldberg | 01/20/2009 at 05:24 PM
Peter,
I look forward to reading the paper you describe.
Posted by: Daniel Goldberg | 01/20/2009 at 05:25 PM
Just FYI, I turned this into a blog post, as I am hogging the comments here:
http://www.medhumanities.org/2009/01/on-pain-neuroimaging-neuroethics-and-justice.html
Posted by: Daniel Goldberg | 01/21/2009 at 12:21 AM