I'm a big fan of the site "Askphilosophers.org", where netizens can pose philosophical questions to a big group of philosopher panelists (more info. here). It doesn't seem to get as much attention on the Internet as I think it deserves, at least not in the portion of the Internet to which I'm exposed. Here's a sample Q&A from the site (I picked this one in particular because I like the description of "chemophobia" at the bottom of the answer):
ANSWER (by Allen Stairs):
You're right: we shouldn't throw the word "causes" around too casually. Let's fix on depression as our example, and let's keep in mind that simply being sad isn't the same as being clinically depressed. On the one hand, neurochemicals probably aren't just symptoms of depression; they probably have something to do with causing the symptoms -- the listlessness or anxiety, or excessive rumination or protracted feelings of sadness. Perhaps it would be more accurate to say that clinical depression is, at bottom, a malfunction in the neurochemical system, though this may be too reductionistic, and it also may turn out not to get the biology right. But perhaps what you're pointing to is that it still makes sense to ask what causes this malfunction in the first place. That's obviously a very good question. My impression is that sometimes life circumstances can trigger depression, but sometimes there's no clear external cause. The right answer here is likely to be very complicated.
At the moment, as far as I know, we have no good way of testing the functioning of the neurochemical system itself. Perhaps we will some day; perhaps we'll develop a blood test or scanning technique that will tell us when someone's neurochemical system is out of whack and allow for a biological diagnosis of mental illnesses. Until then, we have to make trade-offs. Some of the chemicals we use to treat psychiatric conditions have serious side effects. Unfortunately, however, untreated mental illness has serious side effects of its own, including death.
Psychiatric medications are tested for safety. We have reasonably good but imperfect information about what percentage of people taking them are likely to develop which side effects. And so we have a basis for making a trade-off: if a patient has serious symptoms, if we have evidence that a certain medication can help alleviate the symptoms, and if the risk of side effects is not too great, it might well make sense to try the medication even if we aren't sure what's really going on in the brain. All of this should be monitored carefully, of course, and physicians shouldn't be too quick to give out medications when other approaches (cognitive behavioral therapy, for example) might do the job with lower risk of side effects. But I think that what we might call "chemophobia" -- fear of medications -- is potentially just as dangerous as overprescription.
The confidence expressed in the relationship between serotonin levels and depression greatly exceeds the evidence for such a relationship. The evidence that SSRIs outperform placebo isn't all that strong. See Ben Goldacre's recent Guardian column for more:
http://www.guardian.co.uk/commentisfree/2008/jan/26/badscience
Posted by: Neil | 02/18/2008 at 06:22 AM
That's one reason I provided a link to this paper by Jonathan Leo and Jeffrey R. Lacasse in the "open thread" post: http://www.springerlink.com/content/u37j12152n826q60
And at the risk of being accused of suffering from "chemophobia":
Re: "Psychiatric medications are tested for safety." Well, yes and no: the record is not encouraging on this score. Cf., for instance, this post from Jan. 17 at the blog, Clinical Psychology and Psychiatry: A Closer Look (http://clinpsyc.blogspot.com/) [chart omitted]:
Antidepressants: Hiding and Spinning Negative Data
As I alluded to yesterday, a whopper of a study has just appeared in the New England Journal of Medicine. It tracked each antidepressant study submitted to the FDA, comparing the results as seen by the FDA with the data published in the medical literature. The FDA uses raw data from the submitting drug companies for each study. This makes great sense, as the FDA statisticians can then compare their analyses with the drug companies' analyses, in order to make sure that the drug companies analyzed their data accurately.
After studies are submitted to the FDA, drug companies then have the option of submitting data from their trials for publication in medical journals. Unlike the FDA, journals are not checking raw data. Thus, it is possible that drug companies could selectively report their data. An example of selective data reporting would be to assess depression using four measures. Suppose that two of the four measures yield statistically significant results in favor of the drug. In such a case, it is possible that the two measures that did not show an advantage for the drug would simply not be reported when the paper was submitted for publication. This is called "burying data," "data suppression," "selective reporting," or other less euphemistic terms. In this example, the reader of the final report in the journal would assume that the drug was highly effective because it was superior to placebo on two of two depression measures, while remaining completely unaware that on two other measures the drug had no advantage over a sugar pill. Sadly, we know from prior research that data are often suppressed in such a manner. In less severe cases, one might just switch the emphasis placed on various outcome measures: if a measure shows a positive result, allocate a lot of text to discussing that result and barely mention the negative results.
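A small simulation makes the mechanism concrete. All the numbers here are made up for illustration (a true standardized effect of 0.2, four outcome scales per trial, 100 patients per arm): even when every scale honestly measures the same modest true effect, publishing only the scales that happen to reach statistical significance inflates the apparent effect size.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2               # hypothetical true standardized effect (Cohen's d)
N_PER_ARM = 100                 # patients per arm in each trial
SE = (2 / N_PER_ARM) ** 0.5     # approx. standard error of an effect-size estimate
N_MEASURES = 4                  # outcome scales per trial
N_TRIALS = 10_000
CUTOFF = 1.96 * SE              # threshold for a "significant" pro-drug result

all_estimates, reported_estimates = [], []
for _ in range(N_TRIALS):
    # each trial measures the same true effect on four noisy scales
    estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_MEASURES)]
    all_estimates.extend(estimates)
    # selective reporting: publish only the scales that came out significant
    reported_estimates.extend(e for e in estimates if e > CUTOFF)

print(f"mean effect, all measures:      {statistics.mean(all_estimates):.3f}")
print(f"mean effect, reported measures: {statistics.mean(reported_estimates):.3f}")
```

The first mean hovers near the true 0.2, while the mean of the "published" measures is pushed well above it, without anyone fabricating a single data point.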
But wait, there's an even better way to suppress data. Suppose that a negative study is submitted to the FDA. There is no commercial value in presenting negative results on a product. Indeed, it makes no sense from a commercial vantage point to submit a clinical trial that shows no advantage for one's drug for publication in a medical journal. While it earns a bit of good PR for being honest, it would of course hurt sales for the drug, which would not please shareholders. From an amoral, purely financial view, there is no reason to publish negative trial results.
On the other hand, there is science. One of the first things that any medical student hopefully learns is that scientists should report all of their results so that other scientists, physicians, the media, and the general public have an up-to-date and comprehensive understanding of all scientific findings. Yes, this may sound naive, but this is how science is supposed to work in an ideal world.
The FDA concluded that 38 studies yielded positive results. 37 of these 38 studies were published. The FDA found mixed or "questionable" results in 12 studies. Of these 12 studies, six were not published, and six others were published as if they were positive findings. Of the 24 studies that the FDA concluded were negative, three were published accurately, five were published as if they were positive findings, and 16 were not published. To summarize, positive studies were nearly always reported while mixed and negative studies were nearly always either not published or published in a manner that spun the results unreasonably. How does one turn a questionable or negative finding into a positive one? As mentioned above, report the results that are favorable to your product and sweep the remaining results under the rug.
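Tallying the counts just quoted shows how stark the contrast is. The grouping below is my reading of the summary above, treating the spun mixed and negative studies as "positive" in print:

```python
# Studies as classified by the FDA
positive_by_fda = 38
questionable_by_fda = 12
negative_by_fda = 24
total_by_fda = positive_by_fda + questionable_by_fda + negative_by_fda  # 74

# Studies as they appeared in the journals:
# 37 positive published, plus 6 questionable and 5 negative published
# as if positive, plus 3 negative published accurately
published_as_positive = 37 + 6 + 5
published_total = published_as_positive + 3  # 51 studies reached print at all

print(f"FDA's view:      {positive_by_fda}/{total_by_fda} positive "
      f"({positive_by_fda / total_by_fda:.0%})")           # → 51%
print(f"Literature view: {published_as_positive}/{published_total} positive "
      f"({published_as_positive / published_total:.0%})")  # → 94%
```

So a physician reading only the journals would see 94% of trials as positive, while the FDA's files put the figure at roughly half.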
Overall, how do the statistics for this group as prepared by the FDA compare to the statistics in medical journal publications? Remember, physicians are trained to highly value medical journals, as they are the storehouse for "evidence-based medicine." I'll borrow a quote from the study authors: For each drug, the effect-size value based on published literature was higher than the effect-size value based on FDA data, with increases ranging from 11 to 69%.
Well, that's not very reassuring. Effect size refers to the magnitude of the difference between the drug and placebo. Note that for every single drug, the effect size as reported in the medical literature (the foundation for "evidence-based medicine") was greater than the effect size calculated from the FDA's data. Remember, the FDA's data is based on raw data submitted by drug companies, and is thus much less subject to bias than data that the drug companies manipulate prior to submitting for publication in a medical journal.
Other highlights from the authors: Not only were positive results more likely to be published, but studies that were not positive, in our opinion, were often published in a way that conveyed a positive outcome... we found that the efficacy of this drug class is less than would be gleaned from an examination of the published literature alone. According to the published literature, the results of nearly all of the trials of antidepressants were positive. In contrast, FDA analysis of the trial data showed that roughly half of the trials had positive results. The statistical significance of a study’s results was strongly associated with whether and how they were reported, and the association was independent of sample size.
I'll say it one more time: Every single drug had an inflated effect size in the medical literature in comparison with the data held by the FDA. To move into layman's terms for a moment, manufacturers of every single drug appear to have cheated. This is not some pie in the sky statistics review -- this is the medical literature (the foundation of "evidence-based medicine") being much more optimistic about the effects of antidepressants than is accurate. This is marketing trumping science.
The drugs that were found to have increased their effects as a result of selective publication and/or data manipulation:
· Bupropion (Wellbutrin)
· Citalopram (Celexa)
· Duloxetine (Cymbalta)
· Escitalopram (Lexapro)
· Fluoxetine (Prozac)
· Mirtazapine (Remeron)
· Nefazodone (Serzone)
· Paroxetine (Paxil)
· Sertraline (Zoloft)
· Venlafaxine (Effexor)
That is every single drug approved by the FDA for depression between 1987 and 2004. Just a few of many tales of data suppression and/or spinning can be found below:
·Data reported on only 1 of 15 participants in an Abilify study
·Data hidden for about 10 years on a negative Zoloft for PTSD study
·Long delay in reporting negative results from an Effexor for youth depression study
·Data from Abilify study spun in dizzying fashion. Proverbial lipstick on a pig.
·A trove of questionable practices involving a key opinion leader
·Corcept heavily spins its negative antidepressant trial results
Props to the Wall Street Journal (David Armstrong and Keith Winstein in particular) and the New York Times (Benedict Carey) for quickly getting on this important story.
There are some people who seem unmoved by this story. Indeed, some people are crying that this is an unfair portrayal of the drug industry. More on their curious take on the situation coming later.
I'll close with a question: What does this say about the key opinion leaders whose names appear as authors on most of these published clinical trials in which the data is reported inaccurately?
In addition to the above, please see:
Howard Brody's book, HOOKED: ETHICS, THE MEDICAL PROFESSION, AND THE PHARMACEUTICAL INDUSTRY (Rowman and Littlefield, January, 2007) and numerous posts at his blog, Hooked: Ethics, Medicine and Pharma: http://brodyhooked.blogspot.com/
The review article by Richard Horton of Sheldon Krimsky's Science in the Private Interest: Has the Lure of Profits Corrupted Biomedical Research? in the NYRB (Vol.51, No.4), March 11, 2004, "The Dawn of McScience."
The review essay by Frederick Crews in the NYRB (Vol.54, No.19), December 6, 2007, "Talking Back to Prozac."
Posted by: Patrick S. O'Donnell | 02/18/2008 at 11:50 AM
I should reiterate that I posted this particular q&a because of (what I took to be) the clever neologism "chemophobia." Keep in mind, too, that neither the question nor the answer refers to any particular kind of neurochemical activity (e.g., serotonin).
Posted by: Adam Kolber | 02/18/2008 at 12:36 PM
'Tis true, but the answer does make several references to the "neurochemical system" in general and psychiatric medications that in fact do rely on causal claims having to do with neurochemical activity, hence Neil's comment and mine as well (I confess to being opportunistic).
I agree, it is a clever neologism of sorts, but if one hears it out of context it would seem to suggest a fear of chemicals as such.
And I might add that I too enjoy Askphilosophers.org, especially because one of my favorite philosophers, indeed, cyberspace mentor (well, the mentoring actually occurs outside of cyberspace proper too), Oliver Leaman, is a frequent contributor.
Posted by: Patrick S. O'Donnell | 02/18/2008 at 01:04 PM
The monoamine hypothesis isn't flawless: it does not explain why serotonin depletion in nondepressed individuals does not cause depression, or exacerbate depression in depressed individuals.
But these issues are widely known by the scientific community, and the monoamine hypothesis has not been abandoned because, on the other hand, it represents the only empirical evidence to date relating the physiological basis to the psychological signs (loss of interest, appetite, motivation; associated with the noradrenergic systems and the HPA axis) and related health problems (coronary disease, thrombus formation caused by platelet aggregability...).
For a more philosophically oriented way of looking at the matter (e.g., philosophy of medicine) I recommend:
Hirschfeld, R.M. (2000), History and evolution of the monoamine hypothesis of depression. J. Clin. Psychiatry 61(6):4–6.
Posted by: Anibal | 02/19/2008 at 05:29 AM