Neil M. Richards (Law, Washington U.) has posted Intellectual Privacy (Texas Law Review, Vol. 87, No. 2, December 2008) to SSRN. Here is the abstract:
This paper is about intellectual privacy - the protection of records of our intellectual activities - and how legal protection of these records is essential to the First Amendment values of free thought and expression. We often think of privacy rules being in tension with the First Amendment, but protection of intellectual privacy is different. Intellectual privacy is vital to a robust culture of free expression, as it safeguards the integrity of our intellectual activities by shielding them from the unwanted gaze or interference of others. If we want to have something interesting to say in public, we need to pay attention to the freedom to develop new ideas in private. Free speech thus depends upon a meaningful level of intellectual privacy, one that is threatened by the widespread distribution of electronic records of our intellectual activities.
My argument proceeds in three steps. First, I locate intellectual privacy within First Amendment theory and show their consistency despite the fact that traditional metaphors for why we protect speech direct our attention to other problems. Second, I offer a normative theory of intellectual privacy that begins with the freedom of thought and radiates outwards to justify protection for spatial privacy, the right to read, and the confidentiality of communications. Third, I examine four recent disputes about intellectual records and show how a greater appreciation for intellectual privacy illuminates the latent First Amendment issues in these disputes and suggests different solutions to them that better respect our traditions of cognitive and intellectual freedom.
Hank Greely is organizing two upcoming law and neuroscience events at Stanford. I'll be speaking at both, and I encourage Neuroethics & Law Blog readers in attendance to introduce themselves.
Follow the links for some more information:
(1) Neuroimaging, Pain, & the Law Conference (April 4). [UPDATE: Postponed]
(2) Junior Scholars Law and Neuroscience Workshop (April 5). [UPDATE: Will proceed as scheduled]
The following announcement is being posted at the request of Walter Sinnott-Armstrong. Inquiries should be addressed to the email address listed below:
POSTDOCTORAL RESEARCH POSITION
MACARTHUR LAW AND NEUROSCIENCE PROJECT
Area of Research: Law and Neuroscience
Position open until filled.
The MacArthur Law and Neuroscience Project, whose Central Office is at the University of California at Santa Barbara (UCSB), is inviting applications for post-doctoral research positions (1-2 years) in the area of Law and Neuroscience. The post-doctoral fellow will assist and support research by others within the Project, initiate original research of their own, and contribute to an exciting research community. For more information on the Project and on Law and Neuroscience, see http://www.lawandneuroscienceproject.org/.

Qualified applicants should have a post-graduate degree in law, in neuroscience, or in some related field, such as psychology, philosophy, or criminology. Excellent writing skills are essential. Appointments will be made in accordance with the personnel policies of the University of California. Salary is dependent on qualifications and experience.

Interested individuals should send their CV and a list of three references (with e-mail addresses and phone numbers) to email@example.com. Applications will be accepted by e-mail only.

UCSB is especially interested in candidates who can contribute to the diversity and excellence of the academic community through research and service.
An Equal Opportunity / Affirmative Action Employer.
I've recently been doing some guest blogging stints on general purpose law blogs, like the Volokh Conspiracy and Prawfsblawg. I'm going to reprint some of these posts here that are, in some respects, related to neuroethics. Here's one from the Volokh Conspiracy on my draft article, "The Subjective Experience of Punishment" that is forthcoming in the Columbia Law Review:
Suppose that Sensitive and Insensitive commit the same crime, under the same circumstances. They are both convicted and sentenced to spend four years in identical prison facilities. In fact, their lives are alike in most respects, except that Sensitive is tormented by prison life and lives in a constant state of fear and distress, while Insensitive, living under the same conditions, finds prison life merely difficult and unpleasant. Though Sensitive and Insensitive have sentences that are identical in name—four years of incarceration—and the circumstances surrounding their punishments appear identical to a casual observer, their punishment experiences are quite different in severity.
Many theorists provide a retributive justification for punishment. They believe that offenders deserve to suffer for their crimes. They typically also believe that an offender’s suffering should be proportional to the seriousness of his offense. For example, murderers should be punished more than thieves, who should be punished more than jaywalkers. Sensitive and Insensitive, however, have committed crimes of equal seriousness, and, on this view, they should suffer the same amount. In this example, they don't. Most retributivists seem committed to the perhaps surprising outcome that we ought to take account of the differences in the punishment experiences of people like Sensitive and Insensitive.
The response that Sensitive and Insensitive should receive equal punishments for equal crimes is not itself a challenge to the calibration view. At issue is, "What does it mean to have an equal punishment?" My claim here is that the only plausible way to understand retributivist suffering is in terms of experiential suffering; so that's what would need to be equalized (if you think punishments should be equal for identical crimes).
Many consequentialist punishment theorists believe that we should punish in order to deter crime, incapacitate offenders, and rehabilitate criminals. They do not seek to maximize punishment because punishment itself has negative consequences. Among those negative consequences, many consequentialists would quite directly incorporate offenders’ negative subjective experiences into their assessments of the costs of punishment. So a cost-benefit analysis of punishing Sensitive will likely look different than a cost-benefit analysis of punishing Insensitive.
More generally, consequentialists cannot optimize their deterrence strategies without taking account of different people’s anticipated subjective experiences. A group of people who are very sensitive to the risk of suffering in prison are likely to be optimally deterred at a different level than people who are very insensitive to the risk of suffering in prison. A world with calibrated sentences makes it easier to optimally deter a larger number of people. Therefore, absent concerns about cost and administrability, consequentialists are also committed to the view that we ought to consider the differences in the punishment experiences of people like Sensitive and Insensitive.
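The deterrence point can be made concrete with a toy expected-cost calculation. (This is my own illustrative sketch, not anything from the draft article; the numbers and the linear "sensitivity" multiplier are assumptions made purely for illustration.)

```python
# Toy rational-deterrence model: a crime is deterred when the expected
# subjective cost of punishment exceeds the subjective benefit of the crime.

def deterred(benefit, p_conviction, sentence_years, sensitivity):
    """Assume subjective cost is linear in sentence length, scaled by
    how intensely the person experiences prison (sensitivity)."""
    expected_cost = p_conviction * sentence_years * sensitivity
    return expected_cost > benefit

BENEFIT = 10.0       # subjective benefit of committing the crime
P_CONVICTION = 0.5   # chance of being caught and convicted

# Under a uniform 4-year sentence, Sensitive (10 cost-units per year in
# prison) is deterred, while Insensitive (3 units per year) is not.
print(deterred(BENEFIT, P_CONVICTION, 4, sensitivity=10))  # True
print(deterred(BENEFIT, P_CONVICTION, 4, sensitivity=3))   # False

# Calibrated sentences can deter both, without over-punishing Sensitive:
print(deterred(BENEFIT, P_CONVICTION, 2.5, sensitivity=10))  # True
print(deterred(BENEFIT, P_CONVICTION, 7, sensitivity=3))     # True
```

On these stipulated numbers, a uniform sentence under-deters Insensitive, while calibrated sentences deter both; that is the sense in which calibration makes it easier to optimally deter a larger number of people.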
But what about the very important concerns about cost and administrability? And how does this topic relate to neuroscience? Stay tuned . . . [snip]
A new JAMA study is discussed at MSNBC here. It appears that adults who suffered from childhood abuse and also had certain genetic variations were significantly more likely to have PTSD symptoms than adults who were abused but lacked those genetic variations. From MSNBC:
The study of 900 adults is among the first to show that genes can be influenced by outside, nongenetic factors to trigger signs of PTSD. It is the largest of just two reports to show molecular evidence of a genetic influence on PTSD.
“We have known for over a decade, from twin studies, that genetic factors play a role in vulnerability to developing PTSD, but have had little success in identifying specific genetic variants that increase risk of the disorder,” said Karestan Koenen, a Harvard psychologist doing similar research. She was not involved in the new study.
The results suggest that there are critical periods in childhood when the brain is vulnerable “to outside influences that can shape the developing stress-response system,” said Emory University researcher and study co-author Dr. Kerry Ressler.
. . .
Ressler noted that there are probably many other gene variants that contribute to risks for PTSD, and others may be more strongly linked to the disorder than the ones the researchers focused on.
Still, he and outside experts said the study is important and that similar advances could lead to tests that will help identify who’s most at risk. Treatments including psychotherapy and psychiatric drugs could be targeted to those people, Ressler said.
(Hat tip: Ivy Lapides.)
The big launch of the new journal, Neuroethics, happens this weekend at the Pacific APA conference. The launch will happen right after the moral cognition session, described here by editor-in-chief Neil Levy:
If you are attending the Pacific APA - or in the area - drop in on us when we launch the new journal Neuroethics (first issue available here). Rather than just talking about neuroethics, we will be doing it with a symposium on moral cognition (4-6 pm, Saturday March 22). The speakers are Adina Roskies (Dartmouth), Matthew Liao (Oxford) and Jim Woodward (Caltech; kindly stepping in for Jeanette Kennett); the general theme will be neuroscience and moral intuitions.
I'm told there will be free drinks at the launch. So come for the moral cognition, stay for the neuroethics.
I have been interested in pain for at least five years now, and I entered my Ph.D. program with an eye towards doing some work on pain. In doing some of my basic reading on the subject, I devoured Jean Jackson's phenomenal 2005 essay on stigma and chronic pain. In that article, which has a wonderful bibliography, she noted the forthcoming research of Howard Fields, a neuroscientist at UCSF. I was enthralled by her description of his theories, and I have literally been waiting two years for the anthology containing his essay to be published.
The wait is over, and I cannot recommend the entire collection highly enough: Pain and Its Transformations: The Interface of Biology and Culture. One of the aspects of studying pain I've always found so puzzling is that virtually any clinical handbook or textbook on pain management will pronounce that pain can only be managed multimodally and through multidisciplinary efforts. Yet there are virtually no truly interdisciplinary approaches to thinking about pain itself. Most law, policy, and ethics approaches tend to be focused on law, policy, and ethics, without any of the benefits of the integration that is possible through interdisciplinary work. (One of the most common erroneous beliefs about interdisciplinary work is that it is merely the sum of its parts; rather, such work is emergent in the sense that its lenses arise out of its parts without being reducible to them. As such, it is, or at least can be, integrative, and is therefore not simply ethics + law + history of medicine.)
This book proposes to begin the process of filling that gap by analyzing pain using an interdisciplinary approach, including ritual theory, art studies, music history, and religion. I was most excited to read Fields's essay, and I was not disappointed. His most interesting idea is that the presumed distinction between physical and mental pain is incoherent. Why?
Without delving too much into the history of pain, the dominant biological model of pain over the last 40 years has been Melzack and Wall's gate control theory. This theory works very well for many kinds of nociceptive pain, but it is notoriously unhelpful in unpacking many kinds of chronic pain, such as neuropathic pain and phantom limb pain. The latter is particularly difficult for gate control proponents to explain: it is impossible to fit such pain into the gate control theory when there is, by definition, no organic insult to any tissue (the limb no longer exists!).
Fields argues that newer models of pain suggest, in point of fact, that pain does not exist in your hand, or your leg, or your back. Pain is all in your head. What he means is that our brains have a detailed somatic map, a representation of three-dimensional somatic space. Pain is projected -- by the brain -- onto the relevant area of this somatic map. This explains how phantom limb pain can exist: such a phenomenon is "not surprising since the brain's representation of the limb is intact" (Fields, 43). Fields concludes: "Once one understands and accepts the concept of projection, it becomes obvious that all pain is mental" (Ibid.).
I think this is phenomenal, and it coheres well with a belief that many pain scholars, myself included, maintain: despite our best attempts to dislodge ourselves, mind-body dualism remains an extant and active heuristic in Western conceptualizations of pain, and this heuristic has deep consequences, particularly for those who suffer from chronic pain. The idea that "pain is all in your head" is either incoherent (to the extent it relies on a strong and dubious mind-body dualism) or means exactly the opposite of what the utterer usually intends.
Adam was kind enough to allow me to bend the ears of the N&L readership, so I thought I'd take the opportunity to give you a quick introduction. I'm currently a lawyer and a full-time Ph.D student in ethics & the medical humanities at University of Texas Medical Branch. I've finished my coursework and am currently preparing for my qualifying exams, which I hope to take in early summer. I'm also a health policy fellow with Baylor College of Medicine's Chronic Disease Prevention & Control Research Center, and a Research Professor with the fledgling Initiative on Law, Brains & Behavior at BCM's Department of Neuroscience.
I am definitely a fox, though given my commitment to, and training in, interdisciplinary studies, perhaps my wanderin' soul is more easily understood. My dissertation will be on the devastating undertreatment of pain both in the U.S. and globally. I am also very interested in conflicts of interest in clinical practice and research, disability studies, and, more recently, the social determinants of health and health disparities. I have a background in philosophy, and I dabbled there too -- ethics, philosophy of science, some philosophy of language, some philosophy of mind, etc.
For a flavor of the general topics and issues that pique my interest, y'all can check out my full-time blogging stint at Medical Humanities Blog. Here, I'll obviously focus on neuroethics issues, and I hope to say a few small things about my general approach to pain, which is quite different from most others I know who are working on the topic.
Larry Solum of the Legal Theory Blog has sent me a shout-out of sorts (at the bottom of his post here), asking my opinion about this blog post at "Philosophy of Brains". In the latter post, Adam Leonard discusses Mike Gazzaniga's work on split brains and suggests that the line of research reveals an "interpreter" function in the brain that may explain an awful lot about our behavior and our explanations of our behavior. Here's the most controversial paragraph:
The existence of the interpreter function has been known for over twenty years, and has been incorporated into Psychology’s explanations of self-deception, self-serving biases, cognitive dissonance, and defense mechanisms of the ego. What scientists have not yet addressed, however, is the possibility that the interpreter function explains how Man can be a tribal territorial animal (with DNA 98% the same as chimpanzees!) but totally unaware of any animal instincts affecting his behavior. … Is it possible that our chronic irrational behavior may actually be driven by instincts, but the interpreter covers this up by generating rational explanations for our irrational behavior, and convincing us the explanations are true? … Is it possible that the reason Man has never been able to live in peace is simply that as tribal territorial animals we compulsively form tribes and war with one another? … Is it possible that is also why we compulsively take sides and argue issues – such as this one – angrily and irrationally, as if they were territory to be taken or defended?
Here are some thoughts:
(1) I don't think we're "totally unaware of any animal instincts" that affect our behavior. I imagine that we understand our urges to eat, have sex, breathe, etc. as animal instincts in some sense, even if they get fancied up a bit in our culture. (By the way, I tend to be skeptical of DNA percentage similarity claims. The issue has been noted a bit before on the Neuroethics & Law Blog.)
(2) "Is it possible that our chronic irrational behavior may actually be driven by instincts" that the interpreter covers up with rational explanations? I think a lot of the recent work in neuroeconomics and behavioral economics indeed demonstrates that we often make decisions that are at odds with what a fully-informed, perfectly rational creature would do. I'm not sure what role "the interpreter" plays here and how precisely it is meant to connect with split-brain research. For one thing, I suspect that we often have no even-purportedly-rational story to tell ourselves (see, e.g., Jonathan Haidt et al.'s work on moral dumbfounding). Also, people with healthy human brains manage to give relatively good explanations of their behavior compared to those with so-called split brains. This suggests that any interpreter function derived solely from our understanding of split brains can only have limited applicability to those with healthy human brains.
(3) I think the last couple of questions are extraordinarily speculative. I don't think that we "compulsively" form tribes and war. Surely one can resist the urge to go to war. But I certainly take these questions to be interesting ones, and so I won't defend any territory here "angrily and irrationally"!
You may have seen this article in the NYT over the weekend on cognitive enhancement, especially among professional academics. The Neuroethics & Law Blog has posted about this before. Here's a taste:
Yet an era of doping may be looming in academia, and it has ignited a debate about policy and ethics that in some ways echoes the national controversy over performance enhancement accusations against elite athletes like Barry Bonds and Roger Clemens.
In a recent commentary in the journal Nature, two Cambridge University researchers reported that about a dozen of their colleagues had admitted to regular use of prescription drugs like Adderall, a stimulant, and Provigil, which promotes wakefulness, to improve their academic performance. The former is approved to treat attention deficit disorder, the latter narcolepsy, and both are considered more effective, and more widely available, than the drugs circulating in dorms a generation ago.
Letters flooded the journal, and an online debate immediately bubbled up. The journal has been conducting its own, more rigorous survey, and so far at least 20 respondents have said that they used the drugs for nonmedical purposes, according to Philip Campbell, the journal’s editor in chief. The debate has also caught fire on the Web site of The Chronicle of Higher Education, where academics and students are sniping at one another.
And here's Martha Farah:
“I think the analogy with sports doping is really misleading, because in sports it’s all about competition, only about who’s the best runner or home run hitter,” said Martha Farah, director of the Center for Cognitive Neuroscience at the University of Pennsylvania. “In academics, whether you’re a student or a researcher, there is an element of competition, but it’s secondary. The main purpose is to try to learn things, to get experience, to write papers, to do experiments. So in that case if you can do it better because you’ve got some drug on board, that would on the face of things seem like a plus.”
I recently found a story on neuroeconomics in a financial planning newsletter that arrived in my mailbox. Its conclusions are not staggeringly brilliant or novel, but they are timely in light of the current economic forecast. At any rate, it was more interesting to me than the first quarter outlook for 2008.
Apparently, mapping the brain activity of people participating in financial decision-making experiments has revealed that certain areas of the brain act in a manner that mimics the physiological reactions of fear or pleasure. These "quick reaction" brain activities assist in survival skills, but can lead us to make imprudent investments. Experiencing a successful investment triggers the release of the pleasure signal dopamine, while experiencing a decline prompts fear; over time, merely anticipating a success or decline will stimulate these same chemical reactions. Thus, before we know it, unrealized fears of declines or anticipation of gains can unduly influence our decision-making abilities.
From this point on, the brief article becomes somewhat humorous. It advises me to "avoid making investment decisions when . . . in the throes of emotion." Granted, that's not the mindframe in which I usually tackle investment decisions. One is not usually locked in the passionate embrace of blue chip stocks. It also counsels me to "tune out the source of your overstimulation" such as by reducing the amount of time I watch CNN (none) and to stop checking my portfolio every hour (again, I'm blameless there). My favorite words of wisdom, however: it is impossible to accurately predict the future. Darn. Foiled again.
A paper from Jack Gallant's group at Berkeley posted on-line before publication in Nature is getting lots of attention, due in no small measure to the eye-catching headline of the accompanying news piece: "Mind-reading with a brain scan." The paper itself is somewhat more modestly titled, "Identifying natural images from human brain activity", and this represents the data quite a bit better than the slightly sensational news item. But if ever there was a paper that was newsworthy to neuroethics, this is it.
The authors showed 1,750 images to two subjects (who happen to be the first and second authors on the paper) while imaging blood flow in their brains using fMRI. The resultant responses were catalogued, and the subjects were then shown 120 novel images; using the information gleaned from the first data run, the investigators were able to identify which novel image a subject was viewing from the measured fMRI response with reasonable accuracy (72% for one subject and a whopping 92% for the other).
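To make the identification step concrete, here is a minimal sketch in Python with random stand-in data. The variable names and the simple correlation-matching rule are my own illustration, not the paper's actual model (which predicts voxel responses from features of the images themselves); the point is only to show the "pick the best-matching candidate" logic.

```python
import numpy as np

rng = np.random.default_rng(0)

n_images, n_voxels = 120, 50

# Hypothetical model-predicted voxel responses for each candidate image
# (random stand-ins here; the real model would generate these from the
# image contents).
predicted = rng.standard_normal((n_images, n_voxels))

# Simulate a measured fMRI response: the true image's predicted pattern
# plus measurement noise.
true_image = 42
measured = predicted[true_image] + 0.5 * rng.standard_normal(n_voxels)

def identify(measured, predicted):
    """Pick the candidate image whose predicted response correlates
    best with the measured response (simple correlation matching)."""
    corrs = [np.corrcoef(measured, p)[0, 1] for p in predicted]
    return int(np.argmax(corrs))

guess = identify(measured, predicted)
```

With 120 candidates, chance performance on this task would be under 1%, which is why the reported 72% and 92% figures are striking even though, as discussed below, this is not mind-reading.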
So is this mind-reading? The answer is decidedly no, and as Gallant points out,
The next step is to interpret what a person is seeing without having to select from a set of known images. “That is in principle a much harder problem,” says Gallant. You’d need a very good model of the brain, a better measure of brain activity than fMRI, and a better understanding of how the brain processes things like shapes and colours seen in complex everyday images, he says. “And we don’t really have any of those three things at this time.”
In this sense, the study is more properly characterized as an attempt to unravel the neural code, albeit using the indirect measure of the BOLD signal detected by the fMRI machine. Nonetheless, the findings are sure to raise all kinds of alarms about bona fide mind-reading. If this study spurs serious discussion of what we will do when we really develop mind-reading technology, it will have made important contributions both to deciphering the neural code and triggering neuroethical debate. Not bad for a day's work.