Memories are fundamental to human experience: We are in many ways the products of our pasts, recorded in memory. The influence of memory over current experience is both a blessing and a curse, for just as we find solace in the remembrance of times past, we may also be plagued by pathological memories. Such maladaptive memories are a core feature of several psychiatric conditions, from anxiety disorders to addiction. In this article we present work from our own lab and others that shows the remarkable malleability of human memory, and how the disruption of maladaptive memory reconsolidation is being used for therapeutic purposes. If bioethical concerns about memory modification are to be more than purely hypothetical considerations for the future, they should be grounded in cutting-edge contemporary research. We provide the necessary overview of the field, then raise, challenge, and discuss several old and new ethical concerns.
Does Poverty Shape the Brain?, Scientific American
Flipping A Switch In The Brain Turns Lab Rodents Into Killer Mice, NPR Shots Blog
Mini-Brains Made from Teeth Help Reveal What Makes Us Sociable, New Scientist
How Much Does It Hurt?, Mosaic
Meet the First Humans to Sense Where North Is, The Guardian
Monkeys Grieve When Their Robot Friend Dies, Sploid
Changing the Way You Think, The Guardian
Zapping the Brain Really Does Seem to Improve Depression, New Scientist
How LSD Saved One Woman’s Marriage, The New York Times
Does Your Smartphone Make You Less Likely to Trust Others?, The Conversation
AI Robot 'Friend' Launched to Chat and Play Games with Lonely Elderly, The Telegraph
Mother-Baby Bonding Insight Revealed, BBC News
When Do Children Start Making Long-Term Memories?, Scientific American
Chimp Drinking Culture Caught on Video, BBC News
What’s Common Sense about Free Will?, Neuroethics & Law Blog
International Perspectives on Integrating Ethical, Legal, and Social Considerations into the Development of Non-Invasive Neuromodulation Devices: Proceedings of a Workshop—in Brief, The National Academies Press
To Rate How Smart Dogs Are, Humans Learn New Tricks, The New York Times
Tiny Nanoelectrodes Record Brain’s Activity Without Damaging It, New Scientist
Researchers Are Giving Religious Leaders Psychedelic Drugs in the Interest of Science, Quartz
The Mind of an Octopus, Scientific American
A CRISPR View of Life, The Neuroethics Blog
TRIM28 Controls a Gene Regulatory Network Based on Endogenous Retroviruses in Human Neural Progenitor Cells, Cell Reports
Prefrontal Cortical Control of a Brainstem Social Behavior Circuit, Nature Neuroscience
Dog-Directed Speech: Why Do We Use it and Do Dogs Pay Attention to It?, Proceedings of the Royal Society B - Biological Sciences
Organization of High-Level Visual Cortex in Human Infants, Nature Communications
Hippocampal Encoding of Interoceptive Context During Fear Conditioning, Translational Psychiatry
I am grateful to Larry Solum for recommending my new draft paper, Punishment and Moral Risk, and for his characteristically thoughtful commentary. Larry makes four points, and I'd like to respond to each:
Solum's First Point: Kolber's analysis assumes that the relevant retributivist beliefs are independent of one another, but it is not clear that this is the case. At an abstract level, the various retributivist beliefs may be part of a "web of belief" to use Quine's phrase--and hence mutually dependent in almost all cases to at least some degree. Ron Allen has made a similar point in the context of evidence law and the burdens of persuasion. More concretely, retributivist intuitions about free will and about the moral standards for adequate proportionality may rest on overlapping premises.
Reply: In fact, I don't assume that the relevant retributivist beliefs are independent of each other. Rather, I ask readers to offer their levels of confidence as to each retributivist commitment while "[t]aking the prior numbered propositions as given" (draft p.13 et seq.). This is another way of addressing Larry's concern. So long as readers follow the instructions, we should be able to multiply the relevant probabilities. More substantively, I accept Larry's point that I should offer some discussion of the matter. What I would say is something like this: Sure, there will likely be some interdependence in retributivist commitments to these propositions. But there's also a lot of independence. For example, many of the people who think that retributivism is barbaric nevertheless believe in free will. And certainly our beliefs about the factual guilt of a particular defendant will be largely independent of our beliefs about either of these philosophical issues. So, I welcome Larry's suggestion to clarify the matter in the next draft, but I don't believe his point detracts much from my thesis. (To test this, I invite readers to give high probabilities to propositions that they believe are highly dependent on prior propositions.)
Solum's Second Point: Kolber's argument that consequentialism does not share the "problem" he identifies may be based on a double standard. Thus, consequentialist theories of punishment could be thought to depend on: 1) the truth of consequentialism as a moral theory, despite the fact that a substantial share (perhaps a majority) of moral philosophers reject it, 2) the belief that deterrence, incapacitation, or rehabilitation is actually efficacious, when there is substantial evidence that each of these functions fails in practice, and 3) the belief that actual punishments can be calibrated to achieve the benefits of these consequentialist justifications for punishment.
Reply: I do claim that my criticism strikes more sharply at retributivists than consequentialists. Consequentialism has plenty of uncertainty but it's largely of a different sort. As I write in the draft: "The reason that I treat consequentialism and retributivism quite differently from an epistemic perspective is that the path to resolving consequentialism’s empirical uncertainty is much clearer than the path to resolving retributivism’s moral uncertainty. At least in principle, we know how to gather evidence and set up experiments to estimate how punishment policies will affect deterrence, incapacitation, and rehabilitation. The means of resolving age-old debates about free will and proportionality, however, are highly disputed and have been for centuries." (draft p. 32).
Let me address each point more specifically, though. In #1, it's true that lots of people are not consequentialists. But consequentialists are! So they will likely have a high degree of confidence in their own beliefs. They should not be overconfident, of course. But one can be a reasonable consequentialist with high confidence in consequentialism. As for #2, the consequentialist doesn't have to believe that deterrence, incapacitation, and rehabilitation are particularly efficacious. They just believe that those are the sorts of things that justify punishment. If they're not efficacious, then the consequentialist will deem little or no punishment justified. That being said, it's hard to doubt that incapacitation often reduces the harms that some very dangerous people cause and that punishment provides at least something of a deterrent effect (see, e.g., The Purge). Finally, the key point as to #2 and #3 is that both issues are largely about empirical uncertainty, and, to the extent that there is empirical uncertainty for consequentialists, there is uncertainty both in acting and in failing to act. True, we don't know for sure if adding a year in prison for some particular offense will generate net benefits. But we may also be unsure if failing to add a year will cause unnecessary risk of harm to victims. The good consequentialist will have to balance these. It's a difficult task. But the challenge is one we know how to address in principle (relative to answering questions about free will and the like). The asymmetry in the way typical consequentialists and typical retributivists treat the act/inaction distinction is indeed critical to my view and leads us to Larry's next point.
Solum's Third Point: In addition, Kolber fails to consider the moral risk of failing to punish from a retributivist perspective. Many retributivists believe that there is a moral requirement to impose deserved punishment and that failure to punish is a moral wrong. One can easily invert Kolber's argument and show that retributivists cannot justify a failure to punish. I am not suggesting that the inverted argument is correct; rather, the point is that the structure of the argument is suspect.
Reply: Almost all retributivists find it substantially worse to punish the undeserving than to fail to punish the deserving. But I agree with Larry that this need not be the case. That's why I state that my view only applies to versions of retributivism that "share the Blackstonian value that it is substantially worse to punish the undeserving than to fail to punish the deserving." (draft p.5-6).
Solum's Fourth Point: Finally, I am dubious that retributivists themselves believe that punishments must be precisely proportionate. The relevant standard for a complex legal system is much more likely to be "rough proportionality."
Reply: Many retributivists would say that there is a firm deontological prohibition against knowingly or recklessly punishing someone in excess of desert. Larry is probably right that many would frame all of this in terms of rough proportionality. I am dubious of that approach. Why is it strictly forbidden to give an innocent person one day in prison, but it's okay if, say, we take a significant risk that a murderer will spend a year in prison that exceeds his desert? But I'd be fine to adjust the relevant proposition to speak of rough proportionality. I don't think it will affect my thesis, as I discuss nine retributivist propositions required to punish in a particular case, and tweaking the probability of one of them will likely have only a modest effect. There are, of course, many versions of retributivism, and I say in the paper (draft p.5) that though "I surely do not address every version of retributivism, the thrust of my argument, with modest adjustments, applies to a broad range of retributivist views." The more pure and the more traditional one's retributivism, the stronger I take my argument to be. But I think it applies to lots of versions of retributivism with adjustments along the way.
Let me end by thanking Larry again for sharing his insightful comments and for offering great suggestions as to matters that I can clarify in the next draft!
Debates in neuroethics often tackle the purported clash between commonsense and scientific perspectives as they pertain to moral concepts. The assumption that undergirds the framing of the conflict between these two approaches is that advances in neuroscience, psychiatry, and psychology can be used to explain phenomena covered by commonsense concepts and, in some cases, to undermine them entirely. Consider the debates about free will, where discoveries that unconscious processes guide behavior are taken to challenge free will and moral responsibility.
In these debates, the scope and character of common sense is not usually sufficiently explored. The commonsense concept of free will is characterized as requiring consciousness, which is then compared unfavorably with the evidence for a variety of unconscious brain processes that precipitate behavior we thought was caused by conscious willing. But how do we justify the view that conscious willing is a tenet of commonsense morality? There are certainly counterexamples to this claim. There are situations when we attribute free will to individuals who do not consciously will their actions. We ascribe free will to people who engage in some automatic behavior, such as successfully driving home without consciously minding each action required to reach the destination. Expert athletes perform better when they do not attend to their moves (Gray 2004), but we would still praise them for a successful performance. Even in the legal realm, criminal responsibility can be ascribed to individuals performing complex habitual actions that are not consciously intended (Yaffe 2012). If there are quotidian, and relatively frequent, ascriptions of free will to individuals who do not consciously will their actions, then there are contexts in which commonsense ascriptions of free will do not require consciousness.
A further reason to question what could be called a conservative characterization of common sense is that our conceptions of free will and moral responsibility have adjusted to scientific facts about human psychology. Advances in psychology and neuroscience have shaped the domain of autonomous actions (Boysen and Vogel 2008). Additionally, education about the biological etiology of particular mental illnesses affects the interpretation of the degree of control individuals can exert over their behavior (Goldstein and Rosselli 2003). We do not blame people for offensive behavior if we know that they have a mental illness or a neurological disorder that limits their ability to control themselves in certain circumstances. Given that everyday predictions and explanations of human behavior integrate scientific facts, commonsense concepts should not be treated as static and as inherently antagonistic to the emerging scientific rendition of human psychology.
Maturity is a socially useful notion. It plays a role in a number of different domains, including law, policy, and medicine. The age of majority in the US is set at 18, but adolescents are deemed mature enough to make some decisions earlier, such as consenting to sex at 16, while other decisions, like purchasing alcohol, must wait until 21. Although there are practical justifications for designating a specific time as the moment one becomes an adult, there is little reason to think that turning 18 corresponds with a particular physical process of maturation.
In a recently published article in Neuron, Leah H. Somerville tackles brain development and reveals the complexities involved in identifying a physical anchor for the notion of maturity. Somerville explains that brain maturation can be tracked through a number of different processes, including structural changes and shifts in connectivity. Structural changes characterized by the reduction of cortical gray matter continue throughout one's 20s and 30s, but some studies indicate that this process does not plateau and instead continues into old age--challenging the notion that there is a point at which the brain becomes structurally mature. Adolescence is also a time of increased widespread connectivity, a process that slows in one's early 20s. But there is a great deal of individual variation, with some children having higher connectivity than some young adults, which makes it difficult to use these population-level data to identify a point of maturity for any given individual. Furthermore, since structural changes in the brain and shifts in brain connectivity do not happen at the same time, as Somerville points out, maturity defined by network connectivity can occur decades earlier than maturity defined by structural changes.
A way of circumventing the difficulty of identifying a physical marker for a general notion of maturity is to adopt a context-specific version of the concept, where we don't ask whether an individual is mature but whether they are mature in a particular way. This would allow us to define maturity in terms of the cognitive skills required to make decisions within a certain realm. We have done this in medicine when we ask whether a patient has the cognitive capacity to make a specific decision and measure their ability to do so in relation to the particulars of the medical situation. In some instances an individual could have the ability to make one decision, such as selecting a surrogate decision maker, but lack the capacity to make another, such as consenting to a complex surgical procedure. Although children are presumed to lack capacity, adolescents in some states are allowed to accept or refuse treatment depending on their ability to understand and appreciate the risks and benefits of treatment. Moreover, even in adulthood, age is not an indicator of capacity to make medical decisions, and any individual could, for a number of different reasons, lose the ability to make medical decisions. We could treat maturity similarly: as a task-specific ability that over time generalizes to more and more situations.
Happy New Year, Neuroethics & Law Blog readers! My name is Nada Gligorov and I am excited to be the guest blogger for the month of January.
For a bit of an introduction: I have a PhD in Philosophy from the Graduate Center of the City University of New York and I am an associate professor in the Bioethics Program of the Icahn School of Medicine at Mount Sinai. I am also a faculty member in the Bioethics Program of Clarkson University.
In my research thus far, I’ve focused on the interaction between commonsense and scientific conceptual frameworks. I recently published a monograph titled Neuroethics and the Scientific Revision of Common Sense (Studies in Brain and Mind, Springer). In the book, I argue against the view that common sense is static and characterize it as a theory that changes to accommodate influences from many domains, including neuroscience. I also evaluate how my characterization of common sense affects debates in neuroethics, especially when they revolve around concepts, such as free will, privacy, personal identity, pain, and death.
This month, I plan to blog about some of the arguments I've made in my book and about other neuroethics-related topics currently on my mind.