We are extremely grateful to Adam for hosting this symposium. As long-time readers of this blog, we appreciate the excellent work that Adam has done in providing such a wonderful interdisciplinary, intellectual, professional, and enjoyable forum. It truly is an honor and a privilege to have our work discussed here. We are also extremely grateful to Amanda, Francis, Jane, Gabriel, Carter, and Nicole for participating in the discussion and for engaging with our work.
Amanda Pustilnik focuses on our criticism of Patricia Churchland’s recent claim that difficult normative problems about legal responsibility can be resolved by looking in the brain. While accepting much of our analysis, Pustilnik notes two areas where increased neurological evidence may contribute to law. First, it might provide better diagnostic evidence for the issues and categories on which legal doctrine currently focuses. Second, it may inform whether the legal issues and categories are the ones the law should focus on, given its stated goals. We agree wholeheartedly with these two points and believe they are consistent with our analysis. We emphasized the first potential neuroscientific contribution at various points throughout our book, the second one less so. Pustilnik’s larger point about the potential for psychological, psychiatric, and neuroscientific evidence to provide an empirical bridge between normative judgments and legal categories is an important one. It makes explicit, we think, what is implicit in many of our discussions.
We note that it also supports our larger point in criticizing Churchland’s claim. We agree with Churchland and Pustilnik that a better understanding of “in control” and “out of control” brains would provide important evidence for implementing current law more accurately and also for reforming law to better match our normative goals. But the complex normative questions themselves are not answered by looking in the brain. Nor do their answers follow in a straightforward manner from an in-control vs. out-of-control distinction. They require normative arguments, which Churchland fails to provide.
In our book, we asserted that our general analysis about the relationships among mind, brain, and law extends beyond the examples discussed (which primarily involve criminal law and procedure). In his post, Francis Shen picks up on this thread by asking how our analysis extends to issues such as recent debates in tort law and insurance law about how to treat mental disorders and injuries. His questions pose a challenge: what, if anything, can our conceptual claims about the mind contribute to these debates, as we appear to him to be advocating a position that is inconsistent with both sides?
Here is how Shen sets up the debates. In one camp, there are “dualists” (insurance companies and tort defendants) who argue for a robust distinction between mind and body. In the second camp, there are those (tort plaintiffs and mental-health advocates) who argue that mental disorders and mental injuries are physical and thus the doctrinal distinction between mind and body should be abandoned. He argues that neither position seems consistent with ours. So what gives?
We side with the second camp here, and this conclusion is in fact consistent with the analysis in our book. Any claims based on substance dualism for treating mental disorders differently from other physical disorders, or for treating mental/emotional injuries differently from other physical injuries, should be rejected. There may be good policy reasons for drawing distinctions along these lines—for example, evidentiary considerations or effects on primary (i.e., non-litigation) behavior; in other words, the usual reasons for drawing doctrinal distinctions—but the fact that mental events are not physical events should not be one of them.
This is perfectly consistent, however, with our view that the mind and the various mental attributes are not identical with the brain or states of the brain. In the book, we pointed to instances where confusion arises because necessary conditions are equated with sufficient conditions. As we see it, the issues Shen points to regarding disorders and injuries are about necessary conditions. To argue, as we do, that the mind consists in an array of powers, abilities, and capacities is consistent with the notion that each of the latter depends on a properly functioning brain. A brain is necessary for, and plays an important causal role in, the exercise of these powers, abilities, and capacities. Therefore, a demonstrated causal relationship between the brain (e.g., traumatic injury) and a mental category (e.g., memory loss) provides probative evidence regarding the latter. In short, biological evidence may, depending on the evidence and the issue, provide probative evidence of mental disorder or injury. We do not dispute this. But the biological evidence is not the mind or the mental attribute (i.e., it is not sufficient). Our ability to blog may (given our current circumstances) depend on having WiFi access and a computer. But this does not mean that our WiFi access and computers are blogs. The same is true for any legal category that depends on a mental attribute. If a legal category depends on a cognitive, volitional, or emotional defect, for example, neurological evidence may inform that category, but it does not replace that category (without changing the subject). As we note in the book, this is one area in which neuroscience might have a significant practical impact on law (by providing inductive evidence regarding legal categories). The danger—we call it “conceptual confusion” in the book—would be to equate the defect with the biological evidence for it.
Our analysis contributes, in part, by illuminating this potential pitfall and its implications for law. We note that the potential pitfalls extend beyond law. They also affect psychological explanations of mental disorders and injuries (as Gregory Miller argues here).
Jane Moriarty focuses on brain-based lie detection. She challenges our claims about the need for conceptual clarity as a practical matter. She bases her challenge on two points. First, correlations of brain activity with lying may provide probative evidence of lying, even if we cannot conceptually sort out “mind” from “brain” or “person” from “brain.” Second, the conceptual and empirical issues are “entwined” and cannot be cleanly separated. (“Research designs for neuroscience lie detection are entwined with many assumptions about brains, minds, truth, deception, intent, signaling, memory, and mistake.”)
We agree completely with the first point. We acknowledge in the book that neuroscience-based lie detection may provide relevant and potentially highly probative evidence, depending on the correlations between the evidence and the issues about which the law cares (i.e., whether someone is lying). On the second point, we agree that the conceptual and empirical issues are entwined. But this is precisely why we believe conceptual clarity is so important as a practical matter—it helps us to sort out what is being correlated with what, and what inferences may or may not be drawn from the evidence.
Here are three examples of conceptual issues with practical implications. First, the brain/person issue matters because lies involve complex social acts by persons (e.g., saying something believed to be false in a context in which saying so violates a norm of not doing so). It is the social act that the law cares about, not the brain activity in and of itself (at least where the issue of lying is concerned). Someone with brain activity correlated with lying who does not engage in the social act has not lied; someone who engages in the social act without this brain activity has lied. Thus, the lie is not the brain activity. Second, as Moriarty’s post implies, many of the studies about “lie detection” actually purport to measure an “intent to deceive.” But lying and intending to deceive are distinct (a conceptual point). One can lie (and be guilty of perjury) even without intending to deceive (for example, a witness who says something she knows to be false but hopes the jury sees through it). And one can intend to deceive without lying. Third, many of the studies do not appear to be measuring lies. The social acts being correlated with brain activity may in fact be more akin to playing a game or reciting lines in a play. Now, it may still turn out that these studies tell us something about real-world lying. But it also might turn out that, for this reason, they do not translate. Each of these conceptual issues may be overcome if they are recognized and kept in mind (none is necessarily fatal to the possibility of brain-based lie detection). When unrecognized, however, they may cause conceptual confusion, and this may lead to mistaken assumptions about the nature of the evidence and what inferences it warrants. These are real practical consequences, and they depend on increased awareness of the conceptual issues and how they entwine with the scientific and legal ones.