N&LB Readers,
In this final post I want to address the ethical issues that might arise from research on motivated reasoning, particularly as the phenomenon relates to the evaluation of scientific research and discovery.
The West Wing’s Toby Ziegler once told pollster Al Kiefer that science is science to everyone. As discussed in my previous post, research has shown that this is not the case: the perceived credibility and validity of scientific research can depend significantly on an individual’s prior beliefs and the extent to which the individual agrees with the research’s conclusions. This finding poses interesting and important ethical issues in any number of areas. Scientific knowledge is constantly evolving, and a variety of legal and non-legal decisions could and should be influenced by new scientific research.
For example, this research has significant implications for the practice of medicine. To the extent that providers are expected to practice evidence-based medicine, their willingness or ability to adapt to new scientific information may be impeded by their loyalty to a particular school of thought.
Policy decisions that could (or should) be rooted in science are likely to be influenced by policymakers’ biases and prior beliefs. For example, the underlying science guiding lawmakers’ decisions on abortion regulations may be subject to motivated reasoning. Utah lawmakers are wrestling with the question of when fetuses can feel pain, which will influence how the procedure is performed (whether anesthesia is required) and could eventually influence decisions by lawmakers in Utah and other states about the point in a pregnancy after which abortions may no longer be legally performed. As noted in my previous post, my work with Nick Scurich tested this very issue: we found that prior attitudes about abortion were a significant factor in how individuals evaluated scientific research on whether fetuses feel pain at a particular point in a pregnancy.
Particularly relevant for this blog’s audience, policy addressing issues in both the criminal justice system and the civil legal system may be influenced by how lawmakers evaluate new scientific research.
From the perspective of the criminal justice system, insights from neuroscience have been offered as a means of reshaping our notions of culpability. While noted scholars have contended that neuroscience does not currently offer insights that radically challenge our concept of criminal responsibility (see, e.g., Morse 2003; Morse 2005; Morse 2011), it may at some point offer information that could or should change the way society views the culpable criminal. Additionally, scientific and social scientific research can be useful in determining the most effective means of dealing with those deemed criminally responsible.
This general issue has taken on a role in the 2016 elections as the United States combats an opioid abuse crisis. Research on whether addiction should be considered a neurological disease, and on how best to handle those whose crimes result from addiction, will be interpreted through individuals’ preconceptions and experiences. While this is true of many things in life (we all bring our unique experiences to the table), it is important to act on the best scientific information available on such important issues. Motivated reasoning may contribute to a “law lag” or to inefficient policy on important issues, costing taxpayers money and leading to negative outcomes for those processed through the criminal justice system.
Another area in which recent neuroscientific discovery might be particularly important is the assessment of pain, especially chronic pain. Tort litigation and tort reform have their proponents and opponents, along with the beliefs and visceral reactions that come with those two camps. Based on prior research, those prior beliefs will likely affect how new research on chronic pain is perceived and incorporated into policy and practice, which could lead the law to address claims of injury or chronic pain inadequately or incorrectly.
The research on motivated reasoning raises more questions than it answers. How do we deal with this phenomenon to ensure that the best science is adopted and implemented at the appropriate time? Given the introspection illusion I mentioned in a previous comment, how do we determine whether someone is disregarding information because they genuinely believe it does not represent the best scientific evidence, or whether it is simply biased assimilation of new information?
CITATIONS:
Morse, S. J. (2003, December). New neuroscience, old problems: Legal implications of brain science. Cerebrum: The Dana Forum on Brain Science, 6(4), 81-90.
Morse, S. J. (2005). Brain overclaim syndrome and criminal responsibility: A diagnostic note. Ohio State Journal of Criminal Law, 3, 397.
Morse, S. J. (2011). Avoiding irrational neurolaw exuberance: A plea for neuromodesty. Law, Innovation and Technology, 3(2), 209-228.