The fMRI studies by J.D. Greene and co-workers (2001, 2004) have been widely discussed over the last decade. Greene draws two conclusions from his experiments.
The first conclusion is a descriptive model of moral psychology, the so-called dual-process model. According to this model, two domain-general systems produce moral judgments in the human brain. The first system is slow, flexible, and conscious: it is chiefly associated with reasoning. The second system is fast, automatic, and unconscious. It produces moral intuitions (i.e. strong and immediate moral judgments) that allow for a swift, albeit rough, evaluation of the eliciting situation. This system is intertwined with moral emotions and can be seen as emotionally driven.
The second conclusion Greene draws from his experiments is normative. On his view, these two systems map onto two of the three most prominent normative ethical theories in the current philosophical debate: utilitarianism and deontology.
The hot system (fast, automatic, unconscious, and emotive) generates deontology, i.e. an ethical system based on a series of rights and duties that must be respected no matter the consequences, at least prima facie.
By contrast, the cold system generates utilitarianism, i.e. an ethical system in which outcomes, their desirability, and their likelihood must be computed before a moral decision is made. On this basis, Greene claims that deontology is unreliable, because it draws on automatic responses that are likely to generate bad outcomes, being inaccurate and insensitive to the fine details of the context. In a metaphor Greene uses in a recent Edge talk (here), deontology resembles the point-and-shoot mode of a camera: quick, dirty, and unreliable.
In a recent article, Oxford philosopher Guy Kahane and his coworkers challenge these views on experimental grounds. They set up an fMRI experiment to look for the neural correlates of intuitive moral judgments (judgments that come naturally to most people in the sampled population) and counter-intuitive moral judgments (judgments that are not compelling to most people).
They examined dilemmas in which a deontological response is intuitive (e.g. the famous Footbridge version of the trolley problem) and others in which a utilitarian response is intuitive (e.g. would you break a promise to stop a whole town from being destroyed by a nuclear blast? Most people would answer yes, thereby following utilitarianism).
The key findings of Kahane and colleagues are these: (1) the association (established by Greene et al. 2001, 2004) between deontological judgments and automatic processing on the one hand, and utilitarian judgments and controlled processing on the other, was not replicated; (2) BOLD (Blood Oxygen Level Dependent) activation varies when judgments with the same content (either utilitarian or deontological) are given in response to different dilemmas. Judgments with the same content do not produce a stable BOLD activation.
According to Kahane and coworkers, the utilitarian/deontological distinction is not particularly explanatory when BOLD signals are taken into account. Rather, it is the intuitiveness or counter-intuitiveness of a judgment that correlates reliably with BOLD patterns: intuitive judgments are associated with the activation of one consistent network of brain regions, and counter-intuitive judgments with a distinct network. These findings (if accurate) seem to undermine Greene’s dual-process hypothesis, at least in the very general form in which it was spelled out in the controversial essay “The secret joke of Kant’s soul” (“the terms deontology and consequentialism refer to psychological natural kinds”).
The disagreement between these fMRI experiments may be due to the different sets of stimuli used (the two studies employed distinct sets of moral dilemmas). Furthermore, it should be noted that after 2007 Greene modified the criterion he uses to differentiate utilitarian and deontological dilemmas. It is difficult to say who, between Greene and Kahane, is right. The only certainty is that even the basic empirical findings in the neuroimaging of moral decision-making are controversial. When one then passes to the interpretation of the results, or crosses the boundary between description and normativity, the level of disagreement can only increase. Further empirical research on these topics is badly needed and, at least at the current state of the art, sweeping conclusions about the consequences of neuroscience and experimental moral psychology for substantive ethics should be avoided: theories are still underdetermined by the data, and moral dilemmas about killing and saving human beings do not represent the whole domain of normative theories.
References
Greene JD et al. 2001. An fMRI investigation of emotional engagement in moral judgment. Science 293: 2105-2108.
Greene JD et al. 2004. The neural bases of cognitive conflict and control in moral judgment. Neuron 44: 389-400.
Greene JD. 2007. The secret joke of Kant’s soul. In: Sinnott-Armstrong W (Ed.), Moral Psychology, Vol. 3. Cambridge (Mass.): The MIT Press.
Kahane G et al. 2011. The neural basis of intuitive and counterintuitive moral judgment. Social Cognitive and Affective Neuroscience, Advance Access, March 18, 2011.