“We are not dualists,” write Pardo and Patterson (xiii).
What ominous words to end the first page of an introduction. If my cocked eyebrow could have spoken, it would have inquired about what Pardo and Patterson were about to reveal. Turning the page, my lips parted at the next word: “We are not dualists. Nevertheless...” (xiv). Who knows, perhaps I even let out a little gasp of anticipation.
Of course, I say all this in jest. That sentence actually reads “Nevertheless, the relationship between the mind and the brain is enormously complicated” (xiv). Hear hear, too right! Nevertheless, the more I read, the more I think I understand why Pardo and Patterson put their readers on such stern notice and denied that they are dualists.
My comments in what follows relate to what Pardo and Patterson say on pages 130-140, but the following passage offers a particularly good place to start: “Even if we learned the details about the neural activity of defendants while acting, it is an error to suppose that we will find intent or knowledge lurking in the neural activity” (135-6). I take this to indicate Pardo and Patterson’s rather pessimistic stance on the utility of brain scans as tools to help us to ascertain the intent with which someone acted and whether/what they knew at the time of acting.
The affect that I would like my commentary to convey is this: I just don’t get why two people who are not dualists would be that pessimistic about our prospects of using brain scans to help courts assess intent and knowledge. Accordingly, this commentary will aim to set aside the reasons Pardo and Patterson cite in support of this pessimism, and to explain why I think that at least in principle (i.e. if we had access to ideal science and technology) we could use brain scans to help courts ascertain these things.
***
Here’s what Pardo and Patterson say to support their pessimism about assessing intent through brain scans: “Intentions are not brain processes or inner feelings, nor are they the neural activity that precedes an internal process or feeling. It is an example of the mereological fallacy to ascribe ‘intent’ to a particular array of neural activity (brain states do not intend to do things — people do), and it is similarly a mistake to try to identify a person’s intention with a state of his brain” (137-138). They add that in order for us to even see the incoherence of statements like “I intend to X, and X is impossible” (138), we can’t “[s]uppose... that having an intention just was having a particular brain state” (138), since there is nothing incoherent in saying “I have a particular brain state, and X is impossible”.
I think that Pardo and Patterson are probably right about three things. One, that the mereological fallacy is committed by ascribing intentions to neural activity. Two, that intentions are not the neural activity that precedes an internal process or feeling. Three, that intentions are not feelings. However, given that this part of their book is concerned with reaching a conclusion about “whether the science can reveal whether defendants acted with the culpable mental attributes necessary for liability” (133), what I would have expected them to discuss is whether measurable indicators of brain processes or feelings might reliably correlate with intentions or knowledge, not whether they can be identified with them.
In a few paragraphs I shall return to what they say about identifying brain states with intentions or knowledge, but first I would ask you to consider the phenomenology that I experience when I act with different kinds of intent. First, there is nothing that it is like for me to do something negligently. This is simply because negligence involves a failure to consider stuff that you should have considered, and not considering something – an absence – does not feel like anything in particular. On the other hand, when I do things recklessly I feel an attitude of belligerence. It feels like a certain kind of wilful, flouting disregard. Another kind of disregard – a much colder “don’t care” sort of attitude – accompanies acting knowingly. And a certain kind of determined, directed, or single-minded feeling is what tags along when I do things on purpose. Distinct phenomenology – a distinct feeling – accompanies doing things on purpose, with knowledge, recklessly, and negligently. Now, my own phenomenology is probably idiosyncratic, so nothing of consequence for Pardo and Patterson’s argument should be extracted from the content of my idiosyncratic phenomenology of intending. But what does follow, in my view, is this:
(P1) if different phenomenology attends different ways of intending, and
(P2) different brain processes correlate with different phenomenology, and
(P3) brain imaging techniques could discern these different brain processes from one another, then
(MC) brain imaging techniques could help courts discern the kind of intent with which someone acted.
Or so, at least, it seems to me.
***
Consider next what Pardo and Patterson say about assessing knowledge (as opposed to intent) through brain scans: “Imagine we had an fMRI scan of a defendant’s brain while [he was] committing an allegedly criminal act. ... Exactly where in his brain would we find this knowledge?” Presumably, what they mean here is something like knowledge that he committed that act, or that he committed it on purpose. They continue: “Because knowledge is an ability, and not a state of the brain, the answer is: nowhere” (139). Their point, I take it, is that knowledge is not like data encoded on a computer hard drive nor like text printed on the page of a book — something that exists in some location, that we could peer at, inspect, and measure. Rather, knowledge is on their account an ability. Suppose that’s right. (Though to be perfectly frank I am not totally sure I understand precisely what they mean by this, I do not think this matters for what I am about to say.) What’s meant to follow from that? On Pardo and Patterson’s account what follows is that such fMRI scans could not possibly reveal whether defendants do or do not know the facts in question, because those facts simply are not inscribed in any location in their brains, and so they cannot be read by inspecting any location in their brains. My puzzlement here is simple, I think: regardless of whether we conceive of knowledge as akin to text inscribed in some location on a sheet of paper (this, I guess, is meant to be analogous to what they call “states” of the brain), or whether the right way to think about knowledge is as an ability, even in the latter case we will still presumably want to say that the ability in question is implemented somewhere in the brain. If not there, then where? For example, if I have the ability to speak Polish (which I do), then presumably that ability is to a significant degree somehow encoded or wired into my brain.
But if that is so, then it appears to be a non sequitur to infer that knowledge cannot be found in the brain from the premise that knowledge is an ability and not a brain state.
***
This brings me to their claim that to even notice the incoherence of statements like “I intend to X, and X is impossible” (138), we cannot suppose that having an intention just is having a particular brain state, since there is nothing incoherent about saying “I have a particular brain state, and X is impossible”. My worry here is that Pardo and Patterson’s argument misunderstands the sense of “is” that lies at the core of the Mind-Brain Identity Theory. To make my point, I quote U.T. Place on the topic of the distinction between two different senses of “is”, the “is” of definition and the “is” of composition:
“[I]n defending the thesis that consciousness is a process in the brain, I am not trying to argue that when we describe our dreams, fantasies and sensations we are talking about processes in our brains... To say that statements about consciousness are statements about brain processes is manifestly false. This is shown (a) by the fact that you can describe your sensations and mental imagery without knowing anything about your brain processes or even that such things exist, (b) by the fact that statements about one's consciousness and statements about one's brain processes are verified in entirely different ways and (c) by the fact that there is nothing self-contradictory about the statement ‘X has a pain but there is nothing going on in his brain’. What I do want to assert, however, is that the statement ‘consciousness is a process in the brain', although not necessarily true, is not necessarily false. ‘Consciousness is a process in the brain', on my view is neither self-contradictory nor self-evident; it is a reasonable scientific hypothesis, in the way that the statement ‘lightning is a motion of electric charges’ is a reasonable scientific hypothesis. The ... view that an assertion of identity between consciousness and brain processes can be ruled out on logical grounds alone, derives, I suspect, from a failure to distinguish between what we may call the ‘is’ of definition and the ‘is’ of composition. The distinction I have in mind here is the difference between the function of the word ‘is’ in statements like ‘a square is an equilateral rectangle', ‘red is a colour', ‘to understand an instruction is to be able to act appropriately under the appropriate circumstances', and its function in statements like ‘his table is an old packing case', ‘her hat is a bundle of straw tied together with string', ‘a cloud is a mass of water droplets or other particles in suspension’... 
Statements like ‘a square is an equilateral rectangle’ are necessary statements which are true by definition. Statements like ‘his table is an old packing case', on the other hand, are contingent statements which have to be verified by observation. ... Those who contend that the statement ‘consciousness is a brain process’ is logically untenable base their claim, I suspect, on the mistaken assumption that if the meanings of two statements or expressions are quite unconnected, they cannot both provide an adequate characterization of the same object or state of affairs: if something is a state of consciousness, it cannot be a brain process, since there is nothing self-contradictory in supposing that someone feels a pain when there is nothing happening inside his skull. By the same token we might be led to conclude that a table cannot be an old packing case, since there is nothing self-contradictory in supposing that someone has a table, but is not in possession of an old packing case.” U. T. Place (1956) "Is consciousness a brain process?", British Journal of Psychology, 47(1):44-46.
I suspect that the argument that Pardo and Patterson offer to support their claim that intentions cannot just be brain states makes precisely the same mistake as the one that U.T. Place highlighted back in 1956. Rather than sprinkling my own confusions all over U.T. Place’s impeccable argument, I’ll leave things here and assume that this suffices to put Pardo and Patterson’s minds at ease in the knowledge that they need not be as pessimistic about the prospect of using brain scans to help courts assess people’s intent and knowledge.
***
I argued above that intention and knowledge need not be identified with brain states in order for evidence about the brain to usefully inform mens rea investigations. Correlation will suffice. For evidential utility, all that’s needed are (sufficiently) stable correlations, and we could certainly get those if we discovered that certain brain states are (almost) invariably present whenever someone intends or knows. I also argued that it is not as implausible as Pardo and Patterson maintain to identify brain states with mental states. U.T. Place did it, and it seems to me that Pardo and Patterson’s objections to doing it only apply if we use the word “is” in the definitional rather than the compositional sense. But putting aside the philosophical gloss and endless distinctions that we are so fond of making, I simply wonder where else, if not in the brain, are we to find knowledge and intention? I do not mean to be crass, but only to point out that regardless of whether we conceive of intending and knowing as states or as processes, they will still in the end need to be implemented somewhere, and the brain seems like a plausible candidate.
It is reassuring that Pardo and Patterson distance themselves from dualists. However, given what they say above, I must confess that I am still none the wiser about how, in their view, the mind relates to the brain. I distance myself from dualists too. In part because dualism does not seem sensible, but also because I just cannot see a sensible dualist neurolaw research programme. After all, if the mind is forever beyond what the empirical sciences can study, then they can shed precious little light on the mental element of crimes. I am a materialist about the mind (probably not of the eliminativist variety, but I’m really not sure), a compatibilist about responsibility (of the capacitarian stripe), and an optimist about the power of compatibilism to explain how the new mind sciences can fruitfully inform legal responsibility adjudications. Consequently, I am puzzled why Pardo and Patterson adopt such a cautious and conservative stance on how brain scans might inform mens rea investigations.
Admittedly, this might just be a good time for me to say mea culpa since I’ve not read their book from cover to cover, and so they probably address my concerns somewhere else in this fantastic new book. Also, for the record, the clarity of Pardo and Patterson’s message – that there is a pressing need in the field of neurolaw to clear up a range of conceptual confusions – the sheer strength of their arguments, and the clarity of their writing, are all second to none. Nevertheless, nothing is perfect, and that applies equally to my own critical comments.