Thanks to Adam for the invitation to contribute to his great Neuroethics & Law blog. I have been working in philosophy of mind and cognitive science for quite a few years now. I am a PI at the Donders Institute for Brain, Cognition and Behaviour at Radboud University in Nijmegen, the Netherlands.
Last month I participated in the 2016 Kavli Futures Symposium on the ethical foundations of novel neurotechnologies at Columbia University in New York. Many great speakers presented lectures on identity and mind reading, agency and brain stimulation, and normality and brain enhancement, followed by a roundtable discussion with members of the NIH BRAIN Initiative Neuroethics committee.
One of the many issues that kept me thinking long afterwards arose out of Jack Gallant’s talk. He presented his well-known brain decoding research (sometimes referred to as ‘brainreading’), in particular his work on decoding semantic categories from the brain states of people watching a movie or listening to a story; see e.g. his paper in Nature. He clearly expressed his ethical concerns regarding brain decoding and suggested that privacy measures should be taken. According to him, the question is no longer whether it will happen, only when. His bet, he said, was within another 20-50 years. Of course, that’s quite a margin, and others may have different estimates, but the question I’d like to raise here applies to almost any estimate:
How long does it take to develop a fully functional ethical and legal framework that could guide a responsible societal introduction of a now still emerging neurotechnology? Specifically, when would we have to start developing such a framework, in order not to have started too early or too late?
A lot of energy is spent these days on charting the trajectories of current and still-to-be-developed technologies. For instance, Gartner’s ‘hype cycles’ have become quite famous, and Gartner prides itself on the amount of research that goes into them. I’m showing a slightly modified version here.
Shouldn’t a similar amount of effort go into estimating appropriate time frames for ethical and legal safeguards surrounding such technologies? Thinking about this made me wonder what exactly the methodology is (or could or should be) for determining when one is about to ‘miss the regulatory boat’. Start too early, and you run the risk of regulating for ‘Jetsons’ technology’ as Patricia Churchland put it in her talk at the same session. She pointed out that it could be a waste of time to develop ethical or legal frameworks for all possibly developing types of technologies, as many or most of them may not come to real applications (see the ‘Valley of Oblivion’ in the picture above).
I agreed, but during my own talk I also emphasized the other half of the worry: if we start too late, we may witness a disaster similar to the one involving the Internet and privacy. In the 1990s we woke up too late, and we are still struggling to get some privacy back. Perhaps you remember that famous statement by Sun Microsystems’ then-CEO Scott McNealy: ‘You have zero privacy anyway. Get over it.’ I wouldn’t like to see that repeated by some CEO in relation to brainreading.
There is another reason to consider the importance of methodically addressing the question of when to start working seriously on an ethical and legal framework. In the discussion after the first session, Pat Churchland warned of the dangers of neuroscientists hyping the results of their research. Rightly so, as such hype leads to disappointments, shakes public confidence, and jeopardizes funding. In addition, I suggested that it is important to realize that it’s not only the neuroscientists who contribute to the hype, but the ethicists as well (or perhaps even more so?). In order to get to examples where the consequences of applying a neurotechnology become ethically clear and interesting, we often need to extrapolate the technology in a way that may strengthen the ‘peak of inflated expectations’. Philosophers grow up discussing thought experiments, so we’re kind of used to this. But seriously discussing the ethical implications of rather fancy, currently non-existent applications may distort the public’s perception of the field, of what is possible, and of what one could hope for or fear. For this reason as well, we may need to develop a methodology for deciding when to seriously start setting up the ethical and legal frameworks for a specific application of neurotechnology.