Thanks to Adam for the invitation to contribute to his great Neuroethics & Law blog. I have been working in philosophy of mind and cognitive science for quite some years now. I am a PI at the Donders Institute for Brain, Cognition and Behaviour of the Radboud University, in Nijmegen, the Netherlands.
Last month I participated in the 2016 Kavli Futures Symposium on the ethical foundations of novel neurotechnologies at Columbia University in New York. Many great speakers presented lectures on Identity and mind reading, Agency and brain stimulation, and Normality and brain enhancement, followed by a roundtable discussion with members of the NIH BRAIN Initiative neuroethics committee.
One of the many issues that kept me thinking long afterwards arose out of Jack Gallant’s talk. He presented his well-known brain decoding research (sometimes referred to as ‘brainreading’), in particular the decoding of semantic categories from brain states while watching a movie or listening to a story (see e.g. his paper in Nature). He clearly expressed his ethical concerns regarding brain decoding and suggested that privacy measures should be taken. According to him, the question is no longer whether it will happen, only when. His bet, he said, was within another 20-50 years. Of course, that’s quite a margin, and others may have different estimates, but the question I’d like to raise here applies to almost any estimate:
How long does it take to develop a fully functional ethical and legal framework that could guide the responsible societal introduction of a still emerging neurotechnology? Specifically, when would we have to start developing such a framework in order to begin neither too early nor too late?
A lot of energy is spent these days on estimating the prospects of current and yet-to-be-developed technologies. Gartner’s ‘hype cycles’, for instance, have become quite famous, and Gartner prides itself on the amount of research that goes into them. I’m showing a slightly modified version here.
Shouldn’t a similar amount of effort go into estimating appropriate time frames for ethical and legal safeguards surrounding such technologies? Thinking about this made me wonder what exactly the methodology is (or could or should be) for determining when one is about to ‘miss the regulatory boat’. Start too early, and you run the risk of regulating for ‘Jetsons technology’, as Patricia Churchland put it in her talk at the same session. She pointed out that it could be a waste of time to develop ethical or legal frameworks for every type of technology that might conceivably develop, as many or most of them may never reach real application (see the ‘Valley of Oblivion’ in the picture above).
I agreed, but during my own talk I also emphasized the other half of the worry, namely that if we start too late, we may witness a disaster similar to what happened with the Internet and privacy. In the 1990s we woke up too late, and we are still struggling to get some privacy back. Perhaps you remember Sun Microsystems CEO Scott McNealy’s famous 1999 statement: ‘You have zero privacy anyway. Now get over it’. I wouldn’t like to see that repeated by some other CEO in relation to brainreading.
There is another reason to consider the importance of methodically addressing the question of when to start working seriously on an ethical and legal framework. In the discussion after the first session, Pat Churchland warned of the dangers of neuroscientists hyping the results of their research. Rightly so, as hype leads to disappointment, shakes public confidence, and jeopardizes funding. In addition, I suggested that it is important to realize that it’s not only the neuroscientists who contribute to the hype, but the ethicists as well (or perhaps even more so?). In order to get to examples where the consequences of applying a neurotechnology become ethically clear and interesting, we often need to extrapolate the technology in a way that may strengthen the ‘peak of inflated expectations’. Philosophers grow up discussing thought experiments, so we’re kind of used to this. But seriously discussing the ethical implications of rather fancy, currently non-existent applications may distort the public’s perception of the field, of what is possible and of what one could hope for or fear. For this reason too, we may need to develop a methodology for deciding when to seriously start setting up the ethical and legal frameworks for a specific application of neurotechnology.
Welcome! Great post! One complicating factor is that almost any set of ethical and legal safeguards will require us to wrestle with value questions that are interesting and timeless. We should feel free to wrestle with those questions even in the context of rather futuristic technologies. For the reasons you and Pat give, though, it does seem wise to be up-front about the futuristic nature of technologies one discusses.
I wonder too whether internet privacy and other concerns would have been mitigated if ethicists and law professors had spoken up sooner or more vocally. Maybe they would have been. Perhaps journalists would have picked up more stories on the topic, and that would have translated into action by politicians. I just don't know enough to be confident, though. Maybe some concerns are just exceptionally difficult for average people to get their heads around until they experience them. That's a depressing thought for those concerned with setting up legal and ethical frameworks.
Posted by: Adam Kolber | 10/11/2016 at 04:27 PM
Absolutely, Adam, we're always free to address value questions in any way we want. Thought experiments can be very useful. But in the context of practical applications, and of a continuum running from too soon/hype to too little too late, it might be useful to consider how we could address the timing of ethics more systematically (if possible). Some remarks at the symposium, at least, got me thinking about this issue. Currently it may happen largely on the basis of intuition, or under the influence of media attention. In my own work I try to indicate (though I should be more consistent about it) whether I'm discussing implications of technology that could be 'out there' in, say, the next five years or so; applications that might be forthcoming, though significantly later (perhaps in the ±10-20 year range); or the 'in principle, let's do science fiction' type of scenarios (50 years to 'when/if ever'). But I would appreciate a more methodical way of approaching this 'when will it be urgent' issue. Maybe it already exists?
Posted by: Pim Haselager | 10/12/2016 at 07:12 AM
Great points! I agree!
Posted by: Adam Kolber | 10/13/2016 at 05:10 PM