I'd like to devote this, my second guest post, to explaining how I got into neuroethics, and what my primary interests are in the area. I sort of fell into neuroethics by accident. I had some vague ideas about what the relationship should be between moral and political theory and empirical study of the mind and behavior (which I will discuss below) rolling around in my head for several years. However, I never really had any impetus to follow through on any of those ideas until I came across a call for proposals by the Yale Experiment Month initiative. For those of you who may have never heard of it before, it is a wonderful program that aims to help philosophers "out of the armchair" by helping accepted applicants design and carry out empirical studies. Prior to my involvement with Experiment Month, my experience with experimental psychology was more or less limited to reading pop-psychology magazines and things I had studied as an undergraduate (when I double-majored in philosophy and psychology). Although I am in many ways still very much a beginner when it comes to running and interpreting empirical studies, I have learned a great deal since becoming involved with the Experiment Month initiative, and am very thankful for all of the help and support the people there provided. I probably would never have become involved in empirically-based philosophical psychology were it not for them.
Anyway, here are the ideas I had rolling around in my head for several years. First, like many philosophers, I had long been frustrated by "stagnated debates" -- debates where the opposing sides more or less reach an impasse and cannot even agree upon premises. Since philosophical arguments are, by definition, arguments from premises to conclusions, this situation puzzled me. When two sides in a moral or political debate seem to fundamentally disagree over premises, what is an appropriate way to proceed?
The second idea I had stems from the way in which I had witnessed moral and political philosophers deal with certain kinds of disputed premises in practice -- both in the classroom and in published books and papers. The kind of case I found most interesting is that of the hard-core immoralist, the person who claims to see no reason to behave morally when they might benefit from behaving immorally. The reason the immoralist seems worth worrying about philosophically is simple: the immoralist seems to be asking a normatively reasonable question. Consider, for example, one natural way of understanding normativity (or what someone "has reason to do", "ought" to do, etc.). Suppose I want to stay alive. Is there anything I have reason to do, or ought to do? Intuitively, yes. If I want to stay alive, I sure as heck ought not to jump off of a cliff without a parachute. Why? Simple: because doing so won't get me what I want. Once you know what I want, it no longer seems like "merely a matter of opinion" what I ought to do. If I don't want to die, there seems to be at least one objective sense -- a sense of prudence -- in which I ought to do certain things rather than others. The "immoralist challenge" then is this: it certainly seems that people can get things they want -- fame, fortune, power, etc. -- by behaving in ways that are commonly considered immoral. And so their question seems reasonable: if they can get what they want by behaving immorally, why shouldn't they?
Philosophers have grappled with this challenge for thousands of years. I do not know of many philosophers, however, who think that it has ever been definitively refuted. It still seems like people -- "bad people" -- can often get what they want by behaving in atrocious ways. That being said, what is a philosopher to do? If we can't show the immoralist why they should behave morally, how should we proceed? The answer, both in the classroom and in published work, is often this: philosophers just ignore the immoralist. Why? Isn't the answer obvious? We recognize that the immoralist is morally obtuse in a way that the rest of us are not. The rest of us -- normal, well-socialized human beings -- feel the grip of moral arguments and principles. In other words, moral and political philosophers often seem to privilege the standpoint of the "well-raised" over that of people who are not.
The question that arose in my head then was this: why do philosophers tend to privilege some people's premises (i.e., those of ordinary people like you and me) over the kinds of premises that other people (the immoralist, the psychopath, etc.) find attractive in moral and political argument? It seemed to me that the answer is this: we assume that some people are better moral perceivers than others, just as some people have better eyesight than others. We do not ordinarily direct moral argument to the immoralist, psychopath, or Nazi because we recognize that they are, in terms of their moral psychology, "beyond the pale."
Next, it occurred to me that this idea -- that some people are better "moral perceivers" than others -- has a long history in philosophy. Aristotle, for example, famously argued that some people (the "phronimos") possess moral and practical wisdom that others do not, and that we should look to these people for moral and practical guidance. Moreover, the idea is familiar enough in ordinary life. There are some people whose moral opinions we trust more than others'. We do not consult selfish, mean-spirited people for moral advice. We seek the advice of the kind, the compassionate, etc.
And so the following thought occurred to me: might it be possible to study who among us has the "best moral sense", in a manner that might help us move forward stagnant debates over premises in moral and political philosophy? These were, of course, just a bunch of vague thoughts. I had never thought very seriously about how to pursue them, and I am still struggling with them to this day. Still, I think I have made a little bit of progress on them, and have, at any rate, begun to try to study them empirically. In my next post, I will describe some of the research I have carried out, and then, in future posts, where I hope to go from there.