Posted by NELB Staff on 07/11/2013 at 10:19 PM | Permalink | Comments (0)
Posted by Adam Kolber on 07/11/2013 at 12:48 PM | Permalink | Comments (0)
Posted by Adam Kolber on 07/11/2013 at 10:43 AM | Permalink | Comments (0)
Abstract
Concepts are mental representations that are the constituents of thought. Edouard Machery claims that psychologists generally understand concepts to be bodies of knowledge, or information-carrying mental states stored in long-term memory, that are used in higher cognitive competences such as categorization judgments, induction, planning, and analogical reasoning. While most research in the concepts field has focused on concrete concepts such as LION, APPLE, and CHAIR, this paper will examine abstract moral concepts and ask whether such concepts may have prototype and exemplar structure. After discussing the philosophical importance of this project and explaining the prototype and exemplar theories, I will criticize philosophers who, without experimental support from the sciences of the mind, contend that moral concepts have prototype and/or exemplar structure. Next, I will scrutinize Mark Johnson's experimentally based argument that moral concepts have prototype structure. Finally, I will show how our moral concepts may indeed have prototype and exemplar structure, and explore the further ethical implications that may follow from this conclusion about moral concepts.
Posted by NELB Staff on 07/10/2013 at 10:42 PM | Permalink | Comments (0)
In Marcus's very interesting recent post, he suggests that we engage in moral debates with certain philosophical premises rather than others because we associate certain premises with better "moral perceivers." Perhaps so. One concern, though, is how we could recognize someone as a keen moral perceiver when the very debates we seek to engage concern premises about the nature of morality. Perhaps there is a pre-theoretical method of assessing moral perception, or some sort of reflective equilibrium approach on which we refine our views about moral perception as the conversation proceeds.
Another possibility is that we only engage in certain debates with people who accept certain premises. So, for example, suppose I'm trying to decide which Marvel Comics' superhero is the strongest. The premises I engage with may come not from those with the best "superhero perception" but from those who care enough about the issue to engage in conversation about it. (Of course, we'll say that those who don't enter the conversation are poor "superhero perceivers," but maybe they would be good superhero perceivers, if, counterfactually, they cared about the issues.) Consider the solipsist. Perhaps there are solipsists with keen "ontological perception." We may, however, never engage with their premises because true solipsists, let us suppose, are not interested in talking with us.
In any event, the research Marcus describes sounds very interesting. Even if it doesn't tell us which premises we ought to engage with, it may tell us whether, in fact, we pick premises based on certain qualities of the people who hold those premises.
Posted by Adam Kolber on 07/09/2013 at 09:50 AM | Permalink | Comments (0)
See here for a call for papers for a special issue on "Fear, Economic Behavior, and Public Policies."
Posted by Adam Kolber on 07/09/2013 at 09:27 AM | Permalink | Comments (0)
I'd like to devote this, my second guest-post, to explaining how I got into neuroethics, and what my primary interests are in the area. I sort of fell into neuroethics by accident. I had some vague ideas about what the relationship should be between moral and political theory and empirical study of the mind and behavior (which I will discuss below) rolling around in my head for several years. However, I never really had any impetus to follow through on any of those ideas until I came across a call-for-proposals by the Yale Experiment Month initiative. For those of you who may have never heard of it before, it is a wonderful program that aims to help philosophers get "out of the armchair" by helping accepted applicants design and carry out empirical studies. Prior to my involvement with Experiment Month, my experience with experimental psychology was more or less limited to reading pop-psychology magazines and things I had studied as an undergraduate (when I double-majored in philosophy and psychology). Although I am in many ways still very much a beginner when it comes to running and interpreting empirical studies, I have learned a great deal since becoming involved with the Experiment Month initiative, and am very thankful for all of the help and support the people there provided. I probably would never have become involved in empirically based philosophical psychology were it not for them.
Anyway, here are the ideas I had rolling around in my head for several years. First, like many philosophers, I had long been frustrated by "stagnated debates" -- debates where different sides more or less reach an impasse where the opposing sides cannot even agree upon premises. Since philosophical arguments are, by definition, arguments from premises to conclusions, this situation puzzled me. When two sides in a moral or political debate seem to fundamentally disagree over premises, what is an appropriate way to proceed?
The second idea I had stems from the way in which I had witnessed moral and political philosophers deal with certain kinds of disputed premises in practice -- both in the classroom and in published books and papers. The kind of case I found most interesting is the case of the hard-core immoralist, the person who claims to see no reason to behave morally when they might benefit from behaving immorally. The reason the immoralist seems worth worrying about philosophically is simple: the immoralist seems to be asking a normatively reasonable question. Consider, for example, one natural way of understanding normativity (or what someone "has reason to do", "ought" to do, etc.). Suppose I want to stay alive. Is there anything I have reason to do, or ought to do? Intuitively, yes. If I want to stay alive, I sure as heck ought not to jump off of a cliff without a parachute. Why? Simple: because doing so won't get me what I want. Once you know what I want, it no longer seems like "merely a matter of opinion" what I ought to do. If I don't want to die, there seems to be at least one objective sense -- a sense of prudence -- in which I ought to do certain things rather than others. The "immoralist challenge" then is this: it certainly seems that people can get things they want -- fame, fortune, power, etc. -- by behaving in ways that are commonly considered immoral. And so their question seems reasonable: if they can get what they want by behaving immorally, why shouldn't they?
Philosophers have grappled with this challenge for thousands of years. I do not know of many philosophers, however, who think that it has ever been definitively refuted. It still seems that people -- "bad people" -- can often get what they want by behaving in atrocious ways. That being said, what is a philosopher to do? If we can't show the immoralist why they should behave morally, how should we proceed? The answer, both in the classroom and in published work, is often this: philosophers just ignore the immoralist. Why? Isn't the answer obvious? We recognize that the immoralist is morally obtuse in a way that the rest of us are not. The rest of us -- normal, well-socialized human beings -- feel the grip of moral arguments and principles. In other words, moral and political philosophers often seem to privilege the standpoint of the "well-raised" over people who are not.
The question that arose in my head then was this: why do philosophers tend to privilege some people's premises (i.e., yours and mine, ordinary people's) over the kinds of premises that other people (the immoralist, the psychopath, etc.) find attractive in moral and political argument? It seemed to me that the answer is this: we assume that some people are better moral perceivers than others, just as some people have better eyesight than others. We do not ordinarily direct moral argument to the immoralist, psychopath, or Nazi because we recognize that they are, in terms of their moral psychology, "beyond the pale."
Next, it occurred to me that this idea -- that some people are better "moral perceivers" than others -- has a long history in philosophy. Aristotle, for example, famously argued that some people (the "phronimoi") possess moral and practical wisdom that others do not, and that we should look to these people for moral and practical guidance. Moreover, the idea is familiar enough in ordinary life. There are some people whose moral opinions we trust more than others'. We do not consult selfish, mean-spirited people for moral advice. We seek the advice of the kind, the compassionate, etc.
And so the following thought occurred to me: might it be possible to study who among us have the "best moral sense", in a manner that might help us move forward stagnant debates over premises in moral and political philosophy? These were, of course, just a bunch of vague thoughts. I had never thought very seriously about how to pursue them, and I am still struggling with them to this day. Still, I think I have made a little bit of progress on them, and have, at any rate, begun to try to study them empirically. In my next post, I will describe some of the research I have carried out, and then, in future posts, where I hope to go from there.
Posted by Marcus Arvan on 07/08/2013 at 01:22 PM | Permalink | Comments (0)
Last Edition's Most Popular Article:
In The Popular Press
From the Academic Literature:
Posted by NELB Staff on 07/07/2013 at 08:09 PM | Permalink | Comments (0)
Greetings, everyone! Adam Kolber has graciously invited me to contribute during the month of July as a guest-blogger. I will be posting 1-2 times per week over the next month, primarily to discuss my work and interests in neuroethics and law, including some issues that I haven't done any work on yet but which I hope might interest readers. I would like to begin, in this post, by briefly introducing myself, and by explaining how I came to be involved in neuroethics.
I received my PhD in Philosophy with a Cognitive Science Minor from the University of Arizona. For the past four years, I have been Assistant Professor of Philosophy at the University of Tampa, in Tampa, Florida. For the past year or so, I have also been owner and moderator of The Philosophers' Cocoon, a safe and supportive forum for early-career philosophers to congregate and discuss their work and professional issues. Although I wrote my dissertation, "A Non-Ideal Theory of Justice", in political philosophy, I have always had very wide research interests, and have published and presented papers in a variety of areas, including:
My 2012 article, "Bad News for Conservatives? Moral Judgments and the Dark Triad Personality Traits: A Correlational Study", which found a number of significant relationships between traditionally "conservative" moral and political judgments and the Dark Triad personality traits (Machiavellianism, Narcissism, and Psychopathy), received some news coverage. It was also -- not all that surprisingly -- the subject of a pretty harsh reply article. As I will explain in more detail in a later post, although I am a bit embarrassed by some errors the reply article brings to light, I believe the critique of my paper is incorrect on several fronts. The author claims (1) that I shouldn't have used simple correlation analyses (actually, I should have), and (2) that, despite essentially confirming my results with different statistical methods, it is not clear what psychological construct my results measure (this despite the fact that, in the article, I state explicitly that the aim of the study was not to measure any psychological construct at all, but rather responses to particular moral questions). I am also in the process of gathering new data for a follow-up study to address the author's concerns more definitively.
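For readers unfamiliar with the methodological dispute, the "simple correlation analyses" at issue are just pairwise Pearson correlations between trait scores and judgment ratings. Here is a minimal sketch in Python; the data, variable names, and numbers below are invented for illustration and are not from the study:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical illustration: per-participant scores on a Machiavellianism
# scale, paired with agreement ratings (1-7) on some moral-judgment item.
mach_scores = [2.1, 3.4, 2.8, 4.0, 3.1, 4.5, 2.0, 3.8]
judgments   = [3,   5,   4,   6,   4,   6,   2,   5]

r = pearson_r(mach_scores, judgments)
print(round(r, 3))  # positive r: higher trait scores track higher agreement
```

A study like the one described would report an r of this kind (with a significance test) for each trait-judgment pair; the reply article's complaint, as I understand it, concerned whether such pairwise correlations are the right analysis, not whether they were computed correctly.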
Anyway, I will explain all of this in more detail in a later post. I will also explain other interests I have in neuroethics, including some issues about psychopaths and moral and legal responsibility, as well as how I think my work on free will bears on moral and legal responsibility. Before I get to all of this, however, I would like to explain how I got into neuroethics, and what led me to examine the relationships between personality traits and moral-political judgments in the first place. And so this is what I will do next...in my next post... :)
Posted by Marcus Arvan on 07/02/2013 at 01:01 PM | Permalink | Comments (1)