What is the nature of moral truth? Should we treat uncertainty about moral claims the same way we treat uncertainty about scientific claims? These are some of the deep questions implicated by Mary Sigler's much-appreciated response to my article on Punishment and Moral Risk.
In my paper, I argue that retributivists need to believe at least nine key propositions to justify punishing particular individuals, and when we multiply our confidence in each proposition, reasonable retributivists will likely have too much doubt to punish consistently with the values that seem to underlie retributivists' commitment to the beyond-a-reasonable-doubt principle. After a very clear and crisp summary of my main claims, Sigler presents her central criticism of my methodology (footnotes omitted throughout):
The problem with Kolber’s indiscriminate list of retributive propositions is that most of the entries on the list represent moral, rather than empirical, claims. And moral claims differ from empirical claims precisely in that they are not testable or otherwise susceptible to proof or falsification. Instead, moral belief (to the extent that it is critically examined) is generally a product of argument and reflection, not proof. By assimilating moral and empirical claims, Kolber attempts to apply a quantitative standard of proof to moral claims, which can neither be reliably measured nor empirically proved. . . .
The aim of the approach [Sigler advocates] is thus not to “prove” the validity of a moral principle, but to evaluate whether it fits within the broader scheme of principles already taken—provisionally—as fixed. Whereas a foundationalist attempts to deduce his moral conclusions from authoritative premises (e.g., the word of God), a coherentist recognizes that his enterprise will necessarily “involve a large element of trial and error and muddling through.”
So I take Sigler's central criticism to be that we cannot meaningfully assign confidence to the non-empirical propositions I use. I have two main responses. First, I don't see how anything Sigler says affects our ability to assign levels of confidence to moral claims. Many hold coherence views about the truth of scientific propositions. They consider how a claim fits with other scientific beliefs they already hold. Yet they can still estimate their confidence in scientific propositions. Indeed, it may be precisely because we hold our beliefs with varying levels of confidence that we can even sort through a web of beliefs. If all our beliefs were equally weighted in confidence, we'd be in big epistemic trouble. True, moral claims are not empirically testable in the same way as many scientific claims. But why does that affect our ability to hold them with different levels of confidence?
Consider: if we couldn't weigh our confidence in various beliefs, how would we know whether we believe X as opposed to not-X? Or whether we find claim A, B, or C the most compelling? True, none of this means that we can come up with precise percentages. But as I say in the paper, rough percentage estimates simply make my arguments more elegant and tangible. If such numbers cannot be accurately estimated, we could run very similar arguments by asking whether one holds each proposition with very high, high, medium, etc. levels of confidence. Confidence in the conjunction of all nine propositions can be no greater than one's confidence in the least confident proposition, and it will typically be quite a bit less. But the argument is cleaner if one can at least give rough estimates in percentage terms. Moreover, are there moral claims that you are more confident about now than you were ten years ago? How could that be if we cannot estimate our confidence in various propositions?
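The arithmetic behind the conjunction point can be sketched in a few lines. The 90% and 95% figures below are hypothetical illustrations, not numbers from the paper, and multiplying the confidences assumes the nine propositions are probabilistically independent, which is a simplification:

```python
# Hypothetical: a retributivist holds each of nine propositions
# with 90% confidence. Multiplying (independence assumed) gives
# confidence in the conjunction.
joint = 1.0
for confidence in [0.9] * 9:
    joint *= confidence
print(round(joint, 3))  # → 0.387

# Even at 95% confidence per proposition, the conjunction stays
# well below a beyond-a-reasonable-doubt threshold often glossed
# as roughly 0.9.
joint_95 = 1.0
for confidence in [0.95] * 9:
    joint_95 *= confidence
print(round(joint_95, 3))  # → 0.63
```

The point survives without exact numbers: so long as each proposition is held with less than full confidence, the conjunction's weight falls below that of the weakest link.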
Note, too, that I make no claims about how people ordinarily think about moral propositions and whether they typically think about their credence in various claims. Even if they don't ordinarily think this way, retributivists still need to confront the challenge that their justification requires belief in several propositions, each of which cannot plausibly be held with complete confidence. I realize, too, that there is disagreement about the fundamental nature of subjective probability, so there's clearly going to be disagreement when estimating the strength of our probabilistic beliefs. But I don't see how Sigler shows that such disagreement varies along the same divide as the empirical/moral line.
Of course, some people may doubt that moral propositions have truth value at all. But the argument in my paper is addressed to retributivists who purport to justify the punishment of particular individuals, and I don't think such retributivists typically deny that moral claims have truth values. And that leads to my second reply: even if I'm wrong about the ability to assign confidence to moral propositions, I'm not sure that helps retributivists much. When a prisoner poses the legitimate hypothetical question to the retributivist, "how confident are you that my punishment is deserved?" it does not seem like a satisfying answer to say, "I don't know; I can't calculate confidence levels in moral propositions." Failing to explain one's level of confidence in the face of quite reasonable doubts seems like its own sort of failure of the justificatory process.
Sigler writes:
A process of “muddling through” may not inspire high levels of Kolber-confidence, but it reflects the only viable process suitable to the moral domain, and it entails humility—the recognition that further argument, reflection, and experience may reveal a better answer. In the meantime, we need not doubt—or hedge against—what Ronald Dworkin calls the “face value” view of our propositions—that genocide is truly wrong, for example, or that wrongdoers really deserve to suffer punishment. For “any reason we think we have for abandoning a conviction is itself just another conviction, and . . . we can do no better for any claim . . . than to see whether, after the best thought we find appropriate, we think it so.” Absent empirical testing and definitive proof (unavailable in this domain), the only way to establish a working moral proposition is “through substantive normative arguments.” Until we encounter a better argument, we have reason to credit the truth of our considered convictions. Accordingly, application of the BARD standard (or other quantitative metrics) represents a misapprehension of the nature of moral truth, producing a category mistake that imposes an inapt quantitative measure to gauge the soundness or strength of a moral proposition.
I couldn't agree more about the importance of humility, but I think my approach is entirely consistent with it: recognizing the weaknesses in one's beliefs is a way of demonstrating humility. None of that shows how I misapprehend the nature of moral truth. Take the very examples in the quote: that "genocide is truly wrong" and that "wrongdoers really deserve to suffer punishment." Perhaps wrongdoers shouldn't be made to suffer but should instead be preventively detained and/or given rehabilitative treatment. Whether that's true or not, I think all reasonable people should be more confident about the genocide claim than about the claim about wrongdoers. We ought to vary in our confidence levels about moral propositions. Exactly how to convert the weight of these beliefs into numbers is challenging, but comparisons of this sort can yield rough estimates, and the details, again, needn't be spelled out because precise quantification isn't essential to the argument.
Sigler also calls me out for perceived mistakes in my discussion of particular individual propositions. These discussions, however, are not meant to give definitive arguments about major moral questions. They are simply meant to stimulate readers unfamiliar with them to give rough estimates of their confidence levels in pertinent propositions. For example, when discussing free will, I mention some data about the rather deep-seated disagreement among professional philosophers. Sigler responds:
As Kolber correctly notes, despite his penchant for citing disagreement, we cannot “straightforwardly determine our confidence” in a moral proposition by taking a poll. But this disclaimer is much more devastating than he allows. Absent an argument about why these differences of opinion have any implications for moral truth, we cannot evaluate the significance of moral disagreement.
To be sure, the data are only meant to help spur readers on. But it strikes me as not at all unreasonable to consider such data. Imagine a student in her first week of an introductory philosophy class who believes she has solved the problem of free will. Telling this student that the problem of free will stretches back centuries and that professional philosophers have spilled enormous amounts of ink on it should likely reduce her confidence in her own solution if she wasn't already aware of how much attention the problem has received. None of this entails that the student's solution is wrong. But data of the sort I mention can, at a minimum, serve as rough-and-ready proxies for more careful analysis, and that's all I use them for in the paper.
Finally, Sigler challenges my discussion of "portfolios of beliefs" in the context of threshold deontology (roughly, the view that we can be deontologists with firm moral prohibitions against certain conduct, such as torture, yet violate those prohibitions when the consequences of observing them (say, the expected death of thousands) are sufficiently awful). I argue that one might be a kind of threshold deontologist who basically believes in deontology but, when the risks of being morally wrong are sufficiently high, acts as a consequentialist would. The gist of Sigler's criticism is that my solution is too easy. A good threshold deontologist should experience the deep moral regret of situations requiring tragic choices that exemplify threshold deontology, and Sigler doesn't believe that portfolios of beliefs permit that.
The matter seems closely connected to a bigger debate about whether there can be genuine moral dilemmas--situations where whatever you do is morally wrong, even if you did nothing morally wrong to get into the situation in the first place. I have my doubts about the existence of true moral dilemmas, just as I have doubts about whether one ought to experience moral regret in the situations Sigler envisions.
More importantly, though, even if I concede that there are true "moral remainders" in such situations, I don't see why those who hold portfolios of beliefs cannot accommodate Sigler's request. The epistemic threshold deontologist I discuss is rather confident in deontology. When she breaks a deontological prohibition, she can still feel deep regret. Maybe those regrets are assuaged a bit by her belief in the modest possibility that consequentialism is true. But I don't see why her fallback beliefs should dominate her moral psychology. Again, I likely see these matters differently than Sigler does. Yet when I try to enter her mindset, I see many ways in which epistemic threshold deontologists might generate the kind of moral remainders Sigler seeks.
There is much of great value in Sigler's reply, and for reasons of time and space, I haven't addressed all of her keen and thoughtful arguments (but will try to do so in the comments if readers have particular interests). In the meantime, I extend my warmest thanks to her for a most valuable discussion.