In this interview with Pat Churchland, she states:
By and large, the philosophers who say we must maximize aggregate utility end up with all the usual problems every undergraduate can list at a moment’s notice, not least of which is that what makes people happy is apt to vary with their values, not to mention that calculating aggregate utility is NP-incomplete [sic], or as close as makes no difference.
Perhaps it's only an off-the-cuff remark, and it certainly seems plausible, but I suspect she's alluding to some research out there on consequentialism and NP-completeness. I did a little searching with only modest success. Does anyone know anything directly on point?
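Whatever the literature turns out to say, the computational worry itself is easy to illustrate. Even a toy version of "maximize aggregate utility" — say, choosing which of n possible actions to take when each has a utility and a cost, subject to a budget — is exactly the 0/1 knapsack problem, which is NP-hard. A minimal brute-force sketch (the function name and all the numbers are purely illustrative, not from Churchland or anyone else):

```python
from itertools import combinations

def best_allocation(utilities, costs, budget):
    """Search every subset of actions and return the highest total
    utility achievable within the budget, plus the winning subset.
    This is the 0/1 knapsack problem: the loop visits all 2**n
    subsets, so the work doubles with every additional action."""
    n = len(utilities)
    best_value, best_set = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            cost = sum(costs[i] for i in subset)
            if cost <= budget:
                value = sum(utilities[i] for i in subset)
                if value > best_value:
                    best_value, best_set = value, subset
    return best_value, best_set

# Hypothetical numbers: four candidate actions, budget of 9.
print(best_allocation([10, 7, 5, 3], [6, 4, 3, 2], budget=9))
```

Dynamic programming tames knapsack for small integer budgets, but no known algorithm avoids the exponential blow-up in general — which is presumably the point of the "or as close as makes no difference" hedge.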
When I saw the recent headline in New Scientist, "The Moral Uncertainty of a P = NP World," I thought it would be on this very topic. In fact, however, it's about a new film called Travelling Salesman that will soon start the festival circuit. In the film, a group of researchers figures out how to easily crack difficult cryptographic problems and the like. Writer Jacob Aron is excited about the issues the film raises but gives a rather soft endorsement of the film itself.