Someone called SoccerSkillz has written a pretty good post arguing against utilitarianism. I don’t think the objections ultimately work, but they’re pretty good. The article is a bit like Huemer’s article—which I’ve responded to before. Since it raises additional cases, I thought they’d be worth responding to. I’ll refer to the author as SS, because SoccerSkillz is awkward to type repeatedly.
Most rationalists I have asked about the subject tell me their interest in utilitarianism largely comes down to their theoretical preference for parsimony--"it boils everything down to one clear principle." Which is strange, seeing as consequentialism is a pluralistic theory that encompasses more than one starting variable. Pleasure and pain are morally relevant--and, for utilitarians, relative impartiality in the distribution of utilities is also thought to matter, which is yet another principle.
It’s true that utilitarianism is not the simplest possible moral theory—but it’s pretty simple. Sure, it says that pleasure is good and pain is bad, but those share the commonality of being valenced experiences. What fundamentally matters is the direction of valenced experiences—and the fact that this matters gives rise to various reasons for action. Consequentialism is pluralistic, but only because consequentialism is underspecified. More detailed forms of consequentialism, like hedonic act utilitarianism, are often pretty simple.
The author goes on to elaborate that parsimony is only a virtue if theories are explanatorily on a par. This is only partially right. Parsimony is always a virtue—but if a theory can’t explain the data, then it’s a terrible theory. Just as being nice to dogs is always a virtue, yet no matter how nice to dogs Jeffrey Dahmer was, he would still not be a good person. So the question is whether utilitarianism explains the data.
A few considered intuitions that I have about morality which I feel go bizarrely unaccommodated by utilitarianism are:
Bodily autonomy is generally morally relevant in an intrinsic way, even independent of consequences. A rapist would not be in the right because he managed to create a foolproof date rape strategy and committed his act while his victims were unconscious, never to be the wiser. This is because we have a right to limit the sexual access of other people to our bodies, given that we own our bodies.
I’ve already replied to this objection here.
Promises and honesty are also relevant: imagine a low-IQ boy, Ronny, with a terrible memory, who mows the neighborhood's lawns for cash. After a hard day's labor mowing seven lawns, he forgets to ask Mr. Jenson for compensation. Mr. Jenson, aware of the child's gullibility, takes advantage of his innocence and withholds payment, answering the door with a grin and saying "Oh no, Ronny, you're mistaken. You mowed my lawn last week, you poor dear!" Ronny, considering this, realizes it must be true, and thanks Mr. Jenson for his business before cheerfully skipping away. Were Mr. Jenson's actions appropriate? Assume that his cynical act will not become known to Ronny, nor will it be practiced universally as a rule and undermine the institution of promise keeping in general. It will simply violate his promise. Is it any worse for that?
I’ve replied to this here.
The Problem of Extreme Demands: Another problem with consequentialism is that it is over-demanding. This is a big issue for the utilitarians who think the theory provides an excellent rule of thumb with the right answers for 99% of cases, despite a few rarefied hypothetical problems that don't matter. The idea that consequentialism is "a great rule of thumb" in the real world or in everyday life only makes sense if we ignore most of what the rule implies. Why not donate all of your nonessential earnings to effective charities operating in the developing world which save a life for every $100-$3,500? Why not work more hours for more charity dollars, until you reach the highest level of altruistic slavery that corresponds to the highest possible production of goods of which you are emotionally and physically capable? Why not become a utility pump?
I don’t think this objection succeeds. The most promising response appeals to scalar utilitarianism. Utilitarianism.net defines scalar utilitarianism in the following way:
Scalar utilitarianism is the view that moral evaluation is a matter of degree: the more that an act would promote the sum total of well-being, the more moral reason one has to perform that act.
On this view, there is no fundamental, sharp distinction between 'right' and 'wrong' actions, just a continuous scale from morally better to worse.
Scalar utilitarianism gives a natural and intuitive way to defuse this. Sure, maybe doing the most right thing is incredibly demanding — but it’s pretty obvious that giving 90% of your money to charity is more right than only giving 80%. Thus, by admitting of degrees, demandingness worries dissipate in an instant.
And we should expect doing the best thing to be demanding. Doing the best possible thing every moment of every day is, of course, demanding—the best thing you can do should be demanding. Scalar utilitarianism accounts for this fact, perfectly accommodating our demandingness intuitions.
But there are a lot more objections.
First, utilitarianism is intended as a theory of right action, not as a theory of moral character. Virtually no humans always do the utility-maximizing thing--it would require too great a psychological cost to do so. Thus, it makes sense to have the standard for being a good person fall well short of perfection. However, it is far less counterintuitive to suppose that it would be good to sacrifice oneself to save two others than it is to suppose that one is a bad person unless they sacrifice themselves to save two others. In fact, it seems that any plausible moral principle would say that it would be praiseworthy to sacrifice oneself to save two others. If a person sacrificed their life to protect the leg of another person, that act would be bad, even if noble, because they sacrificed a greater good for a lesser good. However, it’s intuitive that the act of sacrificing oneself to save two others is a good act.
The most effective charities can save a life for only a few thousand dollars. If we find it noble to sacrifice one's life to save two others, we should surely find it noble to sacrifice a few thousand dollars to save another life. The fact that there are many others who can be saved, and that utilitarianism therefore prescribes donating most of one’s money, doesn’t count against the basic calculus that the life of a person is worth more than a few thousand dollars.
Second, while it may seem counterintuitive that one should donate most of their money to help others, this revulsion goes away when we consider it from the perspective of the victims. From the perspective of a person who is dying of malaria, it would seem absurd that a well-off westerner shouldn’t give up a few thousand dollars to prevent their literal death. It is only because we don’t see the beneficiaries that it seems too demanding. It is far more demanding, from the victim's perspective, to die of malaria so that someone else doesn't have to donate.
(Sobel, 2007) rightly points out that allegedly non-demanding moralities do demand a great deal of some people. They merely demand a lot of the people who would have been helped by the allegedly demanding action. If morality doesn’t demand that a person give up their kidney to save the life of another, then it demands that the other person die so the first doesn’t have to give up a kidney. Sobel argues that there is no satisfactory distinction between the demands placed on the victim of ill fortune and the demands consequentialism places on the well-off.
If a privileged wealthy aristocracy objected to a moral theory on the grounds that it requested they donate a small share of their luxury to prevent many children from dying, we wouldn’t take that to be a very good objection to that moral theory. Yet the objection to utilitarianism is almost exactly the same--minus the wealthy aristocracy part. Why in the world would we expect the correct moral theory to demand so little of us, when giving up a vacation could prevent a child from dying? Perhaps if we consulted those whose deaths were averted as a result of the foregone vacation or nicer car, utilitarianism would no longer seem so demanding.
Third, we have no a priori reason to expect ethics not to be demanding. The demandingness intuition seems to dissolve when we realize our tremendous opportunity to do good. The demandingness of ethics should scale with our ability to improve the world. Ethics should demand a lot from Superman, for example, because he has a tremendous ability to do good.
Fourth, the drowning child analogy from (Singer, 1972) can be employed against the demandingness objection. If we came across a drowning child while wearing a two-thousand-dollar suit, it wouldn’t be too demanding to suggest we ruin the suit to save the child. Singer argues that failing to save the child is analogous to failing to donate to prevent a child from dying.
One could object that the child being far away matters. However, distance is not morally relevant. If one could either save five people 100 miles away, or ten 100,000 miles away, they should surely save the ten. When a child is abducted and taken away, the moral badness of the situation doesn’t scale with how far away they get.
A variety of other objections can be raised to the drowning child analogy, many of which were addressed by Singer.
Fifth, demandingness is required to obey cross-world Pareto optimality. Consider two possible worlds: world one has immense opportunity to help people; world two has very little opportunity to help people, such that in world two utilitarianism demands virtually nothing. If ordinary morality is equally undemanding across both worlds, then the demanding morality would be, taken across worlds, better both for you and for others.
Utilitarianism is not demanding because of some inherent reason to be a saint. Rather, utilitarianism is demanding at this place, time, and social location because we have immense opportunities to make the world a better place. When the evidence changes, so should the demandingness of our morality--particularly if we want to obey cross-world Pareto optimality.
Sixth, Kagan (1989) provides the most thorough treatment of the subject to date, and argues persuasively that there is no philosophically persuasive defense of the claim that morality is not demanding. Similar accounts can be found in (Pogge, 2005), (Chappell, 2009), and many others.
Kagan rightly notes that ordinary morality is very demanding in its prohibitions: we are morally required not to kill other people, even for great personal gain. However, Kagan argues that the distinctions between doing and allowing, and between intending and foreseeing, cannot be drawn successfully, meaning that there’s no coherent account of why morality is demanding about what we can’t do, but not about what we must do.
Seventh, a non-demanding morality can be collectively, even infinitely, undesirable. Currently, for affluent people, increasing the welfare of a person on the other side of the world by N costs far less than N/2; but even if we stipulate that the cost is exactly N/2, the implication still goes through. Suppose two people can each endure a cost of N/2 to benefit the other, far-away person by N. If both do this, both are better off. We can stipulate that this process is iterated enough times that a non-demanding morality leaves everyone infinitely worse off: if you each have infinite opportunities to make the other person 1 utility better off at a cost of 0.5 utility to yourself, then both of you will be left infinitely better off if you always take the option, and infinitely worse off, by comparison, if a non-demanding morality permits you both to decline.
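The arithmetic here can be made concrete with a small simulation (a toy sketch using the stipulated payoffs—benefit 1 at cost 0.5—not anything from the original post):

```python
# Toy model of the iterated mutual-aid case: each round, each agent may pay
# a cost of 0.5 utility to give the other agent 1 utility.

def total_utilities(rounds: int, a_helps: bool, b_helps: bool) -> tuple[float, float]:
    """Cumulative utility for agents A and B after `rounds` iterations."""
    a, b = 0.0, 0.0
    for _ in range(rounds):
        if a_helps:   # A pays 0.5 so that B gains 1
            a -= 0.5
            b += 1.0
        if b_helps:   # B pays 0.5 so that A gains 1
            b -= 0.5
            a += 1.0
    return a, b

print(total_utilities(1000, a_helps=True, b_helps=True))    # (500.0, 500.0)
print(total_utilities(1000, a_helps=False, b_helps=False))  # (0.0, 0.0)
```

Each round of mutual helping nets both agents +0.5, so as the number of rounds grows without bound, the gap between the "always help" and "never help" policies becomes arbitrarily large—the sense in which a non-demanding morality leaves everyone infinitely worse off.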
Eighth, our intuitions about such cases are debunkable. (Braddock, 2013) argues that our beliefs about demandingness were primarily formed by unreliable social pressures. Thus, the reason we think morality can’t be overly demanding is the norms of our society, rather than truth-tracking reasons. Additionally, (Ballantyne and Thurow, 2013) argue that partiality, bias, and emotion all undermine the reliability of our intuitions. We have strong partial and biased reasons to oppose demanding morality, making this a prime target for debunking, and giving us strong emotional opposition to demandingness.
Greater evidence for the social pressure thesis from Braddock comes from the fact that our intuitions about demandingness are hard to explain, except in light of social pressures. Several strange features of our obligations are best explained by this theory.
It’s generally recognized that people have an obligation to pay taxes, even though paying taxes produces far less well-being than saving the lives of people in other countries. There are obvious social pressures that encourage paying taxes.
As (Kagan, 1989) points out, morality does often demand we save others, such as our own children or children we find drowning in a shallow pond. This is because social pressures lead us to care about people in our own society—people right in front of us—rather than far-away people, and especially about our own children.
Braddock notes (p.175-176) “These processes include but are not limited to the internalization of social norms through familiar socialization practices, sanction practices, conformist pressures, modeling processes, and so on. What we think is too demanding is largely influenced by what people around us think is too demanding, much like, as a general matter, what we are likely to believe and do is influenced by what people around us believe and do. And even if those around us have not expressed these intuitions or ever explicitly entertained them before their minds, nonetheless from our earliest days, the content of our demandingness intuitions is plausibly influenced by which norms of beneficence people adopt and which attitudes they express about sharing and giving.”
Ninth, as (Braddock, 2013) notes, this problem applies to nearly all plausible theories, not merely consequentialism. Braddock writes (p.169) “The targets get branded as being “too demanding,” “unreasonably demanding,” “infeasible,” “unrealistic,” “unlivable,” “impracticable,” or “utopian.” The idea is not just that the targeted views are demanding—every plausible moral view is at least somewhat demanding in that it imposes some moral obligations upon us—but rather that they are excessively demanding. Usual suspects include: act consequentialism, rule consequentialism, Kantian ethics, virtue ethics, Scanlonian contractualism, the Golden Rule, Peter Singer’s famous strong principle of beneficence, commonsense principles of beneficence, egalitarian and cosmopolitan principles of distributive justice, socioeconomic rights claims, and so on.”
(Ashford, 2003) points out that the demandingness problem plausibly applies to Scanlon’s contractualism, meaning that it is a problem for other moral views—even ones designed specifically to avoid the demandingness of utilitarianism.
There’s a puzzling asymmetry in how the demandingness objection is applied to utilitarianism and to Christianity. Christianity seems a prime example of a demanding theory—it does, after all, imply that we’re all sinners. However, the demandingness objection is never, to my knowledge, raised against Christianity. It’s not clear why the objection is thought to be a unique problem for utilitarianism, given this fact.
Matched consequences: Under circumstances where consequences are matched between potential perpetrators, consequentialism gives no specific recommendation. This becomes a problem when it affords a moral justification for heinous acts. For example, the seductive Tammy from work approaches John at a bar, and John is interested. There's one problem: John has a loving wife at home, and two children. He goes over all of the possible moral consequences: I could destroy our happy marriage, I could devastate my children, I could lose my job. John sighs and tells her he can't cheat on his wife. Tammy raises an eyebrow and says "Okay, but consider this before you decide: I already have plans to go home with Andrew--that is, if I can't see you instead." John understands that his coworker Andrew is in the same situation: he has two children of the same age, and a loving wife, they live on the same block in similar houses, they have the same guile and resourcefulness, and (for the sake of the hypothetical) it is presumable that the consequences will be the same (probability of spouse discovering/probability of escalating the affair/etc.). Although there may be self-interested reasons not to be the one who cheats, John has no specific moral reason not to at this point on consequentialism.
I think that something like this will be entailed by any plausible view. As has been shown before, if you think that you shouldn’t commit some heinous act to prevent more instances of that act, it results in the conclusions that perfect beings should hope you act wrongly, that you should actively deprive perfect people of options, and more.
Now, applied to this scenario specifically, there’s a very obvious reason not to have the affair. The other guy who would have had the affair would have been a disloyal schmuck, such that his affair is less bad—it doesn’t break the kind of significant trust that one ought to have. Ultimately, one shouldn’t just take good actions—they should do what will, over the course of their life, make things go best. Valuing your marriage non-instrumentally will be the best way to do that.
Additionally, the view that takes the wrongness here as irreducible yields an implausible result. Suppose that you know that you’ll have two affairs at some future point—you’re just a schmuck. However, if you have an affair now, you’ll prevent Jim from having two affairs. Similarly, if Jim has an affair now, he’ll prevent you from having two affairs. You’re both better off if you each have the affair now—both morally and prudentially—but the view in question would hold that you’re acting wrongly. The correct morality shouldn’t generate these weird prisoner’s dilemmas—ones that are supposed to arise only because agents are self-interested.
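The structure of the two-affair case can be laid out as a simple payoff computation (a hypothetical sketch with stipulated numbers, counting "affairs committed" as the bad to be minimized):

```python
# Toy payoffs for the two-affair case: each agent will commit 2 affairs later
# unless the *other* agent has an affair now, which prevents those 2.
# Having an affair now counts as 1 affair for the agent who has it.

def affairs(you_now: bool, jim_now: bool) -> tuple[int, int]:
    """(your total affairs, Jim's total affairs) under each choice of 'affair now'."""
    yours = (1 if you_now else 0) + (0 if jim_now else 2)
    jims = (1 if jim_now else 0) + (0 if you_now else 2)
    return yours, jims

print(affairs(True, True))    # (1, 1): one affair each
print(affairs(False, False))  # (2, 2): two affairs each
```

Mutual "affair now" yields one affair each instead of two—fewer wrongs for both agents—yet a view on which the affair's wrongness is irreducible condemns exactly the choice that leaves both parties morally and prudentially better off.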
There’s also a very plausible heuristic-based explanation of our intuitions about these cases. When something is wrong in all real-world cases, it’s very easy to imagine it being wrong, even if we somehow stipulate away those real-world constraints. Additionally, for reasons I’ve explained before, these intuitions will not be very good evidence against utilitarianism.
The rest just objects to rule utilitarianism—a view I agree is wrong—and quotes Huemer’s cases, which I’ve replied to in my earlier articles. Thus, while I think this is a decent attempt at arguing against utilitarianism, and is fundamentally the right approach to arguing against it, none of the specific objections succeed.
Regarding scalar utilitarianism:
I've been aware for some time that utilitarianism can be formulated in a way that involves no obligations, but that creates a problem of its own. If there are no obligations, what are we punishing people for, when we punish people?