Here, I’ll argue that utilitarianism is true, meaning that if you’re deciding between any two actions, you should take the one that generates more well-being. This is an attempt to provide a concise case for utilitarianism, in case you don’t want to read a few hundred thousand words.
1 The veil of ignorance
Here’s something that intuitively seems to capture the essence of morality and instruct us on what to do (we’ll see additional reasons to think it captures the essence of morality in the following sections). Imagine deciding on courses of action without knowing who you were, acting as if you had an equal probability of being each person affected. So if you were deciding whether to punch someone, you’d act as if you were just as likely to be the person who gets punched as the person who does the punching.
In this case, you’d act as a utilitarian—for example, you’d support killing one to save five, because this would leave everyone better off in expectation, each five times more likely to have been saved than killed. If we accept that you should do things that are better for everyone in expectation, then you should act as a perfect utilitarian.
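To make the expected-value point concrete, here is a minimal sketch of the calculation; the specific utility numbers are illustrative assumptions rather than part of the argument.

```python
# Illustrative veil-of-ignorance calculation: behind the veil you are equally
# likely to be any of the affected people, so you evaluate each option by its
# average outcome across them. The utility values are made up.

def expected_utility_behind_veil(outcomes):
    """Expected utility if you are equally likely to be each affected person."""
    return sum(outcomes) / len(outcomes)

SAVED, KILLED = 1.0, -1.0  # hypothetical utilities for being saved vs. killed

kill_one_to_save_five = [SAVED] * 5 + [KILLED]  # five people saved, one killed
do_nothing = [KILLED] * 5 + [SAVED]             # five people killed, one spared

print(expected_utility_behind_veil(kill_one_to_save_five))  # about 0.67
print(expected_utility_behind_veil(do_nothing))             # about -0.67
```

Since everyone behind the veil faces the same odds, whichever option has the higher expected value for you is simply the option with the greater total well-being among the affected people.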
You might worry that this doesn’t prove total utilitarianism, just average utilitarianism: if you’re acting to maximize your expected payout, you’d maximize the average, not the total. But this follows only if we exclude possibly non-existent beings from our calculus, and excluding them is wrong. Additionally, average utilitarianism is obviously false.
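Here is a small sketch, using made-up welfare numbers, of how the total and average views can come apart; it is only meant to illustrate the distinction the worry above turns on.

```python
# Made-up welfare levels, purely to show how total and average utilitarianism
# can rank the same pair of populations in opposite ways.

def total_welfare(population):
    return sum(population)

def average_welfare(population):
    return sum(population) / len(population)

population_a = [10]          # one very well-off person
population_b = [9, 9, 9, 9]  # four people, each slightly less well off

print(total_welfare(population_a), total_welfare(population_b))      # 10 vs 36: total prefers B
print(average_welfare(population_a), average_welfare(population_b))  # 10.0 vs 9.0: average prefers A
```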
2 Experiencing all
Morality is about what we have most reason to do. The types of reasons we’re talking about here are, by definition, impartial reasons—even if it were rationally permitted to only act in one’s own self-interest, this would not be the morally right thing to do. So morality is about doing what one has most impartial reason to do.
Here’s a way to figure out what we have most impartial reason to do: imagine living everyone’s life. You’d experience every pain, every joy, every sorrow ever experienced: the good, the bad, the medium. If you acted rationally, then, since no one else could be affected, you’d act in your own interests; and since your interests would now include everyone’s, you’d count everyone’s interests equally and maximize them. In other words, you’d act as a utilitarian. Thus, one who was perfectly moral would act as a utilitarian.
3 A proof that you should do all the things prescribed by utilitarianism
(This comes from this article—skip to “Rejecting Pareto” to see more of it)
In a recent post, I made an argument for the conclusion that one should create a person with positive utility even if that involves inflicting some lesser amount of suffering on existing people. But I realized that one can design a very similar argument for utilitarianism full stop. This appeals to a few premises.
Weak Pareto: If some act is better for someone and worse for no one, it is good overall.
Deflection: If one can redirect a threat that they’ve created, such that it will cause less harm, they ought to.
Combination: If one should take each act in some sequence, then if some act N produces the same effect as taking each act in the sequence, they should take act N.
Normative Decision-Tree Separability: The moral status of the options at a choice node does not depend on parts of the decision tree other than those that can be reached from that node.
Expansion Improvability: The fact that a choice enables future choices that are worth taking does not count against it.
This doesn’t quite get us to utilitarianism, but it does allow us to prove that one should take all actions with positive utility, which is something only utilitarians accept.
To illustrate how these premises entail that one should do anything with positive utility: suppose there is some act that causes Jane a small amount of negative utility while giving John a slightly greater amount of positive utility. Say that Jane will be pricked by a sharp object to give John some greater amount of utility. By Weak Pareto, it would be good if John were pricked by the sharp object and given the greater amount of utility. Then, by Deflection, it would be good if the threat were deflected from Jane to John. By Combination, then, an act that pricks Jane to give more pleasure to John is good. One might think that future or past actions make the act undesirable, but this is ruled out by Normative Decision-Tree Separability and Expansion Improvability.
Anything that is utility-maximizing could be made into a Pareto improvement by being turned into a combination of a threat and a utility boost, after which the threat could be redirected. Thus, this gets us all the way to utilitarianism.
I think each of these premises is super plausible—you can see more discussion of them here. Deflection seems to be taken for granted in trolleyology, weak Pareto is supremely plausible, and the other three are, I think, pretty trivial. If I were a non-utilitarian, I’d probably want to reject Pareto.
There are lots of other arguments for utilitarianism—see here and here, for example.
4 The best argument
Utilitarianism is often unintuitive. People think this means it’s false. But we’d expect the correct moral view to be unintuitive sometimes, because our moral intuitions are fallible. Even if our intuitions were right 95% of the time, the correct moral view would still conflict with them in the remaining cases. So the mere fact of unintuitiveness is not good evidence against utilitarianism. In contrast, the fact that there are lots of different plausible axiomatic derivations of utilitarianism is good evidence for it; that would be surprising if it were false.
If utilitarianism were correct, we’d expect careful reflection to bring our intuitions more in line with it. We’d also expect there to often be good arguments for accepting its specific surprising conclusions, because most true things have some good arguments in their favor. This turns out to be true: every single time one carefully investigates a non-utilitarian intuition, it turns out to be indefensible. I’ll just give a few examples here, though elsewhere I’ve documented this claim in more detail.
a Torture vs dust specks
Utilitarianism says that one torture is less bad than 100^100 slightly irritating dust specks, but this seems unintuitive to people. Dust specks are barely bad at all—how could things that are barely bad add up to be worse than a torture? Fortunately, there are lots of easy ways to show that the utilitarian conclusion is true.
Suppose we start with a torture and make it slightly less intense while affecting one hundred times as many people. That seems worse. Suppose we do that again; that seems worse still. We can keep doing this until the harm is reduced to the level of a dust speck but affects vastly more people. At each step the situation gets worse, and the end product is an enormous number of harms each as mild as a dust speck, while the thing at the beginning was a single torture. So if we accept that when A is worse than B, and B is worse than C, then A must be worse than C (which we should), we must accept that enough dust specks are worse than one torture.
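Here is a toy version of that sequence, assuming purely for illustration that badness adds linearly across people; the starting intensity, the ten percent reduction per step, and the number of steps are all made-up numbers.

```python
# Toy model of the continuum argument; every number is an illustrative assumption,
# and badness is assumed to add linearly across the people affected.
intensity = 1_000_000.0  # hypothetical badness of the original torture
count = 1                # number of people affected

for step in range(1, 11):
    intensity *= 0.9  # each person suffers slightly less
    count *= 100      # a hundred times as many people suffer
    print(f"step {step:2d}: per-person {intensity:12,.0f}  "
          f"people {count:.0e}  total badness {intensity * count:.2e}")
```

The per-person harm keeps shrinking toward dust-speck levels while the total keeps growing, which is all the continuum argument needs.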
Here’s another argument: one dust speck is bad. This is obvious. Infinitely many dust specks are infinitely worse than one dust speck, because they don’t affect each other’s badness. So infinitely many dust specks are infinitely bad. But a torture is only finitely bad. So infinitely many dust specks are worse than a torture.
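Written out as a bit of arithmetic, with ε standing for a hypothetical per-speck badness and T for the torture’s finite badness, the argument is just:

```latex
\underbrace{\varepsilon + \varepsilon + \varepsilon + \cdots}_{\text{infinitely many specks, each independently bad}}
\;=\; \infty \;>\; T \;=\; \text{badness of the torture}
```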
Finally, there are good arguments for why we shouldn’t trust our intuitions about torture vs dust specks. We’re very bad at grasping large numbers and compounding small ones.
For more arguments for these conclusions, see here.
b Rights
People often think that humans have rights, so that it’s wrong to kill one person even to save multiple others.
Richard has a paradox arguing against that here. Definitely check out Richard’s excellent paper on the topic; he presses the argument in a more sophisticated way, giving it overwhelming force.
Suppose you’re deciding whether or not to kill one person to prevent two killings. The believer in rights holds that you shouldn’t. However, it can be shown that a third party should hope that you do. To illustrate this, suppose that a third party is deciding between you killing one person to prevent the two killings and you simply joining the killing and killing one person indiscriminately. Surely they should prefer that you kill one to prevent two killings rather than kill one indiscriminately.
Thus, from the standpoint of a third party, you killing one to prevent two killings would be no worse than you killing one indiscriminately. But a third party should prefer you killing one indiscriminately to two other people each killing one indiscriminately. Therefore, by transitivity, they should prefer you killing one to prevent two killings to the two killings happening; thus they should prefer that you kill one to prevent two. To see this, let’s call you killing one indiscriminately YKOI, you killing one to prevent two killings YKOTPTK, and the two killings happening TKH.
YKOTPTK ≻ YKOI ≻ TKH, where ≻ means “is preferable to.” Thus, the deontologist should prefer a world where you sometimes do the wrong thing; a perfectly moral third party should hope you do the wrong thing.
There are lots of other arguments for the same conclusion, some of which I think are even more forceful but harder to state simply. I basically think there’s no way to salvage a belief in rights, and every believer in rights has had to either bite the bullet on bizarre conclusions or admit they have no solution to the various paradoxes that arise.
c Other examples
I won’t lay out the other arguments; I’ll just give a list of various intuitions, with links to articles showing that they’re indefensible:
People often think that others deserve things—but I think this argument disproves the idea of desert.
People often think that equality matters intrinsically over and above the impact on utility, as well as that it’s more important to benefit the least well off. But this is hard to believe.
People often think that it’s good to make people happy but not to make happy people, but this produces unacceptable normative implications.
Conclusion
There you have it. Utilitarianism is supported by lots of very convincing arguments, and the arguments against it fail. Every time one systematically investigates a non-utilitarian intuition, it is revealed to be lacking.
So let’s all be ~~Keynesians~~ utilitarians now.
Under utilitarianism, it is good for a gang of 1000 extremely sadistic people to kidnap and torture an innocent person. I'd like to see your defense of this.
It might be a misunderstanding on my part, but it seems like there’s an inconsistency in argument 3. Your application of Deflection assumes that less harm is done if John suffers the prick instead of Jane. But your application of Combination seems to assume that the effect is the same no matter who gets pricked. I might be misreading your application of Combination, though.
On the torture vs. dust speck point, I think it also helps your point to consider cases involving risk. If torture is worse than any number of irritating dust specks, is it okay to bring about a 0.0000001 probability of torture to prevent 100^100 dust specks? If not, at what point is the probability of torture low enough that it becomes okay to take action against the dust specks? Any cutoff seems arbitrary and, as Michael Huemer points out in “Lexical Priority and the Problem of Risk,” seems to lead to paradoxes.