0 Why we modern utilitarians don’t use Bentham’s argument for utilitarianism
Answer: Because it was garbage—here it is.
1
(I have not read Hare, so this might be a bit off. He was also an accursed non-cognitivist, so…)
Hare’s basic argument was something like this. Morality must be universalizable by definition. If a person makes a moral claim that only benefits themselves in an unprincipled way, that wouldn’t count as morality. So morality has to apply generally. Well, if morality applies generally, then we have to weigh everyone’s interests equally and maximize, for everyone, what selfish people maximize for themselves. Sort of like the earlier argument I presented. Thus, Hare thinks that morality is about acting as one would if one experienced all possible experiences. This seems to fit both the concept of universalizability and the concept of morality. If Jeffrey Dahmer lived the lives of his victims, he wouldn’t kill them. Hare argued that this resulted in preference maximization. I think Hare was wrong about that; I’ve argued previously that we have reason to maximize our happiness. However, Hare was on the right track about universalizability.
2
I’m just going to rip off Katarzyna de Lazari-Radek and Peter Singer
“Sidgwick finds three principles that meet these requirements.
• Justice requires us to treat similar cases alike, or as Sidgwick puts it: ‘… whatever action any of us judges to be right for himself, he implicitly judges to be right for all similar persons in similar circumstances’.
• Prudence tells us that we ought to have ‘impartial concern for all parts of our conscious life’, which means giving equal consideration to all moments of our own existence. We may discount the future because it is uncertain, but ‘Hereafter as such is to be regarded neither less nor more than Now.’
• Benevolence, like prudence, considers the good of the whole, rather than of a mere part, but in this case it is not our own good, but universal good. Hence, Sidgwick says, the principle of benevolence requires us to treat ‘the good of any other individual as much as his own, except in so far as he judges it to be less, when impartially viewed, or less certainly knowable or attainable by him’. This principle of benevolence is, for Sidgwick, the basis for utilitarianism, although for the principle to lead to hedonistic utilitarianism, we still need an argument saying that pleasure or happiness, and nothing else, is intrinsically good.”
Each of these principles seems very plausible and is argued for extensively here.
3 Yet another way of deriving utilitarianism
Suppose you accept
1) Hedonism, which says that how well one’s life goes for them is determined by the happiness they experience during it.
2) Anti-egalitarianism, which says that the distribution of happiness is morally irrelevant.
3) Pareto optimality, which says that we should take actions that are better for some and worse for none, regardless of consent.
These axioms are sufficient to derive utilitarianism. The action that maximizes happiness could be made Pareto optimal by redistributing the gains. Anti-egalitarianism says that redistributing the gains has no morally significant effect on the situation. If Pareto improvements should be taken, and the utilitarian action is morally indistinguishable from a Pareto improvement, then utilitarian actions should be taken.
This can be illustrated with the trolley problem. In the trolley problem it would be possible to make flipping the switch Pareto optimal by redistributing the gains. If all of the people on the track gave half of the happiness they’d experience over the course of their lives to the person on the other side of the track, flipping the switch would be Pareto optimal: everyone would be better off. The person on the other side of the track would have 2.5 times the good experience that they would otherwise have had, and the other people would each have half a life’s worth of good experience more than they would otherwise have had. Thus, if all the axioms are defensible, we must be utilitarians.
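The arithmetic behind this redistribution can be checked with a quick sketch. Note the 5-vs-1 setup and the scale (1.0 = one full life’s worth of good experience) are illustrative assumptions of the standard trolley case, not anything deeper:

```python
# Illustrative check that flipping the switch plus redistribution
# is a Pareto improvement, assuming a standard 5-vs-1 trolley case
# and measuring welfare in "lifetimes of happiness" (1.0 = one life).

def outcomes(flip_switch, redistribute):
    """Return (welfare of the five on the track, welfare of the one)."""
    if not flip_switch:
        return [0.0] * 5, 1.0        # five die, the one lives a full life
    five, one = [1.0] * 5, 0.0       # five live, the one dies
    if redistribute:
        five = [0.5] * 5             # each of the five gives up half a life
        one = 0.0 + 5 * 0.5          # the one receives 2.5 lives' worth
    return five, one

no_flip = outcomes(False, False)
flip_redist = outcomes(True, True)

# Pareto improvement: everyone at least as well off, someone strictly better.
assert all(a > b for a, b in zip(flip_redist[0], no_flip[0]))  # 0.5 > 0.0
assert flip_redist[1] > no_flip[1]                             # 2.5 > 1.0
```

Every party strictly gains relative to not flipping, which is what the premise of Pareto optimality needs in order to get traction here.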
Hedonism was defended here.
Anti-egalitarianism can be defended in a few ways. The first supporting argument (Huemer 2003) can be paraphrased (and modified slightly) as follows.
Consider two worlds. In world 1, one person has 100 units of utility for 50 years and then 50 units of utility for the following 50 years; a second person has 50 units of utility for the first 50 years but 100 units of utility for the next 50 years. In world 2, both people have 75 units of utility for all of their lives. These two worlds are clearly equally good; everyone has the same total amount of utility. Morally, in world 1, the first 50 years are just as good as the last 50 years: in both, one person has 100 units of utility and the other has 50. Thus the value of world 1 equals twice the value of its first 50 years. World 1 is just as good as world 2, so the first 50 years of world 1 are half as good as world 2. The first 50 years of world 2 are likewise half as good as the total value of world 2. Thus the first half of world 1, with greater inequality of utility, is just as good as the first half of world 2, with less inequality but the same total utility. This proves that the distribution of utility doesn’t matter. This argument is decisive and is defended at great length by Huemer.
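The bookkeeping in this argument is easy to get lost in, so here is a numerical check using the example’s own figures (utilities in units × years):

```python
# Numerical check of the paraphrased Huemer two-worlds argument.

# World 1: unequal within each half, equal totals across persons.
w1_first_half = [100 * 50, 50 * 50]   # person A, person B, years 1-50
w1_second_half = [50 * 50, 100 * 50]  # years 51-100
# World 2: everyone at 75 throughout.
w2_first_half = [75 * 50, 75 * 50]
w2_second_half = [75 * 50, 75 * 50]

total_w1 = sum(w1_first_half) + sum(w1_second_half)
total_w2 = sum(w2_first_half) + sum(w2_second_half)
assert total_w1 == total_w2           # the two worlds have equal total utility

# Each half of world 1 contains the same distribution {100*50, 50*50},
# so the first half is worth exactly half of world 1...
assert sum(w1_first_half) == total_w1 / 2
# ...and likewise for world 2.
assert sum(w2_first_half) == total_w2 / 2
# Hence the unequal first half of world 1 and the equal first half of
# world 2 carry the same total utility.
assert sum(w1_first_half) == sum(w2_first_half)
```

The last assertion is the payoff: an unequal distribution and an equal one come out exactly as good once the halving steps are granted.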
Another argument can be deployed for non-egalitarianism based on the difficulty of finding a viable method for valuing equality. If the value of one’s utility depends on equality, this runs into the spirits objection: it implies that if there were many causally inert spirits living awful lives, this would affect the relative value of giving people happiness. If there were one non-spirit person alive, this would imply that the value of granting them a desirable experience was diminished by the existence of spirits that they could not affect. This is not plausible; causally inert entities have no relevance to the value of desirable mental states.
This also runs into the Pareto objection: to the extent that inequality is bad by itself, a world with 1 million people with a utility of six could plausibly be better than a world with 999,999 people with a utility of six and one person with a utility of 28, given the vast inequality of the latter.
Rawls’ formulation doesn’t work: if we are only supposed to do what benefits the worst off, then we should neglect everyone’s interests except those of the horrific victims of the worst forms of torture imaginable. This would imply that we should bring to zero the quality of life of all people who live pretty good lives if doing so would marginally improve the quality of life of the worst-off human.
Rawls’ defense of this rule doesn’t work either. As Harsanyi showed, we would be utilitarians from behind the veil of ignorance. This is because the level of utility referred to as 2 utility is, by definition, the amount of utility which is just as good as a 50% chance of 4 utility. Thus, from behind the veil of ignorance we would necessarily value a 1/2 chance of 4 utility as equal to 2 utility, and always prefer it to certainty of 1.999 utility.
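Harsanyi’s point is just expected-utility maximization applied from behind the veil. A minimal sketch, using the essay’s own numbers (the gamble and the 1.999 sure thing are the example from the paragraph above):

```python
# Behind the veil of ignorance, an expected-utility maximizer evaluates
# a gamble by its probability-weighted utility (Harsanyi's argument).

def expected_utility(lottery):
    """lottery: list of (probability, utility) pairs summing to probability 1."""
    return sum(p * u for p, u in lottery)

gamble = [(0.5, 4.0), (0.5, 0.0)]   # 50% chance of 4 utility, else nothing
sure_thing = [(1.0, 1.999)]         # certainty of 1.999 utility

assert expected_utility(gamble) == 2.0
assert expected_utility(gamble) > expected_utility(sure_thing)
```

A maximin chooser, by contrast, would pick the sure thing, since the gamble’s worst case is 0; that is exactly the divergence between Rawls and Harsanyi at issue here.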
Rawls attempts to avoid this problem by supposing that we don’t know how many people are part of each class. However, it is not clear why we would add this assumption. The point of the veil is to make us impartial, but to provide us with all other relevant information. To the extent that we are not provided information about how many people are in each social class, that is because Rawls is trying to stack the deck in favor of his principle. Simple mathematics dictates that we not do that.
An additional objection can be given to the egalitarian view. On this view, if an action makes people who are well off much better off and people who are not well off slightly better off, the action is in some sense bad: because it increases inequality, it does something bad (though this might be outweighed by other good things). However, this is implausible. Making everyone better off by differing amounts is not even partially bad.
We have several reasons to distrust our egalitarian intuitions.
First, egalitarianism relates to politics, and politics makes us irrational. As Kahan et al. (2017) showed, greater mathematical ability made people less likely to correctly solve math problems when the problems were politicized and the correct answer cut against their politics.
Second, equality is instrumentally valuable according to utilitarianism: money given to poor people produces greater utility, given declining marginal utility. Our support for equality is easily explained as a utilitarian heuristic, and heuristic-driven moral judgments are often unreliable. It is not surprising that we would come to care intrinsically about something that is instrumentally valuable and whose pursuit is a good heuristic. We have similar reactions in similar cases.
Third, given the difficulty of calculating utility we might have our judgement clouded by our inability to precisely quantify utility.
Fourth, given that equality is very often valuable (an egalitarian distribution of cookies, money, or homes produces greater utility than an inegalitarian one), our judgment may be clouded by comparing utility to other things. Most things have declining marginal utility. Utility itself, however, does not.
Fifth, we may have irrational risk aversion that leads us to prefer a more equal distribution.
Sixth, we may be subject to anchoring bias, with the egalitarian starting point as the anchor.
Several more arguments can be provided against egalitarianism. First is the iteration objection. According to this objection, if we found out that half of all people had had a dream giving them unfathomable amounts of happiness, of which they had no memory, their further happiness would become less important. Egalitarianism says that the importance of further increases in a person’s happiness depends on how much happiness they’ve had previously, so to the extent that one had more happiness previously, even in a dream one can’t remember, one’s further happiness matters less.
The egalitarian could object that the only thing that matters is happiness that one remembers. However, this runs into a problem. Presumably, what matters to an egalitarian is total remembered happiness rather than average happiness; it would be strange to say that a person dying of cancer with months to live is less entitled to happiness than a person with greater average happiness but less lifetime total happiness. But if this is true, then increasing the happiness of one’s dream self is dramatically more important than increasing the happiness of one’s waking self: to the extent that they’ll forget their dream self, their dream self is a very badly off entity, very deserving of happiness. It would be similarly strange to prioritize helping dementia patients with no memory of most of their lives based on how well off they were during the periods of their lives which they can no longer recall.
A second argument can be called the torture argument. Suppose that a person has been tortured in ways more brutal than any other human, such that they are the worst-off human in history by orders of magnitude. From an egalitarian perspective, their happiness would be dramatically more important than that of others, given how poorly off they are. If this is true, then if we set their suffering to be great enough, it would be justified for them to torture others for fun.
A third argument can be called the non-prioritization objection. Surely any view which says that the happiness of poorly off people matters infinitely more than the happiness of well off people is false; if it were true, it would imply that sufficiently well off people could be brutally tortured to make poorly off people only marginally better off. Thus, the egalitarian merely draws the line at a lower level in terms of how much happiness for a poorly off person outweighs happiness for a well-off person. If this is true, non-egalitarianism ceases to have counterintuitive implications. The intuitive appeal of “it is wrong to bring one person from utility 1 to utility 0 in order to bring another person from utility 1 to utility 2” dissipates when alternative theories merely endorse “it is wrong to bring one person from utility 1 to utility 0 in order to bring another person from utility 1 to utility 5” (it could be more than 5; 5 is just an example). At that point utilitarians and egalitarians are merely haggling over the degree of the tradeoff.
We can now turn to the Pareto optimality premise which says we should take actions if they increase the happiness of some but don’t decrease the happiness of any others. This principle is deeply intuitive and widely accepted. It’s hard to imagine something being bad while making some people better off and no people worse off.
One might object to the Pareto principle with the consent principle, which says that an act is wrong if it violates consent even if it is better for some and worse for none. However, this runs into questions of what constitutes consent. For example, a person throwing a surprise party violates the consent of the person for whom the party is being thrown. Yet a surprise party is obviously not morally wrong if it makes everyone better off.
Similarly, if a pollutant were being released into the air and entering people’s lungs without consent, it would seem bad only if it were harmful. One might argue that we should only take Pareto optimal actions that don’t violate rights, yet views that privilege rights have already been discussed. Additionally, the areas where consent seems to matter are precisely those where one who does not consent can be seriously harmed. Consent to marriage is valuable because nonconsensual marriage would obviously be harmful. Yet to the extent that one is not harmed, it’s hard to imagine why their consent matters.
An additional objection can be given to rights-based views. Suppose that someone is unconscious and needs to be rushed to the hospital. It seems clear that they should be rushed there, even though transporting someone without consent is ordinarily seen as wrong. In a case like the one just stipulated, doing so increases happiness and thus is morally permissible.
Lots and lots of plausible arguments can be made for utilitarianism. This would be surprising if it were false. If you were judging different philosophical theories purely prior to case-specific reflection, utilitarianism would blow other theories out of the water. Utilitarianism, unlike other theories, is derived from plausible first principles. It doesn’t require constant patching up to accommodate specific intuitions about cases.
I think these arguments are super decisive. Not only does utilitarianism seem very intuitive at first; it’s supported by five independent sets of plausible principles. Prior to going into the specific thought experiments, our credence in utilitarianism should be super high. Soon we’ll see, however, that the case only gets stronger when we consider specific thought experiments.
This is supported by all of the theoretical virtues, overwhelming historical evidence, proofs based on axioms of rational reasoning combined with impartiality and the reasoning of a Nobel Prize-winning economist, and three other fully independent sets of axioms. This is already enough evidence to trump most moral intuitions, even if we have a lot of conjunctive intuitions to the contrary.
However, as we’ll see, every single time that our intuitions diverge from utilitarianism they can be independently proven wrong. We already did that with Huemer’s arguments. Dozens more thought experiments will fall to the blade of utilitarian reasoning.[1] Yog-Sothoth swims slowly, but he swims towards Bentham.[2]
[1] It’s a blade, okay. I had to try to make the tedious process of writing hundreds of pages about particular random thought experiments sound impressive, and blades are impressive. Also, like, pretend the blade is wielded by a dragon or something to make it extra impressive.
[2] This is a modified version of a quote by a pretty odious individual who’s not worth mentioning.