Here I will present a series of arguments favoring utilitarianism, in a debate over whether utilitarianism is the truth, the way, and the light.
1 Theoretical Virtues
When deciding upon a theory we want something with great explanatory power, scope, simplicity, and clarity. Utilitarianism does excellently by these criteria. It’s incredibly simple, requiring just a single moral law saying one should maximize the positive mental states of conscious creatures; it explains all of ethics, applies to all of ethics, and has perfect clarity. Thus, utilitarianism starts out ahead based on its theoretical virtues. Additionally, it does well in terms of its prior plausibility, being immensely intuitive. It just seems obvious that ethics should be about making everyone’s life as good as possible. There are other theoretical virtues, detailed here, by which utilitarianism excels, though I haven’t room to go into detail.
2 History As A Guide
History favors utilitarianism. If we look at historical atrocities, they were generally opposed by utilitarians. Utilitarian philosophers were often on the right side of history. Bentham favored decriminalizing homosexuality, the abolition of slavery, and protections for non-human animals. Mill was the second member of Parliament to advocate for women's suffrage and argued for gender equality. In contrast, philosophers like Kant harbored far less progressive views, supporting the killing of people born to unmarried parents, favoring racial supremacy, and believing masturbation to be a horrifically wrong “unmentionable vice.”
Additionally, the atrocities of slavery, the Holocaust, Jim Crow, and all others have come from excluding a class of sentient beings from moral consideration, something prevented by utilitarianism.
If utilitarianism were not the correct moral view, it would be a bizarre coincidence that it both has the mechanism to rule out every historical atrocity and that utilitarians are consistently hundreds of years ahead of their time when it comes to important moral questions.
3 A syllogism
These premises, if true, prove utilitarianism.
1 A rational egoist is defined as someone who does only what produces the most good for themselves.
2 A rational egoist would do only what produces the most happiness for themselves.
3 Therefore, only happiness is good (for selves who are rational egoists).
4 The types of things that are good for selves who are rational egoists are also good for selves who are not rational egoists, unless they have unique benefits that only apply to rational egoists.
5 Happiness does not have unique benefits that only apply to rational egoists.
6 Therefore, only happiness is good for selves who are or are not rational egoists.
7 All selves either are or are not rational egoists.
8 Therefore, only happiness is good for selves.
9 Something is good if and only if it is good for selves.
10 Therefore, only happiness is good.
11 We should maximize good.
12 Therefore, we should maximize only happiness.
I shall present a defense of each of the premises.
Premise 1 is true by definition.
Premise 2 states that a rational egoist would do only what produces the most happiness for themselves. This has several supporting arguments.
1 When combined with the other premises, we conclude that what a self-interested person would pursue is what should be maximized generally. However, it would be extremely strange to maximize other things, like virtue or rights, and no one holds that view.
2 Any agent that can suffer matters. Imagine a sentient plant who feels immense agony as a result of its genetic formation, and who can’t move or speak. The plant is harmed by its pain, despite not having any rights violated or possessing any virtues. Thus, being able to suffer is a sufficient condition for moral worth.
We can consider a parallel case of a robot that does not experience happiness or suffering. Even though this robot acts exactly like us, it would not matter absent the ability to feel happiness or suffering. These two intuitions combine to form the view that hedonic experience is a necessary and sufficient condition for mattering. This serves as strong evidence for utilitarianism—other theories can’t explain this necessary connection between hedonic value and mattering in the moral sense.
One could object that rights, virtue, or other non-hedonistic goods are an emergent property of happiness, such that one only gains them when one can experience happiness. However, this is deeply implausible, requiring strong emergence. As Chalmers explains, weak emergence involves emergent properties that are merely reducible to interactions of lower-level properties. For example, chairs are reducible to atoms, given that we need nothing more to explain the properties of a chair than knowing the ways that atoms function. Strongly emergent properties, by contrast, are not reducible to lower-level properties. Philosophers tend to think there is at most one strongly emergent thing in the universe, so if deontology requires strong emergence, that’s an enormous cost.
3 As we’ll see, theories other than hedonism are disastrously bad at accounting for what makes someone well off. However, I’ll only attack them if my opponent presents one, because there are too many to criticize.
4 Hedonism seems to unify the things that we care about for ourselves. If someone is taking an action to benefit themselves, we generally take them to be acting rationally if that action brings them joy. This is how we decide what to eat, how to spend our time, and who to be in a romantic relationship with—and it is the reason people spend their time doing things they enjoy rather than picking grass.
The rights that we care about are generally conducive to utility, we care about the right not to be punched by strangers, but not the right to not be talked to by strangers, because only the first right is conducive to utility. We care about beauty only if it's experienced, a beautiful unobserved galaxy would not be desirable. Even respect for our wishes after our death is something we only care about if it increases utility. We don’t think that we should light a candle on the grave of a person who’s been dead for 2000 years, even if they had a desire during life for the candle on their grave to be lit. Thus, it seems like for any X we only care about X if it tends to produce happiness.
5 Consciousness seems to be all that matters. As Sidgwick pointed out, a universe devoid of sentience could not possess value. The notion that for something to be good it must be experienced is a deeply intuitive one. Consciousness seems to be the only mechanism by which we become acquainted with value.
6 Hedonism seems to be the simplest way of ruling out posthumous harm. Absent hedonism, a person can be harmed after they die, yet this violates our intuitions.
7 As Pummer argues, non hedonism cannot account for lopsided lives.
If we accept that non hedonic things can make one’s life go well, then their life could have a very high welfare despite any amount of misery. In fact, they could have an arbitrarily good life despite any arbitrary amount of misery. Thus, if they had enough non hedonic goodness (E.G. knowledge, freedom, or virtue), their life could be great for them, despite experiencing the total suffering of the holocaust every second. This is deeply implausible.
8 Even so much as defining happiness seems to require saying that it’s good. The thing that makes boredom suffering but tranquility happiness is that tranquility has a positive hedonic tone and is good, unlike boredom. Thus, positing that joy is good is needed to explain what joy even is. Additionally, we have direct introspective access to the badness of pain when we experience it.
9 Only happiness seems to possess desire independent relevance. A person who doesn’t care about their suffering on future Tuesdays is being irrational. However, this does not apply to rights—one isn’t irrational for not exercising their rights. If we’re irrational to not care about our happiness, then happiness has to objectively matter.
10 Sinhababu argues that reflecting on our mental states is a reliable way of forming knowledge as recognized by psychology and evolutionary biology—we evolved to be good at figuring out what we’re experiencing. However, when we reflect on happiness we conclude that it’s good, much like reflecting on a yellow wall makes us conclude that it’s bright.
11 Hedonism is very simple, holding that there’s only one type of good thing, making it prima facie preferable.
Let’s take stock. So far we’ve established that:
1 Hedonism is intuitively prima facie plausible, and it’s extremely simple.
2 Beings matter if and only if they have the capacity for hedonic value.
3 Other theories need to posit strong emergence, which exists virtually nowhere else in the universe.
4 Hedonism unifies the things we care about, such as knowledge, friendship, and virtue.
5 We have a reliable mechanism for identifying the goodness of pleasure, unlike the goodness of other things, and there’s a plausible evolutionary explanation of the goodness of pleasure, unlike other things.
6 Positing pleasure’s goodness is necessary to explain what pleasure even is.
7 Only hedonic value seems to have desire-independent relevance.
8 Consciousness seems intuitively to be all that matters.
9 Hedonism explains why one can’t be harmed or benefited after death.
10 Pleasure is the most obvious value; experiences seem valuable if and only if they produce pleasure, while other theories have to posit other strange things that make people better off.
11 When the hedonic values are sufficiently great, they dominate all other considerations.
The non-hedonist has to deny that pleasure is the good, while accepting that pleasure is a prerequisite for there being good and dominates goodness considerations at the extremes, no matter what other facts are present.
Premise 3 says therefore only happiness is good (for selves who are rational egoists). This follows from the previous premises.
Premise 4 is trivial. For things to be better for person A than for person B, they must have something about them that produces extra benefits for person A, by definition.
Premise 5 says happiness does not have unique benefits that only apply to rational egoists. This is obvious.
Premise 6 follows from the previous premises.
Premise 7 is trivial.
Premise 8 says that only happiness is good for selves. This follows from the previous premises.
Premise 9 says that something is good if and only if it is good for selves.
It seems hard to imagine something being good, but being good for literally no one. If things can be good while being good for no one, there would be several difficult entailments that one would have to accept, such as that there could be a better world than this one despite everyone being worse off.
People only deny this premise if they have other commitments, which I’ll argue against later.
Premise 10 says that only happiness is good. It follows from the previous premises.
Premise 11 says we should maximize good.
First it’s just trivial that if something is good we have reason to pursue it, so the most good thing is the thing we have the most reason to pursue.
Second, this is deeply intuitive. When considering two options, it is better to make two people happy than one, because it is more good than merely making one person happy. “Better” is just a synonym for “more good,” so if an action produces more good things, it is better that it is done.
If there were other considerations that counted against doing things that were good, those considerations would be bad, and thus would still relate to considerations of goodness.
Third, as Parfit has argued, the thing that makes things go best is the same as the thing that everyone could rationally consent to and that no person could reasonably reject.
Fourth, an impartial observer should hope for the best state of the world to come into being. However, it seems clear that an impartial observer should not hope for people to act wrongly. Therefore, the right action should bring about the best world.
Fifth, as Yetter Chappell has argued, agency should be a force for good. Giving a perfectly moral agent control over whether some action happens shouldn’t make the world worse. In the trolley problem, for example, the world would be better if the switch flipped as a result of random chance, divorced from human action. However, if it is wrong to flip the switch, then giving a perfectly moral person control over whether the switch flips would make the world actively worse. Additionally, it would be better for a perfectly moral person to have a muscle spasm that results in the switch flipping than to have total control of their actions. It shouldn’t be better from the point of view of the universe for perfectly benevolent agents to have muscle spasms resulting in them taking actions that would have been wrong if they’d voluntarily taken them.
Sixth, as Yetter Chappell has argued, a maximally evil agent would be an evil consequentialist, trying to do as much harm as possible even when that involves not violating rights; so a maximally good agent would be the opposite.
Premise 12 says therefore, we should maximize only happiness. This follows from the previous premises.
4 Harsanyi’s Proof
Harsanyi’s argument is as follows.
Ethics should be impartial—it should be a realm of rational choice that would be undertaken if one was making decisions for a group, but was equally likely to be any member of the group. This seems to capture what we mean by ethics. If a person does what benefits themselves merely because they don’t care about others, that wouldn’t be an ethical view, for it wouldn’t be impartial.
So, when making ethical decisions one should act as they would if they had an equal chance of being any of the affected parties. Additionally, every member of the group should be VNM rational and the group as a whole should be VNM rational. This means that their preferences should have the following four features of rational decision theory: completeness, transitivity, continuity, and independence. These are slightly technical, but they’re basically universally agreed upon.
These combine to form a utility function, which represents the choice worthiness of states of affairs. For this utility function, it has to be the case that a one half chance of 2 utility is equally good to certainty of 1 utility. 2 utility is just defined as the amount of utility that’s sufficiently good for a 50% chance of it to be just as good as certainty of 1 utility.
So now as a rational decision maker you’re trying to make decisions for the group, knowing that you’re equally likely to be each member of the group. What decision making procedure should you use to satisfy the axioms? Harsanyi showed that only utilitarianism can satisfy the axioms.
Let’s illustrate this with an example. Suppose you’re deciding whether to take an action that gives 1 person 2 utility or 2 people 1 utility. The above axioms show that you should be indifferent between them. You’re just as likely to be each of the two people, so from your perspective it’s equivalent to a choice between a 1/2 chance of 2 utility and certainty of 1 utility. We saw before that those are equally valuable: 2 utility is just defined as the amount of utility for which a 1/2 chance of it is just as good as certainty of 1 utility. So we can’t just go the Rawlsian route and try to privilege those who are worst off. That is bad math!! The probability theory is crystal clear.
Now let’s say that you’re deciding whether to kill one to save five, and assume that each of the 6 people will have 5 utility. Well, from the perspective of everyone, all of whom have to be impartial, the choice is obvious. A 5/6 chance of 5 utility is better than a 1/6 chance of 5 utility. It is better by a factor of five. These axioms combined with impartiality leave no room for rights, virtue, or anything else that’s not utility function based.
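The arithmetic behind both examples can be sketched in a few lines of Python (a toy illustration; the utilities and probabilities are the ones stipulated in the text, and `expected_utility` is just a hypothetical helper name):

```python
def expected_utility(outcomes):
    """Expected utility from behind the veil: outcomes is a list of
    (probability, utility) pairs, where the probabilities reflect that
    you are equally likely to be any member of the affected group."""
    return sum(p * u for p, u in outcomes)

# Example 1: give 1 of 2 people 2 utility, vs. give both people 1 utility.
option_a = expected_utility([(0.5, 2), (0.5, 0)])  # you might be either person
option_b = expected_utility([(1.0, 1)])
assert option_a == option_b  # the veil makes you exactly indifferent

# Example 2: kill 1 to save 5, where each of the 6 people would have 5 utility.
save_five = expected_utility([(5/6, 5), (1/6, 0)])  # 5 of the 6 survive
let_die = expected_utility([(1/6, 5), (5/6, 0)])    # only 1 of the 6 survives
assert save_five > let_die  # better, by a factor of five
```

Nothing in the calculation leaves room for weighting the worst off extra: any such weighting breaks the indifference the axioms require in example 1.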
This argument shows that morality must be the same as universal egoism—it must represent what one would do if they lived everyone’s life and maximized the good things that were experienced throughout all of the lives. You cannot discount certain people, nor can you care about agent centered side constraints.
5 Wrong About Rights: Why Utilitarianism, Unlike Its Critics, Isn’t
Rights are both the main reason people would deny premise 9 of the earlier syllogism and the main objection to utilitarianism. Sadly, the doctrine of rights is total nonsense.
1 Everything that we think of as a right is reducible to happiness. For example, we think people have the right to life, and the right to life increases happiness. We think people have the right not to let other people enter their house, but we don’t think they have the right not to let other people look at their house. The only difference between shooting bullets at people and shooting soundwaves at them (i.e., making noise) is that one causes a lot of harm and the other does not. Additionally, we generally think it would be a violation of rights to create huge amounts of pollution, such that a million people die, but not a violation of rights to light a candle that kills no one. The difference is just in the harm caused. If enshrining as rights things that we currently don’t think of as rights began to maximize happiness, we would think that they should be recognized as rights. For example, we don’t think it’s a violation of rights to look at people, but if every time we were looked at we experienced horrific suffering, we would think it was a rights violation to look at people.
2 If we accept that rights are ethically significant, then there’s some number of rights violations that could outweigh any amount of suffering. For example, suppose that there are aliens who will experience horrific torture that gets slightly less unpleasant for every leg of a human that they grab, without the humans’ knowledge or consent, such that if they grab the legs of 100 million humans the aliens will experience no torture. If rights are ethically significant, then the aliens grabbing the legs of the humans, in ways that harm no one, would be morally bad: the sheer number of rights violations would outweigh the torture. However, this doesn’t seem plausible. It seems implausible that aliens should have to endure horrific torture so that we can preserve our sanctity in an indescribable way. If rights matter, a world with enough rights violations where everyone is happy all the time could be worse than a world where everyone is horrifically miserable all of the time but where there are no rights violations.
If my opponent argues for rights then I’d challenge him to give a way of deciding whether something is a right that is not based on hedonic considerations.
3 A reductionist account is not especially counterintuitive and does not rob us of our understanding or appreciation of rights. It can be analogized to the principle of innocence until proven guilty. That principle is not literally true. A person’s innocence until demonstration of guilt is a useful legal heuristic, yet a serial killer is guilty even if their guilt has not been demonstrated.
4 An additional objection can be given to rights. We generally think that it matters more not to violate rights oneself than to prevent other rights violations. We intuitively think that we shouldn’t kill one innocent person to prevent two murders. I shall give a counterexample to this. Suppose we have people in a circle, each with two guns that will each shoot a person next to them. Each person can either prevent two other people from being shot by other people’s guns, or prevent their own gun from shooting one person. If we take the view that one’s foremost duty is to avoid one’s own rights violations, then each person would be obligated to prevent their own gun from shooting the one person. However, if everyone prevents one of their own guns from firing rather than two of other people’s guns, then everyone in the circle ends up shot. If instead it’s more important to save as many lives as possible, then each person prevents two other guns from firing, and no one is shot. The world where everyone is shot seems clearly worse.
Similarly, if what matters most is one’s own violations of rights, then one should try to prevent one’s own rights violations at all costs. If that’s the case, then if a malicious doctor poisons someone’s food and then realizes the error of their ways, the doctor should try to prevent that person from eating the food and having their rights violated, even at the expense of other people being poisoned in ways uncaused by the doctor. If the doctor has the ability either to prevent one person from eating the food they poisoned, or to prevent five other people from consuming food poisoned by others, they should prevent the one person from eating the food poisoned by them. This seems deeply implausible.
5 We have decisive scientific reasons to distrust belief in rights, which is an argument for utilitarianism generally. Greater reflection and less emotional hindrance make people much more utilitarian, as has been shown by research by Koenigs, Greene et al., and Fornasier. This evidentially supports the reliability of utilitarian judgements.
6 Rights run into a problem based on the aims of a benevolent third-party observer. Presumably a third-party observer should hope that you do what is right. However, a third-party observer, if given the choice between a world in which one person kills one other to prevent 5 indiscriminate murders and a world with the 5 indiscriminate murders, should obviously choose the world in which the one person murders to prevent 5. An indiscriminate murder is at least as bad as a murder done to try to prevent 5 murders, and 5 indiscriminate murders are worse than one indiscriminate murder; therefore, by transitivity, a world with one murder done to prevent 5 should be judged better than a world with 5 indiscriminate murders. If world A should be preferred by a benevolent impartial observer to world B, then it is right to bring about world A. All of the moral objections to bringing about world A would count against world A being better; if despite those objections world A is preferred, then it is better to bring about world A. Therefore, one should murder one person to prevent 5 murders. This contradicts the notion of rights.
7 We can imagine a case with a very large series of rings of moral people. The innermost circle has one person, the second innermost has five, the third innermost has twenty-five, and so on; each person corresponds to 5 people in the next circle out. There are a total of 100 circles. Each person is given two options.
1 Kill one person.
2 Give the five people corresponding to you in the next circle out the same two options you were just given.
Each person in the hundredth circle is given only the first option, if the buck doesn’t stop before reaching them.
The deontologist would have two options. First, they could stipulate that a moral person would choose option 2. However, if this is the case, then a cluster of perfectly moral people would bring about 1.5777218 x 10^69 murders, when the alternative actions could have resulted in only one murder. This seems like an extreme implication.
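The figure above is just 5^99: circle k contains 5^(k-1) people, so if everyone passes the choice outward, each of the 5^99 people in the hundredth circle is left with only option 1. A quick sanity check of the arithmetic:

```python
# Circle k (counting from 1) contains 5**(k-1) people,
# so circle 100 contains 5**99 people.
people_in_circle_100 = 5 ** 99

# If every moral person chooses option 2, the buck reaches circle 100,
# where each person's only option is to kill one person.
murders = people_in_circle_100

print(format(murders, ".7e"))  # prints 1.5777218e+69, the figure in the text
```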
Second, they could stipulate that you should kill one person. However, the deontologist holds that you should not kill one person to prevent five people from being murdered. If this is true, then you certainly shouldn’t kill one person to give five perfectly moral people two options, one of which is killing one person. Giving perfectly moral beings more options that they don’t have to choose cannot make the situation morally worse. If you shouldn’t kill one person to prevent five murders, you certainly shouldn’t kill one person to prevent five things that are judged to be at most as bad as murders by a perfectly moral being.
8 Huemer considers a case in which two people are being tortured, prisoners A and B. Mary can reduce A’s torture by some amount by increasing B’s torture by only half as much. She can do the same thing in the other direction for B. If she does both, this clearly would be good—everyone would be better off. However, on the deontologist’s account, both acts are wrong: torturing one person to prevent greater torture for another is morally wrong.
If it’s wrong to cause one unit of harm to prevent 2 units of harm to another, then an action which does this for two people, making everyone better off, would be morally wrong.
(More paradoxes for rights can be found here and here, but I can’t go into all of them here).
6 Some Cases Other Theories Can’t Account For
Many other theories are totally unable to grapple with the moral complexity of the world. Let’s consider six cases.
Case 1
Imagine you were deciding whether or not to take an action. This action would cause a person to endure immense suffering—far more suffering than would occur as the result of a random assault. This person literally cannot consent. This action probably would bring about more happiness than suffering, but it forces upon them immense suffering to which they don’t consent. In fact, you know that there’s a high chance that this action will result in a rights violation, if not many rights violations.
If you do not take the action, there is no chance that you will violate the person’s rights. In fact, absent this action, their rights can’t be violated at all. In fact, you know that the action will have a 100% chance of causing them to die.
Should you take the action? On most moral systems, the answer would seem to be obviously no. After all, you condemn someone to certain death, cause them immense suffering, and they don’t even consent. How is that justified?
Well, the action I was talking about was giving birth. After all, those who are born are certain to die at some point. They’re likely to have immense suffering (though probably more happiness). The suffering that you inflict upon someone by giving birth to them is far greater than the suffering that you inflict upon someone if you brutally beat them.
So utilitarianism seems to naturally—unlike other theories—provide an account of why giving birth is not morally abhorrent. This is another fact that supports it.
Case 2
Suppose one is deciding between two actions. Action 1 would have a 50% chance of increasing someone’s suffering by 10 units and action 2 would have a 100% chance of increasing their suffering by 4 units. It seems clear that one should take action 2. After all, the person is better off in expectation.
However, non utilitarian theories have trouble accounting for this. If there is a wrongness to violating rights that exists over and above the harm caused, then, assuming we say the badness of violating rights is equivalent to 8 units of suffering, action 1 would be better (a ½ chance of 18 units of badness, an expectation of 9, is less bad than a certainty of 12).
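Under the stipulation that a rights violation adds a fixed 8 units of badness on top of the harm itself, the expected-badness arithmetic comes out as stated (a toy calculation; the numbers are the ones stipulated above):

```python
RIGHTS_VIOLATION_BADNESS = 8  # stipulated badness over and above the harm

# Action 1: 50% chance of inflicting 10 units of suffering (plus the violation).
action_1 = 0.5 * (10 + RIGHTS_VIOLATION_BADNESS)  # expected badness = 9
# Action 2: certainty of 4 units of suffering (plus the violation).
action_2 = 1.0 * (4 + RIGHTS_VIOLATION_BADNESS)   # expected badness = 12

# The rights view perversely ranks action 1 as less bad...
assert action_1 < action_2
# ...even though, counting only the harm, action 2 is better in expectation.
assert 1.0 * 4 < 0.5 * 10
```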
Case 3
Suppose you stumble across a person who has just been wounded. They need to be rushed to the hospital if they are to survive. If they are rushed to the hospital, they will very likely survive. The person is currently unconscious and has not consented to being rushed to the hospital. Thus, on non utilitarian accounts, it’s difficult to provide an adequate account of why it’s morally permissible to rush a person to the hospital. They did not consent, and the rationale is purely about making them better off.
Case 4
An action will make everyone better off. Should you necessarily do it? The answer seems to be yes, yet other theories have trouble accounting for that if it violates side constraints.
Case 5
When the government taxes us, is that objectionable theft? If not, why not? Consequentialism gives the only satisfactory account of political authority.
Case 6
Suppose one was deciding whether to press a button. Pressing the button would have a 50% chance of saving someone, a 50% chance of killing someone, and would certainly give the presser five dollars. Most moral systems, including deontology in particular, would hold that one should not press the button.
However, Mogensen and MacAskill argue that this situation is analogous to nearly everything that happens in one’s daily life. Every time a person gets in a car, they affect the distribution of future people by changing very slightly the time at which lots of other people have sex. They also change traffic distributions, potentially reducing and potentially increasing the number of people who die in traffic accidents. Thus, every time a person gets in a car, there is a decent chance they’ll cause an extra death, a high chance of changing the identities of lots of future people, and a decent chance they’ll prevent an extra death. Given that most such actions produce fairly minor benefits, driving is quite analogous to the button scenario described above.
Given that any act which changes the traffic by even a few milliseconds will affect which of the sperm out of any ejaculation will fertilize an egg, each time you drive a car you causally change the future people that will exist. Your actions are thus causally responsible for every action that will be taken by the new people you cause to exist. The same is true if you ever have sex; you will change the identity of a future person.
Tangentially related, but how would you respond to the argument that utilitarians must be committed to some form of anti-natalism? At the very least it seems to me the case for unimaginable suffering in the case of both wild and food animals is simply overwhelming. I would rather never have existed than be a wild or domesticated food animal if the choice were given to me. Nature often strikes me as a sort of utilitarian hell.
> Rights are reducible to happiness.
Other way around. Happiness is reducible to rights. We would hold that you shouldn’t kill someone even if God (me) told you they would experience -0.01 utility in the future. Or take forced wireheading, or many other examples. At best this just begs the question of util being true.
> Concentric Circles.
You know why this is complete nonsense. Adding the word “corresponding” to the formulation doesn’t help. In fact it makes it worse, as a “corresponding” choice to the people in the next circle would be a choice that implicated 99 more circles, same as choice 1. That’s logically contradictory, and this is null.