“Even if Adam and Eve were leading fantastic lives in the Garden of Eden, the world was not perfect.”
We know that the language used to describe moral questions affects how people view them. But the relationship also runs the other way: a conclusion is rarely dubbed repugnant, or labeled with some other term of abuse, unless it's counterintuitive. Yet one conclusion in particular has been given the title “the repugnant conclusion,” because people find it so deeply counterintuitive.
To his credit, Michael Huemer accepts the repugnant conclusion. Being an intuitionist, Huemer attempts to weave his many idiosyncratic intuitive judgements into one grand web of correct beliefs. This process has led Huemer to throw out the rejection of the repugnant conclusion, on the grounds that it can't be reconciled with other reasonable principles. He is right to do so.
The so-called repugnant conclusion has a long and storied history. Parfit formulated it originally, identifying it as an apparently absurd implication of total utilitarian views. Yet Parfit was troubled by it, because he realized that rejecting it required rejecting some other very plausible moral principles.
This article will show that the so-called repugnant conclusion isn't repugnant at all. Accepting it is a demand of rationality: the repugnant conclusion is the rational extension of reasoning properly about tradeoffs. The ethical reasoning of the average person is full of nonsense that pollutes accurate judgements. Mistaken judgements, like belief in the act-omission distinction, belief in rights, and the prioritization of torture avoidance over dust specks, spread like a cancer throughout the edifice of moral beliefs. The rejection of the repugnant conclusion is bad enough by itself, a totally wrong ethical judgement.
Yet it's far more pernicious than that. Those who reject the repugnant conclusion, when confronted with the smorgasbord of plausible ethical principles that lead to it, start holding other insane moral beliefs. Not content with causing merely one bad judgement, the rejection of the repugnant conclusion begins splitting rapidly, spreading throughout the body, affecting the lungs, heart, and skin. Some allow their rejection to reach stage 3 status, risking the total collapse of the ethical system.
This article is intended to provide the cure to the rapidly spreading disease of rejecting the repugnant conclusion. Like chemo, this cure may produce some negative side effects, leaving people deeply shaken. Yet it is necessary if we want to survive as a species and avoid a truly popular and pernicious moral failure mode. The repugnant conclusion, once given as a knockdown argument against utilitarian reasoning, is now widely conceived of as a difficult puzzle to solve. Avoiding it is impossible if we want to retain other reasonable ethical beliefs about the world.
My proof
So what is this so-called repugnant conclusion? It holds that, by the lights of utilitarianism, there is necessarily some number of people with lives barely worth living (10^40, let's say) whose existence would make the world better than the existence of trillions of people living great lives would. There is some number of people whose lives consist of getting a backrub and then disappearing who possess more moral worth than quadrillions of people living unimaginably good lives. Many people find this counterintuitive.
How can we argue for accepting the repugnant conclusion? Well, we can take a similar approach to the one taken in the previous section. Suppose that we have one person who is extremely happy all of the time, and that they live 1000 years. Surely it would be better to create 100 people with great lives who live 999 years than one person who lives 1000 years. We can now repeat the process: 100,000 people living 998 years would surely be better than 100 living 999 blissful years. Once we get down to one day and some ungodly large number of people (10^100, for example), we can go down to hours and minutes. To deny the repugnant conclusion, one would have to argue for something even more counterintuitive, namely that there's some firm cutoff. Suppose we say the firm cutoff is at 1 hour of enjoyment. One would then have to say that infinitely many people having 59 minutes of enjoyment matter far less morally than one person having a 1 in a billion chance of having an hour of enjoyment.
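The structure of this spectrum argument can be sketched numerically. Here is a minimal sketch in Python, assuming welfare is just person-years of bliss and using illustrative numbers (a uniform 100-fold population increase per step, rather than the varying factors above):

```python
# Illustrative spectrum argument: at each step, trade one year of lifespan
# per person for a 100-fold larger population. Total welfare (person-years
# of bliss) strictly increases, so each world is better than the last.
people, years = 1, 1000
totals = []
for _ in range(5):
    totals.append(people * years)
    people *= 100
    years -= 1

assert all(earlier < later for earlier, later in zip(totals, totals[1:]))
print(totals)  # [1000, 99900, 9980000, 997000000, 99600000000]
```

Iterating far enough drives per-person welfare arbitrarily low while the total keeps climbing, which is exactly the trade the repugnant conclusion describes.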
Another argument
Another argument can be made for the conclusion. Most of us would agree that one very happy person existing would be worse than 7 billion barely happy people existing. If we just compare those states of the universe, iterated 1 trillion times, we conclude that 7×10^21 people with barely happy lives matter more morally than 1 trillion people with great lives. To deny this, one could claim that there is some moral significance to the number seven billion, such that the moral picture changes when we iterate the comparison a trillion times. Yet this seems extremely counterintuitive. Suppose we were to discover large numbers of happy aliens that we can't interact with. It would be strange for that to change our conclusions about population ethics. The morality of bringing about new people with varying levels of happiness should not be contingent on causally inert aliens.
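The multiplication behind this iteration is worth spelling out (a quick check using the numbers from the paragraph above):

```python
# Each "block" comparison: 7 billion barely happy people beat 1 very happy
# person. Iterating the comparison a trillion times scales both sides.
barely_happy_per_block = 7 * 10**9   # 7 billion barely happy people
very_happy_per_block = 1             # one very happy person
blocks = 10**12                      # iterate a trillion times

barely_happy = barely_happy_per_block * blocks
very_happy = very_happy_per_block * blocks

assert barely_happy == 7 * 10**21    # 7x10^21 barely happy lives
assert very_happy == 10**12          # a trillion very happy lives
```

If each block-level comparison favors the barely happy people, nothing about stacking a trillion blocks side by side should flip the verdict.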
Huemer’s proof
Huemer has given another argument for the repugnant conclusion. To quote him, suppose we accept
“The Benign Addition Principle: If worlds x and y are so related that x would be the result of increasing the well-being of everyone in y by some amount and adding some new people with worthwhile lives, then x is better than y with respect to utility.
Non-anti-egalitarianism: If x and y have the same population, but x has a higher average utility, a higher total utility, and a more equal distribution of utility than y, then x is better than y with respect to utility.
Transitivity: If x is better than y with respect to utility and y is better than z with respect to utility, then x is better than z with respect to utility”
we must accept the repugnant conclusion. Huemer goes on to explain why these necessitate RC, writing “To see how these principles necessitate the Repugnant Conclusion, consider three possible worlds (figure 1):
World A: One million very happy people (welfare level 100).
World A+: The same one million people, slightly happier (welfare level 101), plus 99 million new people with lives barely worth living (welfare level 1).
World Z: The same 100 million people as in A+, but all with lives slightly better than the worse-off group in A+ (welfare level 3).
A+ is better than A by the Benign Addition Principle, since A+ could be produced by adding one unit to the utility of everyone in A and adding some more lives that are (slightly) worthwhile. Z is better than A+ by Non-anti-egalitarianism, since Z could be produced by equalising the welfare levels of everyone in A+ and then adding one unit to everyone's utility. Therefore, by Transitivity, Z is better than A. Analogous arguments can be constructed in which world Z has arbitrarily small advantages in total utility; as long as Z has even slightly greater total utility than A, we can construct an appropriate version of A+ that can be used to show that Z is better than A. This suggests that we should embrace not only (RC), but the logically stronger Total Utility Principle: For any possible worlds x and y, x is better than y with respect to utility if and only if the total utility of x is greater than the total utility of y.”
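For concreteness, the totals in the quoted passage can be checked directly. This arithmetic is mine, not Huemer's; his argument runs through Benign Addition and Non-anti-egalitarianism, with the totals simply agreeing with those verdicts:

```python
# Total utility of Huemer's three worlds, using the quoted welfare levels.
A = 1_000_000 * 100                        # one million people at level 100
A_plus = 1_000_000 * 101 + 99_000_000 * 1  # same people at 101, plus 99M at 1
Z = 100_000_000 * 3                        # all 100 million people at level 3

assert A < A_plus < Z
print(A, A_plus, Z)  # 100000000 200000000 300000000
```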
Huemer goes on to defend each premise at length. Each premise is supported by extremely compelling arguments, which I won’t rehash.
Arrhenius’ proof
Arrhenius has his own proof that the repugnant conclusion must be accepted if we accept certain other very reasonable axioms. Let’s look at the axioms.
A) The Dominance Principle: If population A contains the same number of people as population B, and every person in A has higher welfare than any person in B, then A is better than B
This is obviously correct. If we have two worlds and everyone is better off in world A than world B then A is better than B. Duh.
B) The Addition Principle: If it is bad to add a number of people, all with welfare lower than the original people, then it is at least as bad to add a greater number of people, all with even lower welfare than the original people.
This is also obvious. If it's bad to bring a person with negative 4 utility into existence, then it's at least as bad to bring ten people with negative 8 utility into existence. Also duh.
C) The Non-Anti-Egalitarianism Principle: A population with perfect equality is better than a population with the same number of people, inequality, and lower average (and thus lower total) welfare.
This is also obvious. A population of people who all have 10 utility is better than a same-sized population with more inequality and an average utility of 5.
D) The Minimal Non-Extreme Priority Principle: There is a number n such that an addition of n people with very high welfare and a single person with slightly negative welfare is at least as good as an addition of the same number of people but with very low positive welfare.
E) The rejection of the sadistic conclusion. When adding people without affecting the original people's welfare, it can’t be better to add people with negative welfare rather than positive welfare.
Together, these axioms require us to accept the repugnant conclusion.
Other views just suck
Huemer gives his response to six rival views. I’m going to give my own response to those, while drawing heavily on Huemer. What are the rival views?
A) The average utility view which says that we should maximize average utility
This one has lots of problems.
1 If we think that there's even a .001% chance that we're a brain in a vat, we should basically be egoists, because in expectation increases in our own welfare increase average utility by a lot. If there's a .001% chance that I'm a brain in a vat, then increasing my utility from 50 to 100 would be better in expectation than increasing the well-being of tens of thousands of people.
2 It's bizarre and ad hoc. It has the strange implication that, if two people in different possible worlds are both in caves, totally isolated from the rest of the world, but one exists in a world with a trillion people and the other in a world with a billion people, it's much more important to increase the utility of the second person, because doing so raises average utility more. It also implies that, if there were lots of causally inert spirits floating around with utility of zero, that would make it much less important to make people's lives better.
3 It would say that if everyone in the world has positive utility of 10!!!, it would be actively bad to bring people into the world with positive utility of 10!!. This is not plausible. Bringing people into existence whose positive experience dwarfs all good experiences in the history of the world is a good thing, even if there are lots of other even happier people.
4 It would say that if the average utility was negative 10!!! it would be morally good to bring lots of people with utility negative 10!! into existence. This is deeply implausible. Bringing people into existence with no good experiences who experience more misery than the sum total of all suffering that has been experienced in the history of the world is bad, even if other people have worse lives.
5 It violates the independence of non-interacting agents criterion, which says that agents who can't interact with or causally affect each other at all should not affect the goodness of each other's actions. Even if there were lots of aliens with positive utility a lightyear away, that shouldn't affect the desirability of making people's lives better.
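The expected-value arithmetic behind objection 1 can be made concrete. Here is a sketch with illustrative figures; the world population and the number of beneficiaries are my assumptions, not the author's:

```python
# If I'm a brain in a vat, I'm the only mind, so raising my utility by 50
# raises average utility by 50. Otherwise I'm one of ~8 billion people.
p_vat = 0.00001             # the 0.001% chance from objection 1
population = 8 * 10**9      # assumed world population if I'm not envatted

# Option 1: raise my own utility from 50 to 100.
ev_selfish = p_vat * 50 + (1 - p_vat) * (50 / population)

# Option 2: raise the utility of 10,000 other people by 1 each
# (they only exist if I'm not envatted).
ev_altruist = (1 - p_vat) * (10_000 / population)

# On the average view, the selfish option wins by orders of magnitude.
assert ev_selfish > 100 * ev_altruist
```

Even a tiny vat probability dominates, because in the vat scenario the divisor collapses to one.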
B) Critical level views say that happiness below a certain threshold is not morally good. For example, if one has utility below 10, that is not morally good.
This view has problems of its own. The critical level theorist can hold either that bringing people into existence with utility below the threshold (with utility 8, for example, below a threshold of 10) is morally neutral or that it is actively bad. Each option runs into problems.
If they say that bringing people into existence with utility below the critical threshold is morally neutral, then if one could bring a million people into existence with utility 9 or a million people into existence with utility zero, one should flip a coin, since the two options are equally good. This is not plausible.
There's also a problem relating to the sharp jump at the threshold. If the threshold is at 10, these views have to hold that bringing lots of people into existence with utility 9.999999999999999999999999999 is morally neutral, but bringing a person into existence with utility 10 is good. If each minute of existence for a hypothetical new being contains 1 unit of utility, then this view would hold that creating beings that are happy for 10 minutes before disappearing is good, but bringing beings into existence that are happy for 9 minutes and 59 seconds is morally neutral. This is not at all plausible.
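The discontinuity can be sketched directly, assuming the threshold of 10 used above and 1 unit of utility per happy minute:

```python
# Critical-level view with a sharp threshold: additions below the
# threshold count as morally neutral, additions at or above it as good.
THRESHOLD = 10

def verdict(happy_minutes: float) -> str:
    utility = happy_minutes * 1  # 1 unit of utility per happy minute
    return "good" if utility >= THRESHOLD else "morally neutral"

print(verdict(10))         # good
print(verdict(9 + 59/60))  # morally neutral
```

One second of happy existence flips the moral status of the addition entirely, which is the implausible jump.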
They might also hold the view that bringing people with utility below the threshold into existence is actively bad. This view runs into several problems.
1 It would say that, if the threshold is again 10, a world with lots of people with utility of 9.9999999999 would be worse than a world in which a small number of people are being brutally tortured, since enough slightly bad things add up to be very bad.
2 It violates the natural presumption that bringing people into existence with good lives is good.
3 It would say that creating a world with googolplex people all of whom have utility 8 would be the worst act in history by a wide margin. This is implausible.
C) Narveson thinks that world A is only better than world B if it’s better for someone. This runs into lots of problems.
1 Suppose we can either bring 100,000 people into existence with mediocre lives or 100,000 different people into existence with great lives. Narveson’s view would imply those are equal, because neither is better for anyone.
2 On this view it would be fine to bring people into existence with miserable lives, because they wouldn't otherwise have existed. As Huemer points out, it would also hold that bringing millions of people with terrible lives into existence is good if it makes currently existing people's lives better: that would be better for some, and the miserable people wouldn't have existed otherwise.
D) Variable-Value theories say that bringing a new life into existence has diminishing marginal utility. The first person with utility of 5 is more valuable than the 101st.
These views are also terribly flawed.
1 They imply that the value of bringing a person into existence is largely contingent on how many other people exist. This is not plausible. As Huemer notes, Parfit observes that “research in Egyptology cannot be relevant to our decision whether to have children.” Yet these views imply that the number of ancient Egyptians partly determines how valuable it is to have children, because new people have declining marginal value.
2 These views imply that if there were lots of people with really terrible lives, with utility of negative 10!!!, it would be good to bring new people into existence with utility negative 10!!. These views are intended to behave like total views (which say we should maximize total utility) for small populations, while behaving like average views for large populations. But this means that for large populations, all the problems that afflict average utilitarian views still apply.
E) We could adopt perfectionist views, which Parfit did, according to which there are some things that are so great that they are categorically more valuable than other smaller goods. I recall Parfit saying that no number of lizards basking in the sun can be as good as the experience of true love.
This view runs into a similar objection to the one discussed in the previous post. Surely, true love is not infinitely more valuable than love that's an iota less intense. And surely that love is not infinitely more valuable than love an iota less intense than it. We can keep replacing one instance of immense love with many instances of slightly less valuable love, until we conclude that lots of 1 minute romantic flings are collectively as valuable as one instance of true love. And surely millions of lizards enjoying basking in the sun can be as good as a 1 minute romantic fling. This view also runs afoul of non-anti-egalitarianism: surely a world where one person was truly in love but millions of others lived mediocre lives would be worse than one where lots of people had slightly less true love, but higher average and total utility, and greater equality.
F) We could hold the view that some intense pleasures are lexically better than less intense pleasures. This runs into the same issues that afflict perfectionism. If each pleasure is not infinitely better than another pleasure that's slightly less good than it, then some large number of trivial pleasures have more total goodness than a small number of very intense pleasures.
Biases
This demonstrates that our anti-repugnant-conclusion intuitions fall prey to a series of biases.
1 We are biased, having won the existence jackpot. A non-existing person who could have lived a marginally worthwhile life would perhaps take a different view.
2 We have a bias towards population sizes roughly similar to the number of people who exist today.
3 Humans are bad at conceptualizing large numbers. We saw that in the previous post.
Conclusion
If you're not convinced yet, read Huemer's article. It's one of the most compelling philosophy papers that I've ever read. To avoid simply rehashing Huemer's paper, I haven't been able to go as deeply into the arguments as Huemer did. However, it should be clear by now that rejecting the repugnant conclusion has enormous costs. I've kept this post briefer than many of my posts, because the points that are worth making have either been made by the previous post or by Huemer.
So let’s rename the repugnant conclusion. A proper understanding of it reveals that it should be called the nice and pleasant conclusion.
So let's recap the cumulative case for utilitarianism. Utilitarianism is supported by 5 independent sets of axiomatic principles, does better historically, blows other theories out of the water in terms of all of the theoretical virtues, and has conclusions that have been independently borne out in ten independent cases, all of which are used as arguments against utilitarianism. This case is already overwhelming. But the case for utilitarianism is just getting started. There is far more to come.
Huemer’s writing is at its best when he’s agreeing with utilitarianism.