Average utilitarianism says that we should maximize average utility. It is rarely taken seriously because it is insane. #totalutilitygang.
This paper sets out to defend average utilitarianism (AU) against total utilitarianism (TU). That makes it a helpful read, given how rarely AU is defended in print in light of its many insane implications.
Perhaps the main difficulty for a proponent of TU is what Derek Parfit has called the Repugnant Conclusion (“RC”)2 —an alleged reductio against TU—and the philosophical landscape of population ethics is largely a function of the different ways in which philosophers handle the RC obstacle. One camp of philosophers endorses TU and “accepts” the RC, arguing that the RC does not serve as a reductio against TU because the RC is not in fact repugnant.3 Philosophers in another camp think that the RC is repugnant and that it does follow from TU, and they thus propose theories that are variations of TU that avoid implying the RC.4 Other philosophers still—though not utilitarians— argue that the RC can be avoided by denying the transitivity of “better than.”5
I’ve addressed this here.
I think that all of these approaches are misguided, because I think that the main tenet of TU (and of most variations of TU) is wrong: It is not the case that, all else equal, a larger population of happy people is better than a smaller one. While this conviction has surely been shared by others, I think that others prematurely abandon it when the going seemingly gets tough. In this paper, I will go against the grain and argue that, if we are going to adopt a utilitarian approach to population ethics, 6 then AU is the theory we should endorse.
Wild!
The main intuition motivating AU—and distinguishing it from TU—is the idea that while we want people to be as happy as possible, we do not necessarily have reason to bring into existence additional happy people. In other words, when it comes to happy people, more is not better. This is what Jan Narveson has in mind when he writes: “[T]here is no moral argument at issue here. How large a population you like is purely a matter of taste, except in cases where a larger population would, due to indirect effects, be happier than the first . . .”13 Further, Narveson explains that it is not good that people exist “because their lives contain happiness.” Instead, “happiness is good because it is good for people.” 14
But average utilitarianism can’t accommodate that intuition. It would still say it’s good to make happy people if it increases average happiness. It just holds (bizarrely) that whether that’s good depends on the happiness of other people. I presented a knock-down argument against Narveson here.
“We are in favor of making people happy, but neutral about making happy people.”17 In other words, not only does Narveson think that the size of population is a non-moral matter, but he thinks that adding an additional person to a population is something that we should generally consider to be an ethically neutral event.
Average utilitarians can’t accommodate that. In fact, they have to hold that you should bring miserable people into existence whenever the existing average is even more miserable, and that you shouldn’t bring happy people into existence whenever the existing average is even happier.
Additionally, Narveson’s view fails on its own terms. If creating happy people is morally neutral, then a choice between creating a person with utility of 100000000000000000000000000 and creating a person with utility of 2 would be a coinflip, because future happiness gives you no obligation to bring it about. This is absurd.
Pressman goes on to show a devastating problem for the simple asymmetry, and I agree with his objection.
An AU theorist should hold that nothing has unconditional intrinsic value, but that a person’s happiness has intrinsic value conditional on his existence. In order to assess these claims, let’s consider first how they apply to an existing person, and next, how they apply to a potential person. In terms of intrapersonal motivation, this account prescribes the same self-interested behavior as would an account on which happiness had unconditional intrinsic value. Both would prescribe that an existing individual maximize his happiness. Are both accounts equally plausible for the intrapersonal context? As Mill argues, the best proof of something’s desirability is the fact that we tend to desire it.39 Given that the individual in question does exist, though, the “conditional” clause is satisfied and what remains is that, for the individual, happiness is intrinsically good. Thus, the conditional value and the unconditional value accounts are each plausible in an intrapersonal context, and they each pass Mill’s test equally well. As for the case of creating additional people, the two accounts of intrinsic value come apart. While the account according to which happiness is an unconditional good would espouse bringing about additional individuals if they are happy, the account according to which the value of happiness is conditional on existence will not espouse bringing about additional individuals, maintaining that population size is not a moral matter, but rather a matter of taste.
One obviously has to exist to be happy. The intuition that I’d deny is that the happiness of other people matters for the value of your happiness. (I also disagree with Mill’s argument, but that’s a story for another day.)
It is important to note something that seems to have gone somewhat ignored41 in the population ethics literature that is critical of AU: This task of articulating the population relevant for ethical inquiry is not a question only for AU. It is also a question for TU—AU and TU are both impersonal moralities that seek to maximize an abstract value for a population. As such, the question of which population is relevant for ethical inquiry is a question that is distinct and separable from the question of how to assess what is best for that population. Just as an AU account is incomplete without a description of the relevant population, so too is a TU account. TU’s common prescription to “maximize happiness” or to “maximize happiness of the population in question” leaves undetermined which population’s happiness should be maximized.
All sentient beings’ happiness should be maximized.
The author then makes the point that total utilitarianism includes all individuals in its calculus, even unaffected ones, but that unaffected individuals simply make no difference to the verdict.
Parfit and Huemer both argue that Benign Addition and the related cases give us reason to think that TU is more plausible than AU. How, they ask, can it be worse for there to be additional happy people (Mere Addition), and how can it still be worse to add additional happy people when the original person is made better off (Benign Addition)? To avoid confusion, in what follows I will discuss Benign Addition, but similar points apply to Mere Addition.
Benign Addition: A: (8, x, x, x) B: (9, 5, 5, 5)
Benign Addition Variation: A: (8, x, x, x) C: (5, 5, 5, 9)
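For reference, here’s the arithmetic on these three populations, using a couple of throwaway helper functions of my own and treating x as a person who never exists (a quick sanity check of the AU and TU verdicts, nothing more):

```python
# Quick check of the Benign Addition arithmetic (x = a person who never exists).
def au(utils):
    return sum(utils) / len(utils)  # average utility

def tu(utils):
    return sum(utils)               # total utility

A = [8]           # the lone existing person
B = [9, 5, 5, 5]  # existing person better off (8 -> 9), plus three new people at 5
C = [5, 5, 5, 9]  # existing person worse off (8 -> 5), plus three new people at 5, 5, 9

for name, world in [("A", A), ("B", B), ("C", C)]:
    print(name, "AU =", au(world), "TU =", tu(world))
# A: AU = 8.0, TU = 8; B: AU = 6.0, TU = 24; C: AU = 6.0, TU = 24.
# TU ranks both B and C above A; AU ranks A above both.
```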
The objection says that in Benign Addition, B is better than A, and the idea is that this is because of TU. My thought is that this has nothing to do with TU. Rather, I think this is likely because of the person-affecting intuition, which here suggests that B is better. One way to test this might be to consider Benign Addition Variation. Would it be better to reduce the welfare of an existing individual to bring into existence three additional happy individuals?
Yes!! If one finds this unintuitive, then they face the difficult task of denying that B is better than A, despite B being better for everyone and containing additional well-off people.
V.D.2. Reverse Egyptology

The same thing at work in Benign Addition seems to be at work in Reverse Egyptology as well: Person-affecting intuitions align with TU in Reverse Egyptology, but in similar cases that TU and AU would treat in the same way as Reverse Egyptology, person-affecting intuitions align with AU. Recall the Reverse Egyptology example:
Possible Future 1:
1A: Have the child: 5, 5, 6, 6 | 18 (AU = 8, TU = 40)
1B: Don’t have the child: 5, 5, 5, x | 18 (AU = 8.25, TU = 33)

Possible Future 2:
2A: Have the child: 5, 5, 6, 6 | 1 (AU = 4.6, TU = 23)
2B: Don’t have the child: 5, 5, 5, x | 1 (AU = 4, TU = 16)
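Spelling out how those figures are computed (the value after the | is the long-dead Egyptian, who is counted by both theories; the list representation below is just my own quick check):

```python
# Reverse Egyptology arithmetic; the last entry in each list is the ancient Egyptian.
worlds = {
    "1A (have child, Egyptian at 18)":       [5, 5, 6, 6, 18],
    "1B (don't have child, Egyptian at 18)": [5, 5, 5, 18],
    "2A (have child, Egyptian at 1)":        [5, 5, 6, 6, 1],
    "2B (don't have child, Egyptian at 1)":  [5, 5, 5, 1],
}
for name, utils in worlds.items():
    print(f"{name}: AU = {sum(utils) / len(utils):g}, TU = {sum(utils)}")
# TU says have the child in both futures (40 > 33 and 23 > 16);
# AU says have the child only in Future 2 (4.6 > 4), not in Future 1 (8 < 8.25).
```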
The objection aims to draw out our intuitions that, regardless of which of the two possible futures obtains, the same choice should be made—it is better to have the child. This is presented as an objection to AU and as evidence in favor of TU, because TU gives us this prescription, whereas AU does not. AU says that we should have the child if Future 2 will obtain, but that we should not have the child if Future 1 obtains. However, as with Benign Addition, I think that if we have the intuitions that Reverse Egyptology suggests we do, this is not because of TU, but rather because of person-affecting intuitions, which here suggest that having the child is better in both cases.
This is not true at all. The intuition is not about person-affectingness; rather, the intuition is that the well-being of fully unaffected parties thousands of years in the past doesn’t affect the goodness of having a child. If I’m deciding whether or not to have a kid, the happiness of a person far away who will never interact with me shouldn’t matter. If there were ghosts a galaxy away living happy lives, that would have no impact on whether I should have a kid. This is as strongly held as any intuition in ethics.
While perhaps it might seem odd to some to include the past in the population relevant for ethical inquiry, recall that I have already argued that those who are unaffected by an action seem to be similarly situated regardless of when they exist temporally.
They should be considered in the sense that, if they were affected, they would change the ethical status of an action. But if they aren’t affected, they shouldn’t change our all-things-considered verdict about the action.
Not only is one able to make betterness assessments of this sort, but one can also wish that things in the past were or were not the case. For example, one can wish that it was the case that the ancient Egyptians lived fulfilling lives.
True. However, the weird thing is the claim that the happiness of the ancient Egyptians affects whether or not I should have a child.
Thus, if we are to come to a conclusion about whether AU or TU is more plausible, I think we need to consider this impersonal case. The consideration of this impersonal case might lead some to favor TU and some to favor AU. But this is where the anti-more-is-better intuition (see Part II) comes into play, and in my opinion, without much opposition. Seemingly everyone who rejects AU—in the context of positive utility—rejects it because of absurdities it allegedly entails in cases where AU conflicts with the person-affecting intuition. Once AU and TU are compared impersonally, as they should be, AU’s treatment of cases of positive utility is difficult to reject.
I found it quite easy to reject. The anti-more-is-better intuition seems to hold only when there’s status quo bias. Most people seem to think that a billion people with utility of 5 is better than one person with utility of 5.1.
Perhaps the objection to AU that is regarded as the strongest is one that Parfit raised and termed Two Hells.50 I have yet to discuss it because it raises concerns of a different type from the alleged counterexamples already discussed: It addresses AU’s treatment of suffering. The objection goes as follows: Consider two possible hells. In Hell 1, ten people exist and suffer great agony. In Hell 2, there are ten million people who suffer ever so slightly less agony. AU will say that Hell 2 is better. Parfit and countless others, however, find it obvious that Hell 2 is far and away the worse of the two states of affairs.

I am certainly in the minority, but I think that others have been mistaken, and that AU provides an adequate assessment of the Two Hells case. It seems plausible to me that there is nothing wrong with its symmetrical treatment of cases of positive happiness and negative happiness. Narveson and Boonin-Vail, themselves, however, do not think that their slogans about the addition of happy people apply to the addition of unhappy people. But this asymmetry seems unwarranted. With unhappy people, just as with happy people, “How large a population you like is purely a matter of taste,” and “there is no moral argument at issue here.”51 It is not bad that unhappy people exist because their lives contain unhappiness; unhappiness is bad because it is bad for people. We should aim to reduce unhappiness for people, not reduce the number of people for a reduction of unhappiness.

This, however, amounts to little less than a bare assertion of the AU theorists’ view regarding the Two Hells case. Although I find the above point sufficient, most readers will not, and thus more needs to be said to show that the Two Hells objection is not decisive against AU. As such, the following discussion aims to provide the beginnings of a disarming (or at least of a weakening) of the Two Hells objection to AU.

VI.A. The Limited Force of the Two Hells Example

Although Larry Temkin is not a utilitarian, and although he makes it clear that he finds average utilitarianism to be particularly implausible,52 I think that a discussion of his, in Rethinking the Good, provides fodder for an AU theorist to respond to the Two Hells objection.53 According to Temkin, when comparing two states of affairs in which a burden of a very similar quality is imposed on two different groups, the number of people affected in each state of affairs is relevant—or, as he says, “additive aggregationist” reasoning is plausible. For example, according to Temkin, in a case like Two Hells, Hell 2 is much worse than Hell 1, and this is because the difference in quality of the burdens is very small, and this enables the number of individuals in Hell 2 to make the badness of Hell 2 outweigh the badness of Hell 1. Whether or not the badness of Hell 2 and Hell 1—at least with respect to the utility ideal, leaving open the possibility that ideals other than utility are valuable—is directly proportional to the total units of utility is an open question, but at the very least, Temkin argues that it is plausible that the number of affected parties is relevant. Further, he argues, a large enough disparity of people can outweigh the slight disparity in quality of the burden that cuts in the opposite direction. According to Temkin, however, additive aggregationist reasoning is not plausible for comparisons of two states of affairs in which there is a large disparity in the quality of a burden.
What Temkin means is that in comparisons of this sort, it might be the case that no number of people affected by the lesser burden will make that state of affairs worse than a state of affairs with a particular number of people affected by a severe burden. To use his example, there is no number—however large—of people afflicted by the mild annoyance of mosquito bites that would make such a state of affairs worse than a different state of affairs in which some small number of people undergo severe torture.

Temkin ultimately argues that the two above claims, combined with a continuity claim (asserting that a sequence of steps between cases with merely slight quality disparities would ultimately get one from the case of torture to a case of mosquito bites), and the claim that the better-than relation is transitive are inconsistent. He laments the need to reject any of these four claims, but he argues that we must reject the transitivity of the betterness relation. There are various reasons to doubt the plausibility of this position, but they are beyond the scope of this paper’s analysis. Instead, let’s focus on the implications of Temkin’s first and second claims for the Two Hells objection—assuming that his claims about people’s intuitions are fairly accurate, as seems likely to be the case.

Temkin’s discussion suggests that while people will generally side with the TU theorist in Parfit’s Two Hells case and in similar cases, people will generally side with the AU theorist in cases of suffering where either Hell 1 (or Hell 2) is compared to a Hell 3, where there are an enormous number of individuals each of whom has a utility level that is only slightly negative (e.g. due to mosquito bites or mild headaches).
I agree that people might have this intuition, but it can’t be maintained, for the same reasons one can’t reject the repugnant conclusion. If we accept the three principles Huemer uses, we are driven to the modified Two Hells case.
Start with a world containing one person with -10 utility. We’d surely agree that another world, in which that person has -11 utility and an additional person has -1 utility, would be worse, because it adds a new person with a bad life and lowers the existing person’s utility.

Now compare that to a world in which two people each have -6.5 utility, so there’s lower average utility and lower total utility, with the misery spread evenly. This world seems clearly worse still.

But there’s another intuitive principle: for any miserable world, there is a worse world containing more people whose utility is only slightly higher (i.e., slightly less negative). Accepting this is sufficient to get us to the reverse repugnant conclusion (the Hell 2 scenario discussed above).

If one person with -1000 utility is less bad than ten people with -950 utility, which is less bad than a hundred people with -900 utility, and so on, we can keep going until we reach an overall worse world with an enormous number of people whose utilities are only slightly negative.
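A minimal sketch of that chain, following the pattern above (multiply the population by ten and shave 50 units off each person’s misery at every step; the exact step sizes are just illustrative):

```python
# Sketch of the chain: each step multiplies the population by 10 and makes
# each person's life 50 units less bad. If every step is a worsening, the
# final huge, mildly unpleasant world is worse than the original small hell.
pop, utility = 1, -1000
while utility <= -50:
    print(f"{pop:>22,} people at {utility}: total utility = {pop * utility:,}")
    pop, utility = pop * 10, utility + 50
# 1 person at -1000 (total -1,000), 10 at -950 (total -9,500), 100 at -900, ...
# ending with 10**19 people at -50: per-person misery shrinks at every step
# while total utility grows astronomically more negative.
```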
(There are also other compelling arguments for this conclusion.)
However, there are other cases in which average utilitarianism gets disastrously wrong results.
We have 1 billion people with -100^100^100^100^100^100^100^100 utility. You have the choice of bringing an extra 100^100^100^100^1000 people into existence with average utility of -100^100^100^100^100^100^100^99. Should you do it? It would increase average utility, yet it still seems clearly wrong—as clearly wrong as anything. Bringing into existence miserable people who each experience more than the total suffering of the Holocaust every second is not a good thing, even if the people who already exist are slightly more miserable still.
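The mechanism behind this case (and several of the cases below) is just the weighted average: adding M people at level b to N people at level a moves the average to (Na + Mb)/(N + M), which lies strictly between a and b, so AU approves of any addition whose level is above the current average, however horrific that level is in absolute terms. A toy version with small stand-in numbers (the actual figures above are far too large to compute literally):

```python
# Toy stand-in for the case above: 1 billion people at -100, plus a vastly
# larger group at -99 (slightly less miserable than the existing average).
existing_n, existing_u = 10**9, -100
added_n, added_u = 10**15, -99

before = existing_u
after = (existing_n * existing_u + added_n * added_u) / (existing_n + added_n)
print(f"average before: {before}, after: {after:.6f}")  # -100 -> about -99.000001
# The average rises, so AU counts this enormous addition of miserable lives
# as an improvement to the world.
```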
It would also imply that if you became convinced that the average quality of life in the real world were very negative, and you could create a machine that spawns babies who fall into a fire and die, but who experience less negative utility than the average person, then you should do so, because it would raise average utility. This is absurd.
There is currently one person in existence with utility of 100^100^100. You can bring a new person into existence with utility of 100^100^10. Average utilitarianism would imply that not only should you not do this, but doing it would be the single worst act in history, orders of magnitude worse than the Holocaust in our world.
You are in the garden of Eden. There are 3 people, Adam (utility 5), Eve (utility 5), and God (utility 10000000000000000000000000000000000000000000000000000000000000000000000000000000000000^1000000000000000000000000000000000000000000000000000000000000000000000000000^100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000). Average utilitarianism would say that Adam and Eve existing was a tragedy and they should certainly avoid having children.
You’re planning on having a kid with utility of 100^100, waaaaaaaaaaaaaaaaaay higher than that of most humans. However, you discover that there are oodles of aliens with utility much higher than that. Average utilitarianism would say you shouldn’t have the child.
Average utilitarianism would say that if fetuses had a millisecond of joy at the moment of conception, this would radically decrease the value of the world, because fetuses would bring down average utility.
Similarly, if you became convinced that there were lots of aliens with bad lives, AU would say you should have as many kids as possible, even if their lives would also be bad (so long as they were less bad than the existing average), to bring up the average.
(This article here is also quite good at analyzing average utilitarianism.)
Average utilitarianism is dumb because it implies that we should kill unhappy people.