Huemer has given three very plausible principles in arguing that we should accept the repugnant conclusion. However, they also have important implications for other questions in population ethics.
“The Benign Addition Principle: If worlds x and y are so related that x would be the result of increasing the well-being of everyone in y by some amount and adding some new people with worthwhile lives, then x is better than y with respect to utility.7
Non-anti-egalitarianism: If x and y have the same population, but x has a higher average utility, a higher total utility, and a more equal distribution of utility than y, then x is better than y with respect to utility.8
Transitivity: If x is better than y with respect to utility and y is better than z with respect to utility, then x is better than z with respect to utility.”
Huemer goes on to explain that these three principles entail that we accept the
Total Utility Principle: For any possible worlds x and y, x is better than y with respect to utility if and only if the total utility of x is greater than the total utility of y.
This yields even more interesting results under uncertainty. For the populations we’ll discuss, let the first number be the number of people and the second be their (per-person) utility (e.g., (6, 8) would be 6 people each with utility 8). Additionally, + joins separate groups within the same world ((6, 8 + 5, 3) would mean 6 people with utility 8 and 5 people with utility 3).
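To make the notation concrete, here is a minimal sketch that represents a world as a list of (count, utility) groups and computes its total and average utility (the helper names are mine, not anything from the text):

```python
# A world is a list of (number of people, per-person utility) groups,
# mirroring the (count, utility) notation above.
World = list[tuple[float, float]]

def total_utility(world: World) -> float:
    """Sum of count * utility over every group in the world."""
    return sum(count * utility for count, utility in world)

def average_utility(world: World) -> float:
    """Total utility divided by total population."""
    population = sum(count for count, _ in world)
    return total_utility(world) / population

# (6, 8 + 5, 3): six people at utility 8 alongside five people at utility 3.
example = [(6, 8), (5, 3)]
print(total_utility(example))    # 6*8 + 5*3 = 63
print(average_utility(example))  # 63 / 11 ≈ 5.7
```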
Now compare our world to a series of alternatives. Our world has about 7.8 billion people; let’s say their average utility is 99. Then our world is worse than a world with the values (7.8 billion, 100), which in turn is worse than one with (7.9 billion, 100 + 10^30, 1), which is worse than one with (10^30, 2): each step raises total utility, so by the Total Utility Principle and transitivity each world is better than the last. From this we conclude that our current world is worse than one containing 10^30 people whose lives are only about 1/50th as good as ours.
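Since the Total Utility Principle has already been derived, each step in this chain can be checked simply by comparing totals. A quick sketch, treating our world’s 99 as an average (variable names are mine):

```python
# Total utility at each step of the chain, using the figures above.
current    = 7.8e9 * 99              # our world: about 7.7e11
step_one   = 7.8e9 * 100             # (7.8 billion, 100)
step_two   = 7.9e9 * 100 + 1e30 * 1  # (7.9 billion, 100 + 10^30, 1)
step_three = 1e30 * 2                # (10^30, 2)

# Each world has a strictly higher total than the one before it, so by the
# Total Utility Principle (and transitivity) each is better than the last.
assert current < step_one < step_two < step_three
```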
Let’s add a principle. Risk Making Principle: For any population (X, Y), that is, X people each with utility Y, a 1/N chance of a population (NX, 1.5Y) would be better. This would mean that, for example, a world of (50, 50) would be less good than a 1/5 chance of one of (250, 75).
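The text does not say what happens if the gamble fails, so here is a small sketch of the (50, 50) example under two readings of the failure branch; on either reading the gamble comes out ahead in expected total utility:

```python
# Certain world: 50 people, each with utility 50.
certain_total = 50 * 50          # 2500
p = 1 / 5                        # chance the gamble yields (250, 75)

# Reading 1: if the gamble fails, no one exists.
ev_if_failure_is_empty = p * (250 * 75)                                  # 3750
# Reading 2: if the gamble fails, the (50, 50) world stays as it is.
ev_if_failure_is_status_quo = p * (250 * 75) + (1 - p) * certain_total  # 5750

# On either reading, the gamble beats the certain total of 2500 in expectation.
print(certain_total, ev_if_failure_is_empty, ev_if_failure_is_status_quo)
```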
This is very intuitive: the expected number of people is higher, and so is their utility. However, this principle entails that if Bostrom is right that the future could contain 10^52 people, even if there is only a 1 in 10^10 chance of that, and even if future people have average utility only 1.5 times that of present people (an assumption I dispute strongly), then the far future is more important than the present. Heck, even if the odds of that glorious future were only 1 in 10^20, it would still be a 1 in 10^20 chance of (10^52, 150), which, according to the risk making principle, is better than the current world. Thus, even on a very conservative estimate (1 in 10^10) of the odds of 10^52 future people, and even ignoring every other possible future (treating any number of future people greater than 10^52 as equal to 10^52 and ignoring any smaller number), that future is so valuable that a 1 in 10^10 chance of it is worth more than the current world. This shows the overwhelming value of the far future.
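Here is a sketch of the arithmetic using the figures in this paragraph (7.8 billion people at average utility 99, Bostrom’s 10^52, and a 1.5x utility multiplier rounded up to 150). The benchmark helper and the expected-total printout are my own framing for illustration; the risk making principle itself never mentions expected value:

```python
current_people, current_utility = 7.8e9, 99
future_people, future_utility = 1e52, 150   # Bostrom's figure; 1.5 * 99 rounded up

def benchmark(n: float) -> tuple[float, float]:
    """The (N*X, 1.5*Y) world such that, per the risk making principle,
    a 1/N chance of it beats the certain current world (X, Y)."""
    return n * current_people, 1.5 * current_utility

for n in (1e10, 1e20):
    bench_people, bench_utility = benchmark(n)
    # The posited future has at least as many people and at least as much
    # per-person utility as the benchmark world, so a 1/n chance of it should
    # be at least as good as the 1/n gamble the principle already endorses.
    assert future_people >= bench_people and future_utility >= bench_utility
    print(f"1/{n:.0e} chance of the future: expected total {future_people * future_utility / n:.1e}")

print(f"current world total: {current_people * current_utility:.1e}")
# Even at 1 in 10^20 odds the expected total (1.5e34) dwarfs today's ~7.7e11.
```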
I’ve defended transitivity elsewhere, as well as the other premises. Each of them is incredibly hard to deny.
As for the risk making principle, Tomasik provides additional arguments for accepting that we should maximize expected value, a principle far less modest than the risk making principle I outline here. If anything, that shows how well supported the weaker principle is; his piece is worth checking out if you have not already.
To recap: the obvious principles that entail we should accept the repugnant conclusion also entail, when combined with other modest principles, that we should care overwhelmingly about the far future.