Two New Infinite Paradoxes
You're going to have to abandon some obvious principle, twice
Lots of people complain about the difficult paradoxes in population ethics and anthropics. I don’t really find those paradoxes to be that scary—they mostly just involve showing ways that non-SIA and non-utilitarian views go hideously wrong (for example, if you’re willing to accept the repugnant conclusion and a few related results, you don’t have any paradoxes of population ethics). But there is one class of paradoxes that is serious and scary: the paradoxes of the infinite. Infinity has this bad habit of ruining every plausible principle and requiring people to accept things that sound totally insane.
I have two new ones.
1 The hotel transport
You’re put in an infinitely large hotel (with aleph null people, for those curious). A coin is flipped. If it comes up heads, then a billion people get deposited in a room labeled 1, a billion in a room labeled 2, a billion in a room labeled 3, and so on. Thus, for each natural number N, there are a billion people in a room labeled N. If it comes up tails, one person gets put in every room number. So one person in a room labeled 1, one in a room labeled 2, and so on. At some point after being deposited in a room, you’ll be able to see your room number, but there will be a brief period before you see your room number.
Here is the paradox—each of the following seem plausible:
Before seeing your room number, you should think at 50% odds that the coin came up heads (after all, you have no evidence either way).
After seeing your room number, you should think the coin is a billion times likelier to have come up heads than tails (for example, if your room number is 500, heads means a billion people are in room 500 rather than only one. It’s likelier that you’d be in a state if many people are in that state rather than just a few).
Your credence shouldn’t change after seeing your room number (otherwise you’d get a guaranteed update: your credence in heads would go up no matter which number you saw, since all the numbers are evidentially alike).
These are, of course, inconsistent. It can’t be that before you see your room number you think heads has 50% odds, that after seeing it you think heads is a billion times likelier than tails, and yet that your credence doesn’t change upon seeing it. So we’ll have to give up one of the principles.
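A finite analogue makes the tension concrete. The sketch below is my own illustration, not from the post: it computes the posterior under two hypothetical sampling models, one where you're a uniformly random person in whichever world the coin produced, and one where worlds are weighted by their populations, SIA-style. The values of `N` and `B` are arbitrary finite stand-ins.

```python
from fractions import Fraction

N = 1000      # rooms in a finite stand-in for the hotel (arbitrary)
B = 10**9     # people per room if the coin lands heads

# Model A: flip the coin, then you are a uniformly random person in
# whichever world resulted. In both worlds the chance of landing in any
# given room is 1/N, so seeing your room number teaches you nothing.
p_room_heads = Fraction(B, N * B)   # = 1/N
p_room_tails = Fraction(1, N)       # = 1/N
posterior_heads = (Fraction(1, 2) * p_room_heads) / (
    Fraction(1, 2) * p_room_heads + Fraction(1, 2) * p_room_tails
)
print(posterior_heads)  # 1/2 -- no update, the spirit of rejecting principle 2

# Model B: weight each world by how many people it contains (SIA-style).
# Merely existing then favors the crowded heads-world by a factor of B.
posterior_heads_sia = Fraction(N * B, N * B + N)  # = B/(B+1)
print(posterior_heads_sia)
```

In the self-sampling model the room number is no evidence at all, while in the population-weighted model the "billion to one" intuition shows up as an update from mere existence rather than from the room number.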
Giving up 1 seems quite bad. After all, before you see your room number you have no evidence about whether the coin will come up heads or tails. Nothing about the world yet has been affected by the coin flip. So 50% seems clearly right.
Giving up 3 also seems very bad. Before you see your room number you know that you will see it soon. You shouldn’t expect that once you see some evidence you’ll have a credence higher than your current credence. If you know the evidence is out there that will change your credence, then you should change your credence now in anticipation of the evidence. Predictably updating is a Bayesian heresy, condemned at the third Lateran council.
This leaves 2. I think 2 is the least costly to give up. After all, the odds that you’re in any particular room are either zero or infinitesimal. Zero times 1 billion is still zero. It’s not clear what an infinitesimal times 1 billion is, but it’s not obvious that it’s more than the original infinitesimal. So it doesn’t seem that terrible to say that your credence doesn’t change.
Still, this is a bit of a bullet to bite. It does seem intuitively obvious that in Hilbert’s hotel (an infinitely big hotel), you’re less likely to be in room one than in rooms 1 billion through 2 billion. After all, if you learned that you were either in room one or somewhere in rooms 1 billion through 2 billion, you should think it very unlikely that you’re in room one. And you should be more frightened by the news that everyone in rooms 1 billion through 2 billion is being executed than by the news that the one person in room one is.
But probably we’ll have to give that up. Sorry! I don’t make the rules! Infinite ethics requires that we give up lots of things that seem obvious. If it requires that you think two events with infinitesimal odds are equally probable even when one seems more probable, maybe that’s not the end of the world. And the other two premises are obvious enough that there’s very strong pressure to give up 2.
2 Ex ante Pareto
Here are five principles that sadly cannot all be true:
If some state of affairs is better for everyone, in expectation, than non-existence, then it’s good that it exists.
If some state of affairs has chanciness, and there is no way that it might turn out that would make it better than non-existence, then it’s not better than non-existence.
If a state of affairs consists of infinitely many groups of people, each with 1,000 badly-off people and one well-off person (assuming the badly-off people are as badly off as the well-off people are well off), then it’s not better, in expectation, than a barren void without any people.
The goodness of a state of affairs doesn’t depend on how the people are arranged in groups.
If a state of affairs has infinite people who each roll a fair six-sided die, and benefit if it comes up 1-5, then it is better for everyone in expectation that they exist.
Sadly, these conflict. And don’t worry too much if you didn’t totally understand all the principles—describing the case should make it clearer.
Consider a state of affairs where there are infinite people. They each roll a six-sided die. If it comes up 1-5, they get one util. If it comes up 6, they lose one util. Assume they have no other utils in their lives.
The fifth principle says this state of affairs is good, in expectation, for everyone: the people all have a 5/6 chance of benefitting and only a 1/6 chance of being harmed. 1 says that states of affairs that are better in expectation for everyone than non-existence are better than a barren void with no people. So 1 and 5 jointly imply that it’s good that this state of affairs exists. It’s better that everyone exists with a 5/6 chance of a util and a 1/6 chance of -1 utils than it would be for no one to exist.
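As a sanity check on the expectation claim, here is a quick finite simulation of my own (the population size is an arbitrary stand-in for infinity): each person gains a util on 1-5 and loses one on 6, and the sample mean lands near the per-person expectation of 5/6 − 1/6 = 2/3.

```python
import random

random.seed(0)
population = 100_000  # finite stand-in for the infinite population
# each person rolls their own fair die: +1 util on 1-5, -1 util on 6
utils = [1 if random.randint(1, 6) <= 5 else -1 for _ in range(population)]
mean_util = sum(utils) / population
print(mean_util)  # close to 2/3, the per-person expectation
```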
But here is the bad news: this collection of people could be divided into groups, each of which has 1,000 badly-off people and one well-off person. This is a fact of math. Cardinality measures the size of infinite sets by whether they can be paired one to one. The natural numbers (1, 2, 3, 4, etc.) and the prime numbers have the same cardinality, because you can pair them one to one: 1 with the smallest prime, 2 with the second smallest prime, 3 with the third smallest prime, and so on.
But if two infinite sets have the same cardinality, then you can also pair every member of one with five members of the other, or with a hundred, or with twenty: pair 1 with the first twenty primes, 2 with the next twenty primes, and so on. Or you could go the other way and pair the smallest prime with the first ten natural numbers, the next smallest prime with the next ten, and so on.
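These pairings are easy to exhibit explicitly. A small sketch of my own (the prime generator is deliberately naive, just enough to illustrate):

```python
import itertools

def primes():
    # naive unbounded prime generator
    n = 2
    while True:
        if all(n % d for d in range(2, int(n**0.5) + 1)):
            yield n
        n += 1

# 1:1 pairing of naturals with primes
one_to_one = list(itertools.islice(zip(itertools.count(1), primes()), 5))
print(one_to_one)  # [(1, 2), (2, 3), (3, 5), (4, 7), (5, 11)]

# 1:k pairing: pair each natural number with the next k primes
def one_to_k(k, how_many):
    gen = primes()
    return [(n, [next(gen) for _ in range(k)]) for n in range(1, how_many + 1)]

print(one_to_k(20, 2))  # 1 with the first twenty primes, 2 with the next twenty
```

Because both streams are infinite, `one_to_k` never runs out of primes: every natural number gets its own fresh batch of twenty.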
The sets of well-off and badly-off people have the same cardinality. There are infinitely many of each (aleph null, to be precise, which is the smallest infinity). Both groups are the same cardinality as the natural numbers. In both groups, you could pair each person with a single natural number. This means that you could pair them 1:1 or 5:1 or 1:5.
Thus, you could divide them up as follows: pair every well-off person with 1,000 badly-off people. 3 says that such a state of affairs wouldn’t be better than non-existence (the most popular view is that the two states of affairs are incomparable, but certainly this one isn’t better). 4 says that dividing the people into groups doesn’t affect the value of the state of affairs. From those it follows that when everyone rolls a six-sided die, gaining a util on 1-5 and losing one on 6, the resultant state of affairs isn’t better than non-existence.
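The regrouping can be carried out index by index. In this sketch of my own (which uses a deterministic 1-in-6 pattern as a stand-in for the random rolls), each group takes the next well-off person and the next 1,000 badly-off people, and nobody gets used twice:

```python
import itertools

# Person n is badly off iff n is a multiple of 6, mirroring the 1/6 die
# odds; both the well-off and the badly-off form infinite streams.
well_off = (n for n in itertools.count(1) if n % 6 != 0)
badly_off = (n for n in itertools.count(1) if n % 6 == 0)

def regroup(num_groups, losers_per_group=1000):
    # each group: one well-off person plus the next 1,000 badly-off people
    return [
        {"well_off": next(well_off),
         "badly_off": [next(badly_off) for _ in range(losers_per_group)]}
        for _ in range(num_groups)
    ]

groups = regroup(3)
print(len(groups[0]["badly_off"]))  # 1000 badly-off people per well-off person
```

Since both streams are infinite, this process never stalls: however many groups you form, there are always 1,000 more badly-off people to assign.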
Now, 2 says "If some state of affairs has chanciness, and there is no way that it might turn out that would make it better than non-existence, then it’s not better than non-existence." So from 2 together with the others, it would follow that the original state of affairs, before the dice get rolled, is not better than non-existence (for it is guaranteed not to be better than non-existence after the dice are rolled).
Together, then, we get a contradiction. 1 says that the state of affairs is better than non-existence because it’s better for everyone in expectation. But 2-5 imply that because the resulting state of affairs could be divided into groups containing mostly miserable people, it’s not better than non-existence.
This paradox seems harder than the first one. I can think of a few ways to go.
One way to go is simply to reject 4. 4, remember, says that the goodness of a state of affairs doesn’t depend on how the people are arranged in groups. But you might think that how good a state of affairs is does depend on how the people are divided into groups. Methods of aggregating infinite quantities of value generally depend on how the value is distributed. I don’t like this view—it seems obvious that the goodness of a state of affairs doesn’t depend on how the people are arranged—but it’s coherent, and it’s what some people think.
However, I think this view has a parallel problem. Suppose you think that the value of the world depends in some way on how value is distributed. It depends, for instance, on whether a larger and larger sphere of space would, in the limit, contain mostly happy people or mostly sad people. This view will violate Ex-Ante Pareto: it will sometimes prefer states of affairs that are worse in expectation for everyone.
For example, consider two states of affairs with infinite people:
Everyone has a 1/6 chance of gaining and a 5/6 chance of losing. However, the people will be distributed so that every individual region of space has mostly winners.
Everyone has a 5/6 chance of gaining and a 1/6 chance of losing. However, the people will be distributed so that every individual region of space has mostly losers.
Ex-ante Pareto says that 2 is better than 1: everyone is better off in expectation. But this conflicts with views that care about the arrangement of happy people, since 1 is rigged so that each region of space has mostly winners.
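The rigging is easy to exhibit. Below is a construction of my own that seats two countably infinite streams of people along a line so that, whatever each person's odds were, every initial segment of space is dominated by whichever side we choose:

```python
import itertools

def seat(winner_ids, loser_ids, losers_per_winner):
    # spatial order: losers_per_winner losers, then one winner, repeating
    while True:
        for _ in range(losers_per_winner):
            yield ("lose", next(loser_ids))
        yield ("win", next(winner_ids))

winners = itertools.count(0)  # stand-in ids; both streams are infinite
losers = itertools.count(0)
first_60 = list(itertools.islice(seat(winners, losers, 5), 60))
win_density = sum(1 for kind, _ in first_60 if kind == "win") / 60
print(win_density)  # 1/6 of every initial segment are winners
```

Swapping the roles of the two streams produces the opposite arrangement, which is why a spatial-density view can be made to disagree with ex-ante Pareto in either direction.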
For this reason, I don’t think that rejecting 4, and caring about distribution is much help.
That leaves 1, 2, 3, and 5. I’ll list all five again to refresh your memory; 4 appears only for completeness, since we’ve already discussed it.
If some state of affairs is better for everyone, in expectation, than non-existence, then it’s good that it exists.
If some state of affairs has chanciness, and there is no way that it might turn out that would make it better than non-existence, then it’s not better than non-existence.
If a state of affairs consists of infinitely many groups of people, each with 1,000 badly-off people and one well-off person (assuming the badly-off people are as badly off as the well-off people are well off), then it’s not better, in expectation, than a barren void without any people.
The goodness of a state of affairs doesn’t depend on how the people are arranged in groups.
If a state of affairs has infinite people who each roll a fair six-sided die, and benefit if it comes up 1-5, then it is better for everyone in expectation that they exist.
I’m also pretty inclined to accept 3. I think there are strong reasons to think that an infinite collection of people, where infinitely many are well-off and infinitely many are badly-off, is neither good nor bad. If that’s right, then 3 is true. Now, you might reject that and think such a state of affairs can be good. But even if you think that, if a collection consists of infinitely many groups that are each mostly badly off, then you’ll think it’s bad. So no one should really be rejecting 3.
So the remaining candidates for rejection are 1, 2, and 5. 5 might seem trivial, but it’s less obvious than you’d think. After all, there end up being infinitely many well-off and infinitely many badly-off people, so on one way of calculating objective probability, your odds of benefitting are undefined.
I don’t think this is the way to go. If you were in an infinite hotel, and there were infinite clones of you, and they all rolled fair dice, it seems obvious that the odds you’d get 1-5 would be 5/6. Your credence should be 5/6, not undefined. If you don’t think something like this, then probably an infinite world breaks probability (see section 2).
The only remaining ones are 1 and 2. There’s something to be said for rejecting 2. Infinity breaks a lot of otherwise obvious principles, so you shouldn’t be that shaken to see it break reflection—the idea that your evaluative attitudes shouldn’t predictably change. I imagine the psychology of the person who rejects 2 in the following way:
Stage 1: before rolling the dice, you think the state of affairs is better than non-existence. At this stage, everyone is happy to be alive. Everyone’s prospects are better than non-existence.
Stage 2: at this stage, there are infinite well-off and badly off people. This stage is undefined in value.
Thus, our evaluation must differ between stage 1 and stage 2. At stage 1, everyone is better off in expectation than if they didn’t exist. At stage 2, that isn’t true.
I don’t find this that plausible. It seems that if you know a state of affairs is going to end up undefined, you should think it’s undefined now. But it’s coherent.
The premise I’d give up is 1, which says "If some state of affairs is better for everyone, in expectation, than non-existence, then it’s good that it exists." Certainly I think this holds in finite cases. But a lot holds in finite cases and not in infinite cases. If you have infinite people who each undergo some chancy process where they benefit if it goes one way and are harmed if it goes another, then it’s guaranteed that infinitely many will benefit and infinitely many will be harmed. It is guaranteed that the result will be undefined in value. As a result, it doesn’t seem that bad to think it’s undefined now.
And we can tell a plausible story of why it holds in finite cases but not infinite cases. A big part of the reason it’s good to do things that benefit everyone, in expectation, is that many more people benefit than are harmed. But this basic logic breaks down in infinite cases. If infinite people each roll a die, it won’t actually be true that more get 1-5 than 6. So what holds in finite cases might not hold in infinite cases.
I agree this is pretty costly. But so it goes with infinity. The math of the infinite is super weird, so lots of stuff that seems obvious must be false. I also wouldn’t be shocked if someone came along and showed that the way we’ve been thinking about infinity is totally wrong, and that there’s some elegant solution to all the paradoxes. Though for reasons I’ll lay out in a future article, there are various serious challenges for any such theory.
Note: being a negative utilitarian doesn’t help you get around this, because we could modify the benefits to be in terms of reduced suffering. We could have everyone start with some constant amount of suffering and give each a 5/6 chance of reduced suffering. At the end of this process, the sets of benefitted and harmed people will have the same cardinality, so the same paradox arises.


Regarding the first paradox, however you want to think about infinitesimal credences, I think it's obvious that the probability that you would be in room 500 *given that you exist* doesn't depend on whether the coin landed heads or tails. After all, in either case, all of the rooms have the same number of people in them. In a finite case, the probability that you will be in some room given that you exist is just the proportion of people in that room. In the infinite case, that's not well-defined, but it seems clear that any analogue of the concept of proportion that does apply will say that the same proportion of people are in Room 500 regardless of whether the coin landed heads or not, since the relative numbers of people in each room are the same.
So if there is an update at all, it will be an anthropic update coming from the fact that you exist at all, not from seeing your room number. It could be argued that there should be an anthropic update in favor of heads on the basis of SIA - after all, it seems like there's some sense in which you're a billion times more likely to be created if the coin lands heads. In that case, Premise 1 would be wrong, though given the messiness with infinity, it's hard to say for sure if this is the correct way to reason about it. Regardless of how to deal with the anthropics, Premise 3 will always be right.
Very interesting problems with infinity! I think I don't know enough about anthropics to provide much of a sensible comment on the first problem, although I guess your solution makes sense to me, approximately.
In the second problem, I think I also agree with your conclusion. However, I want to emphasise pretty strongly that the part of your conclusion I most agree with is:
// Certainly I think this holds in finite cases. But a lot holds in finite cases and not in infinite cases.//
I certainly do not think we should be making inductions from what moral principles we need in order to have some consistent ethical system in the case of infinitely many people, and then generalising those to apply to finitely many people. It would be really bad if philosophers started doing these generalisations, I think (for both my sanity and for the goodness of the world)!
However, something pretty interesting arises in your evaluation of the second claim. Most of the problems you're encountering come from the fact that you're dealing with aleph null, which we know cannot be assigned a uniform measure (i.e. there is no way to define a value function v on the naturals so that v(0) = v(1) = v(2) = ..., and the sum of v(n) over all n equals 1). This is what causes the undefinedness.
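The impossibility the comment gestures at can be seen from partial sums: any would-be uniform measure gives every natural number the same weight c, and the partial sums then either stay at 0 or grow without bound, never converging to 1. A tiny illustration (my own, with arbitrary example values of c):

```python
from fractions import Fraction

def partial_mass(c, n_terms):
    # partial sum of a would-be uniform measure giving weight c to each natural
    return c * n_terms

print(partial_mass(Fraction(0), 10**9))           # stays at 0 forever
print(partial_mass(Fraction(1, 10**9), 10**10))   # already past 1, and growing
```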
If instead in problem 2 we were talking about continuum many people (a much weirder state of affairs, to be sure), so that we can pair up each person with a real number between 0 and 1, then we can totally deflate this issue! We can say that the state of affairs before rolling the dice is that we expect an average utility of 4/6, or 2/3. After rolling the dice, we'll have partitioned [0, 1] into two subsets, which we can call "happy" and "sad". These will both end up being some crazy unmeasurable subset of the reals.
I'm not going to prove this (hence I'm not absolutely certain about it), but I think if we allow ourselves to then apply principle 4, and rearrange them, it won't be too hard to show that we'll be able to create a measurable set with the same people, just reordered so that the people corresponding to [0, 5/6) are "happy" (i.e. +1) and the people corresponding to [5/6, 1] are "sad" (i.e. -1). Then we totally _can_ consistently define the happiness of the people. Just take the measure of the sets, and you get back 2/3, exactly what you'd want.
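Under the commenter's proposed rearrangement, the computation is just a measure-weighted average; a one-liner with exact fractions:

```python
from fractions import Fraction

# Happy people fill [0, 5/6) with utility +1; sad people fill [5/6, 1]
# with utility -1. The average utility is the measure-weighted sum.
avg_utility = Fraction(5, 6) * 1 + Fraction(1, 6) * (-1)
print(avg_utility)  # 2/3
```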
So I would be careful about the statement of premise 5, as it will only apply for certain types of unmeasurable infinities.