Discussion about this post

Plasma Bloggin':

Regarding the first paradox, however you want to think about infinitesimal credences, I think it's obvious that the probability that you would be in room 500 *given that you exist* doesn't depend on whether the coin landed heads or tails. After all, in either case, all of the rooms have the same number of people in them. In a finite case, the probability that you will be in some room given that you exist is just the proportion of people in that room. In the infinite case, that's not well-defined, but it seems clear that any analogue of the concept of proportion that does apply is going to say that the same proportion of people is in Room 500 regardless of whether the coin landed heads or not, since the relative numbers of people in each room are the same.

So if there is an update at all, it will be an anthropic update coming from the fact that you exist at all, not from seeing your room number. It could be argued that there should be an anthropic update in favor of heads on the basis of SIA - after all, it seems like there's some sense in which you're a billion times more likely to be created if the coin lands heads. In that case, Premise 1 would be wrong, though given the messiness with infinity, it's hard to say for sure if this is the correct way to reason about it. Regardless of how to deal with the anthropics, Premise 3 will always be right.
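The SIA-style update described above can be sketched in a finite analogue (the exact room setup and the billion-fold factor are taken from the comment; the function name and the choice of a finite observer count are my own illustrative assumptions, since the genuinely infinite case resists this treatment):

```python
from fractions import Fraction

def sia_posterior(prior_heads, n_heads_observers, n_tails_observers):
    """SIA weights each hypothesis by its prior times the number of
    observers that exist under it, then renormalizes."""
    w_heads = prior_heads * n_heads_observers
    w_tails = (1 - prior_heads) * n_tails_observers
    return w_heads / (w_heads + w_tails)

# Finite analogue: heads creates a billion times as many people as tails.
B = 10**9
posterior = sia_posterior(Fraction(1, 2), B, 1)
print(posterior)  # 1000000000/1000000001 — an overwhelming update toward heads
```

Note that this only makes sense when both observer counts are finite; with infinitely many observers on both sides, the ratio is undefined, which is exactly the messiness the comment points to.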

James:

Very interesting problems with infinity! I think I don't know enough about anthropics to provide much of a sensible comment on the first problem, although I guess your solution makes sense to me, approximately.

In the second problem, I think I also agree with your conclusion. However, I want to emphasise pretty strongly that the part of your conclusion I most agree with is:

// Certainly I think this holds in finite cases. But a lot holds in finite cases and not in infinite cases.//

I certainly do not think we should be making inductions from what moral principles we need in order to have consistent ethical principles in the case of infinitely many people, and then generalising those so that they must also apply to finitely many people. It would be really bad if philosophers started making these generalisations, I think (both for my sanity and for the goodness of the world)!

However, something pretty interesting arises in your evaluation of the second claim. Most of the problems you're encountering come from the fact that you're dealing with aleph-null, which, as we know, cannot be assigned a uniform measure (i.e. there is no way to define a value function v on the naturals so that v(0) = v(1) = v(2) = ..., while the sum of v(n) over all n equals 1). This is what causes the undefinedness.

If instead in problem 2 we were talking about continuum-many people (a much weirder state of affairs, to be sure), so that we can pair up each person with a real number between 0 and 1, then we can totally deflate this issue! We can say that the state of affairs before rolling the die is that we expect an average utility of 4/6, or 2/3. After rolling the die, we'll have partitioned the reals into two subsets, which we can call "happy" and "sad". These will both end up being some crazy non-measurable subsets of the reals.

I'm not going to prove this (hence I'm not absolutely certain about it), but I think if we allow ourselves to then apply principle 4, and rearrange them, it won't be too hard to show that we'll be able to create a measurable set with the same people, just reordered so that the people corresponding to [0, 5/6) are "happy" (i.e. +1) and the people corresponding to [5/6, 1] are "sad" (i.e. -1). Then we totally _can_ consistently define the happiness of the people. Just take the measure of the sets, and you get back 2/3, exactly what you'd want.
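The arithmetic in this continuum example can be checked with a small sketch (my own illustration: a finite random sample stands in for continuum-many people, and the happy/sad probabilities 5/6 and 1/6 are inferred from the stated expectation of 4/6):

```python
import random
from fractions import Fraction

# Measure-based value of the rearranged sets: [0, 5/6) happy (+1),
# [5/6, 1] sad (-1). Lebesgue measure gives the average utility directly.
rearranged_value = Fraction(5, 6) * 1 + Fraction(1, 6) * (-1)
print(rearranged_value)  # 2/3

# Finite Monte Carlo stand-in for continuum-many people: each "person"
# rolls an independent six-sided die, happy (+1) on 1-5, sad (-1) on 6.
random.seed(0)
N = 1_000_000
total = sum(1 if random.randint(1, 6) <= 5 else -1 for _ in range(N))
print(total / N)  # close to 0.6667
```

The sample average matches the measure of the rearranged sets, which is the consistency the rearrangement argument is after; of course, the finite sample sidesteps the non-measurability that makes the genuine continuum case delicate.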

So I would be careful about the statement of premise 5, as it will only apply to certain kinds of non-measurable infinities.
