SIA Is Just Being a Bayesian About the Fact That One Exists
SIA is the correct theory of anthropics; also, an explanation of what new information you update on to get to 1/3 in Sleeping Beauty
Anthropics is often dense and technical, so I’ll begin by presenting a captivating story. I was chatting with someone and she asked, “Is SSA the better theory of anthropics?” So I launched into a lengthy digression of roughly the following form.
The Self-Indication Assumption claims that “all other things equal, an observer should reason as if they are randomly selected from the set of all possible observers.” Thus, when evaluating two hypotheses, all else being equal, they should prefer the one on which there’s a higher probability of them in particular existing. This has some counterintuitive results. Here are two that come broadly from Bostrom’s famous book on the subject:
Jackets: There are two hypotheses that are equally probable. Hypothesis one claims that 10 people with red jackets and 100 people with blue jackets will be created. Hypothesis two claims that only 10 people with red jackets will be created. When you are created, you find yourself with a red jacket. SIA says that, because both hypotheses hold that the same number of people with red jackets will be created, you should be indifferent between the two. But this is a bit counterintuitive; to many it seems that, because on the first hypothesis most existent people won’t have red jackets while on the second all existent people will, this favors the second hypothesis.
The presumptuous philosopher: There are two theories of the physical world. According to one, the universe is infinitely large. According to the other, the universe is very large but finite. Because you’re infinitely more likely to be created on the first hypothesis than on the second, SIA says you should think that the first hypothesis is infinitely more likely than the second.
There are independent motivations for SIA; to avoid these results, one has to posit a puzzling epistemic asymmetry between creating and failing to create. In fact, the main alternative to SIA, the SSA view, holds that one should reason as if they’re randomly selected from the set of all actual people in their reference class (the beings relevantly like them, by some criterion). SSA implies utterly nutty things: for example, that one can make supernovas unlikely by procreating only if a supernova occurs, and that whether you should think our civilization will die out depends on causally isolated aliens. My favorite counterexample to SSA is the flipside of the second result above:
The presumptuous archeologist: The SSAer has drawn their reference class. Archeologists then discover overwhelming evidence that there were huge numbers of Neanderthals (quintillions of them) and that they’re in our reference class. On the SSA view, one should reject the overwhelming archeological evidence, because if you’re randomly selected from the reference class, it’s unlikely you’d be one of the comparatively few members who isn’t a Neanderthal.
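To see why SSA forces this, here’s a rough sketch of the arithmetic in Python. The population figures are illustrative assumptions on my part, since the case above only specifies “quintillions”:

```python
# Illustrative SSA arithmetic for the presumptuous archeologist.
# The population numbers are assumptions made for the sake of the example.
humans = 10**11        # rough order of magnitude for humans who have ever lived
neanderthals = 10**18  # "quintillions" of Neanderthals, per the archeological hypothesis

# SSA: treat yourself as a random draw from the reference class.
# Likelihood of observing "I am not a Neanderthal" under each hypothesis:
p_not_neanderthal_if_many = humans / (humans + neanderthals)  # ~1e-7
p_not_neanderthal_if_few = 1.0  # if there were hardly any Neanderthals at all

# With equal priors, the posterior odds against the "quintillions" hypothesis:
odds_against = p_not_neanderthal_if_few / p_not_neanderthal_if_many
print(f"{odds_against:,.0f} to 1")  # roughly ten million to one
```

On those (made-up) numbers, SSA says your not being a Neanderthal outweighs the archeological evidence by about ten million to one.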
The alternatives to SIA are wholly untenable. But even if this were not so, even if there were a working alternative to SIA, I would still think SIA was true. This is partially because, as Carlsmith points out, many of the most counterintuitive results of SIA are directly entailed by ironclad arguments; the problems for SSA are general problems for any alternative to SIA.
But perhaps the bigger reason is that I find SIA intuitively extremely obvious. It’s just what you get when you apply Bayesian reasoning to the fact that you exist. Take the presumptuous philosopher, for example. It seems very counterintuitive: how is it that you can know a priori that the universe is big just from the fact that you exist? But it’s straightforward probabilistic reasoning. If a million people are created, then I’m more likely to be created, just as if a million jars are created, any particular possible jar is more likely to be created. In fact, if every possible agent is created, the odds I’d be created would be 1; however, if only finitely many of the infinitely many possible agents are created, the odds I’d be among them would be zero. So the fact that I exist confirms the hypothesis that there are infinite agents over the hypothesis that there are finite agents.
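To make the shape of the update concrete, here’s a minimal sketch in Python. The infinite case can’t be computed directly, so it uses a finite stand-in, and all of the numbers are made up for illustration:

```python
from fractions import Fraction

# Finite stand-in for the presumptuous philosopher; every number here is illustrative.
possible_agents = 1_000_000    # agents who could have been created
created_if_small = 1_000       # agents created on the "small universe" theory
created_if_big = 1_000_000     # agents created on the "big universe" theory

prior = Fraction(1, 2)         # equal priors over the two theories

# SIA: P(this particular agent exists | theory) = created / possible
like_small = Fraction(created_if_small, possible_agents)
like_big = Fraction(created_if_big, possible_agents)   # = 1: every possible agent gets created

posterior_big = prior * like_big / (prior * like_big + prior * like_small)
print(posterior_big)  # 1000/1001 -- the fact that you exist strongly favors the bigger universe
```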
And the probabilistic reasoning employed in Jackets to get the result contradicting SIA is totally wrong. It’s true that, given that I exist, I’d be more likely to have a red jacket conditional on the hypothesis that there are just 10 people with red jackets. But on that hypothesis I’d be less likely to exist at all! So the probabilities wash out.
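Here’s that wash-out as a quick calculation; the size of the pool of possible people is an arbitrary assumption, just to make the numbers concrete:

```python
from fractions import Fraction

# The wash-out in Jackets. N, the number of possible people, is an arbitrary choice.
N = 1_000_000

def p_exist_and_red(n_red, n_blue):
    p_exist = Fraction(n_red + n_blue, N)              # SIA: chance this particular person is created
    p_red_given_exist = Fraction(n_red, n_red + n_blue)
    return p_exist * p_red_given_exist                 # simplifies to n_red / N either way

print(p_exist_and_red(10, 100))  # hypothesis one: 10 red, 100 blue -> 1/100000
print(p_exist_and_red(10, 0))    # hypothesis two: 10 red only      -> 1/100000
# The likelihoods are equal, so finding yourself in a red jacket leaves the two
# hypotheses exactly as probable as your priors said they were.
```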
Probabilistic math is weird sometimes. It gets unintuitive results. But that’s not a reason to give up on doing probability in the obvious way about the fact that I exist. SSAers say, “But you must reason as if you’re randomly selected from all actual observers in your reference class.” But why? You’re not randomly selected from all actual observers in your reference class. God isn’t pulling you out of a reference class. You are just an agent at a time, and so the way to reason about things is to look at the probability of that agent existing at that time, just as for all probabilistic events the way to reason about them is to look at the odds of the event on the various hypotheses.
The only reason to reason as if you’re randomly selected from the observers in your reference class is that doing so fits our intuitions. But you don’t just get to change the way probability works because it fits your intuitions better. The correct response is to recognize that our intuitions are wrong, rather than to devise a gerrymandered theory, with no deeper justification, that involves arbitrarily drawing reference classes.
Finally, thinking about things in terms of agents at times helps us explain what Beauty learns in the Sleeping Beauty case. The Sleeping Beauty problem is well explained by Wikipedia:
The Sleeping Beauty problem is a puzzle in decision theory in which whenever an ideally rational epistemic agent is awoken from sleep, they have no memory of whether they have been awoken before. Upon being told that they have been woken once or twice according to the toss of a coin, once if heads and twice if tails, they are asked their degree of belief for the coin having come up heads.
The two answers people generally give are 1/2 and 1/3. People often justify the 1/2 answer on the grounds that you already knew you’d wake up, so you learn nothing new beyond that a fair coin was tossed. But this is subtly wrong. Upon waking up, you know you’re awake at the time that you’re awake. Suppose that if the coin comes up heads, one will be awakened only on Tuesday, while if it comes up tails, they’ll be awakened on Tuesday and Wednesday. Upon finding they’ve woken up, the person knows they’re awake now, at this particular time. There’s a 2/3 chance that it’s Tuesday and a 1/3 chance it’s Wednesday. Conditional on it being Wednesday, there’s a 100% chance that they got tails, while conditional on it being Tuesday, there’s a 50% chance that they got tails. So there’s a 1/3 chance they got tails and it’s Tuesday, a 1/3 chance they got tails and it’s Wednesday, and a 1/3 chance they got heads and it’s Tuesday.
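To check the arithmetic, here’s a small simulation of the Tuesday/Wednesday version just described. It counts how often each coin-and-day combination shows up among awakenings, which is the quantity the reasoning above is tracking:

```python
import random

# Simulate the setup above: heads -> woken on Tuesday only; tails -> Tuesday and Wednesday.
random.seed(0)
counts = {("heads", "Tue"): 0, ("tails", "Tue"): 0, ("tails", "Wed"): 0}
trials = 100_000

for _ in range(trials):
    coin = random.choice(["heads", "tails"])
    days = ["Tue"] if coin == "heads" else ["Tue", "Wed"]
    for day in days:
        counts[(coin, day)] += 1

total_awakenings = sum(counts.values())
for outcome, n in counts.items():
    print(outcome, round(n / total_awakenings, 3))
# Each combination accounts for ~1/3 of awakenings, so on waking,
# P(heads) ~ 1/3, P(Tuesday) ~ 2/3, and P(tails | Tuesday) ~ 1/2.
```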
"The Sleeping Beauty problem is a puzzle in decision theory in which whenever an ideally rational epistemic agent is awoken from sleep, they have no memory of whether they have been awoken before. Upon being told that they have been woken once or twice according to the toss of a coin, once if heads and twice if tails, they are asked their degree of belief for the coin having come up heads."
In the case where she is woken twice, is she asked to identify the coin twice? It is unclear from the quoted wording. If she is asked twice on tails and once on heads, then thirding is obvious; if she is only asked on the second waking on tails, then halving is obvious.
I guess I should’ve clarified it further, heh, but I was half asleep yesterday.
What I meant is that it’s really, really hard to make a consistent a priori probability distribution in both cases. E.g., in the jackets example it’s not quite obvious why you should be able to make any claims about the two hypotheses a priori.
Maybe the first hypothesis is the second one with an additional 100 blue jackets included with some probability, if the two jacket groups are independent.
Maybe it’s not, and there are just way more possible explanations for the first case than for the second.
In the second case, maybe you should give the finite and infinite universes totally equal a priori probabilities.
Or maybe each of the finite universes of size X should be just as likely as the infinite one.
Maybe you should consider different ordinals for the infinite universe. Maybe not; maybe it doesn’t matter as long as the infinite universe is infinite.
Still, it’s a pretty strange thing to give only one possible a priori probability distribution for each question.