1 Introduction
I met you before the fall of Rome
And I begged you to let me take you home
You were wrong, I was right
You said goodbye, I said goodnight
SIA is the view in anthropics that says that, all else equal, you should think there are more people, because the existence of more people makes your own existence more likely. Ken Olum is one of its major defenders. Bostrom and Ćirković have a paper in which they criticize SIA. Here, I’ll respond to it.
Does anyone care when I write about anthropics? No! They are consistently my least read, liked, and commented-on posts. Nevertheless, I write them for posterity, in case my future self or someone else ever has an interest in hearing about SIA. Everyone acts like anthropics is so tricky, yet there’s one view that vaporizes all the mysteries, and people just ignore it. This is an injustice that cannot be allowed to continue!
2 Is SIA weird or unmotivated? Nope
B&C suggest that SIA is some strange, unmotivated view:
SIA may seem quite dubious as a methodological prescription or a purported principle of rationality. Why should reflecting on the fact that you exist rationally compel you to redistribute your credence in favor of hypotheses that say that there are many observers at the expense of those that claim that there are few? This probability shift, it should be stressed, is meant to be a priori; it is not attributed to empirical considerations such as that it must have taken many generations for evolution to lead to a complex life form like yourself, or that we’ve discovered that the cosmos is very big and probably contains vast numbers of Earth-like planets. The support for “fecund” hypotheses (the support being proportional to the number of observers postulated) comes from the sole fact that you exist.
This is, I think, the wrong way of thinking about it. It’s not that hypotheses with more observers are inherently likelier. It’s instead that your existence is likelier if there are more observers, just as any particular blueberry’s existence would be likelier if there were more blueberries (I elaborate more on this here). SIA is thus not some strange, aberrant, unjustified type of reasoning—it’s what you get when you reason in the standard way about your own existence. Given that I exist, then, I should favor views according to which more people would exist.
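To see that this is just ordinary Bayesian conditioning, here’s a minimal sketch. The pool size and observer counts are hypothetical numbers I’ve chosen for illustration; nothing hangs on them:

```python
# A minimal sketch of the SIA update as ordinary Bayesian conditioning.
# Assumption (for illustration only): there is a large pool of possible
# people, and a hypothesis on which N people exist gives each possible
# person an N / POOL chance of being among the existers.

POOL = 10**12          # hypothetical number of possible people
N1, N2 = 10**3, 10**9  # observers posited by hypotheses H1 and H2
prior1 = prior2 = 0.5  # equal priors before conditioning on existence

# Likelihood of *your* existence under each hypothesis
like1, like2 = N1 / POOL, N2 / POOL

# Bayes' theorem: posterior proportional to prior times likelihood
post1 = prior1 * like1 / (prior1 * like1 + prior2 * like2)
post2 = 1 - post1

print(f"P(H1 | I exist) = {post1:.7f}")  # ~0.000001
print(f"P(H2 | I exist) = {post2:.7f}")  # ~0.999999
# The shift toward H2 is proportional to N2 / N1, which is exactly
# what SIA prescribes -- no special principle needed.
```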
3 Presumptuous philosopher
Bostrom and Ćirković provide the presumptuous philosopher case as a counterexample:
It is the year 2100 and physicists have narrowed down the search for a theory of everything to only two remaining plausible candidate theories, T1 and T2 (using considerations from super-duper symmetry). According to T1 the world is very, very big but finite and there are a total of a trillion trillion observers in the cosmos. According to T2, the world is very, very, very big but finite and there are a trillion trillion trillion observers. The super-duper symmetry considerations are indifferent as between these two theories. Physicists are preparing a simple experiment that will falsify one of the theories. Enter the presumptuous philosopher: “Hey guys, it is completely unnecessary for you to do the experiment, because I can already show to you that T2 is about a trillion times more likely to be true than T1!” (whereupon the philosopher runs the argument that appeals to SIA).
I’ve addressed this case before:
1. All views imply some presumptuousness. If you take anthropics seriously, then you’ll think it can sometimes give you strong evidence for things that would overturn otherwise convincing empirical evidence.
2. Views that deny the presumptuous philosopher result must posit, on anthropic grounds, some fundamental difference between dying and never having been created.
3. The presumptuous philosopher result only follows if one is certain about anthropics, which you shouldn’t be.
Suppose that at the dawn of creation every possible person had time to ponder anthropics. They’d think it quite unlikely that they’d come to exist. From here, SIA would clearly be the right theory—for upon discovering that one exists, one is just updating on the fact that one came into existence, something one already recognized as unlikely at the dawn of time. But clearly our anthropic reasoning shouldn’t depend on whether we had a few seconds at the dawn of creation to ponder anthropics.
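Here’s a toy Monte Carlo version of that thought experiment (my own construction, with made-up numbers): a fixed possible person assigns 50/50 odds to a small world versus a big world, and we then look only at the runs in which that person came to exist:

```python
import random

# A toy simulation of the "dawn of creation" setup (my construction,
# not from the paper): a possible person first assigns 50/50 odds to a
# small world (N_SMALL observers) vs. a big world (N_BIG observers),
# then conditions on the fact that they came to exist.

random.seed(0)
POOL = 1000            # hypothetical pool of possible people
N_SMALL, N_BIG = 10, 500
TRIALS = 20000

big_world_existers = 0
total_existers = 0
for _ in range(TRIALS):
    n = N_BIG if random.random() < 0.5 else N_SMALL
    # A fixed possible person exists with probability n / POOL
    if random.random() < n / POOL:
        total_existers += 1
        if n == N_BIG:
            big_world_existers += 1

# Among the runs where our possible person exists, the big world
# dominates in the ratio N_BIG : N_SMALL (here 50 : 1), matching SIA.
print(big_world_existers / total_existers)  # ~0.98
```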
Bostrom and Ćirković argue against a proposal by Olum for how to avoid the presumptuous philosopher result. I agree with them that Olum’s proposal fails; one should just bite the bullet. They then say:
Luckily, the physicists do not abort the experiment but instead offer the philosopher a bet on the outcome, agreeing to pay him one thousand dollars if the test comes out in favor of T4 in return for ten thousand dollars if it favors T3. The philosopher gladly accepts. As it happens, the physicists win the bet and get ten thousand dollars. As there is a one-in-a-million chance that the experiment has yielded a misleading result, a second experiment is proposed to verify the first. Despite the setback, the philosopher’s SIA-based confidence in T4 is hardly perturbed; he still assigns a probability of merely one in a million to T3, so he accepts a repeat bet with the physicists. The presumptuous philosopher is making a fool of himself.
T4 is the theory on which there are many more observers (it posits a greater density of observers).
But this is just assuming that SIA is wrong! If SIA is right, then because the philosopher is probably in T4, he will probably win. And there’s a reason to think this is so: universes with more observers contain more betting philosophers, so if one reruns the experiment and the bet over and over, the betting philosophers will, on average, come out ahead. Of course, you can always stipulate that the philosophers lose the bet and look foolish, but that’s not a good basis for an objection.
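Here’s a quick sketch of the betting point, with scaled-down numbers that I’ve made up (the real case uses a trillion-fold ratio): if you tally winnings across all the philosophers who take the bet, the average bettor profits, because almost all of them live in T4-worlds:

```python
# A hedged sketch (with scaled-down numbers chosen for readability)
# of why the betting argument doesn't embarrass the SIA reasoner. The
# bet is the one from the quoted passage: the philosopher wins $1,000
# if T4 is true and pays $10,000 if T3 is true.

N_T3, N_T4 = 1, 1000   # philosophers per T3-world and per T4-world
WORLDS_EACH = 10**4    # equal numbers of T3-worlds and T4-worlds

# Every philosopher in every world takes the bet.
winnings_t4 = WORLDS_EACH * N_T4 * 1_000  # won by philosophers in T4-worlds
losses_t3 = WORLDS_EACH * N_T3 * 10_000   # lost by philosophers in T3-worlds

total_philosophers = WORLDS_EACH * (N_T3 + N_T4)
avg = (winnings_t4 - losses_t3) / total_philosophers
print(f"average payout per betting philosopher: ${avg:.2f}")  # ~$989
# Because almost all betting philosophers live in T4-worlds, the
# average bettor comes out well ahead -- exactly what SIA predicts.
```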
4 Alternatives to SIA?
Bostrom and Ćirković suggest that one can avoid the puzzle by adopting some restricted version of SSA, or some view other than SSA or SIA. They suggest that this avoids crazy results like Serpent’s Advice:
Eve and Adam, the first two humans, knew that if they gratified their flesh, Eve might bear a child, and if she did, they would be expelled from Eden and would go on to spawn billions of progeny that would cover the Earth with misery. One day a serpent approached the couple and spoke thus: “Pssst! If you embrace each other, then either Eve will have a child or she won’t. If she has a child then you will have been among the first two out of billions of people. Your conditional probability of having such early positions in the human species given this hypothesis is extremely small. If, on the other hand, Eve doesn’t become pregnant then the conditional probability, given this, of you being among the first two humans is equal to one. By Bayes’s theorem, the risk that she will have a child is less than one in a billion. Go forth, indulge, and worry not about the consequences!”
Or Lazy Adam:
Assume as before that Adam and Eve were once the only people and that they know for certain that if they have a child they will be driven out of Eden and will have billions of descendants. But this time they have a foolproof way of generating a child, perhaps using advanced in vitro fertilization. Adam is tired of getting up every morning to go hunting. Together with Eve, he devises the following scheme: They form the firm intention that unless a wounded deer limps by their cave, they will have a child. Adam can then put his feet up and rationally expect with near certainty that a wounded deer – an easy target for his spear – will soon stroll by.
But no view other than SIA can avoid this result. To see this, imagine that Adam and Eve are in the garden but don’t know that they’re the only two humans. They reject SIA, so they don’t take themselves to have any special reason to think civilization lasts a long time. They are currently 50/50 between humanity having tons of people and having just two people, but they reason in the following way: “if civilization lasts super long, it’s unlikely that we’re the first two people—for we could be any of the people. In contrast, if there are only two people in total, then we’re guaranteed to be those two people. Therefore, if we ever discover that we’re the first two people, we’ll get extremely strong evidence that humanity doesn’t last long.”
Suddenly, God calls from the heavens: “you are the first two people.” Because they were previously indifferent between the two hypotheses, and the hypothesis that there are many people makes it super unlikely they’d be the first people, they can now be extremely confident that the future won’t have many people. This means the Lazy Adam and Serpent’s Advice cases can be recreated: because they know with extreme confidence that they won’t have many descendants, they can be confident that Eve is infertile, or that a wounded deer will limp by, whenever the only alternative is their having many offspring. But this is crazy!
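The Bayes computation the non-SIA reasoner is committed to here is easy to make explicit. A minimal sketch, with numbers I’ve picked for illustration (a billion total people on the long-lived hypothesis):

```python
# A minimal Bayes computation (illustrative numbers, mine) for the
# Adam-and-Eve update a non-SIA reasoner is committed to. Hypotheses:
# humanity has only 2 people total, or N_BIG people total, 50/50 priors.

N_BIG = 10**9
prior_short = prior_long = 0.5

# Self-sampling-style likelihoods: chance of being the *first two*
# people, given each hypothesis.
like_short = 1.0       # if only 2 people ever exist, you're certainly first
like_long = 2 / N_BIG  # if a billion exist, being first-two is a fluke

post_short = prior_short * like_short / (
    prior_short * like_short + prior_long * like_long)
print(f"P(humanity is short-lived | we're first) = {post_short:.9f}")
# ~0.999999998 -- near certainty that there will be few descendants,
# which is what licenses the crazy Serpent's Advice / Lazy Adam bets.
```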
Thus, one can’t occupy this fence-sitting middle-ground position, holding out for a better solution. Every view either is SIA or implies that you can come to predict improbable future events whenever the alternative would result in many people existing. There is no third option. One must choose.
For this reason, SIA is the much better view. Both SIA and its alternatives imply presumptuousness about the past, but only SIA avoids presumptuousness about the future in cases like Lazy Adam. Presumptuousness about the past is fine—there’s nothing wrong with inferring that Julius Caesar existed, because his existence predicts present evidence. But predicting that future events will happen based on their future consequences is obviously bad probabilistic reasoning, for you haven’t observed those future consequences yet, so they can’t give you a reason to believe a theory.
The only reason to infer some event happened on the basis of evidence is if the odds of that evidence occurring are higher if the event happened than if it didn’t. But because future consequences are not observed yet, one can never get evidence for how a future event will turn out based on its future consequences. Thus, all and only alternatives to SIA imply objectionable presumptuousness.
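In standard Bayesian terms, this is just the textbook condition on evidence (for any hypothesis H with nonextreme prior): a piece of evidence E raises the probability of H exactly when E is likelier given H than given not-H,

$$P(H \mid E) > P(H) \iff P(E \mid H) > P(E \mid \neg H).$$

A future consequence that hasn’t been observed is not yet part of anyone’s evidence E, so it cannot do this work.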