Alternatives to SIA are Doomed!
The doomsday problem afflicts all alternatives to SIA, as do other absurd results
Cogito ergo sum ergo mundus magnus est is Latin for “I think therefore I am, therefore the universe is big,” according to the extremely reputable Google Translate. This sounds impressive, thus confirming the principle that everything sounds cooler in Latin. It’s also the basic thesis of SIA, the self-indication assumption. More specifically, SIA claims that “all other things equal, an observer should reason as if they are randomly selected from the set of all possible observers.” Thus, views of reality on which there are more observers—the technical term for agents who are around to think about things—are more probable, because it’s more likely I’d be one of them.
There are lots of objections to SIA. I’ve addressed many of these before—I basically think none of them are any good; they amount to little more than gawking at surprising conclusions that fall straightforwardly out of applying Bayes’ theorem to the fact that one exists. One of the biggest reasons to adopt SIA is that it just involves being a Bayesian about one’s existence—if there are more people, it’s more likely that I’d exist, just as, if there are more apples, it’s more likely that any particular apple would exist. Here, I’ll present another reason that might be even more decisive.
It’s well established that SSA—the self-sampling assumption—implies some crazy things. SSA says that you should reason as if you’re randomly selected from the set of actual observers in your reference class. SSAers don’t agree on who is in your reference class, but many claim that your reference class is just all conscious agents. Specifically, SSA of this variety implies:
The doomsday result: if you should reason as if you’re randomly selected from the actual observers, you should guess that humanity won’t be around very long. If humanity lasts a very long time, then you’d be in the first infinitesimal sliver of people, which is very unlikely.
Adam and Eve: Adam and Eve are in the Garden of Eden. They’re the first two people. They’re considering having sex. They somehow infallibly know that if they have offspring, the human race will be extremely vast, and thus they’d be in the first infinitesimal sliver of people—which is very improbable. So on SSA, Adam and Eve should be confident, based purely on those considerations, that Eve won’t get pregnant, because if she did they’d be in that first infinitesimal sliver. However, surely anthropic reasoning is not a reliable method of birth control! It’s even worse than the pullout method!
Lazy Adam: Adam is informed that if he has sex with Eve she’ll get pregnant and give birth to a vast civilization, with more people than the stars in the sky or the grains of sand on a beach. Adam is lazy but wants food. So he agrees to have sex with Eve unless a wounded deer runs up and drops dead at his feet. On SSA, because it’s very unlikely that Adam would be one of the first few people, he is justified in thinking that the deer will drop dead at his feet.
Even Bostrom, when defending SSA, basically admits these results. He adds the caveat that you’re not really making the events more likely and that you’re not really influencing things. But he admits that in the Lazy Adam scenario, Adam is justified in expecting a deer to drop dead at his feet—at least as long as he’ll create a sufficiently large population conditional on that not happening. Problem: that’s absurd!! The Doomsday result is similarly absurd—we can’t know that we’ll probably die soon just from the fact that we exist.
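To make the contrast between the two rules concrete, here’s a minimal sketch in code of how each one weights hypotheses. This is my own toy formalization for illustration, not Bostrom’s official statement; it assumes finitely many candidate worlds with known observer counts.

```python
# Toy formalization (illustrative only): each world w has a prior probability and an
# observer count (observers in your reference class); `rank` is your birth rank.

def sia_posterior(priors, counts):
    """SIA: P(w | I exist) is proportional to prior(w) times the number of observers in w."""
    weights = {w: priors[w] * counts[w] for w in priors}
    total = sum(weights.values())
    return {w: wt / total for w, wt in weights.items()}

def ssa_posterior(priors, counts, rank):
    """SSA: merely existing doesn't favor big worlds; learning your birth rank does the
    doomsday work, since that rank has likelihood 1/counts[w] in any world big enough to hold it."""
    weights = {w: priors[w] * (1 / counts[w] if rank <= counts[w] else 0) for w in priors}
    total = sum(weights.values())
    return {w: wt / total for w, wt in weights.items()}
```

If you feed that same birth-rank likelihood into the SIA weights, the counts[w] and 1/counts[w] factors cancel and you’re back to your prior, which is exactly the move described in the next few paragraphs.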
The problem for the denier of SIA is that these results are all just straightforward applications of Bayes’ theorem if one denies SIA. To see this, let’s first see how SIA avoids the Doomsday result. The Doomsday result says that the odds that you’d be one of the first 200 billion people, if the world contains, say, a googol people, are super low. Therefore, given that you are one of the first 200 billion people, it’s super unlikely that the future will contain a googol people. Therefore, even if we get super strong evidence that the future will contain a googol people—perhaps God comes to us, informs us of this, and gives us technology that will prevent extinction before we reach the universe’s carrying capacity of a googol people—we should still think we’ll die soon.
SIA doesn’t deny that it’s very unlikely that we’d be so early in the universe. The odds that we’d be among the first 200 billion people, if the total population will be a googol, are 200 billion/googol = 2x10^-89. Yikes, those aren’t good odds. In contrast, if we’ll go extinct at 200 billion people, the odds that we’d be in the first 200 billion people are 100%. But SIAers say that it’s googol/200 billion times more likely that we’d exist at all if the universe is very big and contains a googol people. So the two numbers cancel out exactly.
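Here’s that cancellation as a quick back-of-the-envelope check, taking 200 billion as 2×10^11 and a googol as 10^100:

```python
# Numbers from above: 200 billion early people either way; the big world has a googol
# observers in total, the small one goes extinct at 200 billion.
early, small, big = 2e11, 2e11, 1e100

p_early_given_small = early / small   # 1.0
p_early_given_big = early / big       # 2e-89 -- yikes

# SIA's correction: the big world is big/small times more likely to contain you at all.
sia_boost = big / small               # 5e88

# Starting from even priors, the doomsday factor and the SIA factor cancel:
posterior_odds_small_vs_big = p_early_given_small / (p_early_given_big * sia_boost)
print(posterior_odds_small_vs_big)    # ~1.0 -- no doomsday update
```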
Similarly, SIAers agree that it’s very unlikely that Adam and Eve would be the first two people if the world is big. But it’s also correspondingly more likely that they’d exist if the world is big. Therefore, the two values cancel out. The same basic thing is true in the Lazy Adam case.
So SIA doesn’t dispute the Doomsday inference—that inference is pretty trivial. It’s true that if the world has more people, it’s unlikely that you’d be in the first infinitesimal slice of them. But SIA also says that it’s more likely that you’d exist in the bigger universe. Rather than denying the Doomsday or Lazy Adam inferences, it provides another inference that cancels them out.
But if you deny SIA, then there is no way out. The Doomsday inference is straightforward—it’s undeniable that, conditional on the universe having fewer people, it’s more likely that I’d be in its first tiny slice. The same is true of the other inferences described above. Unless you say that universes with more people are more likely, given that you exist (which is just SIA), the Doomsday and Lazy Adam inferences succeed.
Now you might object to this by redrawing your reference class so that people in the future aren’t in it. But this is bizarre—it seems like I could have existed in the future instead of now. And this will require a very arbitrary and haphazard reference class. Why would it be that future people aren’t in the reference class?
Furthermore, even if you do this, there’s a new version of the Doomsday argument. Suppose that there are two possibilities: the first says that the world will contain quadrillions of people, born one at a time in some room, each created after the prior one dies. The second says that the world will contain only one person, born in that room.
Upon being created, on SSA, you should be 50/50 between the two hypotheses, for you don’t know your birth rank. Then suppose that there’s a rock in the corner labeled with a number telling you which observer you are. You look at the rock and find that it has the number 1, meaning you’re the first observer. Now you should be almost certain that you’re in the state of affairs where only one person will ever exist—that hypothesis makes it certain that, given your existence, you’ll observe the number 1, while the other hypothesis makes the odds of it roughly one in a quadrillion.
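For concreteness, here’s the update worked out, taking “quadrillions” to be 10^15; the exact number doesn’t matter, since anything huge gives the same verdict:

```python
# SSA update for the rock example: 50/50 prior, and the likelihood of seeing the
# number 1 is 1 in the one-person world and 1/1e15 in the quadrillion-person world.
prior_one, prior_big = 0.5, 0.5
like_one, like_big = 1.0, 1 / 1e15

posterior_one = (prior_one * like_one) / (prior_one * like_one + prior_big * like_big)
print(posterior_one)  # ~0.999999999999999 -- near certainty that only one person will ever exist
```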
But from this one can get all of the doomsday results. This inference is basically the doomsday conclusion—from the fact that you exist early on, you can know that the world won’t have many people. Furthermore, we can derive results like the Lazy Adam result. If there’s a button which will cause a googol people to exist in the future, and you agree to press it unless some improbable thing happens, you can be very confident that the improbable thing will happen. On this account, you can be sure you’ll win at poker just by committing to press the button unless you’re dealt a royal flush. But clearly this is absurd!
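To see just how absurd, here’s the poker version worked out, assuming a simple five-card deal (where the chance of a royal flush is 4 in C(52,5)):

```python
from math import comb

# You commit to pressing the button (creating a googol future people) unless you're
# dealt a royal flush. SSA-style reasoning about being the very first observer then
# makes the royal flush a near-certainty.
p_flush = 4 / comb(52, 5)   # ~1.54e-6: chance of a royal flush in five cards
n_big = 1e100               # observers created if the button gets pressed

# Likelihood of "I'm the first observer": 1 if the flush comes (no button, tiny world),
# 1/n_big if it doesn't (button pressed, googol observers).
posterior_flush = (p_flush * 1) / (p_flush * 1 + (1 - p_flush) * (1 / n_big))
print(posterior_flush)      # ~1.0 -- anthropics as a poker strategy
```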
So not only is SIA a straightforward application of probabilistic reasoning supported by powerful betting arguments, alternatives to it are absurd! They imply you can consistently win at poker by committing to create a bunch of clones unless you get a royal flush. It isn’t close—SIA is by far the better view.