Precisely Defining The Self-Indication Assumption Shows It Is Right
A new powerful argument for SIA
How to think about SIA
Scott Alexander once wrote, “In my heart, there is a little counter that reads ‘XXX days without a ten-thousand word rant about feminism.’” In my heart, there is a little counter that reads “XXX days without a 5,000-word rant about SIA.” While I can feebly, half-heartedly care about politics enough to write a few articles about it, I can only go so long without writing about SIA. I failed to make it through November without a new article about SIA. Quoting Mr. Rogers—perhaps talking about SIA, though it’s not clear from context—“it’s you I like”: not pieces about the electoral college or Trump or shooting drills, but pieces about SIA and pieces about SIA only.
If that is intolerable to you, if you must persecute me for my love of SIA—no doubt a common form of persecution that affects millions of people—there are other blogs!
Okay now that the kids have left the room, let’s talk anthropics.
Suppose that God flips a coin (note: this is less surprising than many things people normally think he does, like creating Eve from Adam’s rib). If it comes up tails, ten people get created. If it comes up heads, one person gets created. From this process, you get created. What odds should you give to the coin having come up heads?
I think the answer here is that you should think tails is ten times likelier than heads. If tails predicts ten times as many people existing, then after coming to exist, you should think tails is ten times likelier. The view I’ve defended—at some length—is called the self-indication assumption; the basic idea is that if a theory predicts N times more people that you might currently be, it predicts your present existence N times as well. Because in the coinflip scenario you could be any of the people, and there are ten times as many if the coin comes up tails as if it comes up heads, you should think tails is ten times likelier than heads.
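One common way to model this verdict concretely (a sketch resting on an assumption of my own, not something the argument above requires): treat yourself as a random draw from a fixed pool of ten possible people, and condition on that draw landing on someone who actually gets created. A quick Monte Carlo:

```python
import random

def coinflip_posterior(trials: int = 1_000_000) -> float:
    """Estimate P(heads | you exist) in the God's-coinflip case."""
    heads_and_created = 0
    created = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        n_created = 1 if heads else 10      # heads: one person created; tails: ten people created
        you = random.randrange(10)          # assumption: you are a random one of ten possible people
        if you < n_created:                 # condition on your actually being created
            created += 1
            heads_and_created += heads
    return heads_and_created / created

print(coinflip_posterior())  # ≈ 1/11, i.e. tails comes out about ten times likelier than heads
```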
Note, it only makes sense to think a theory makes your existence likelier if the theory means there are more people you might currently be. It would be silly to think my existence is likelier if there are more shrimp, because I’m not a shrimp.
This core intuition, that a theory on which there are more people you might be makes your existence likelier, is not too hard to grok. Yet getting highly precise about the definition of the view is tricky. In this article, I’ll describe how I think the view should be defined and then show how, with this definition in mind, we have a powerful argument for the view. Let me first give the definition I’ve tended to use, though, as I’ll explain, I don’t think it’s quite right:
SIA proposal 1: if one theory says there exist N times as many people that you might currently be as another theory does, then the first theory predicts your present existence N times as well as the second.
This explains the coinflip case quite clearly. If the coin comes up tails, ten people come to exist, while only one person comes to exist if it comes up heads. Since ten times as many people exist on the tails theory, you should treat your present existence as making tails ten times likelier than heads.
This definition, however, is clearly incomplete.
The most famous problem in decision theory is the sleeping beauty problem. You’re put to sleep on Sunday. Then a fair coin is flipped. If it comes up heads, you wake up once on Monday, but you don’t know what day it is. If it comes up tails, you wake up once on Monday, again not knowing what day it is, then your memory is erased, and you wake up again on Tuesday, also not knowing what day it is. If it comes up tails you wake up twice with no memories, while if it comes up heads, you wake up only once with no memories.
The million dollar question: after waking up with no memories, what odds should you give to the coin having come up heads?
There are two main answers: 1/2 and 1/3. Thirders say that because you’ll wake up twice as often if the coin comes up tails, after waking up, you should think tails becomes twice as likely as it would otherwise be. Thus, tails ends up with probability 2/3 and heads ends up with probability 1/3. Halfers claim instead that you didn’t learn anything new between Sunday and waking up—you knew you’d wake up with no memories at least once—so you should keep the credence of 1/2 in the coin coming up heads that you had on Sunday (see here for why this argument is wrong).
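To make the thirder’s frequency reasoning concrete, here is a minimal simulation that counts awakenings rather than experiments (a sketch; halfers will of course dispute that long-run awakening frequencies are the right guide to credence):

```python
import random

def sleeping_beauty(trials: int = 1_000_000) -> float:
    """Fraction of all awakenings that occur in heads-runs of the experiment."""
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        wakeups = 1 if heads else 2         # heads: one awakening; tails: two indistinguishable awakenings
        total_awakenings += wakeups
        if heads:
            heads_awakenings += 1
    return heads_awakenings / total_awakenings

print(sleeping_beauty())  # ≈ 1/3: a typical awakening is a heads-awakening only a third of the time
```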
I’ve argued elsewhere that thirders are correct. This is generally taken to be a quintessential result of SIA. However, the way we’ve formulated SIA so far, this isn’t so—because heads and tails both predict only one person exists, it gives no reason for tails to get a boost. Thus, we should instead update it to:
SIA proposal 2: if one theory says there exist N times as many time slices that you might currently be as another theory does, then the first theory predicts your present existence N times as well as the second.
A time slice is simply a person existing at a particular time. This view explains why one should think tails is twice as likely as heads in sleeping beauty: because you wake up twice as often on tails, there are twice as many time slices. The person (you) exists at twice as many times: you have the waking-up experience twice, so there are two waking time slices. Thus, tails gets a probabilistic boost by a factor of two over heads.
This still has a problem. To see this, imagine that there are two people: Bob and John. A coin is flipped. If it comes up heads, they both have blue shirts. If it comes up tails, a random one of them has a blue shirt and the other has a red shirt. You wake up unsure if you’re Bob or John and you have a blue shirt. What odds should you give to the coin having come up heads?
SIA seems to want to answer: 2/3. And yet on this proposal, because you might be Bob or John whether the coin comes up heads or tails, you get no update in favor of either hypothesis. In both cases there are two time slices—Bob and John during the one day they’re alive—that you might be.
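Here is a minimal simulation of that 2/3 answer, under the assumption (mine, for illustration) that you are equally likely to be Bob or John:

```python
import random

def blue_shirt_posterior(trials: int = 1_000_000) -> float:
    """Estimate P(heads | you see a blue shirt) in the Bob/John case."""
    heads_and_blue = 0
    blue = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        # Heads: both people have blue shirts, so yours is certainly blue.
        # Tails: one random person has a blue shirt, so yours is blue half the time.
        your_shirt_is_blue = True if heads else (random.random() < 0.5)
        if your_shirt_is_blue:
            blue += 1
            heads_and_blue += heads
    return heads_and_blue / blue

print(blue_shirt_posterior())  # ≈ 2/3
```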
Or suppose that a coin is flipped. If it comes up heads, one person is created. If it comes up tails, googolplex chimpanzees are created. Let’s give them all names: 1, 2, 3, 4, etc. Suppose that there’s a slight chance you’re a chimpanzee—there’s a one in a billion chance that chimpanzees are smart enough that, if there are a googolplex of them, some would be as smart as you and have your properties. Additionally, grant that at most one chimpanzee could be this smart—there’s a one in a billion chance that one chimpanzee has the requisite level of intelligence and a 999,999,999-in-a-billion chance that no chimpanzees have that intelligence.
This simple version of SIA implies that you should regard tails as googolplex times likelier than heads. On tails, you might be 1, 2, 3, 4, etc., so you’ll get a googolplex to one boost in favor of tails. Yet clearly this is crazy. SIA shouldn’t favor a theory on which at most one person smart enough that you might be them will be produced, and almost certainly no one will be, over a theory that guarantees someone like that.
Fortunately, I think we have a way out. Consider:
SIA proposal 3: The rational relative credence in your being one of two time slices rather than the other doesn’t depend on the presence of other time slices.
This may seem like a weird way to define it, but it totally captures the spirit of the principle. Let’s imagine that a coin is tossed—if it comes up heads, one person gets created, while if it comes up tails, two people get created. Let’s call the first person to be created if the coin comes up heads 1H, the first person to be created if the coin comes up tails 1T, and the second person to be created if the coin comes up tails 2T.
If 1H and 1T were the only two people, you’d regard it as just as likely that you’re 1H as 1T. If 1T and 2T were the only two people, you’d regard it as just as likely that you were 1T as 2T. Thus, probabilistically you should think it’s just as likely that you’re 1T as 1H and as 2T, meaning tails overall has twice the probability of heads.
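Spelled out as arithmetic (writing P(·) for credence, notation I’m introducing for the sketch), the normalization step runs:

$$
P(1H) = P(1T) = P(2T), \qquad P(1H) + P(1T) + P(2T) = 1 \;\Rightarrow\; P(\text{tails}) = P(1T) + P(2T) = \tfrac{2}{3},\quad P(\text{heads}) = \tfrac{1}{3}.
$$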
Similarly, the most famous objection to SIA is the presumptuous philosopher argument. Imagine there are two theories of physics that you think are equally likely before you consider anthropics. One theory predicts the existence of 1 billion civilizations while the other predicts the existence of a billion billion civilizations. SIA counterintuitively says that the second theory is a billion times likelier than the first, because it predicts a billion times as many people that you might be.
This view can explain why SIA regards the second theory as a billion times likelier than the first. Let’s call a conglomeration of a billion civilizations a cluster. Thus, on theory one, there is one cluster, while on theory two there are a billion clusters. (The demonstration I’m going to give will be in terms of clusters to make the big numbers easier to manage, but the same point applies with individuals).
Let T11 be the proposition that you’re in the one and only cluster if theory one is true. T21 is the proposition that you’re in the first cluster if theory two is true, T22 is the proposition that you’re in the second cluster if theory two is true, T23 is the proposition that you’re in the third cluster if theory two is true, and so on. = denotes the two propositions are of equal probability.
SIA holds that:
T11 = T21 = T22 = T23 = T24 = … = T2(one billion)
From this it follows that theory two is a billion times likelier than theory one. Because there are a billion clusters, each of which, were it the only cluster, you’d be as likely to be in as the one cluster that exists on theory one, SIA holds it’s a billion times likelier that you’re in one of the billion clusters that exist on theory two than that you’re in the one cluster that exists on theory one.
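In the same notation, and given that the two theories started with equal priors, the factor of a billion falls out of summing over clusters:

$$
\frac{P(\text{theory two})}{P(\text{theory one})} = \frac{P(T21) + P(T22) + \cdots + P(T2(\text{one billion}))}{P(T11)} = \frac{10^9 \cdot P(T11)}{P(T11)} = 10^9.
$$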
I think we’ve gotten close to the right definition here, but we’re not quite there yet. To see this, imagine a coin is flipped. If it comes up heads, one person named John gets created and has a red shirt. If it comes up tails, two people named Fred and Bob get created with blue shirts. But here’s the kicker: Fred and Bob only have blue shirts because the other exists. If Fred were the only person to exist, he’d have a red shirt—likewise, if Bob were the only person to exist, he’d have a red shirt.
Suppose you’re created with a red shirt but you don’t know your name. It seems rational to have a credence of 1 in the coin having come up heads—after all, it coming up heads is the only scenario where anyone has a red shirt. But this formula so far would imply that you should think tails is twice as likely as heads: if Fred was the only person to exist on tails, then you’d regard being Fred and John as equally likely, and if Bob was the only person to exist on tails then you’d regard being Bob and John as equally likely. Thus, if you’re supposed to treat the relative probability of you being two people as being what it would be if there were no other people, you should regard tails as twice as likely as heads, which is obviously wrong. To get rid of these scenarios we should adopt:
The right definition of SIA: Holding all **other variant features** constant, the rational relative credence in your being one of two time slices rather than the other doesn’t depend on the presence of other time slices.
I’ve bolded other variant features to denote that this is a term of art. An other variant feature is a feature a person has that depends on the presence of other people. In the scenario described above, because Bob’s and Fred’s shirt colors each depend on the other’s existence, when comparing the odds that you’re John with the odds that you’re Fred, we keep Fred’s shirt color blue, as it actually is, rather than letting it change to red when we imagine away the other people.
How this shows it’s right
With this definition in place, I think we can see why SIA is right. If there are two people, John and Fred, why should your relative credence in being each of them depend on the presence of other people? It seems the default view should be that the relevant probabilities aren’t affected.
There are, of course, cases where learning about a third possibility will affect the relative probability of two possibilities. For instance, suppose a coin is flipped. If it comes up heads, I get put in California, while if it comes up tails, I get put in Georgia. If I then learn that, should it come up tails, I might instead get put in Arizona, it becomes likelier that I’ll end up in California than in Georgia.
But crucially, in this case, the added possibility affects whether one of the other possibilities will happen. It’s now less likely I’ll be put in Georgia—now it only happens half the time if it comes up tails. But this is crucially not analogous to cases involving SIA—whether you’re John or Fred isn’t affected by the existence of some other person. Another person being added can’t change whether you’re John or Fred. If there are two possibilities, A and B, the mere addition of other possibilities shouldn’t affect the relative probability of A and B, so long as the new possibility doesn’t affect whether A or B happens and there’s no weird counterfactual dependence. SIA is the default view for the same reason that the default view is that the presence of a shark doesn’t affect the relative probability of you being two people: there’s no reason it would. If the presence of a shark doesn’t affect the relative probability that you are John or Fred, why should the presence of another person affect it?
This becomes especially clear when we consider a concrete case. Suppose that one person is created. Then a coin is flipped. If it comes up tails, in a million years an exact clone of them gets created. If you get created from this, not knowing whether you’re the original or the clone, what should be your credence in heads vs tails?
SIA reasons as follows, using = to mean the options are equally likely, H1 to mean the coin comes up heads and you’re the one person who exists, T1 to mean the coin comes up tails and you’re the original person, and T2 to mean the coin comes up tails and you’re the clone:
H1=T1=T2.
Thus, tails is twice as likely as heads. Every non-SIA view rejects this.
But the only bit of it that is controversial is H1=T1. This means that the non-SIAer should think that if they’re the first person, probably the coin will come up heads (for the non-SIAer thinks H1>T1, meaning it’s likelier that you’re the first person and the coin will come up heads than that you’re the first person and the coin will come up tails). But surely this is a crazy belief.
It gets even crazier. Imagine that instead of there being just a single clone if the coin comes up tails, there are infinitely many clones. SIA reasons:
H1 = T1 = T2 = T3 = … = T(infinity). Thus, H1 is infinitely unlikely.
However, the non-SIAer ends up concluding that H1 has probability .5, while T1 has probability 0—it’s one of infinitely many equally probable options. This means that they should think that if they are the first person, it’s infinitely likelier that the future fair unflipped coin will come up heads than that it will come up tails. This is nuts—surely you shouldn’t think that if you’re going to flip a coin later, and you’re the first guy in a potentially long series, there’s a 100% chance it will come up heads just because if it comes up tails a bunch of people get created. The odds the coin will come up heads shouldn’t be affected by the presence of the other people.
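Spelled out, this is just Bayes’ theorem with the halfer’s assignments plugged in (a sketch treating the probability of being the first person given tails as 0, the limit of one out of infinitely many):

$$
P(\text{heads} \mid \text{first}) = \frac{P(\text{first} \mid \text{heads})\,P(\text{heads})}{P(\text{first} \mid \text{heads})\,P(\text{heads}) + P(\text{first} \mid \text{tails})\,P(\text{tails})} = \frac{1 \cdot \tfrac{1}{2}}{1 \cdot \tfrac{1}{2} + 0 \cdot \tfrac{1}{2}} = 1.
$$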
It gets even worse.
Suppose that you know that if X, then Y will happen with probability R. Then you merely learn X (assume that the probability of your learning X is unaffected by whether Y will happen—that is to say, the probability that you’d learn X is no greater if Y will happen than if it won’t). You should expect Y to happen with probability R. This means that if you learn you’re the first person, on non-SIA views, you should be infinitely certain that the fair coin that hasn’t been flipped yet will come up heads (I elaborate much more on this and other points here).
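Stated a bit more formally (the notation $L_X$ for “you learn X” is mine, and I’m using the equality version of the parenthetical assumption): if learning is factive, so that $L_X$ entails $X$, and learning X is equally likely whether or not Y will happen, then

$$
P(Y \mid L_X) = P(Y \mid X) \cdot \frac{P(L_X \mid X \wedge Y)}{P(L_X \mid X)} = R \cdot 1 = R.
$$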
Here’s another way to see that SIA is the default view. Suppose that a coin is flipped. If it comes up heads, Jack gets created. If it comes up tails, Jack and Tim get created. After you get created, not knowing whether you’re Jack or Tim, what odds should you give to the coin having come up tails? SIA reasons the probabilities are:
Heads and I’m Jack = Tails and I’m Jack = Tails and I’m Tim.
Non-SIA views would generally deny that heads and I’m Jack = tails and I’m Jack. But this is hard to believe. If I know that I’m Jack, and heads and tails will both result in me being created, why should I think either is likelier than the other? I should think it’s as likely that the coin came up heads and I’m Jack as the coin came up tails and I’m Jack, because both cases entail that I’m Jack, and if I’m Jack, they each have equal probability.
Note, this argument generalizes. If we grant that it’s as likely you’re Tim as Jack if the coin comes up tails, then if a coin is flipped that creates Jack if heads and Tim and Bob if tails, tails will be twice as likely as heads. Additionally, so long as we make Tim a clone of Jack, every single non-SIA view in the literature holds that the odds I’m Jack and the coin came up tails don’t equal the odds I’m Jack and the coin came up heads, even though both predict Jack existing.
Here’s one way a person could reply to the argument. Perhaps there is a principled justification for thinking that the odds you are some person depend on the presence of other people. The right way to assign credences, on this reply, is to divvy up your credence according to the probability, if the theory is true, of your being the various people whose experiences are consistent with your evidence.
Thus, in the case involving clones that I discussed before—where one person is created, then a coin is flipped, and if it comes up tails they’re cloned in a million years—one would divvy up credences in the following way.
Both heads and tails predict someone with their evidence, so they split their credence across the people they might be given that evidence. They end up with a credence of 1/2 in being the first person and the coin coming up heads, 1/4 in being the first person and the coin coming up tails, and 1/4 in being the second person and the coin coming up tails. This view, however, has a myriad of problems:
It violates the intuitive notion that the relative odds you’re the first person and the coin will come up heads vs the odds you’re the first person and it will come up tails doesn’t depend on the presence of clones in a million years.
It potentially violates the principle that if you know that if X then Y will happen with probability R, and then you simply learn X, you should believe Y with probability R. And if it doesn’t violate that principle, then, because it implies that the odds you’re the first person and the coin will come up heads exceed the odds you’re the first person and the coin will come up tails, it implies that upon learning you’re the first person, you should think with probability 2/3 that a fair, unflipped coin will come up heads. Worse, by changing the numbers, it can be rational, after learning you’re the first person, to be arbitrarily certain that the fair coin will come up heads—and by changing the scenario, you can be made arbitrarily certain that you’ll get 100 royal flushes, because if you don’t, you’ll be cloned a bunch of times.
This view ends up concluding that you should simply reason as if you’re randomly selected from among the presently existing people that you might be. But on such a picture, the number of people you might be is irrelevant. Thus, the view violates Bayesian conditionalization in the following case (meaning that, according to the view, learning something that’s likelier if a theory is true than if it’s false doesn’t raise the probability of the theory). Suppose that a coin is flipped. If it comes up heads, one person gets created with a blue shirt and nine with red shirts. If it comes up tails, ten people get created with blue shirts. Each person spends ten minutes in darkness before seeing their shirt color. The view implies that while in darkness, you should regard the two hypotheses as equally likely, but that after seeing you have a blue shirt—though your having a blue shirt is ten times likelier if the coin comes up tails than if it comes up heads—you should remain indifferent between the two hypotheses. Both predict at least one person that you might be, so you’re indifferent between the two (a quick calculation below makes this concrete).
Imagine that there were 10 billion clones of you. They were all put to sleep. A coin was flipped. If it came up tails, they all woke up, while if it came up heads, only ten woke up. Upon waking up, it seems you should think that tails is a billion times likelier than heads. Yet in this case one could similarly argue: both theories predict someone waking up that you might be, so you should remain indifferent between heads and tails. Whatever explains the person’s error in this case also explains the SSAer’s error.
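For contrast, here is what ordinary Bayesian conditionalization says in the darkness-then-shirt case above, under the assumption (mine, for illustration) that you’re a random one of the ten created people:

```python
import random

def shirt_posterior(trials: int = 1_000_000) -> float:
    """Estimate P(heads | you see a blue shirt) in the darkness case."""
    heads_and_blue = 0
    blue = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        # Heads: one of the ten people is blue-shirted, nine are red-shirted.
        # Tails: all ten people are blue-shirted.
        your_shirt_is_blue = (random.randrange(10) == 0) if heads else True
        if your_shirt_is_blue:
            blue += 1
            heads_and_blue += heads
    return heads_and_blue / blue

print(shirt_posterior())  # ≈ 1/11: seeing blue makes tails about ten times likelier, not a matter for indifference
```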
All of this I consider to be something of a sideshow. Only SIA can maintain the extremely obvious judgment that the relative probability of you being one of two people doesn’t depend on who will be created 10 trillion years after the two people are created. Violating it inevitably results in one thinking with more than probability .5 that a fair coin will come up heads. That’s pretty nuts!
> Presumptuous philosopher
Your defence here is isomorphic to how people usually defend the electoral college. You simply explain how SIA arrives at its conclusion. But just as explaining the way the electoral college works doesn't make the electoral college less unfair, explaining how SIA reasons in this situation doesn't make its reasoning in this situation less crazy.
> But the only bit of it that is controversial is H1=T1.
No. The other controversy is whether T1 and T2 are different outcomes of the experiment or the same one.
In general there are three different types of "anthropic probability problems".
1. A coin is tossed. On Heads n people are randomly selected from some set of possible people and created. On Tails N people are randomly selected from the set of possible people and created. You were created among such people.
Here SIA reasoning is correct. Your existence was not guaranteed by the conditions of the experiment, and so learning that you are created gives you actual evidence about the state of the coin.
2. A coin is tossed. On Heads a person is put into Room 1. On Tails a clone of this person is created and then either the original is put in Room 1 and clone in Room 2 or vice versa. You are in a Room and you are unsure whether you are the clone or the original.
Here SSA reasoning is correct. Your existence is guaranteed by the conditions of the experiment, so you do not learn anything from it. However, the room assignment on Tails is random, so learning that you are in Room 1 gives you actual evidence about the state of the coin.
3. A person is put into Room 1. A coin is tossed. On Tails a clone of the person is put into Room 2. You are in one of the Rooms, unsure whether you are a clone or the original.
Here both SIA and SSA are wrong. There is neither a chance of not existing in the experiment, nor a chance of being in a different room than the one you are in. So if you learn that you are in Room 1, you do not update on the coin, similarly to Double Halfing in Sleeping Beauty.
The fact that mainstream anthropic theories try to reason the same way in all of these completely different scenarios inevitably makes them crazy in the general case.
This is a coherent explanation of your view. I find this view to be rather crazy (probability, on the margin, is _about_ counting slices!), but it's coherent.