My Master Argument For The Self-Indication Assumption Is Even Better Than I Thought
Non-SIA views are finished!
Introduction
I’ve given my master argument for the self-indication assumption (SIA) in various places: half a dozen articles, a published paper (you can call me Dr. Bentham’s Bulldog), and a deeply hilarious article explaining my published paper, loved by people the world over. As far as I can tell, the response from most critics of SIA has been to ignore the argument—while there’s quite a substantial wall of evidence supporting SIA, it’s been almost entirely ignored.
Well, I’m now convinced that the argument is quite a bit better than I even dreamed of it being. There were points I missed that make the road even bumpier for anyone interested in rejecting the self-indication assumption.
Oh but I’m getting ahead of myself. I haven’t even explained what the self-indication assumption is. In short, it’s the idea that your existence gives you evidence that there are more people that you might currently be. For instance, suppose a coin is flipped that will create one person with a red shirt if it comes up heads and two people with red shirts if it comes up tails. If I have a red shirt, I should think tails is twice as likely as heads, for it predicts twice as many people that I might currently be.
In slogan form, from the fact that you exist, you get evidence that there are more people that you might be. Specifically, a theory that says there are N times as many people I might currently be as another theory makes it N times likelier that I’d come to exist.
So now we come to the master argument. Suppose that a person comes to exist. Then, at the end of his life, a coin is flipped. If it comes up tails, 999 extra people get created. Assume that the people have similar sorts of experiences, so the people don’t generally know if they’re the first person or a later person.
Suppose that I get created from this process. On SIA, I should think that tails is 1,000 times as likely as heads because it predicts 1,000 times as many people that I might currently be. Then suppose that I learn that I’m the first person. Well, the odds I’d be the first person if the coin came up tails is 1 in 1,000 (I’m equally likely to be any of the 1,000 people), while it’s 100% if it came up heads. So, taking that into account, I should now think that it’s equally likely that the coin that has yet to be flipped (I now know I’m the first person, so I know the coin hasn’t been flipped yet) will come up tails and that it will come up heads.
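To make the bookkeeping explicit, here’s a minimal sketch in Python of the SIA calculation just described (the variable names and the odds-times-likelihood framing are mine, just for illustration): SIA’s 1,000-to-1 prior in favor of tails is exactly cancelled by the 1-in-1,000 chance of being the first person given tails.

```python
# SIA's bookkeeping for the 999-extra-people case (illustrative sketch).
N = 1000  # total number of people if the coin comes up tails

# SIA's prior odds: tails predicts N people I might currently be, heads predicts 1.
prior_tails, prior_heads = N, 1

# Likelihood of the evidence "I'm the first person" on each hypothesis.
p_first_given_tails = 1 / N   # equally likely to be any of the N people
p_first_given_heads = 1       # only one person exists on heads

posterior_tails = prior_tails * p_first_given_tails
posterior_heads = prior_heads * p_first_given_heads
print(posterior_heads, posterior_tails)  # ~1 vs ~1: even odds on the unflipped coin
```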
(Note, while this scenario is a pretty standard scenario for comparing SIA to alternatives, and is generally regarded as a bad result for SIA, all the cases where SIA diverges from alternatives can be remade similarly. You can’t, for instance, agree with SIA’s priors here but diverge in other cases, for the same considerations will be present in other cases).
However, alternatives to SIA deny that tails is 1,000 times likelier than heads prior to finding out that I’m the first person. Generally, they’ll conclude that tails and heads are equally likely, for a theory predicting more people doesn’t raise its likelihood. But if tails and heads are equally likely before I learn I’m the first person, and my being the first person is 1,000 times likelier if the coin comes up heads than tails, then if I learn that I’m the first person, I should subsequently conclude that heads is 1,000 times likelier than tails. But this means that I should be very confident—at 1,000 to 1 odds—that a coin that hasn’t been flipped yet will come up heads, based purely on the fact that a bunch of people will be created in rooms if it comes up tails. By making the numbers different and subtly changing the scenario, you could make it so that I should be arbitrarily certain that I’ll get 1 billion consecutive royal flushes, just by making it so that I discover my birth rank and that a bunch of people will be created if I don’t.
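For contrast, here’s the same sketch with the non-SIA starting point of equal priors (again, just an illustrative calculation, not anyone’s official formalism); this is where the 1,000-to-1 confidence in heads comes from.

```python
# Non-SIA bookkeeping for the same case: a person-count-insensitive 50/50 prior.
N = 1000

prior_tails, prior_heads = 1, 1     # heads and tails start out equally likely
p_first_given_tails = 1 / N
p_first_given_heads = 1

posterior_odds_heads_to_tails = (prior_heads * p_first_given_heads) / (
    prior_tails * p_first_given_tails
)
print(posterior_odds_heads_to_tails)  # ~1000: the unflipped coin is judged 1,000 to 1 to land heads
```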
(Note, this doesn’t just apply in super weird, far-off thought experiments. If, in the real world, you didn’t know for a while that you were roughly the 110 billionth human, and then learned at some point that you were the 110 billionth human, then you should be confident humanity will die out soon, because otherwise it would be unlikely you’d be so early. If, for instance, humanity’s fate were to be decided by a die roll, you should think it’s very likely the die roll will land on one of the results that doom us).
If you reject the argument, there are three ways to go. I’ve now come to think that all of these are far less viable than I used to think.
The broadly SSAish route
The first is biting the bullet. Perhaps you really should be certain that an unflipped coin will come up heads based on the fact that if it doesn’t, infinite people will be created. This has three major problems.
The first problem is that it’s just intuitively crazy. If you learn that you’re the 110 billionth human, and then discover that there will be infinitely more people unless you get 100,000 consecutive royal flushes in poker, you should not expect to get 100,000 consecutive royal flushes. Vividly imagine the scenario—is it really plausible that, as the dealer deals, you should think, “I’m definitely going to get 100,000 royal flushes”? While the scenario sounds a bit weird, it’s one that isn’t too hard to picture—and the non-SIA view is just crazy.
Second, this conflicts with a very plausible principle, though to explain the principle, I’ll need to define some terms. The objective probability of an event is the share of the time it would happen if you repeated it—for instance, a fair coin has an objective probability of 50% of coming up heads, because if you kept flipping it forever, the fraction of heads flips would approach one half.
The principle says that if there’s an event that hasn’t happened yet with objective probability of N%, where the world up until this point is the same whether or not the event will happen, my credence in the event should be N%. For example, suppose that a coin will be flipped, and if it comes up heads, a city will be blown up. I should think there’s a 50% chance it will come up heads—the things it will make different about the world all lie in the future, and I haven’t observed them yet, so there’s nothing to move me from the default, which is thinking its probability equals its objective probability.
If you don’t have any contrary evidence, you should think that the odds of an event equals its objective probability. If you see a fair coin and have no extra evidence that the coin will come up either tails or heads, you should think there’s a 50% chance it will come up heads. But if—as is true in the cases described by the principle—the world has been the same up until this point however the event will turn out, there’s nothing to move you from your default credence in the event that’s equal to its objective probability.
This principle, of course, rules out the bullet-biting option. Because the world will look the same before the coin flip if the coin will come up heads as if the coin will come up tails, before the coin flip you should think heads and tails are equally likely if you follow the principle.
Of course, you can always reject the principle. But as I’ve described, there’s a powerful motivation for the principle, and literally no counterexample to it other than this one. So giving it up is a pretty big cost to salvage an already untenable view.
Third, the view has another very nasty implication. Suppose that one person will be created, then a coin will be flipped, and then three more people will be created if it comes up tails. On SIA, after learning I’m the first person, I should think heads and tails are equally likely. On alternatives, I should think that heads is four times likelier than tails. Suppose that, for instance, the way tails will cause three extra people to be created is that it will turn on a machine that will generate three more people.
But now suppose that I destroy the machine so that it doesn’t create three people. Assume I do this before the coin has been flipped but after I learned I’m the first person. Now, on such a picture, I should go back to thinking heads and tails are equally likely (now heads and tails both mean that only one person will be created).
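To see the oddity numerically, here’s a sketch of the non-SIA arithmetic before and after the machine is destroyed (same equal-prior assumption as above):

```python
# Non-SIA odds of heads vs. tails, with and without the person-creating machine.
def odds_heads_to_tails(people_if_tails: int) -> float:
    prior_heads, prior_tails = 1, 1            # equal priors on the fair coin
    p_first_given_heads = 1                    # one person total on heads
    p_first_given_tails = 1 / people_if_tails  # equally likely to be any of them
    return (prior_heads * p_first_given_heads) / (prior_tails * p_first_given_tails)

print(odds_heads_to_tails(4))  # 4.0: heads judged four times likelier with the machine intact
print(odds_heads_to_tails(1))  # 1.0: back to even odds once the machine is destroyed
```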
But surely breaking a machine that will create three people if a coin comes up tails shouldn’t cause you to think that it’s more likely that the coin will come up heads. That follows from the following very plausible principle:
If A is true if and only if B is true, and you know this with certainty, then you should think A and B are equally likely to be true.
For example, if I know that water and H2O are the same thing, I should think it’s as likely that there’s water in the fridge as it is that there’s H2O in the fridge. But given that the coin will come up heads before you destroy the machine if and only if it will come up heads after you destroy the machine, destroying the machine shouldn’t change your credence.
Destroying the machine doesn’t make it any more likely that the coin will come up heads, so it’s hard to see how it could change my credences.
Slightly technical bit that I’ll put in block quotes—feel free to skip it:
Of course, there are cases where there is some change such that B will happen after the change if and only if A would have happened before the change, and yet after learning about the change you should think B is likelier to happen than A was before the change. For example, if a coin is flipped, and I’ll die if it comes up heads, then after the coin is flipped, I should think that there’s a 100% chance that it came up tails. However, before flipping the coin, I should have thought that the odds of the proposition “the coin will come up tails” being true were 50%, even though the first was true if and only if the second was true. So maybe the non-SIAer could say something similar here—that destroying the machine gives you evidence that the coin will come up heads, and so after destroying the machine, you should think heads is more likely.
But these are cases where the change produces an observation that’s more likely on one hypothesis than another. The reason that I should think it’s more likely that the coin has come up tails after it is flipped is that if it weren’t, I’d be guaranteed to be dead by now. More precisely:
If some change occurs such that, after the change, H* will happen if and only if H would have happened pre-change, your credence in H* should only be higher than your credence was in H if the change itself is likelier to occur if H* is true than if H* is false.
But this rules out destroying the machine changing your credence, for worlds where the coin will come up heads are no likelier to have you destroy the machine than worlds where the coin will come up tails. Given that there’s no backwards dependence—you’re not, for instance, having your machine-destroying decisions determined by an oracle that’s more likely to recommend destroying the machine if the coin will come up tails—the odds of you destroying the machine can’t be impacted by the future coinflip.
For these reasons, I think that the first option of just biting the bullet is obviously out. It is wildly counterintuitive and conflicts with two different plausible principles. There are only two other logically possible ways of avoiding the argument.
What everyone will have to reject
Before we get to it, let’s see in a bit more detail how the argument works. I’m going to use a case involving infinities, because it makes the math easier, but if you don’t like infinities, just pretend I swapped them out for a big number.
Suppose that a single person gets created. A fair coin is then flipped. If it comes up tails, infinite other people get created. Suppose that you learn that you’re the first person. The following three intuitively plausible claims conflict:
1. After learning you’re the first person, you should think that there’s above a 0% chance that the coin will come up tails.
2. Before learning you’re the first person, you should think it’s equally likely that the coin comes up heads and tails (I use the phrase ‘comes up’ to make it clear that you don’t know whether it has been flipped already or will be flipped in the future).
3. After learning you’re the first person, you should think that it’s infinitely likelier that the coin comes up heads than before learning that you’re the first person (because if the coin comes up heads, there’s a 100% chance that you’re the first person—on account of there only being one total person—while the odds are 1/infinity of you being the first person if it comes up tails, on account of you being equally likely to be any of the infinite people; if there are two theories A and B, and you observe something infinitely likelier if A is true than if B is true, then you should think A becomes infinitely relatively likelier).
Technical bit, in block quotes, that you can skip:
These contradict for the following reason.
By 2, heads and tails are equally likely before you know your birth rank. Then, after learning you’re the first person, by 3, you should think heads is infinitely likelier. This means that after knowing you’re the first person, you should be infinitely certain that the coin will come up heads, which contradicts 1.
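Taking up the earlier suggestion of swapping the infinity for a big number, here’s a sketch of the collision between the three claims, with a huge N standing in for infinity (the numbers are purely illustrative):

```python
# Claims 1-3 with a huge N standing in for infinity (illustrative sketch).
N = 10**100  # "infinitely many" extra people if the coin comes up tails

prior_heads, prior_tails = 1, 1   # claim 2: 50/50 before learning your birth rank
p_first_given_heads = 1
p_first_given_tails = 1 / N       # the likelihood ratio behind claim 3

posterior_tails = (prior_tails * p_first_given_tails) / (
    prior_tails * p_first_given_tails + prior_heads * p_first_given_heads
)
print(posterior_tails)  # ~1e-100: effectively a 0% chance of tails, contradicting claim 1
```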
SIA rejects 2. The bullet-biting response, which is embraced by SSA, for instance, rejects 1. This still leaves open the possibility of rejecting 3. 3 can be decomposed into two different principles:
3A: The odds I’m the first person if the coin comes up heads is 100%, while the odds I’m the first person if the coin comes up tails is either 0 or infinitesimal (infinitesimals are numbers smaller than any specific positive number, like 1/4 or 1/8, but bigger than 0—people often say that infinitesimals are the odds of hitting any particular point coordinate if you throw a truly random dart).
3B: If my being the first person has a 0 or infinitesimal probability if the coin comes up tails and a 100% probability if the coin comes up heads, after learning that I’m the first person, I should think heads becomes infinitely more likely.
Any view that rejects 3 will have to give up on one of those. But they seem extremely plausible! Let’s begin with 3A.
The UDASSAish route
If there are infinite people, each of whom I might be, then it seems there’s a 0% or infinitesimal chance I’m the first person. This follows from the notion that you should think you’re equally likely to be any of the people whose experiences are consistent with your evidence—if, for instance, 5 people are created in rooms, and this is all I know, I should think I’m equally likely to be any of the five people.
Now, maybe you think that principle breaks down in cases involving infinity. But the same point can be made with finite numbers. If a person is created, then a fair coin is flipped, and googolplex more people will be created if the coin comes up tails, then on such a view, after learning that I’m the first person, I should think that the unflipped coin is ~googolplex times more likely to come up heads than tails (unless, as SIA holds, I started out thinking tails was ~googolplex times likelier than heads).
You might reject 3A by thinking that the odds that you’re various people depends, for instance, on how simply you can describe a person (UDASSA is a view like that). I think such views are very implausible, as they require rejecting the idea that if there are N people with experiences the same as yours, you are equally likely to be any of them. But crucially, even if we accept them, we’re not out of the woods, as they imply a very similar argument.
Remember, the structure of the argument so far has been that in the coinflip case—where one person gets created, then a coin is flipped, and infinitely more get created if it comes up tails—**you being the first person is infinitely likelier conditional on heads than tails**. This means that after learning you’re the first person, unless you have SIA’s priors—that tails starts out infinitely likelier than heads—you should become absolutely certain that the unflipped coin will come up heads. This view rejects the bit in bold.
But crucially, we can say something very similar even if we give up the bold sentence. Surely the odds of you being the first person, if there are infinite people, is very low. Let’s be very generous and say it’s 1 in 10,000. But this means that by the same logic, after learning that you’re the first person (which is 10,000 times likelier on heads than tails), you should think that the odds that the future unflipped coin will come up heads is 10,000 times the odds it will come up tails.
Now, you could get around this to some degree by giving up both 3A and 2 (remember, 2 said “Before learning you’re the first person, you should think it’s equally likely that the coin comes up heads and tails.”) Perhaps before learning that you’re the first person, you should think that tails is 10,000 times likelier than heads, and so after learning that you’re the first person, you conclude that heads and tails are equally likely.
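Here’s the arithmetic behind both the problem and the attempted repair, as a small sketch using the generous 1-in-10,000 figure: only a prior that favors tails by exactly the inverse of that figure lands you back at even odds.

```python
from fractions import Fraction

# Posterior odds (heads : tails) = prior odds (heads : tails) * likelihood ratio.
p_first_given_tails = Fraction(1, 10_000)                   # the "generous" figure
likelihood_ratio_heads_to_tails = 1 / p_first_given_tails   # being first is 10,000x likelier on heads

for prior_odds_tails_to_heads in (1, 100, 10_000):
    posterior_odds_heads_to_tails = likelihood_ratio_heads_to_tails / prior_odds_tails_to_heads
    print(prior_odds_tails_to_heads, posterior_odds_heads_to_tails)
# Output: 1 10000, 100 100, 10000 1 -- only a 10,000:1 prior in favor of tails
# yields even posterior odds after learning that you're the first person.
```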
Such a view is, however, problematic. I have seven main worries:
1. It requires giving up on the very plausible principle that if there are N people with your experiences, you’re equally likely to be any of them (pun intended).
2. It’s very hard to see how such a view could secure that the odds of a coin coming up heads, before flipping it, are 50%. While it’s possible, I’m doubtful that there is a plausible formula you could construct that would guarantee that in all cases, before flipping a coin, there’s a 50% chance that it would come up heads (or even, more concerningly, something near a 50% chance rather than, say, a 99% chance).
3. On such a view, the odds you’re the first person in a group of N people must be the inverse of the number of times likelier the theory that there are N people is than the theory that there’s one person (see the sketch just after this list). For instance, if there’s a 1 in 100 chance that you’re the first person in a group of 1,000 people, then before learning your birth rank, the theory that there are 1,000 people must be 100 times likelier than the theory that there’s only a single person (explanation of why this is so in a footnote).[1] SIA has a natural and elegant way of securing this, by saying that you’re equally likely to be any of the existing people consistent with your evidence, while a theory that predicts there are X times more people with your evidence has a prior X times higher. Now, aside from the previous worry, that it’s very doubtful that a viable model can be constructed where this is the case, such a view will have very similar defects to SIA. For instance, given that the odds of you being the first person in a party of infinity are very low, such a view will imply that prior to learning that you’re the first person, you should think that the odds there are infinite people are very high. But this is exactly the counterintuitive implication of SIA.
4. Such a view is just very strange and unnatural. To see this, imagine that a coin is flipped that will create two people if it comes up tails and one person if it comes up heads. After being created, what should your credence be in heads? It seems the natural answers are 1/2 and 1/3, but this view answers something different—somewhere in between. That’s very strange.
5. Every specific formulation of a view like this—e.g. UDASSA—seems very implausible.
6. Necessarily, every such view will have to imply that if there are infinite people, then for every positive number, there is someone such that your credence that you are them is less than that number. For instance, if there are infinite people, for some of them you’ll have to think there’s less than a 1 in googolplex chance of your being them, for otherwise your total credence in being one of the people in the infinite group would be infinite (infinity times 1/googolplex is still infinity). Suppose that a coin is flipped. If it comes up tails, some fellow named Bob is put in room one. If it comes up heads, Bob is put in a random room—not singled out and put in room one. This view implies that you should think it’s much likelier that you are Bob if the coin came up tails than heads, even though whether you’re Bob is not affected by whether the coin came up tails. This contradicts the principle described earlier that if A is true if and only if B, you should think A and B are equally likely. Just changing which people are in which rooms shouldn’t change the probability that you’re one of the people.
7. The main reason to adopt a view like this is so that one can have a probability distribution across events that adds up to 1. If there are some events that are mutually exclusive and jointly exhaustive—that means that at most one of them can happen, but one of them has to happen—it seems that your credences in them should add to one. For example, if a coin is flipped, heads has a probability of .5 and tails has a probability of .5—they’re mutually exclusive, jointly exhaustive, and they add up to 1. But if you add up a bunch of zero or infinitesimal probability events, it’s not clear how you get them adding up to one—so it’s claimed that you shouldn’t have your credence in all the events be zero or infinitesimal. But I think this motivation is undermined by the fact that there are other cases where you’ll have to have credences like this. If there are infinite identical worlds, none of which stand in any spatiotemporal relation to each other, nor have any distinguishing features, you should think there’s a zero or infinitesimal chance that you’re in each of them. This poses the same worries.
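Here’s the sketch promised in worry 3, spelling out the inverse relationship with the article’s own numbers (the function is just illustrative bookkeeping): to land at even odds on the unflipped coin, the prior boost for the many-person theory has to be exactly 1 over the probability of being the first person on that theory.

```python
from fractions import Fraction

# Worry 3: the prior boost for the many-person theory must exactly invert the
# probability of being the first person on that theory.
def posterior_odds_heads_to_tails(p_first_given_tails: Fraction,
                                  prior_odds_tails_to_heads: int) -> Fraction:
    likelihood_ratio_heads_to_tails = 1 / p_first_given_tails  # being first is certain on heads
    return likelihood_ratio_heads_to_tails / prior_odds_tails_to_heads

# A 1-in-100 chance of being first among 1,000 people: a 100:1 prior does the job.
print(posterior_odds_heads_to_tails(Fraction(1, 100), 100))        # 1: even odds
# The footnote's mismatch: a 1-in-100,000 chance of being first, but only a 10,000:1 prior.
print(posterior_odds_heads_to_tails(Fraction(1, 100_000), 10_000)) # 10: heads judged ten times likelier
```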
The CCish route
The only remaining option for the non-SIAer is to reject 3B, which, if you remember, says:
3B: If my being the first person has a 0 or infinitesimal probability if the coin comes up tails and a 100% probability if the coin comes up heads, after learning that I’m the first person, I should think heads becomes infinitely more likely.
On this picture, even though you being the first person is infinitely likelier if the coin comes up heads than tails—in the scenario where infinite people will be created after the first person if the coin comes up tails—after learning that you are the first person you shouldn’t regard that fact as making heads any likelier than it was before. One such view is called compartmentalized conditionalization (CC). On this view, after having some experience X, you should treat the relevant data as simply “experience X was had.” If, for instance, I see a red wall, I should regard theories as likelier proportional to the probability that if they were true, someone would have my past experiences combined with the experience of seeing a red wall.
On CC, after learning you’re the first person, you get no update in favor of heads, because both heads and tails predict that someone would have the experience of learning that they’re the first person at some point. Sure, heads makes it likelier that you’d learn that if you exist, but CC doesn’t think that sort of stuff matters. On CC, it doesn’t matter how often some experience is had—the only thing that matters is the probability that it would be had at all on the various theories.
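Here’s a stylized sketch of the difference, based only on the description of CC just given (this isn’t an official formulation of the view): CC scores a theory by whether anyone at all would have the experience, while the ordinary per-person approach scores it by how likely you were to have it.

```python
# Stylized contrast for the evidence "I learned that I'm the first person,"
# with N people existing if the coin comes up tails.
N = 10**6

# CC-style likelihoods: both hypotheses guarantee that *someone* has this experience.
cc_heads, cc_tails = 1, 1
print(cc_heads / cc_tails)  # 1.0: no update toward heads at all

# Per-person likelihoods ("I'm equally likely to be any of them"): heads makes the
# experience certain for me, tails makes it a 1-in-N shot.
per_person_heads, per_person_tails = 1, 1 / N
print(per_person_heads / per_person_tails)  # ~1,000,000: a huge update toward heads
```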
I think rejecting 3B is not at all viable for four main reasons.
1. CC is the only natural and reasonably intuitive version of a view that rejects 3B. So it’s a massive problem that CC is extremely false, having a bunch of extremely implausible results and implying skepticism (I wrote a paper arguing against CC that hopefully will be out at some point).
2. It requires just giving up on Bayesianism. If there are two hypotheses, and one of them makes some event more likely than the other, Bayes’ theorem says that event is evidence for the first hypothesis. You being the first person is more likely conditional on the coin coming up heads, so it should be evidence for heads! (And no, you don’t get out of this just by saying that Bayes’ theorem doesn’t quite work in the normal way with de se evidence—the way to modify Bayes’ theorem to make it applicable to de se evidence also applies to this case).
3. It violates a principle called the conservation of evidence. According to this principle, if there’s some evidence that you’ll be able to observe no matter how it turns out (it’s not the sort of thing that you’ll only observe if it turns out a certain way), you shouldn’t expect your credence to go up after observing the evidence. For instance, you shouldn’t expect, on average, to be more confident that the person committed the crime after seeing the coroner’s report—for if you expect in the future to get evidence that makes X more likely, then you should now think X is more likely, anticipating this evidence. But on the 3B-rejecting view, after learning your birth rank, your credence in tails can only go up—if you’re not the first person, then tails is more likely, and if you are, your credence doesn’t change. Those who violate conservation of evidence can be money pumped, for their credences predictably change.
4. Most concerningly, this doesn’t even really avoid the challenge. Let’s assume you’re equally likely to be any of the people whose experiences are consistent with your evidence (which we talked about earlier; for the reasons discussed there, the same basic argument can go through even if we drop this assumption). On such a view, then, after coming to exist, you should think there’s a 50% chance that you’re the first person and the coin comes up heads, and an infinitesimal probability, for each of the people who exist if the coin comes up tails, that you’re that person (infinitesimal probability you’re the first person, infinitesimal probability you’re the second, and so on).
But this means that it’s infinitely likelier that you’re the first person and the coin is to come up heads than that you’re the first person and the coin is to come up tails (the 50% share of probability space wherein there are infinite people is split between all the infinitely many people you might be, while the 50% share of probability space where there’s just one person isn’t split at all). But this means you should believe “if there’s a coin that will be flipped at the end of today, it’s infinitely likelier that it will come up heads than tails.” But—even more weirdly—once you learn that a coin will be flipped at the end of today, you start thinking heads and tails are equally likely.
A variant of this point has been cleverly made by Titelbaum (he made it about the Sleeping Beauty problem, but the same point applies to other cases like the ones I’m discussing). Suppose one person will be created and, at the end of that person’s day, a fair coin will be flipped. If it comes up tails, another person gets created. Then, at the end of that second person’s day, another fair coin will be flipped. This second fair coin doesn’t do anything—it’s flipped just for fun.
Those who reject SIA—including those who reject 3B—think that tails is not twice as likely as heads. This means that they think the probabilities are as follows:
1/2 chance I’m the first person and the first coin will come up heads.
1/4 chance I’m the first person and the first coin (the one that determines if there’s a second person) will come up tails.
1/8 chance I’m the second person and the first coin (the one that determines if there’s a second person) has come up tails, and the coin that will be flipped at the end of the day (the one that doesn’t do anything) will come up heads.
1/8 chance I’m the second person and the first coin (the one that determines if there’s a second person) has come up tails, and the coin that will be flipped at the end of the day will come up tails.
But if you notice, this means that even though I know with certainty that a fair coin will be flipped at the end of the day today, there’s a 5/8 chance that it will come up heads. Half of probability space is occupied by the share of worlds where the coin comes up heads and there’s only one person, and some of the other half of probability space has the coin that’s flipped at the end of today coming up heads, so overall you should think there’s above a 1/2 chance that the coin that will be flipped at the end of today will come up heads.
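Here’s the 5/8 figure worked out explicitly from the distribution listed above (just a sketch of the non-SIA assignment):

```python
# The non-SIA distribution from the list above, and the probability that the
# coin flipped at the end of *today* lands heads.
outcomes = [
    # (probability, does the end-of-today coin land heads?)
    (1 / 2, True),    # I'm first; the first coin will land heads
    (1 / 4, False),   # I'm first; the first coin will land tails
    (1 / 8, True),    # I'm second; the do-nothing coin will land heads
    (1 / 8, False),   # I'm second; the do-nothing coin will land tails
]

p_heads_today = sum(p for p, lands_heads in outcomes if lands_heads)
print(p_heads_today)  # 0.625 = 5/8, for a coin everyone knows is fair
```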
Thus, this view doesn’t avoid the problems. It implies both:
You should sometimes think that if a fair coin will be flipped at the end of today, it’s 100% certain that it will come up heads, because if it comes up tails a bunch of people will be created.
The fair coin you know will be flipped at the end of the day will have a more than 1/2 chance of coming up heads.
While the view was largely constructed to avoid such results, it is not successful in doing so!
Recap
Every view will either have to reject (A) that Bayes’ theorem applies to reasoning about the odds that you are different people, (B) that you shouldn’t think you’re infinitely likelier to exist if there are infinite people, (C) that you should think you’re equally likely to be any of the people that exist whose experiences are consistent with your evidence, or (D) that you shouldn’t be infinitely certain that unflipped coins will come up heads because if they come up tails infinite people will be created.
Compartmentalized conditionalization rejects A, SIA rejects B, UDASSA rejects C, and SSA rejects D. The problem for non-SIA views is that they’re committed to rejecting very obvious things, and all of them are committed to something very near the rejection of D—perhaps high certainty in unflipped coins rather than infinite certainty, but whatever they’re committed to is very implausible. The disastrously bad result of SSA—that you can be infinitely confident in the results of fair unflipped coins—has analogues that apply to all the non-SIA views. The master argument for SIA is decisive; not only is there no plausible candidate for a way out, there can’t be a plausible way out. All of the logically possible ways to reject the argument are very implausible. I’ve used this meme before, but as far as I can tell, it settles the debate.
[Meme text: “To really make this argument you need a narratological Bayesian framework” / “You posted a wojak meme about coins coming up heads half the time, yet you’re a thirder. Curious! (No I did not read any other part of this article)”]

[1] Suppose that, for instance, the theory that there are infinite people is 10,000 times likelier than the theory that there is just one person, and the odds that you are the first person are 1 in 100,000 conditional on there being infinite people. A person is created, then a fair coin is flipped, and if it comes up tails, infinitely many other people are created. Before learning you’re the first person, you think tails is 10,000 times likelier than heads. After learning you’re the first person, because that’s 100,000 times likelier if the coin comes up heads than tails, you should now think that heads is 10 times likelier than tails. So unless the prior boost for the many-person theory exactly inverts the odds of you being the first person on that theory, you don’t end up back at even odds on the unflipped coin.