A New Extremely Strong Argument For The Self-Indication Assumption
Anonymous sources tell me non-SIAers are COPING and SEETHING in response to this argument
1 Introduction
If you’re a longtime reader of this blog, you’ve probably heard me bang on at considerable length about why the self-indication assumption is the right view. The SIA, for those who have been living under a rock, says your existence gives you evidence for there being more people. More precisely, your existence confirms by a factor of X theories on which there are X times as many people that you might presently be. Even more precisely…
To give an example, suppose that a coin gets flipped. If it comes up heads, one person gets created. If it comes up tails, ten people get created. SIA says that if you get created from this process, you should think at ten to one odds that the coin came up tails. Because for all you know you might be any of the ten people created by the tails coin flip, your existence favors tails over heads by a factor of ten. Tails results in the creation of ten times as many people that you might presently be, so you should think it’s ten times as likely it resulted in the creation of you.
As is a monthly tradition, I have a new argument for the SIA. But before you say “this is boring,” and leave this article to begin reading a 78,294th thinkpiece about Sydney Sweeney, note that whether the SIA is right has huge implications for all sorts of issues, including whether the universe is infinite and whether God exists. This is one of the single issues with the most earth-shattering implications for one’s worldview, so it’s worth really thinking through.
Suppose that a coin is flipped that creates one person if heads and ten if tails. This coin creates you. However, after this coin is flipped, the same coin will be flipped over and over again. It will be flipped, let’s say, 1,000 times, and each time it will create one person if heads and ten people if tails. The question: what should your credence be in the coin that created you having come up tails?
Before we can answer this question, let’s answer a different one. Suppose that a coin is flipped a million times that creates one person if heads and ten if tails each time. You do not know your birth rank. What odds should you give to the coin having come up tails?
Here the answer is clear: ~10/11 (I’m too lazy to do the math exactly, but the exact answer is the proportion of observers, on average, across all the trials, who would get created by tails coin flips).
If you know that 10/11ths of people have property X, then unless you have some specific reason to think you’re special—a reason that couldn’t be given by everyone—you should think at 10/11ths odds that you have that property. If, for instance, you know 10 in 11 people have the gene theta, and you have no special evidence regarding whether you have gene theta, you should think the odds are 10/11 that you have gene theta. Thus, because 10/11ths of people are created by tails coin flips, you should think at 10/11ths odds that the coin came up tails.
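To make that frequency reasoning concrete, here’s a minimal simulation sketch (the function name and the 100,000-experiment count are just my illustrative choices, not anything the argument depends on): repeat the experiment a bunch of times, pool everyone who gets created, and check what fraction of them owe their existence to a tails flip.

```python
import random

def fraction_created_by_tails(num_experiments=100_000):
    """Repeat the experiment: each flip creates 1 person if heads, 10 if tails.
    Pool everyone created and return the fraction who came from tails flips."""
    tails_people = 0
    total_people = 0
    for _ in range(num_experiments):
        if random.random() < 0.5:   # tails
            tails_people += 10
            total_people += 10
        else:                        # heads
            total_people += 1
    return tails_people / total_people

print(fraction_created_by_tails())  # ~0.909, i.e. roughly 10/11
```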
Okay, now that that’s out of the way, let’s go back to the earlier case where you know you were created on the first coin flip and some coins will be flipped after you. I’m going to argue that in this case, your credence in the coin that was flipped and created you having come up tails should be 10/11ths, because it should be the same as your credence in the last case (where you don’t know your birth rank). Then I’m going to argue that your credence in this case—where a coin is flipped that creates one if heads and ten if tails, and then the experiment will be repeated later—should be the same as your credence in the scenario where a coin is flipped that creates one person if heads and ten if tails and there are no future or past repeats of the experiment. From these it follows that if a coin is flipped that creates one person if heads and ten if tails, and you get created as a result of the coin flip, you should think at 10:1 odds that the coin came up tails.
Let me try to simplify. Let’s name the various experiments.
Repeat flipping unknown rank: this is the one where the experiment is done a bunch of times, but you don’t know which rank you are. So, a coin is repeatedly flipped that creates one person if heads and ten if tails—it’s done, say, 100,000 times.
Repeat flipping first rank: this is the one where the experiment is done a bunch of times, but you know you were created on the first experiment. So, a coin is repeatedly flipped that creates one person if heads and ten if tails—it’s done 100,000 times—but you know that you were created in the first experiment. All the other 99,999 experiments will be done after you.
Single flipping: in this case, the experiment is done only a single time. A coin was flipped that created one person if heads and ten if tails. You got created from this.
Here is my argument in short.
1. Your credence in the coin having come up heads in Repeat flipping unknown rank should be the same as your credence in the coin having come up heads in Repeat flipping first rank.
2. Your credence in the coin having come up heads in Repeat flipping first rank should be the same as your credence in the coin having come up heads in Single flipping.
3. Your credence in the coin having come up heads in Repeat flipping unknown rank should be either 1/11 or approximately 1/11.
4. Therefore, your credence in the coin having come up heads in Single flipping is either 1/11 or approximately 1/11.
5. That is only consistent with SIA.
6. So SIA is true.
So…let’s see if the premises are true, shall we?
2 Repeat flipping unknown rank = Repeat flipping first rank
This premise says “Your credence in the coin having come up heads in Repeat flipping unknown rank should be the same as your credence in the coin having come up heads in Repeat flipping first rank.” So if you know that the experiment will be done a bunch of times, how likely you should think you were to have been created by a coin coming up tails doesn’t depend on whether you were in the first experiment. When I first thought of the argument, this was the premise that seemed shakiest to me, but now I think there are several distinct arguments for it that make it pretty unavoidable.
Here is a plausible principle: if
1. There are events that are completely causally isolated from each other.
2. There are agents with credences in some propositions as part of the events.
3. Changing when in time the events occur doesn’t affect the truth of any of the propositions.
4. Changing when in time the events occur isn’t any likelier if any of the propositions are true than if they are false.
then
Agents’ credences shouldn’t change as a result of the order of the events being rearranged.
This sounds like a pretty complicated and abstract principle but most of it was just tying up random loose ends—I think the essence of it is simple and straightforward. Suppose that some people are going to do an experiment where they flip a penny. Then, later, in some distant galaxy, some other people are going to do a different experiment where they flip a nickel. Changing around the order that the events happen—making it so that the nickel flipping occurs first—shouldn’t change anyone’s credences. How likely you should think it is that the nickel will come up heads shouldn’t depend on whether it was flipped before or after the penny.
But if you accept the principle then you can’t hold different credences across the two cases. The only difference between the two is in the order in which the two events occurred. In repeat flipping first rank your rank is first, while in repeat flipping unknown rank, you don’t know your birth rank—the cases are otherwise the same. But if changing the order doesn’t change what one’s credences should be then your credences have to be the same between the two cases.
A second argument: physics seems to indicate that there’s no objective simultaneity. When some event happens is relative to a reference frame. There’s often no precise fact of the matter about which of two events happened first. Thus, a view on which the order of these events affects credences will often be literally indeterminate in cases where there’s no fact of the matter concerning the order of events.
A third argument: it seems like in cases like this delaying when some event occurs shouldn’t change your credence. If an experiment like the coinflip one is done, merely changing when in time it happens shouldn’t make your credence different. However, denying this premise violates this principle, because it holds that delaying when the first coin-flipping experiment is done so that it’s done after the others changes what your credence should be.
A fourth argument: as Ken Olum notes, the argument for a ~10/11 credence doesn’t just apply if the event is iterated temporally. It also applies if the event is iterated spatially. For example, suppose that there are googol galaxies and in all of them the experiment is done (let’s say simultaneously). A coin is flipped in each of them, and if it comes up heads one person is created, while if it comes up tails ten people are created. Relatively inarguably, in this case, you should think at 10:1 odds that the coin came up tails, because across these galaxies, 10/11ths of people get created by tails coin flips.
But then the question: if you should count all the other events and take a proportion over them when they’re spread out spatially, why not temporally? Why are time and space treated differently? What is the principled probabilistic basis for this?
You can’t get around this by treating the spatial case the same way you treat the temporal one, because space doesn’t have a natural ordering the way time does. You can’t, for instance, say that the person in galaxy one should have credence of 1/2 in the coin coming up heads, because there’s no fact of the matter about which galaxy is galaxy one. How do we determine which galaxy gets ranked which way?
A fifth argument: just as a point of mathematics, people will be more accurate if they assume that those in galaxy one should have 10/11ths credence in tails if the experiment is iterated. Those who follow the right betting advice should be more accurate. Following the betting advice that treats these two symmetrically boosts accuracy. Thus, one should follow that betting advice.
(Quick explanation of why accuracy goes up: tails flips create ten people apiece, so when people bet on tails and are right, lots of people are right at once. When people bet against tails, more people end up losing their bets, and this damages the aggregate accuracy score more.)
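If you want to check the accuracy claim numerically, here’s a rough sketch using the Brier score as the accuracy measure (that choice, the function name, and the trial count are mine, not anything the argument depends on): it compares the population-averaged score when every created person answers 10/11 versus when everyone answers 1/2.

```python
import random

def mean_brier(credence_in_tails, num_experiments=100_000):
    """Average Brier score across every person created, if each person assigns
    `credence_in_tails` to having been created by a tails flip (lower is better)."""
    total_error = 0.0
    total_people = 0
    for _ in range(num_experiments):
        if random.random() < 0.5:                         # tails: 10 people, truth = 1
            total_error += 10 * (credence_in_tails - 1) ** 2
            total_people += 10
        else:                                              # heads: 1 person, truth = 0
            total_error += (credence_in_tails - 0) ** 2
            total_people += 1
    return total_error / total_people

print(mean_brier(10 / 11))  # ~0.083
print(mean_brier(1 / 2))    # 0.25
```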
Now, there’s one way that a person could get around the considerations I’ve given so far. They could say that what matters for your credence isn’t that you know your experiment number is early but that you know which experiment number you are. So, for instance, if you learned that you are experiment number 50,233, you should still have credence of 1/2 in heads! After all, you know there’s a 1/2 chance that the coin in experiment 50,233 comes up heads and a 1/2 chance it comes up tails. Once you know which experiment you are on, you no longer consider yourself equally likely to be produced by every experiment, so you no longer think that there’s a ~10/11 chance that the coin came up tails.
I think this view has a lot of problems.
First of all, the view egregiously violates reflection. Reflection says: if you know that you will later get evidence that will cause you to think R with probability P, then you should now think R with probability P. For example, if you know that you will later visit a museum that will have evidence that will rationally convince you that there’s a 99.9% chance that Genghis Khan was a real guy, then you should now think that there’s a 99.9% chance Khan was a real guy. After all, you know that there’s evidence out there that should convince you to have a credence of 99.9%, so it seems you should move your credence to 99.9% in anticipation of the evidence.
But now suppose you don’t know which experiment you’re on but you do know that you’ll be told in five minutes what experiment you’re on. This view implies that you should think at 10/11 odds that the coin came up tails, even though you know that in five minutes you’ll be back to 1/2 odds. This is crazy! If you know that there is some evidence (namely, the evidence concerning which experiment you’re on) out there that should make you have a credence of 1/2 in tails, then you should have a credence of 1/2 now. You shouldn’t predictably update your credences in this way! And if you plan to, then one can construct a money pump by selling you a bet that pays out if the coin came up tails, and then buying it back from you when, in five minutes, you value it at a lower price.
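Here’s a toy version of that money pump, with hypothetical prices I’ve picked to sit strictly between your before and after valuations of a bet that pays $1 if tails:

```python
def money_pump_loss(price_you_pay=0.90, price_you_sell_back=0.55):
    """Hypothetical prices chosen to sit between your two valuations of a bet
    that pays $1 if the coin came up tails: you value it at ~10/11 now, so you
    buy at 0.90; in five minutes you value it at 1/2, so you sell it back at 0.55.
    You end up holding nothing and are out the difference, whatever the coin did."""
    assert 1 / 2 < price_you_sell_back < price_you_pay < 10 / 11
    return price_you_sell_back - price_you_pay

print(money_pump_loss())  # about -0.35: a guaranteed loss
```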
A second problem: there’s no fact of the matter concerning how much you have to learn about an experiment to know, in some objective sense, what experiment it is. This view says that once you identify that it is e.g. the 500th experiment, then you can declare “well, I knew the 500th experiment had a 1/2 chance of the coin coming up heads, so now I think at 1/2 odds that the coin came up heads.” But what’s special about knowing that it’s the 500th experiment?
You always know that the experiment you were created in is the experiment that you were created in. So you can always say—let experiment N refer to whichever number experiment I’m on. For example, if I’m on experiment 500, then N=500. I know experiment N had a 1/2 chance of the coin coming up heads. Therefore, I think at 1/2 odds that it came up heads.
Similarly, is it enough to know the spatial order of the events? Do you have to know the temporal order? And what if they’re in worlds that are neither spatially nor temporally linked? The view becomes hopelessly defective and arbitrary.
A last problem: the view seems to break down in cases where you’re not sure which experiment you are in. For example, suppose that you think there’s a 99% chance that you’re in experiment one and a 1% chance you’re in experiment two. What credence should you have in the coin having come up heads? A natural answer is: 1/2, because whether you’re in experiment one or two, there’s a 1/2 chance it came up heads. But that same logic will generalize to the earlier case where there are 100,000 experiments but you don’t know which one produced you. Thus, this view either breaks down completely or implies that one should always have a credence of 1/2 in the coin having come up heads—even if they know that ten out of every eleven people created by the coin were produced by tails coin flips. I’ll talk later about more reasons that view isn’t tenable.
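Here’s that mixture calculation spelled out (the function name is mine): because each experiment’s coin is fair, the weighted average comes out to 1/2 no matter how your uncertainty over experiments is spread out, which is exactly how the view collapses into always answering 1/2.

```python
def credence_in_heads(prob_you_are_in_each_experiment):
    """The alternative view's reasoning: condition on which experiment you're in,
    note that each experiment's coin is fair, and take the weighted average."""
    return sum(p * 0.5 for p in prob_you_are_in_each_experiment)

print(credence_in_heads([0.99, 0.01]))             # 0.5
print(credence_in_heads([1 / 100_000] * 100_000))  # still 0.5 (up to rounding)
```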
But overall, I think this premise is on very firm ground. So…let’s see about the others.
3 Repeat flipping first rank = Single flipping
The next premise states your credence in the coin having come up heads in Repeat flipping first rank should be the same as your credence in the coin having come up heads in Single flipping. Single flipping is the version of the case where the experiment is just done once, while Repeat flipping first rank is the one where the coin is flipped a bunch of times, but you know you were produced by the first coin flip.
This is pretty straightforward: if a coin is flipped that creates one person if heads and ten if tails, to figure out the odds it came up heads, you don’t need to figure out whether they’ll perform the same experiment in the future. Which experiments are done in the future has absolutely no impact on the outcome of the present experiment. It thus shouldn’t affect your credences.
The alternative just seems totally ridiculous. Upon learning that the experiment was done, you should think at 50% odds that the coin came up tails. But then, after you learn that they will do the experiment again a bunch of times in a million years—something that cannot affect the experimental results—you should now think at 10:1 odds that it came up tails. Nuts! Causally unrelated events happening in millions of years shouldn’t change your credence!
The alternative view implies—counterintuitively—that if in the real world someone did the aforementioned experiment, and you wanted to figure out if you should have a credence in tails of 10/11 or 1/2, you’d have to know whether a stable universe would reappear after heat death. That’s pretty crazy!
4 Repeat flipping unknown rank credence = 1/11 or ~1/11
The next premise says your credence in the coin having come up heads in Repeat flipping unknown rank should be either 1/11 or approximately 1/11. In the scenario where the coin is flipped repeatedly, each time creating ten people if it comes up tails, and one if it comes up heads, if you’re created, then you should think the odds the coin that created you came up heads is ~1/11.
This premise is pretty straightforward. 10/11 of people, in the limit, get created by tails coinflips. If you know that 10/11 of people are created by tails coin flips, then you should think you were created by a tails coin flip at 10/11 odds. If you know X% of people have some property, then unless you have some special reason to think you’re atypical—a reason that doesn’t apply to other people—you should think you have the property at X% odds.
Or, put another way, if X% of people have some property, and they all are exposed to the same evidence, it shouldn’t be that they all should have a credence in them personally having the property that differs from X%. If people know that 90% of people have red shirts, and they can’t see their shirt color, they should think at 90% odds they have a red shirt. It really shouldn’t be that 90% of people have red shirts but it’s rational for everyone to think at 50% odds that they personally have a red shirt.
Here is another argument for thinking that tails is ~10 times likelier than heads in the repeat scenario. Let’s imagine making a series of changes to the scenario. I’ll argue that none of these changes should change your credence, and in the end, it will be really obvious that tails is ten times likelier than heads.
1. You learn the exact number of times the coin—across all experiments—came up heads. It will have come up heads roughly half the time, so you learn the exact number. This, presumably, shouldn’t change your credence. It shouldn’t be that after learning that the number of heads coin flips is something normal and expected, your credence fluctuates wildly. Let’s assume for simplicity that you learn that the coin comes up heads and tails an equal number of times.
2. The person planning the experiments switches the experiments where the coin will come up heads to come first. This just changes around the order. It doesn’t change around the outcome of any coin flip.
3. Instead of a series of coinflips determining the creation of people, they’re created by a machine. So instead of X people being created by heads coin flips, and then 10X being created by tails coin flips, X people are created and given the label heads, while 10X people are created and given the label tails. Once again, this just changes the mechanism by which people are created—so it shouldn’t change your credences.
But with these three changes, there is simply a machine that creates X people with the label “heads,” and 10X people with the label “tails.” In such a scenario, you should obviously think at 10:1 odds that you have the label tails. Same is true here.
5 If the others are true, then SIA is true
The previous premises have established that if a coin gets flipped that creates one person if heads and ten people if tails, upon being created from the flip, you should think the odds that the coin came up tails are 10/11. Your credence when the coin flip is repeated should be 10/11 or so, which should be the same as the scenario where you’re first and the coin flips are repeated, which should be the same as the scenario where it’s just done once. 10/11 credence vindicated!
So why, if this is right, is SIA right? I think this step is confusing to a lot of people, but it’s very straightforward.
The right way to reason about probabilities is through a theorem called Bayes’ theorem. Bayes’ theorem says:
1. Start with the prior of a theory—how likely it is before looking at the evidence. Treat this as an odds ratio. So, rather than writing 50%, write 1:1, meaning it’s as likely to be true as false. A 2:1 odds ratio means a theory is twice as likely to be true as false, while a 1:2 odds ratio means it’s half as likely to be true as false.
2. Then update on the evidence by multiplying by the evidence’s odds ratio. The evidence gets an odds ratio that corresponds to how many times likelier you are to get that evidence if the theory is true than if it’s false. If the evidence is, say, three times likelier if the theory is true than false, then you multiply by a 3:1 odds ratio. So if you start with a theory with 2/3 odds (a 2:1 odds ratio) and get evidence that’s three times as likely if the theory is true than false, then you should think the theory gets an odds ratio of 3x2:1x1. In other words, it gets an odds ratio of 6:1, and so its probability is 6/7.
So now let’s apply this to the aforementioned scenario. Obviously your prior in a fair coin coming up heads should be 1/2—or a 1:1 odds ratio. But now, we’ve established that if the fair coin coming up tails will make ten people be created, while only one will be created if it comes up heads, then you should think tails is ten times likelier than heads. That means that the evidence makes tails ten times likelier—it’s good for an odds ratio of ten.
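Here’s that odds-form update as a tiny sketch (the function name is mine): first the 2:1 prior with 3:1 evidence example from above, then the fair coin with SIA’s factor-of-ten update.

```python
def posterior_from_odds(prior_odds, likelihood_ratio):
    """Odds-form Bayes: posterior odds = prior odds x likelihood ratio,
    then convert the posterior odds back into a probability."""
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (posterior_odds + 1)

print(posterior_from_odds(2, 3))   # the 2:1 prior, 3:1 evidence example: 6/7 ~ 0.857
print(posterior_from_odds(1, 10))  # fair coin (1:1) with SIA's 10:1 update: 10/11 ~ 0.909
```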
This is what SIA says. It says that if one theory says there are X times as many people you might currently be as another, then your existence is good for an odds ratio of X in favor of the first theory. So if your existence favors tails by a factor of 10, given that ten people you might be get created if the coin comes up tails, that serves to confirm the SIA.
Now note: this same argument can be made with any number. If 10 was replaced with 10,000 or 1 million, the argument would still work. If the coin flip was replaced with a die being rolled or some physical theory, the argument would still work. So the lesson is general: you always favor a theory by a factor of X if it says X times more people exist.
Let’s see how this works in the context of the real world. Suppose that there are two theories on offer. The first theory says there’s a giant-ass multiverse. The second says there’s a single universe. Assume the multiverse theory predicts a million times more people than the single universe. Assume additionally that the two theories have equal priors—you think they’re equally likely before you look at the evidence.
Now imagine that this “experiment,” so to speak, is going to be run a bunch in the future, and the results will be probabilistically independent. In the future, repeatedly both universes and multiverses will be generated. Half the time a universe will be generated, and half the time a multiverse will be.
Well now, by the argument in this article, you should think at 1 million to one odds that you’re in the multiverse right now. So then, as long as the odds you’re in a multiverse now don’t depend on the creation of people in the future, the point will generalize. I use coin flips when talking about SIA as an illustrative example, but SIA is a principle for all cases. It doesn’t just work for coin flips.
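For concreteness, here’s the same odds-form arithmetic with the multiverse numbers (equal priors and a million-to-one likelihood ratio, i.e. just the figures assumed above):

```python
# Equal priors (1:1 odds) and a multiverse theory predicting a million times
# more people you might presently be: the posterior odds are 1,000,000:1.
prior_odds = 1
likelihood_ratio = 1_000_000
posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds / (posterior_odds + 1))  # ~0.999999 that you're in the multiverse
```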
In other words, to explain the result in this case, you have to think more people existing makes your present existence likelier. But once you think it does that in one case, the logic generalizes—and you become an SIAer! Yay!
Let me make two other points about this. The first is that, once you buy the logic, then you already have bought into a result that’s vulnerable to the objections to SIA. The main objection to SIA is that it’s just so crazy that your existence gives you evidence for lots of people being created. But if you buy that tails is 10X likelier than heads—and, by the same logic, tails would be a quadrillion times likelier than heads if it involved the creation of a quadrillion people—then you already have bought into a result that exposes you to this objection.
Second, all the non-SIA views imply that you should think tails and heads are equally likely.1 So if you accept the SIA judgment here, you will have to reject every alternative view.
6 Conclusion
Tell your friends the good news! Shout it from the rooftops: SIA is true! Why is that good news? Because it means God probably exists. And that is very good news indeed. It is quite fortunate that a view that vastly raises the likelihood of God’s existence is just about the most plausible view in all of philosophy.
1. At least, assuming the people are all clones with the same experience.