Several New Arguments For The Self-Indication Assumption
If you exist in a room you know someone exists in that room. So does everyone else.
1 Only SIA accords with non-anthropic evidence
And we will all go together when we go
What a comforting fact that is to know
Universal bereavement -
An inspiring achievement!
Yes, we all will go together when we go
—Tom Lehrer, "We Will All Go Together When We Go"
(This article is a bit long—but if you think my article is long, read Scott’s about lab leak).
The self-indication assumption (SIA) is the view of anthropics according to which, given that you exist, you should think more people exist, on the grounds that if more people exist, it’s likelier that you, in particular, would exist. I’ve written pretty extensively in favor of SIA elsewhere, here, here, here, here, here, here, here, here, and I think in other places. Here’s a famous case that differentiates SIA from other views:
God’s extreme coin toss: You wake up alone in a white room. There’s a message written on the wall: “I, God, tossed a fair coin. If it came up heads, I created one person in a room like this. If it came up tails, I created a million people, also in rooms like this.” Conditional on the message being true, what should your credence be that the coin landed heads?
SIA holds that you should think tails is a million times likelier than heads, while other views generally hold that heads and tails are equally likely. This is generally thought to be a bad result of SIA. Here, I’ll defend this result of SIA, arguing that we can give a powerful argument for it from non-anthropic evidence.
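To make the arithmetic explicit, here’s a minimal sketch (in Python) of the update SIA endorses in this case. The assumption that P(I exist | hypothesis) scales with the number of people the hypothesis creates is SIA’s own; rival views reject exactly that likelihood assignment.

```python
# A minimal sketch of the Bayesian update SIA endorses in God's extreme
# coin toss. Treating "I exist" as evidence whose likelihood scales with
# the number of people created is SIA's assumption, not a neutral one.

def sia_posterior_tails(n_heads_people, n_tails_people, prior_tails=0.5):
    """P(tails | I exist), with P(I exist | H) proportional to the number
    of people H creates (the SIA likelihood)."""
    prior_heads = 1 - prior_tails
    w_tails = prior_tails * n_tails_people  # unnormalized weight for tails
    w_heads = prior_heads * n_heads_people  # unnormalized weight for heads
    return w_tails / (w_tails + w_heads)

print(sia_posterior_tails(1, 10**6))  # ~0.999999: tails is a million times likelier
```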
Imagine that there’s a watchman in every one of ten rooms. I’m not going to say what religion the watchman is. Okay fine, it’s a Jewish watchman (keen readers will flag this as an example of a joke—an unserious utterance made for comedic effect).
This person watches what goes on in the room, seeing, for instance, whether a person is created in the room. A fair coin is flipped, the results of which no one knows. If it comes up heads, one person is created in one room. If it comes up tails, ten people will be created, one of whom will be put in each room.
Suppose you wake up in a room. On SIA, you should think it’s ten times more likely that the coin came up tails than heads. On basically every alternative, you should be indifferent. But the watchman in your room isn’t indifferent—he saw someone appear in his room. That’s ten times as strongly predicted by the hypothesis that the coin came up tails, so he should think that the coin came up tails.
You and the watchman have essentially the same evidence. You both know that you appeared in the room and that the watchman was there to see it. The rational odds for you to give to the coin coming up tails should be the same as the watchman’s. There’s a right answer to what credence the evidence warrants, and each of you can incorporate the other’s evidence. So if it’s rational for him to think the evidence favors the tails hypothesis, then you should reason: he has no less evidence than I do, and the rational odds for him to give to tails are ten to one, so I should think the coin very probably came up tails. It shouldn’t be that the person created in the room and the watchman in the room come irreconcilably apart—that anthropic evidence and a non-anthropic version of the same evidence produce wildly different results. Given that each of them knows what the other knows, whichever one has more information, the other should simply defer to! But clearly the watchman should think the odds are ten to one that the coin came up tails, so that’s what the created person should think.
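Here’s a small Monte Carlo sketch of the watchman’s purely non-anthropic update, using the setup above (one person in a random room on heads, one person in every room on tails); the trial count is arbitrary.

```python
# The watchman's update is ordinary conditioning on "someone appeared in
# my room." No anthropics required.
import random

def watchman_posterior_tails(trials=200_000, rooms=10, my_room=0):
    tails_and_seen = 0
    seen = 0
    for _ in range(trials):
        tails = random.random() < 0.5
        if tails:
            occupied = set(range(rooms))          # tails: one person in every room
        else:
            occupied = {random.randrange(rooms)}  # heads: one person in a random room
        if my_room in occupied:                   # the watchman sees someone appear
            seen += 1
            tails_and_seen += tails
    return tails_and_seen / seen

print(watchman_posterior_tails())  # ≈ 10/11 ≈ 0.91: tails is ten times likelier than heads
```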
I don’t know exactly how to make precise the principled basis for the claim that the two parties should have the same credences. Every way I tried to formulate it had narrow, Gettier-esque counterexamples. But I think it’s easy to see that in such a case their credences should be the same—that’s easier to see than the precise set of necessary and sufficient conditions under which people should have the same credences.
2 Bigger families
Given that I exist, all else equal, I should think that my family was bigger. If all I knew was that I existed, I should update in favor of my parents having more children. This is true whether SIA or SSA is true—the total share of people coming from a larger family is greater than the share coming from a smaller one.
Here’s another plausible principle: the odds that you came from a big family shouldn’t be affected by the existence of other people. The total number of kids had by people who are not my parents shouldn’t affect the expected number of kids had by my parents. The odds that my parents had me have nothing to do with the odds that other people had other kids who are not me!
But together, these straightforwardly imply a result only amenable to SIA. On SIA, if the only people in existence are two parents and their kids, then, given that you exist, you should think, all else equal, that they had more kids. Alternatives to SIA deny this, for the only difference between the world in which the family is larger and the world in which it’s smaller is that there are more total created people if the family is larger. But from the fact that:
If there are other people in existence, you should think you came from a bigger family, all else equal, rather than a smaller family.
The presence of other people doesn’t affect whether you should think you came from a bigger family, all else equal, rather than a smaller family.
It follows:
If you came from a single family which was the only family in existence, you should think it was bigger (having more kids) all else equal, rather than smaller. Yet this third result is only amenable to SIA. It is, in fact, almost identical to God’s coin toss.
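Here’s a toy version of the single-family case, with made-up family sizes (two kids vs. six), under SIA’s assumption that the likelihood of your existence scales with how many kids your parents had.

```python
# A sketch of the SIA update in the single-family case. The family sizes
# are illustrative; the structure mirrors God's coin toss.

def sia_posterior_big(n_small, n_big, prior_big=0.5):
    """P(bigger family | I exist), with P(I exist | H) proportional to
    the number of kids H says my parents had."""
    w_big = prior_big * n_big
    w_small = (1 - prior_big) * n_small
    return w_big / (w_big + w_small)

print(sia_posterior_big(2, 6))  # 0.75: given that I exist, the bigger family is 3x likelier
```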
3 The first Carlsmith result: moving boulders with your mind (and other illicit causation and dependency)
Joe Carlsmith, as part of his Ph.D. thesis, gives a variety of objections to the self-sampling assumption. The self-sampling assumption (SSA) is the view that one should reason as if one were randomly selected from all the beings in one’s reference class, where the reference class is the set of beings sufficiently like oneself. A common reference class is all conscious beings, for example, or all conscious rational beings. I’ll argue here that most of the objections Carlsmith gives to SSA are quite general and apply to most views other than SIA. One case Carlsmith gives is:
Save the puppy: You wake up in a red jacket. In front of you is a puppy. Next to you is a button that will create a trillion more people, all wearing blue jackets. No one else exists. A giant boulder is rolling inexorably towards the puppy, and it will crush the puppy with very high probability. You want to save the puppy, but you can’t reach it. However, you accept SSA, and you understand the power of reference classes. So you make a firm commitment: if the boulder doesn’t swerve away from the puppy, you will press the button; otherwise, you won’t. Should you now expect the boulder to swerve, and the puppy to live?
Obviously, you should not expect the boulder to swerve! And yet imagine you didn’t know your birth rank. If you reject SIA, you’ll think that more people existing isn’t any likelier given that you exist. So a total history in which many people are created is no likelier to contain you. Thus, you reason in the following way: “If I am the first person, I’ll get very strong evidence that a bunch of clones won’t be created.”
Then you learn that, in fact, you are the first person. Now you get absurdly strong evidence that a bunch of clones won’t be created, and consequently, that if you commit to creating a bunch of clones unless some event happens, the event will happen. Thus, you get the ability to move boulders with your mind.
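Here’s a sketch of the SSA-style calculation that generates this result. The trillion clones come from the case above; the prior probability of the boulder swerving (0.001) is a made-up illustration, and the 1/N likelihood of being the first of N people is SSA’s assumption.

```python
# SSA says P(I'm the first person | N total people) = 1/N. Given the
# commitment "clones are created iff the boulder does NOT swerve",
# learning that you're the first person massively favors "swerve".

def ssa_posterior_swerve(p_swerve=0.001, n_clones=10**12):
    # If the boulder swerves: no clones, so total people = 1, P(first) = 1.
    # If it doesn't: total people = 1 + n_clones, P(first) = 1/(1 + n_clones).
    w_swerve = p_swerve * 1.0
    w_no_swerve = (1 - p_swerve) * (1.0 / (1 + n_clones))
    return w_swerve / (w_swerve + w_no_swerve)

print(ssa_posterior_swerve())  # ≈ 0.999999999: SSA says the boulder will almost certainly swerve
```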
Some might object, as Bostrom does, that one isn’t actually causing anything to change. The odds of the boulder swerving are indeed higher if the clones will be created than if the clones won’t be created. But the clones being created doesn’t actually make the boulder more or less likely to move.
This, it is claimed, is analogous to the smoker’s lesion case. In the smoker’s lesion case, smoking is not harmful to one’s health. However, the vast majority of people who smoke have a lesion on their lungs which makes their health worse. Upon finding out that one has smoked, one gets evidence that their health is worse, but they have no reason not to smoke, because smoking isn’t bad for them.
Decision theory, however, is very complicated and it’s hard to know what to think. Often, cases look similar but produce radically different results. Consider, for example:
Perfect deterministic twin prisoner’s dilemma: You’re a deterministic AI system, who only wants money for yourself (you don’t care about copies of yourself). The authorities make a perfect copy of you, separate you and your copy by a large distance, and then expose you both, in simulation, to exactly identical inputs (let’s say, a room, a whiteboard, some markers, etc). You both face the following choice: either (a) send a million dollars to the other (“cooperate”), or (b) take a thousand dollars for yourself (“defect”).
This is, I think, enough to show that there are some cases where one wouldn’t ordinarily think that A causes B, yet A bears on B in such a way that one should bring about A to make B likelier. That seems to be what’s going on here. If my reasoning is right, alternatives to SIA imply that one gets, from one’s existence, very strong evidence that a bunch of clones won’t be created. Now, suppose you have evidence that some future event won’t occur, and suppose you don’t want some present event to occur. In such a case, it seems you have reason to chain the present event to the future one, so that the present event can occur only if the future one will. If I know, for instance, that the future won’t involve me moving my leg in a circle, I could “make” anything happen by agreeing to move my leg in a circle unless that thing happens.
Second, the surprisingness doesn’t come only from the illicit causation. There does seem to be illicit causation, but that’s only part of the counterintuitiveness. It’s equally weird that, after agreeing to make clones unless the boulder swerves, one can be confident that the boulder will swerve. The odds of some event shouldn’t depend on its consequences in that way!
Third, this scenario involves impossible correlations.
The structure in the smoker’s lesion case is the following: a future bad thing depends on a past thing and a present thing depends on the past thing. Charting out the dependencies we get:
Future bad—>past
Present—>past.
(A—>B means A depends on B)
The structure in Newcomb’s problem: the future good thing depends on a past thing, which depends on the present thing (which you can affect).
Future—>Past—>present decision
Now, compare this to UN++, a case exactly like the boulder swerving case in its setup (this was the example that Bostrom analyzed in terms of supposedly illicit causation):
UN++: It is the year 2100 A.D. and technological advances have enabled the formation of an all-powerful and extremely stable world government, UN++. Any decision about human action taken by the UN++ will certainly be implemented. However, the world government does not have complete control over natural phenomena. In particular, there are signs that a series of n violent gamma ray bursts is about to take place at uncomfortably close quarters in the near future, threatening to damage (but not completely destroy) human settlements. For each hypothetical gamma ray burst in this series, astronomical observations give a 90% chance of it coming about. However, UN++ rises to the occasion and passes the following resolution: It will create a list of hypothetical gamma ray bursts, and for each entry on this list it decides that if the burst happens, it will build more space colonies so as to increase the total number of humans that will ever have lived by a factor of m. By arguments analogous to those in the earlier thought experiments, UN++ can then be confident that the gamma ray bursts will not happen, provided m is sufficiently great compared to n.
The structure in UN++: present thing 1 (you being around now) depends on future thing 1 (how many future people you’ll make), and future thing 1 is made to depend on present thing 2 (whether you commit to making a bunch of people if a gamma ray burst happens) and future thing 2 (whether a gamma ray burst happens).
present thing 1—>future thing 1—>present thing 2 and future thing 2
This is weird! Present thing 2 affects the probability of future thing 2 without there being any chain of dependence between present thing 2 and future thing 2, and without any extra thing that present thing 2 and future thing 2 both depend on.
If two things (A, B) correlate, it’s either because A raises the likelihood of B, B raises the likelihood of A, some C raises the likelihood of both, or it’s a coincidence. None of these explanations can be the case here, so there must be illicit likelihood raising. The creation of people and the non-occurrence of the gamma ray bursts correlate in a downright impossible way!
4 The second Carlsmith result
Carlsmith has two interesting cases, which broadly are similar to cases given by Armstrong, Dorr, and Arntzenius:
Coin toss + killing: God tosses a fair coin. Either way, he creates ten people in darkness, and gives one of them a red jacket, and the rest blue. Then he waits an hour. If heads, he then kills the blue-jacketed people. If tails, he kills the red-jacketed person. After the killing in either case, he rings a bell to let everyone know that it’s over. You wake up in darkness, sit around for an hour, then hear the bell. What should your credence be that your jacket is red, and hence that the coin landed heads?
Coin toss + non-creation: God tosses a fair coin. If heads, he creates one person with a red jacket. If tails, he creates nine people with blue jackets. You wake up in darkness. What should your credence be that your jacket is red, and hence that the coin landed heads?
SIA reasons about these two cases in the same way: in both, heads leaves one red-jacketed person and tails leaves nine blue-jacketed people, so the two cases are analogous. Alternatives to SIA reason differently. To see this, in the killing case, imagine that after being created, each of the people in the dark rooms thinks about anthropics (I know that’s what I do when I’m in a dark room). They each reason as follows: “I’m one of the ten people originally created, and if the coin came up heads, I’d probably be killed. Therefore, given that I remain alive, I have evidence that the coin came up tails.”
Thus, if one survives, one gets evidence that the coin came up tails. In contrast, only views like SIA hold that in coin toss + non-creation, one should think the coin came up tails. Thus, only SIA and a few closely related views avoid a radical asymmetry between killing and non-creation, wherein the odds that a coin came up tails depend on whether other people who are not you were created and then killed, rather than never created at all. I take this consideration to be decisive.
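To see the numbers, here’s a sketch of both updates. The survivor’s calculation in the killing case is ordinary, uncontroversial Bayesianism; the non-creation calculation uses SIA’s likelihood assignment, which is exactly what alternatives reject.

```python
# Coin toss + killing vs. coin toss + non-creation.

def killing_case_p_heads():
    # Uncontroversial reasoning: I was one of ten people created either way,
    # and heads kills nine of the ten (the blue-jackets).
    p_survive_heads = 1 / 10   # only the red-jacket survives heads
    p_survive_tails = 9 / 10   # the nine blue-jackets survive tails
    return 0.5 * p_survive_heads / (0.5 * p_survive_heads + 0.5 * p_survive_tails)

def noncreation_case_p_heads_sia():
    # SIA treats "I was created" like "I survived": heads makes 1 person,
    # tails makes 9, so existing is nine times better predicted by tails.
    return 0.5 * 1 / (0.5 * 1 + 0.5 * 9)

print(killing_case_p_heads())          # 0.1
print(noncreation_case_p_heads_sia())  # 0.1: SIA gives the same answer in both cases
```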
So, to recap: SIA says that to find the odds of your existence conditional on some hypothesis, you just look at how many people there would be, who you might be given your evidence, if that hypothesis were true. Alternatives instead say that it depends on:
—What will happen in the future.
—Whether other people who aren’t you were created and then killed.
—Whether you’ll awake again tomorrow.
—Whether there were a lot of prehistoric humans.
5 The third Carlsmith result
Carlsmith also argues that SSA updates heavily in favor of solipsism. SSA favors theories on which people with your current evidence—people who, for all you can tell, you might be—make up a large share of your reference class. So, for instance, a theory according to which everyone in the universe has my exact experience is majorly supported by SSA. Solipsism therefore gets a big boost, for it predicts exactly that!
Now imagine that I wake up in a dark room. I have no idea what my birth rank is. I reason that theories according to which there is only one person total will get a big probabilistic boost, because they make it much likelier that I’d be the first person. Assuming I reject SIA, I should think that if I ever learn that I’m the first person, then I’ll get very strong evidence for the non-existence of other people.
Then suppose I do find out that I am, in fact, the first person. I now get extremely strong evidence that other people don’t exist. Thus, even if I observe other people walking around, waving to me, high-fiving me, acting like normal people, I should still think solipsism is very likely—that they’re zombies or figments of my imagination—provided the number of other people that appear to exist is sufficiently great. SIA disagrees, because on SIA the odds of you existing are higher if there are more people, which cancels out the update in favor of solipsism from your being the first person.
So, if SIA is false, Adam in the garden should have been a solipsist.
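Here’s a sketch of the update, with made-up numbers: a one-in-a-million prior on solipsism and eight billion apparent other people. The 1/N likelihood of being the person with my evidence, given that N people exist, is the SSA assumption doing the work.

```python
# The SSA-style solipsism boost. The prior and population figures are
# illustrative; only the 1/N likelihood structure matters.

def ssa_posterior_solipsism(prior_solipsism=1e-6, n_apparent_people=8 * 10**9):
    w_solipsism = prior_solipsism * 1.0                        # P(my evidence | solipsism) = 1
    w_many = (1 - prior_solipsism) * (1.0 / n_apparent_people)  # = 1/N if N people exist
    return w_solipsism / (w_solipsism + w_many)

print(ssa_posterior_solipsism())  # ≈ 0.9999: solipsism wins despite the tiny prior
```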
6 Bayesian updating from random creation
I have painstakingly tried to show how SIA is just being a Bayesian about your existence. There’s one case that bears this out quite a bit more clearly than others. Consider:
Let John 1 denote John in room 1, John 2 denote John in room 2, etc. There are an infinite number of rooms John might be in. A fair coin is flipped. If it comes up heads, a random John will be created. If it comes up tails, every John will be created. Upon waking up in a room, what should John’s credence be in the coin coming up tails?
SIA answers: 100%. Because tails predicts infinitely more people than heads, tails is infinitely likelier than heads. All alternatives to SIA answer: 50%. Both hypotheses predict that the same share of one’s reference class has my evidence, and that my experience will be had by someone. Unless one takes the fact that some experiences are had by more people—which only SIA does—to be relevant in itself, even when it doesn’t affect the share of the reference class, one will think one gets no evidence in favor of tails.
But clearly, updating in favor of tails is just being a Bayesian. Let N be my room number. P(John N exists | tails) = 1, while P(John N exists | heads) is 1/k when there are k rooms, which goes to zero as the number of rooms goes to infinity. I should thus get an infinitely strong update in favor of tails. So a result consistent only with SIA follows from straightforwardly being a Bayesian about one’s existence.
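Here’s a finite-rooms sketch of that update; as the number of rooms grows, the posterior on tails goes to one.

```python
# Finite approximation of the infinite-rooms case: heads creates one
# randomly chosen John, tails creates a John in every room.

def p_tails_given_john_n_exists(k_rooms):
    p_exists_given_tails = 1.0            # every John is created on tails
    p_exists_given_heads = 1.0 / k_rooms  # John N is the one created with prob 1/k
    return 0.5 * p_exists_given_tails / (
        0.5 * p_exists_given_tails + 0.5 * p_exists_given_heads
    )

for k in (10, 1_000, 1_000_000):
    print(k, p_tails_given_john_n_exists(k))
# 10 -> ~0.909, 1000 -> ~0.999, 1000000 -> ~0.999999; the limit is certainty in tails.
```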
7 Presumptuous hatcher
~~Andrew Wiles~~ Bentham's bulldog gently smiles
Does his thing, and voila!
Q.E.D., we agree
And we all shout hurrah!
As he (dis)confirms what ~~Fermat~~ Bostrom
Jotted down in that margin
Which could've used some enlarging
The most famous counterexample to SIA is the presumptuous philosopher case, given by Bostrom, Cirkovic, Cian Dorr, and various others:
It is the year 2100 and physicists have narrowed down the search for a theory of everything to only two remaining plausible candidate theories, T1 and T2 (using considerations from super-duper symmetry). According to T1 the world is very, very big but finite and there are a total of a trillion trillion observers in the cosmos. According to T2, the world is very, very, very big but finite and there are a trillion trillion trillion observers. The super-duper symmetry considerations are indifferent between these two theories. Physicists are preparing a simple experiment that will falsify one of the theories. Enter the presumptuous philosopher: “Hey guys, it is completely unnecessary for you to do the experiment, because I can already show to you that T2 is about a trillion times more likely to be true than T1!”
Nearly everyone presents this case, concludes it’s a death blow for SIA, and then moves on. Bostrom, after explaining how SIA effortlessly vaporizes all the earlier puzzles he presents, dismisses SIA in just a few pages by presenting the presumptuous philosopher case. Unsurprisingly, I don’t think this result is wrong—in fact, I don’t think it’s a bad result at all. This case is like the following:
Presumptuous Hatcher: Suppose that there are googolplex eggs. There are two hypotheses: first that they all hatch and become people, second that only a few million of them are to hatch and become people. The presumptuous hatcher thinks that their existence confirms hypothesis one quite strongly.
I claim that this case is analogous to the presumptuous philosopher. The only difference between the two is that in the hatcher case, there are eggs assigned to the people who don’t exist. But surely that doesn’t matter: discovering that every possible person is assigned an egg, from which the actual people come, doesn’t affect the correctness of the presumptuous philosopher’s reasoning. And surely the presumptuous hatcher’s reasoning is right—the odds that their egg would hatch are far higher if the first hypothesis is true than if the second is.
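Here’s the hatcher’s update in odds form. A googolplex is too big to compute with directly, so the sketch uses a mere googol of eggs and a made-up few million hatchers; the point is just that the likelihood ratio is astronomical.

```python
# The hatcher's update in odds form, using exact integers (a googolplex,
# 10**(10**100), has too many digits to represent, so we illustrate with
# a googol of eggs instead).
from fractions import Fraction

def posterior_odds_all_hatch(n_eggs, n_hatch_on_h2, prior_odds=Fraction(1, 1)):
    # P(my egg hatches | H1: all hatch) = 1
    # P(my egg hatches | H2: only n_hatch_on_h2 hatch) = n_hatch_on_h2 / n_eggs
    likelihood_ratio = Fraction(n_eggs, n_hatch_on_h2)
    return prior_odds * likelihood_ratio

# With 10**100 eggs and five million hatchers, the posterior odds favor
# "they all hatch" by a factor of 2 * 10**93.
print(posterior_odds_all_hatch(10**100, 5_000_000))
```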
We can give an additional argument for the correctness of the presumptuous hatcher’s reasoning. Suppose that one discovered that when the eggs were created, the people who the eggs might have become had a few seconds to ponder the anthropic situation. Thus, each of those people thought “it’s very unlikely that my egg will hatch if the second hypothesis is true but it’s likely if the first hypothesis is true.” If this were so, then the presumptuous hatcher’s reasoning would clearly be correct, for then they straightforwardly update on their hatching and thus coming to exist! But surely finding out that prior to birth, one pondered anthropics for a few seconds, does not affect how one should reason about anthropics. Whether SIA is the right way to reason about one’s existence surely doesn’t depend on whether, at the dawn of creation, all possible people were able to spend a few seconds pondering their anthropic situation. Thus, we can argue as follows:
The presumptuous hatcher’s reasoning, in the scenario where they pondered anthropics while in the egg, is correct.
If the presumptuous hatcher’s reasoning, in the scenario where they pondered anthropics while in the egg, is correct, then the ordinary presumptuous hatcher’s reasoning is correct.
If the ordinary presumptuous hatcher’s reasoning is correct then the presumptuous philosopher’s reasoning is correct.
Therefore, the presumptuous philosopher’s reasoning is correct.
8 Conclusion
SIA is, by far, the best view of anthropics. There’s only one apparent counterexample, and it isn’t even much of a counterexample, because its verdict starts to seem obviously right when you think about it for five minutes. Alternatives to SIA must affirm:
—bizarre and radical precognition of events.
—the ability to move boulders with one’s mind.
—impossible correlations.
—that one’s anthropic reasoning depends on whether other people were created and killed rather than not created at all.
—major updates towards solipsism for people who have low birth ranks.
—that whether you should think you came from a big family depends on whether other people have big families.
—radical divergence between anthropic and non-anthropic evidence.
—bizarre results in the presumptuous hatcher and presumptuous archeologist cases.
—bizarre deviations from Bayesian reasoning about one’s existence.
As this article has shown, these results aren’t only problems for SSA. They are problems that are universal among views other than SIA. Each of them follows from denying SIA plus applying straightforward Bayesian reasoning.
I think there are some serious issues with the first two arguments. For the watchman case, the watchman is watching the room and has a chance to see it empty. But you don't have a similar option, because there's no way you could possibly come to realize you don't exist; if you're even observing at all, every possible world in which you don't exist is immediately ruled out. If you fix the watchman's description such that he also has no chance to witness it empty - maybe he's told that heads means you appear in his room and tails means you appear in his room and a million other people appear outside - then it would also be reasonable for him to be totally 50/50 once you appear, right?
The bigger issue I have, though, is with the second one. It seems just false to say the total share of people coming from larger families is greater than the total share of people coming from smaller ones! That seems like a totally contingent fact - if you had a world of ten families where nine had two children and one had six children, then most people would be from small families. And further, it does seem totally reasonable to think that the odds you came from a big family depend on facts about how large other families tend to be. If I find out that the average family in my world has three kids, I would assume I probably had two siblings. It would be super weird to me if someone *didn't* think that mattered. Of course, it doesn't matter in a metaphysical sense - it's not like other people having more children causes my parents to have more children, except maybe by increasing the social pressure to do likewise. But if I'm assuming my parents are probably typical, then why not look at what size typical families are?
Also, this seems open to another super strong objection (imo) to SIA and thirding, which is that it requires affirming that it's sometimes reasonable to believe one thing today even though you know you'll have good reason to believe another thing tomorrow. You can imagine a situation where God flips a coin and puts me in a small family if it's heads and a big family if it's tails. Before the flip, I should definitely think the odds are 50/50. But once I wake up as a newborn, I should immediately update towards tails in your view, right? But then it just seems inconceivable to me that I shouldn't update towards tails *before* the flip, knowing that I will inevitably do that update later. Is there a good explanation for how this could be made coherent?
What do you think of the following anthropic argument against certain forms of theism:
1. I currently know that I'm on earth.
2. If certain forms of theism are true, I exist finitely on earth and infinitely in heaven, so the probability that I would currently know that I'm on earth is 0%.
3. If naturalism is true, there is a non-zero chance that I'd currently know that I'm on earth.
4. Therefore, the fact that I currently know that I'm on earth is evidence strongly favouring naturalism over this specific form of theism.
This argument could also be run based on any time-sensitive knowledge, making a ridiculously strong case against a form of theism where I exist finitely on earth and infinitely in heaven (or maybe hell, given that I'm making this argument).
Do you think that the easy way out is to reject premise 3 because of Huemer's immortality stuff and also the anthropic argument FOR theism showing that the probability of my existence on naturalism is plausibly 0?