Speedrunning Anthropics
What to read about anthropics if you aren't already up to speed
Introduction
I’ve already explained why I think anthropics is very important. Aside from being super interesting and tricky in its own right, it has major implications for lots of areas of thought. Here I’ll give a summary of the main views and of what makes anthropics so trippy. Anthropics is widely regarded as the trickiest area of philosophy, but I don’t think the main reasons it’s tricky are hard to understand—this post should lay them out (so don’t stop reading if you want to know about anthropics).
What is anthropics? And what’s the sleeping beauty problem?
Anthropics, for those who don’t know, is the study of how to reason about your existence. This sounds very general, so let me give you two examples of problems in anthropics. The first comes from the one and only Adam Elga:
The Sleeping Beauty problem: Some researchers are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?
There are two main answers: 1/2 and 1/3. Halfers say that you already knew you’d wake up, so waking up teaches you nothing new, and you should stick with your prior of 1/2. Thirders disagree: because there are more wakings on tails, your credence in tails should be twice as great as your credence in heads.
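If it helps to see the counting thirders appeal to, here's a minimal simulation sketch (the function name and trial count are just my own choices): it tallies what fraction of awakenings happen in heads-runs. Halfers, of course, deny that this per-awakening frequency is the thing your credence should track; the code just shows the arithmetic.

```python
import random

def simulate(trials=100_000, seed=0):
    # Run the Sleeping Beauty setup many times and count what fraction of
    # awakenings occur in runs where the coin landed heads.
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5      # fair coin
        wakings = 1 if heads else 2     # Heads: one waking; Tails: two
        total_awakenings += wakings
        if heads:
            heads_awakenings += wakings
    return heads_awakenings / total_awakenings

print(simulate())  # roughly 0.333: about a third of awakenings are heads-awakenings
```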
I won’t talk too much about Sleeping Beauty because it’s been discussed to death (though if you want to see why I’m a thirder, see here).
Here’s another case
This second paradigm (and easy) case in anthropics comes from Nick Bostrom:
The world consists of a dungeon that has one hundred cells. In each cell there is one prisoner. Ninety of the cells are painted blue on the outside and the other ten are painted red. Each prisoner is asked to guess whether he is in a blue or a red cell. (Everybody knows all this.) You find yourself in one of the cells. What color should you think it is?
The obvious answer is that you should be 90% sure your cell is blue. That’s because most people with your current evidence are in blue cells. You don’t know which of the people in the cells you are, but of the ones you might be, most are in blue cells.
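The counting here is about as simple as it gets, but just to make it explicit (purely illustrative):

```python
# Of the 100 prisoners who share your evidence ("I'm in one of the cells"),
# 90 are in blue cells and 10 are in red ones.
blue_cells, red_cells = 90, 10
print(blue_cells / (blue_cells + red_cells))  # 0.9 credence that your cell is blue
```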
Where it starts to get weird + an explanation of the self-indication assumption
Okay, now here’s a weirder case:
God’s extreme coin toss with jackets: God flips a fair coin. If heads, he creates one person with a red jacket. If tails, he creates one person with a red jacket, and a million people with blue jackets.
…
God keeps the lights in all the rooms on. You wake up and see that you have a red jacket. What should your credence be on heads?
There are two main answers. The first is that heads and tails are equally likely. On this view, more people being created makes it more likely that you’d be created. It’s true that, given that you get created, tails makes the odds that you’d have a red jacket 1/1,000,001. But because 1,000,001 times as many people get created, the odds that you’d get created at all are 1,000,001 times higher, so the probabilities cancel out.
This view is called the self-indication assumption (SIA). On this view, what matters to probabilistic reasoning is the total number of people with my current evidence. By “with my current evidence” I mean people who I currently might be. Because heads and tails both mean one person gets made with a red jacket, and I have a red jacket, only the red-jacketed people are candidates for being me. As a result, I should be indifferent—both theories predict an equal number of people with my current evidence. In contrast, if tails meant 10 people would have red jackets, then I should think tails is 10x as likely as heads.
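Here's a minimal sketch of that cancellation, under SIA as I've described it (the variable names are mine and the code is just bookkeeping): weight each hypothesis by the number of people who share your exact evidence (a red jacket), then renormalize.

```python
from fractions import Fraction

# SIA as described above: weight each hypothesis by the number of observers
# who share your exact evidence (a red jacket), then renormalize.
prior = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}
red_jacketed = {"heads": 1, "tails": 1}   # one red-jacketed person either way
unnorm = {h: prior[h] * red_jacketed[h] for h in prior}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}
print(posterior)  # heads and tails both 1/2: the million blue jackets don't matter

# The cancellation spelled out: on tails you're 1,000,001 times more likely to
# exist at all, but only 1 in 1,000,001 existing people is red-jacketed.
print(Fraction(1_000_001, 1) * Fraction(1, 1_000_001))  # 1
```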
The presumptuous philosopher
I think the self-indication assumption is right (and have argued for it at great length). But it has some weird implications. To see this, consider another case from Bostrom:
The Presumptuous Philosopher:
It is the year 2100 and physicists have narrowed down the search for a theory of everything to only two remaining plausible candidate theories, T1 and T2 (using considerations from super-duper symmetry). According to T1 the world is very, very big but finite and there are a total of a trillion trillion observers in the cosmos. According to T2, the world is very, very, very big but finite and there are a trillion trillion trillion observers. The super-duper symmetry considerations are indifferent between these two theories. Physicists are preparing a simple experiment that will falsify one of the theories. Enter the presumptuous philosopher: “Hey guys, it is completely unnecessary for you to do the experiment, because I can already show to you that T2 is about a trillion times more likely to be true than T1! (whereupon the philosopher runs the Incubator thought experiment and explains Model 3).”
Of course, you shouldn’t be certain of your theory of anthropics, so the presumptuous philosopher is overconfident. Still, it seems weird that the correct theory of anthropic reasoning would count your existence as such strong evidence about a theory of ultimate reality. This is the most common objection to SIA: if more people existing makes your existence more likely, you can be super confident that the universe has infinite people, which is super weird. I don’t find this objection persuasive, because I think every view of anthropics implies presumptuousness, and because rejecting the presumptuous philosopher result implies that contraception doesn’t work…but I digress.
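For concreteness, here's where the trillion-to-one figure comes from on SIA, granting the thought experiment's stipulation that the priors are equal (the observer counts below are just the ones the story states):

```python
from fractions import Fraction

# The presumptuous philosopher's arithmetic under SIA: with equal priors,
# the posterior odds are just the ratio of observer counts.
N_T1 = 10**24   # a trillion trillion observers (T1)
N_T2 = 10**36   # a trillion trillion trillion observers (T2)
print(Fraction(N_T2, N_T1))                 # 1000000000000, i.e. a trillion to one for T2
print(float(Fraction(N_T2, N_T1 + N_T2)))   # ~0.999999999999, posterior probability of T2
```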
The self-sampling assumption
Think back to the God’s extreme coin toss with jackets case from a few paragraphs back, where heads means one guy with a red jacket gets created and tails means one guy with a red jacket gets created plus a million with blue jackets. The other main view says that if you have a red jacket, you should think heads is 1,000,001 times likelier than tails. On this view, you should reason as if you’re randomly selected from the actual people. You should think of yourself as a random draw from the collection of people (in your reference class—we’ll come to that). Because you could have been any of the 1,000,001 people if the coin came up tails, the odds are only 1/1,000,001 that you’d be the guy with the red jacket. In contrast, if the coin came up heads, you’d be guaranteed to be the guy with the red jacket. So your being the guy with the red jacket is 1,000,001 times as likely if the coin comes up heads. This view is the self-sampling assumption (SSA).
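Spelled out as a quick calculation (again, just a sketch of the reasoning described above, not anyone's official formalism):

```python
from fractions import Fraction

# SSA as described above: the chance that a randomly drawn actual person has
# a red jacket is 1 on heads and 1/1,000,001 on tails.
prior = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}
p_red = {"heads": Fraction(1), "tails": Fraction(1, 1_000_001)}
unnorm = {h: prior[h] * p_red[h] for h in prior}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}
print(posterior["heads"] / posterior["tails"])  # 1000001: heads is hugely favored
```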
What the fuck is a reference class? I’ve never seen one!
Okay, so I mentioned that the self-sampling assumption says you should reason as if you’re randomly selected from the people in your reference class. But what is a reference class? It’s the set of people you should reason as if you’re randomly selected from. Consider another case (slightly modified from Carlsmith):
Modified God’s Coin Toss With Chimps: God flips a coin. If it comes up heads, he makes one human. If it comes up tails, he makes one human and nine chimps. You are created—and a human. What odds should you give to tails?
This case seems relevantly different from God’s extreme coin toss with jackets. It seems like, in an important sense, you couldn’t have been one of the chimps. Thus, it doesn’t matter how many chimps get created on each hypothesis—that’s just not relevant to your probabilistic reasoning (if you disagree about chimps, replace them with bacteria).
So your reference class is the class of entities that you should reason as if you’re randomly selected from. How do SSAers decide on a reference class? Answer: they just basically make it up to comport with their intuitions. There isn’t a principled basis for a reference class!
What’s wrong with SSA?
Okay, aside from the totally made-up reference class, what’s wrong with SSA? It seems to make sense of our intuitions. But unfortunately, it implies some crazy things. To see this, consider this case from Bostrom:
Eve and Adam, the first two humans, knew that if they gratified their flesh, Eve might bear a child, and if she did, they would be expelled from Eden and would go on to spawn billions of progeny that would cover the Earth with misery. One day a serpent approached the couple and spoke thus: “Pssst! If you embrace each other, then either Eve will have a child or she won’t. If she has a child then you will have been among the first two out of billions of people. Your conditional probability of having such early positions in the human species given this hypothesis is extremely small. If, on the other hand, Eve doesn’t become pregnant then the conditional probability, given this, of you being among the first two humans is equal to one. By Bayes’s theorem, the risk that she will have a child is less than one in a billion. Go forth, indulge, and worry not about the consequences!”
On SSA, this is perfectly good reasoning. After all, if you’re randomly drawn from all humans and there are a lot of humans, it’s super unlikely you’d be among the first two humans. For this reason, if you’re early, you have super strong evidence that there won’t be many humans, and so Adam could know in advance that Eve wouldn’t get pregnant. But this is nuts! Here’s another, even nuttier case:
Assume as before that Adam and Eve were once the only people and that they know for certain that if they have a child they will be driven out of Eden and will have billions of descendants. But this time they have a foolproof way of generating a child, perhaps using advanced in vitro fertilization. Adam is tired of getting up every morning to go hunting. Together with Eve, he devises the following scheme: They form the firm intention that unless a wounded deer limps by their cave, they will have a child. Adam can then put his feet up and rationally expect with near certainty that a wounded deer – an easy target for his spear – will soon stroll by.
Again, for the same reason, SSA implies that Adam can be incredibly confident that a wounded deer will limp by. But this is extremely counterintuitive.
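To make the Bayes-theorem step in the serpent's speech explicit, here's a sketch of the SSA calculation behind both cases. The 10-billion figure is just an illustrative stand-in for the story's "billions of progeny":

```python
from fractions import Fraction

# The serpent's reasoning under SSA. If the couple has a child, suppose
# (illustratively) 10 billion humans ever exist; if not, just the two of them.
prior = {"child": Fraction(1, 2), "no_child": Fraction(1, 2)}
humans_ever = {"child": 10**10, "no_child": 2}
# P(you are among the first two humans | hypothesis), treating yourself as a
# random draw from everyone in the reference class:
p_first_two = {h: Fraction(2, n) for h, n in humans_ever.items()}
unnorm = {h: prior[h] * p_first_two[h] for h in prior}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}
print(float(posterior["child"]))  # ~2e-10: under SSA, pregnancy looks wildly unlikely
```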
Conclusion
I haven’t covered all the views in anthropics here (there’s another view that a lot of people like but that I think is super implausible, as I explain here). There’s also a view, which some people like, on which there just aren’t such things as probabilities in this sense. But this is very implausible—surely in the dungeon case I gave earlier, you should think being in a blue cell is much likelier than being in a red one. (Here it is again, for reference):
The world consists of a dungeon that has one hundred cells. In each cell there is one prisoner. Ninety of the cells are painted blue on the outside and the other ten are painted red. Each prisoner is asked to guess whether he is in a blue or a red cell. (Everybody knows all this.) You find yourself in one of the cells. What color should you think it is?
I think the self-indication assumption is by far the best view. But it’s a bit weird. It implies that your existence gives you very strong evidence that the universe is big. It implies the presumptuous philosopher result. Still, I think it can be shown that any other view will have even weirder results.