1 Introduction
I’ve written in lots of places about the anthropic argument. It’s the argument that personally moved me the most, having been instrumental in my conversion to theism. Unfortunately, I was recently looking back on my defense of it, and saw that there were various things that I misframed or defended poorly. Thus, I thought I’d write an update, defending it from the various objections, and explaining how it can avoid various controversial commitments. In this post, I will aggressively plagiarize myself—think of this as an updating of the old anthropic argument post, rather than something wholly distinct.
The argument is surprisingly simple to lay out, though analyzing it in detail gets into some tricky territory. It begins with the assumption that something called the self-indication assumption (SIA) is true. That’s the view that a theory that predicts the existence of more people you might presently be explains your existence better. More specifically, suppose one theory says that N times as many people you might presently be exist as another theory says. SIA says—and maybe there are weird caveats which we’ll get to later—that the first theory explains your existence N times as well as the second one.
According to SIA, if a theory predicts 10 times as many people exist as another theory, so long as for all you know you might be any of the people, then it’s 10 times as likely you would come to exist. Consider the following example:
God’s extreme coin toss: You wake up alone in a white room. There’s a message written on the wall: “I, God, tossed a fair coin. If it came up heads, I created one person in a room like this. If it came up tails, I created a million people, also in rooms like this.” Conditional on the message being true, what should your credence be that the coin landed heads?
If one is convinced of SIA, they should think that heads is 1 million times less likely than tails, because if the coin comes up tails, 1 million times as many people exist.
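To make the arithmetic explicit (a sketch of the update SIA licenses, with a fair-coin prior of 1/2 on each outcome, and each hypothesis’s likelihood weighted by the number of people it predicts you might be):

\[
P(\text{tails} \mid \text{you exist}) = \frac{\tfrac{1}{2}\cdot 10^{6}}{\tfrac{1}{2}\cdot 10^{6} + \tfrac{1}{2}\cdot 1} = \frac{10^{6}}{10^{6}+1} \approx 0.999999.
\]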
Now, let me get one important thing out of the way: SIA says that you should think that there are more people who you currently might be. That’s the relevant group of people. So if, for example, you have a red shirt, and God will create 1 person with a red shirt if a coin comes up heads, and 10 people with red shirts if the coin comes up tails, then tails is 10 times likelier than heads, because right now, for all you know, you could be any of the people with red shirts. In contrast, if God will make 1 person with a red shirt if a coin comes up heads, and 1 with a red shirt + 9 with blue shirts if it comes up tails, and you get created by this process with a red shirt, heads and tails are equally likely. This is because you know you’re one of the red-shirted people, and heads and tails predict equal numbers of red-shirted people being created. In other words, they predict equal numbers of people that you might presently be.
You might wonder: why is this what SIA tells us to care about? Why people who we might currently be? Well, it would be weird if SIA told you to think from the armchair that there would be a lot of shrimp. The existence of more other people different from you doesn’t increase the probability that you would exist. As we’ll see, the arguments that motivate SIA only give you reason to think there are more people that you might be, rather than that there are more people in total.
Here’s another way of thinking about it. Suppose we’re back to the case where God will make one red-shirted person, and then he’ll also make nine blue-shirted people if a fair coin comes up tails. If the coin comes up tails, perhaps there’s a sense in which you could have existed as one of the blue-shirted people. However, even if this is right, then this means it’s 10 times likelier that you’d exist, but it’s 1/10 as likely that you’d have a red shirt given that you exist. So adding more people other than you, even if it raises the probability of your existence, doesn’t actually make it more likely that you’d exist and have the properties that you have. Even if we say there’s some sense in which you could have been a shrimp, for instance, then a theory that says there are more shrimp might make it more likely that you’d exist, but proportionally less likely you’d exist as a non-shrimp. The two probabilities exactly cancel out.
(This is one cool feature of SIA—it has the elegant property that it doesn’t matter how you conceptualize who you could have been, a question there’s probably not even a fact of the matter about.)
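Here’s the cancellation written out, taking at face value the idea that you could have been any of the ten people. Let p be whatever probability you’d assign to coming to exist given heads (where one red-shirted person is made):

\[
P(\text{you exist and have a red shirt} \mid \text{tails}) = \underbrace{10p}_{\text{ten people}} \cdot \underbrace{\tfrac{1}{10}}_{\text{red, given existence}} = p = P(\text{you exist and have a red shirt} \mid \text{heads}).
\]

The factor of ten from there being more people is exactly undone by the factor of one-tenth from your having to be the red-shirted one.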
Anyway, granting SIA, the case for theism is pretty straightforward. Your existence, according to SIA, gives you reason to think that theories are better so long as they predict more people exist. But theism predicts that way more people exist than atheism. It’s good to create a person and give them a good life, so God would create, in his infinite power and goodness, some ungodly (Godly?) number of people. In contrast, on naturalism, one would expect there to be many fewer people. SIA tells you to think that there are tons and tons of people; there being this many people makes sense if there’s a God who wants to create, but little sense on naturalism.
Now you might think: isn’t infinite people enough? If an atheist believes in an infinite multiverse, or an infinitely big universe, then their theory predicts the existence of infinite people. How can theism have the advantage then?
The answer: because there are bigger and smaller infinities. This sounds weird, but it’s uncontroversial in math. If the universe were infinite in size, it would have aleph null people—that’s the smallest infinity. But the number of possible people is at least Beth 2—an infinity way bigger than aleph null. In fact, Beth 2 is infinitely bigger than something that is itself infinitely bigger than aleph null. Beth 2 is bigger even than the number of real numbers—and there’s no plausible atheistic account of reality on which Beth 2 people come to exist.
So even if the universe is infinite, it doesn’t have enough people. If SIA tells you to think that a theory with more people is better, a theory with infinitely more people is infinitely better. That means that the overriding question about a theory is how well it predicts that the number of people is the most it could be. Theories on which the number of people is the most it could be infinitely outperform other theories. After all, each infinity is infinitely bigger than the infinities smaller than it; therefore, by SIA, the theory on which there are the most people there could be predicts your existence infinitely better than other theories.
Thus, theism predicts at least Beth 2 people existing, while atheism does not. But your existence is infinitely likelier if Beth 2 people exist than if fewer than Beth 2 people exist. Therefore, let L (which, remember, is at least Beth 2) be the size of the collection representing the number of possible people:
1) There are at least L people (deduced from the fact that you exist).
2) The existence of at least L people is vastly likelier if theism is true than if theism is false.
3) Therefore, the fact that L people exist is strong evidence for theism.
Notably, to think that 1) is right and L people exist, you don’t need to think every possible person exists, for the following reason. Infinities come in different sizes. The sorts of infinities that mathematicians talk about are infinite sets. Two infinite sets can be the same size even if it seems like one has more stuff than the other—for example, the set of all positive natural numbers (1, 2, 3, 4…) is the same size as the set of all integers (…-4, -3, -2, -1, 0, 1, 2, 3, 4…). So long as you can pair each member of one infinite set with a member of the other, with none left over, the two sets are the same size. You could pair the positive natural numbers with the integers by pairing the nth even number with the nth non-negative integer, and the nth odd number with the nth negative integer. For instance, you’d pair 1 with -1, 2 with 0, 3 with -2, 4 with 1, and so on.
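Here’s that pairing as a quick sketch (the function name is mine, purely for illustration); it sends each positive natural number to a distinct integer, and every integer gets hit exactly once:

```python
def natural_to_integer(n: int) -> int:
    """Map the positive naturals 1, 2, 3, ... one-to-one onto the integers."""
    if n % 2 == 0:            # nth even number -> nth non-negative integer
        return n // 2 - 1     # 2 -> 0, 4 -> 1, 6 -> 2, ...
    else:                     # nth odd number -> nth negative integer
        return -(n + 1) // 2  # 1 -> -1, 3 -> -2, 5 -> -3, ...

# Every integer in a finite window is hit exactly once:
images = [natural_to_integer(n) for n in range(1, 21)]
print(images)                           # [-1, 0, -2, 1, -3, 2, ...]
assert len(set(images)) == len(images)  # no integer is hit twice
```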
Thus, in order to think that God creates a number of people that’s equal in size to the maximum number of people there could be, you don’t have to think he makes every possible person. This is analogous to the positive natural numbers being the same size as the integers, even though not every integer is a positive natural number. Additionally, if you think that the number of possible people is too big to be a set, as I’ll argue later, then God doesn’t need to make every possible person—he just needs to create a number of people too large to be a set.
Thus, to summarize the argument in two sentences: SIA says theories that say there are more people are better, in which case you should think the number of people is the maximum that it could be. This makes more sense if there’s a God who desires to create and has no limits on how much he can create than if there isn’t.
Important clarification: The proposal is that God makes a big multiverse. That’s where the people are. Obviously he does not stuff infinitely many people onto Earth.
2 Suppose I accept that there’s a giant infinity (at least Beth 2) worth of people. Why does that undermine atheism?
Suppose you grant what I’ve argued so far, that the number of people is the most there could be (which we’re calling L)—at least Beth 2 and probably more. Why is that a problem for atheism?
First of all, some fact is evidence for a hypothesis if that fact is likelier if the hypothesis is true than if it’s false. A person’s blood being at the crime scene is evidence that they committed the crime, because it’s likelier their blood would be at the crime scene if they committed the crime than if they didn’t. As we’ve already explored, it’s not unlikely that God would create a huge infinity worth of people. Thus, the odds of there being at least L people—where that’s the most there could be—on theism is pretty high. If a God is infinitely powerful and has a reason to create (that reason being: it’s good to create, and he’s perfectly good), we’d expect him to do a lot of creating.
In contrast, the odds of there being at least L people on atheism are super low. Remember, L is at least Beth 2, more than the number of numbers. On atheism, we would not expect there to be more people than numbers. In fact, the number of atheists who have proposed worldviews on which L people exist is around 2 (and we’ll discuss those later). To get at least Beth 2 people, one needs an extreme degree of gerrymandering and shenanigans—a natural multiverse doesn’t get Beth 2 people, nor does a universe that’s infinite in size. God can easily make Beth 2 people, but if he doesn’t do that, the odds of there being Beth 2 people are very low. If, as I’ll argue later, there are a lot more than Beth 2 people, the situation is significantly worse for the atheist.
Second, even if the atheist can explain the presence of L people, an atheist world will contain L deceived people. Throughout the infinite multiverse, there will be at least L Boltzmann brains—brains that randomly fizz into existence in the recesses of outer space before quickly disappearing—as well as infinite people with your present set of beliefs who are in some way massively deceived: simulated, on a planet about to be destroyed (and thus mistaken in thinking the sun will rise tomorrow), schizophrenic, and so on. But if there are infinite deceived people, and no more non-deceived people than deceived people, then you should have no confidence in the reliability of your cognitive faculties. You shouldn’t even trust that the sun will rise tomorrow, for there are just as many people with your exact experiences for whom the sun rises as for whom it doesn’t.
You might ask: aren’t there more non-deceived people than deceived people? No! The way infinite collections are measured is by their cardinality—two sets have the same cardinality when their members can be put into one-to-one correspondence. If you have five apples, and I have five bananas, they’re sets of the same size, because you can pair them 1:1.
Often, infinities can be the same cardinality even if one seems bigger than the other. For instance, the set of all prime numbers is equal in size to the set of all natural numbers, because you can pair them one to one: you can pair 1 with the first prime, 2 with the second prime, 3 with the third prime, and so on.
Crucially, even if deceived people are rarer and non-deceived people are common, the number (measured by cardinality) of deceived people will be the same as the number of non-deceived people. To see this, suppose that there are infinite galaxies. Each galaxy has 10 billion people who are not deceived and just one person who is deceived. Intuitively you’d think that there are more non-deceived people than deceived people.
This is wrong! There are the same number. Suppose the galaxies are arranged from left to right, with a leftmost galaxy but no rightmost galaxy. Imagine having the deceived people from the first 100 trillion galaxies move to the first galaxy, so that it now contains 100 trillion deceived people. Next, imagine having the deceived people from the next 100 trillion galaxies move to the second galaxy. If you keep doing this for all the galaxies, then just by moving people around, you can make each galaxy have 100 trillion people who are deceived and only 10 billion who aren’t. So long as the number of deceived people is not a function of where the people are located, it’s impossible to hold that there are more non-deceived people than deceived people based on the fact that deceived people are rarer. How rare deceived people are can be changed just by moving people around.
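A toy sketch of the re-indexing (mine, scaled down to batches of 100 rather than 100 trillion so it runs): no one is created or destroyed, people are only relabeled, yet the per-galaxy ratio flips:

```python
# Originally, galaxy k holds 10 billion non-deceived people and 1 deceived person.
# Re-indexing: send the lone deceived person from galaxy k to galaxy k // 100,
# so each receiving galaxy ends up hosting 100 deceived people.

def new_galaxy_of_deceived(k: int, batch: int = 100) -> int:
    """Galaxy k's single deceived person moves to galaxy k // batch."""
    return k // batch

# Count the deceived people who land in galaxy 0 under the re-indexing:
deceived_in_galaxy_0 = sum(
    1 for k in range(10_000) if new_galaxy_of_deceived(k) == 0
)
print(deceived_in_galaxy_0)  # 100: galaxy 0 now looks deceived-dense
```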
Thus, if naturalism is true, then if there are L people total, there are L people who are deceived, just as there are L people who aren’t. Your credence in your being deceived should thus be undefined. Just as finding yourself in a world where there are no more non-simulated people than simulated people should erase your confidence that you’re not simulated, finding yourself in an atheistic world where there are no more non-deceived people than deceived people should erase your confidence that you’re not deceived. An atheist who buys that the world is infinite thus can’t even coherently claim that induction is probably reliable.
You might ask: why not have a credence of 1/2 in your being deceived if there are both L deceived and L non-deceived people? The short version is that this produces inconsistent results; there are L people of all kinds: deceived, non-deceived, deceived-or-non-deceived, deceived and named Francois, and so on. The argument given above can also generate the conclusion that there are no more non-deceived people than deceived people named Francois—for by reordering the universe, you could make every galaxy be filled almost exclusively with deceived people named Francois.
Now you might worry: if God makes infinite worlds, won’t there still be infinite massively deceived people? So then shouldn’t this undermine induction? No, because for every particular person, they’re placed in a world optimal for their flourishing, which is unlikely to be a counterinductive world.
Here’s an analogy: suppose you’re in Hilbert’s hotel (an infinitely big hotel). There are infinite copies of you in the hotel. You roll a six-sided die. What’s your credence in the die coming up 1-5? The answer is, of course, 5/6. But note: the cardinality of the set of people who roll 1-5 is the same as the cardinality of the set of people who roll 6. In fact, by moving people around, you could make it so that every room has a hundred people who rolled 6 and only one who rolled 1-5—or the opposite. This is a weird property of infinity—if you have two infinite sets of the same cardinality, you can match them up any which way: ten members of one set to each member of the other, or the reverse.
On theism, the situation is rather like that in Hilbert’s hotel when you roll dice. Every particular person is in the scenario ideal for their flourishing: that probably doesn’t involve being deceived. However, on non-theistic views, there’s no analogous process: there are simply infinite deceived and non-deceived people, and the infinites are the same cardinality. Thus, non-theistic views consistent with SIA probably result in skepticism.
3 Four strong arguments for the self-indication assumption
Now that we know what SIA is and why it supports theism, why think it’s true? I’ve elsewhere made a list of 27 arguments for it, but here I’ll have to confine myself to a few of the best.
3.1 Doomsdayish arguments
Suppose at the dawn of history, there is to be one person. This person flips a coin. If it comes up heads, they’ll die off without having any offspring. If it comes up tails, they’ll have 999 offspring. None of the people know their birth rank for the first part of their lives—they don’t know if they’re the first person or one of the later people. However, after a while—say, halfway through their lives—they’re told their birth rank.
Consider the situation if you get created but haven’t yet found out your birth rank. If you reject SIA, you don’t think it’s more likely that the coin—whenever it’s flipped—comes up tails than that it comes up heads. Tails is only more likely if you accept SIA and think that your existence is more likely if more people exist (because if it comes up tails, 999 extra people get created). So one who rejects SIA should think, prior to finding out their birth rank, that the odds of tails are 50% and the odds of heads are 50%.
Let H = the hypothesis that the coin, whenever it’s flipped, comes up heads. ~H is the hypothesis that it comes up tails. Notice that I say comes up rather than has come up or will come up, because the person who doesn’t know their birth rank doesn’t know if the coin has been flipped yet or will be flipped in the future—they don’t know if they’re the first person or one of the later people.
Conditional on H, you’re guaranteed to be the first person. Conditional on ~H, you have a 1/1000 chance of being the first person. So after being created, ignorant of your birth rank, you should reason: if I find out that I’m the first person, H will be 1,000 times likelier than ~H.
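In Bayesian terms, with even priors on H and ~H:

\[
\frac{P(H \mid \text{first})}{P(\sim\!H \mid \text{first})} = \frac{P(\text{first} \mid H)}{P(\text{first} \mid \sim\!H)} \cdot \frac{P(H)}{P(\sim\!H)} = \frac{1}{1/1000}\cdot\frac{1/2}{1/2} = 1000.
\]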
Now suppose that you do find out that you’re the first person. If you reject SIA, you should think H is now 1,000 times likelier than ~H. But wait—H is the proposition that the coin, whenever it’s flipped, comes up heads. Now you know the coin hasn’t been flipped yet, because you’re the first person. So if you reject SIA, you must think that the odds of the coin coming up heads are 1,000 times greater than the odds of it coming up tails—even before the coin is flipped—just because you’ll have a bunch of kids if it comes up tails.
This shouldn’t be possible. Before flipping the coin in that scenario, you shouldn’t really think that the coin is 1,000 times likelier to come up heads than tails. It has a 50% chance of coming up heads, clearly.
Here’s one way to see this: imagine there’s an event with an objective probability of N%. That means that if the process is repeated, the event will happen N% of the time. For example, a coin has an objective probability of coming up heads of 50%, because if you keep flipping it, it will come up heads half the time. In such a case, if the only difference between the event happening and it not happening will be in the future, then your credence in it happening should be N%.
More precisely: if some event has an objective probability of N%, and the only difference between it happening and it not happening will be in terms of what will happen in the future, your credence in the event happening should be N%.
For example, suppose someone is going to flip a coin, and if it comes up heads, a city will be destroyed. I should think that there’s a 50% chance it will come up heads—because the city’s destruction lies wholly in the future, it shouldn’t influence my judgment, so I should just stick with the default 50% odds.
This principle rules out thinking the odds of the coin coming up heads, in the original scenario described, are more than half. Because the only differences between it coming up heads and tails are in terms of how many future people will be created, your credence in it coming up heads should be 50%.
There are two main ways to go if one rejects SIA. The first is to bite the bullet and just say that the odds of the coin coming up heads really are 1,000 times the odds of it coming up tails. But this implies even crazier things. Consider two more results that follow from it.
First experiment (Serpent’s Advice): Eve and Adam, the first two humans, knew that if they gratified their flesh, Eve might bear a child, and if she did, they would be expelled from Eden and would go on to spawn billions of progeny that would cover the Earth with misery. One day a serpent approached the couple and spoke thus: “Pssst! If you embrace each other, then either Eve will have a child or she won’t. If she has a child then you will have been among the first two out of billions of people. Your conditional probability of having such early positions in the human species given this hypothesis is extremely small. If, on the other hand, Eve doesn’t become pregnant then the conditional probability, given this, of you being among the first two humans is equal to one. By Bayes’s theorem, the risk that she will have a child is less than one in a billion. Go forth, indulge, and worry not about the consequences!”
If they discovered their birth rank, on such a picture, they’d get extremely strong evidence that they won’t have kids, because the theory according to which humanity doesn’t have many people makes it likelier that they’d be so early. For this reason, then, people who bite the bullet in the coin case will also have to bite the bullet in the Adam and Eve case. This gets even crazier in the second case, called Lazy Adam:
Assume as before that Adam and Eve were once the only people and that they know for certain that if they have a child they will be driven out of Eden and will have billions of descendants. But this time they have a foolproof way of generating a child, perhaps using advanced in vitro fertilization. Adam is tired of getting up every morning to go hunting. Together with Eve, he devises the following scheme: They form the firm intention that unless a wounded deer limps by their cave, they will have a child. Adam can then put his feet up and rationally expect with near certainty that a wounded deer – an easy target for his spear – will soon stroll by.
Once again, assume that they didn’t know that they were early at some time and then found it out. On such a picture, they get ridiculously strong evidence that they won’t have many kids, and so can be confident that whenever they firmly intend to procreate unless a wounded deer limps by, a wounded deer really will limp by.
Such a position is, I think, absurd. One doesn’t get magical powers to make whatever they want to happen happen by discovering that they’re early and then agreeing to have a bunch of kids unless the thing happens.
The second way to go is to reject the probabilistic principle behind the argument. On this picture, prior to finding out your birth rank, you should think the odds conditional on ~H (remember, that’s the proposition that the coin doesn’t come up heads when flipped) of you being the first person are 1/1,000, while they’re 100% conditional on heads. However, once you discover that you’re the first person, H doesn’t get confirmed. This has six problems.
1) It violates Bayesian conditionalization. After learning you’re person one—which is 1,000 times as likely if the coin comes up heads as if it comes up tails—this view holds that you should remain indifferent between heads and tails. But surely if you learn something that is 1,000 times as likely if a theory is true as if it’s false, your credence in the theory should go up.
2) It violates conservation of evidence. If you’re about to learn your birth rank, your credence in the coin coming up tails will, on average, go up. This is because if you learn you are the first person, your credence doesn’t change, while if you learn you’re not the first person, the coin coming up tails is confirmed. But this isn’t how probability should work: it shouldn’t be that you expect, on average, that some piece of future evidence will raise your credence. (The sketch after this list makes the arithmetic concrete.)
3) It violates the following principle. Suppose there are two theories A and B that are mutually exclusive and jointly exhaustive (one of them has to be right, but they can’t both be right), and B is true if and only if either C or D is true. Then learning that D is false should raise your credence in A. This principle is violated in our case: let A be heads and B be tails, and note that if the coin came up tails, you’re definitely either the first person (C) or one of the later 999 people (D). Learning that you’re not one of the later 999 people should therefore raise your credence in heads—but on this view, it doesn’t.
4) By changing the numbers, it implies that you should think there’s a 5/8 chance that a fair coin that hasn’t been flipped yet will come up heads. To see this, imagine that one person gets created. Then, at the end of the day the person is created, a coin is flipped. If it comes up tails, a second person is created. Then, at the end of the day the second person is created, a coin is tossed that has no effect. In this case, suppose you get created but don’t know your birth rank: what odds should you give to the following proposition: the coin that will be flipped at the end of today will come up heads?
Following the above reasoning, you should think there’s a 1/2 chance that only one person will exist from this process. But if there’s only one person, then you are guaranteed to be the first person—meaning the coin flipped at the end of today necessarily comes up heads. But you should also think there’s some chance that you’re the second person created and the coin will come up heads: specifically, you should think the odds of that are 1/8, as it requires three things, each with probability 1/2: the coin flipped at the end of day one comes up tails, you’re the second person, and the coin flipped at the end of day two comes up heads. Thus, in total, you should think there’s a 5/8 chance that the fair, as-yet-unflipped coin at the end of today will come up heads (4/8 you’re the first person and it comes up heads, 1/8 you’re the second person and it comes up heads). But this is absurd. Fair coins come up heads half the time, and the principle discussed earlier rules out a credence other than .5 in a fair coin coming up heads. (The sketch after this list checks this arithmetic.)
5) It gets worse when we return to the original case, where a person is created, and then a coin is tossed that creates an extra 999 people if it comes up tails. If you follow this line of reasoning and get created, not knowing your birth rank, you’ll think that it’s as likely that the coin comes up heads as that it comes up tails. But given that if the coin comes up tails, the odds are only 1/1000 that you’d be the first person, while you’re guaranteed to be the first person if the coin comes up heads, you should then hold the following belief: “if I’m the first person, the odds are about 1/1000 that the coin that will be flipped at the end of today will come up tails.” The 50% of probability space occupied by tails worlds is evenly split across the thousand positions you might occupy, while all 50% of the probability space occupied by heads worlds is concentrated on the scenario where you’re the first person and the coin will come up heads. But this is very counterintuitive—you shouldn’t think that if you’re the first person the coin is likelier to come up heads than tails: fair coins come up heads half the time, and the plausible principle discussed before rules out a credence of anything other than .5.
6) Problem 5) jibes especially poorly with the fact that the person adopting this view thinks that when you learn you’re the first person, you should then think heads and tails are equally likely. This is weird, given that they hold that, prior to learning you’re the first person, it’s 1,000 times likelier that you’re the first person and the coin will come up heads than that you’re the first person and the coin will come up tails. Prior to learning that you’re the first person, the adopter of this view thinks that if you’re the first person, there’s a 1/1,000 chance that the coin will come up tails—but if this is right, then learning you’re the first person should make you think there’s a 1 in 1,000 chance the coin will come up tails.
This follows from the following obvious principle:
If the probability of X given Y is R%, and you learn Y and nothing else, if you’re guaranteed to learn Y if it’s true, and you won’t be told it if it’s false, then your credence in X should be R%.
Example: if I’ll be told I’m in Kansas if I’m in Kansas, and the odds I’ll be in the left part of Kansas given that I’m in Kansas are .5, and then if I learn I’m in Kansas and nothing else, I should think the odds that I’m in the left part of Kansas are .5. There can’t be a counterexample to this, because the conditional probability of some event given the hypothesis just is how likely the event is given the hypothesis.
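Here’s a minimal sketch, in code, of the two computations above—the conservation-of-evidence failure in 2) and the 5/8 result in 4)—under the no-update view (the numbers all come from the cases as described):

```python
from fractions import Fraction as F

# --- Problem 2): conservation of evidence (1 person if heads, 1,000 if tails) ---
# The no-update view: after learning "I'm the first person," stay at 1/2 tails;
# after learning "I'm person k > 1," tails becomes certain.
p_learn_first = F(1, 2) * 1 + F(1, 2) * F(1, 1000)  # P(you learn you're first)
p_learn_later = F(1, 2) * F(999, 1000)              # P(you learn you're later)
expected_tails = p_learn_first * F(1, 2) + p_learn_later * 1
print(expected_tails)  # 2999/4000, about 0.75 > 1/2: credence is expected to rise

# --- Problem 4): the 5/8 coin ---
# Person 1 is created; coin 1 is flipped at the end of day 1; if tails, person 2
# is created; coin 2 (no effect) is flipped at the end of day 2. You don't know
# your birth rank. The no-update view: P(heads world) = 1/2; in tails worlds,
# you're person 1 or person 2 with probability 1/4 each. "Today's coin" is
# coin 1 if you're person 1, and coin 2 if you're person 2.
p_today_heads = (
    F(1, 2) * 1          # heads world: you're person 1; coin 1 is heads
    + F(1, 4) * 0        # tails world, you're person 1: coin 1 is tails
    + F(1, 4) * F(1, 2)  # tails world, you're person 2: coin 2 is fair
)
print(p_today_heads)  # 5/8
```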
Thus, this first argument for SIA is, in my view, completely decisive. Non-SIA views must hold that your credence in a fair, as-yet-unflipped coin coming up heads ought to be something other than .5, among other probabilistic absurdities. A view that affirms Bayesian conditionalization ends up licensing near-certainty about the outcomes of future fair chancy events, and views that reject Bayesian conditionalization produce similar absurdities.
3.2 Disconnected influence argument
In this section, I’ll provide an argument that vindicates SIA in general. For simplicity, I’ll discuss just one case—but note: the principle generalizes.
Imagine a coin gets flipped. If it comes up heads, one person gets created, while if it comes up tails, a million people get created. After getting created from this process, what should your credence be in the coin having come up heads? SIA holds that tails is a million times likelier than heads, while other views generally hold that heads and tails are equally likely.
But now imagine that this experiment is repeated every million years. Since the start of the universe 13.8 billion years ago, every million years, people have done this experiment: they’ve flipped a coin, and created a million people if tails and one person if heads. Now every view agrees with SIA’s verdict! If the experiment is repeated, coins are guaranteed to come up both heads and tails many, many times, and because tails creates a million times as many people as heads, there are a million times as many people created by tails coin flips. Thus, you should think at a million to one odds that you were created by a tails coin flip.
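Here’s a quick simulation of the repeated version (my own sketch, with the repetition count scaled so it runs quickly):

```python
import random

random.seed(0)
TAILS_CREATES = 1_000_000
heads_people = 0
tails_people = 0
for _ in range(10_000):          # the experiment, repeated many times
    if random.random() < 0.5:    # heads: one person created
        heads_people += 1
    else:                        # tails: a million people created
        tails_people += TAILS_CREATES

# A person drawn from this history is overwhelmingly likely
# to have been created by a tails flip:
print(tails_people / heads_people)  # roughly 1,000,000, up to sampling noise
```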
Thus, SIA’s rivals agree that tails is a million times likelier than heads in the case where the experiment is repeated every million years. But that means they imply that in order to figure out if a coin was flipped five minutes ago that created one person if heads and a million if tails, you need to know what was going on before the time of the dinosaurs. If they did the experiment a bunch of times before the time of the dinosaurs, then you should think that tails is a million times likelier than heads; if not, then you should think heads and tails are equally likely.
But this is absurd! The odds a coin was flipped five minutes ago that created one person if heads and a million if tails doesn’t depend on what was happening 10 million years ago!
It gets even worse. Now imagine that rather than the coin being flipped every million years, it’s flipped once every hundred galaxies throughout the observable universe. Each time, if it comes up heads, it creates one person in that galaxy, while if it comes up tails, it creates a million people. For the same reason, non-SIA views now hold that you should think heads is a million times less likely than tails. For every one person who gets created by heads coin flips, around a million get created by tails coin flips.
Thus, non-SIA views hold that to determine the odds a coin was flipped five minutes ago in our galaxy, that created one person if heads and a million if tails, you need to know what people are doing in other galaxies and were doing millions of years in the past. In fact, by the same logic, you need to know what people were doing in other galaxies millions of years in the past, and in spatiotemporally disconnected universes. This is completely insane!
3.3 Birth control
Here’s one case where SIA’s implication is a bit weird—coming from Joe Carlsmith’s PhD thesis:
God’s extreme coin toss: You wake up alone in a white room. There’s a message written on the wall: “I, God, tossed a fair coin. If it came up heads, I created one person in a room like this. If it came up tails, I created a million people, also in rooms like this.” Conditional on the message being true, what should your credence be that the coin landed heads?
SIA answers that tails is a million times likelier than heads, because tails predicts a million times more people existing. Other views disagree, for they deny that more people existing makes your existence more likely. Given that SIA is the only view that says tails is a million times more likely than heads in God’s extreme coin toss—and the only view that could say so in principle, since only it holds that more people existing makes your existence likelier—if this judgment is correct, then SIA is vindicated. Fortunately, I think we can show that it is correct. Consider first:
Single-case contraception: You wake up alone in a white room. There’s a message written on the wall saying “I am your father. Your mother and I were the only two people in existence prior to you. I flipped a coin before having sex. If the coin came up heads, I used extremely effective contraception with a failure rate of one in one million. If it came up tails I used no contraception. All sex acts without contraception produce children.” Conditional on the message being true, what should your credence be that the coin landed heads?
The obvious answer is that heads is a million times less likely than tails. From the fact that you exist, you get very strong evidence that your parents didn’t use super-effective contraception. If the contraception only has a failure rate of 1 in 1 million, then you get 1 million to one evidence against them using that contraception.
Here’s another way to see this: you know that your mom got pregnant. The odds of that are a million times higher conditional on the coin coming up tails than heads. Therefore, you get million-to-one evidence for tails over heads.
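As a likelihood ratio:

\[
\frac{P(\text{your mom got pregnant} \mid \text{tails})}{P(\text{your mom got pregnant} \mid \text{heads})} = \frac{1}{1/10^{6}} = 10^{6},
\]

so with a fair-coin prior, your credence in heads should end up at 1/(10^6 + 1).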
Next consider:
Contraception worldwide: You wake up alone in a white room. There’s a message written on the wall saying “The generation before you had one million people. A fair coin was flipped. If the coin came up heads, they all used extremely effective contraception with a failure rate of one in one million and had sex. If it came up tails they all used no contraception. All sex acts without contraception produce children.” Conditional on the message being true, what should your credence be that the coin landed heads?
Intuitively, this scenario is relevantly analogous to the one before. The odds you’d be born are a million times greater conditional on tails than conditional on heads, and likewise for the odds your mom would get pregnant. The only difference between this case and the last is whether other people are also using contraception—and the odds of your being born to parents who used contraception don’t depend on whether other people were using it. Next, consider:
Single prolific contraception: You wake up alone in a white room. There’s a message written on the wall saying “The generation before you had two people, including me, who had sex a million times. A fair coin was flipped. If the coin came up heads, for each sex act, we used effective contraception with a failure rate of one in one million and had sex. If it came up tails we used no contraception. All sex acts without contraception produce children.” Conditional on the message being true, what should your credence be that the coin landed heads?
This case is importantly like the last case. In both cases, a million sex acts happen either with or without contraception. The only difference is that in this case, one pair has all the sex, while in the last case, the sex was split across a million people. But surely that shouldn’t be relevant to probabilistic reasoning. Additionally, one can reason in the same way as they did in the last case: let sex act N denote whichever sex act produced the person in the room. For example, if they were produced by the 150th sex act, N is 150. The probability of one’s mom being impregnated by sex act N is 1 in 1 million if the coin came up heads, while it’s 100% if the coin came up tails. So one thereby gets 1 million to 1 evidence for tails.
Finally, consider:
God’s extreme coin toss: You wake up alone in a white room. There’s a message written on the wall: “I, God, tossed a fair coin. If it came up heads, I created one person in a room like this. If it came up tails, I created a million people, also in rooms like this.” Conditional on the message being true, what should your credence be that the coin landed heads?
This case is relevantly like the last one. In both cases, the first people flip a coin and then create a million people if the coin comes up tails and one person if it comes up heads. They only differ in how the people are made—but surely that’s irrelevant to the probabilistic reasoning.
Thus, in this case, as in the previous ones, it’s rational to think tails is a million times likelier than heads—and so SIA is confirmed.
3.4 Elga’s argument
In the first major paper on anthropic reasoning, Adam Elga presented an argument for SIA that is, I think, overwhelmingly decisive. Here’s a simplified example, though the same point applies to every case where SIA is applied: imagine a coin gets flipped and if it comes up heads, only Jon gets created. If it comes up tails, Jon and Jack get created. Suppose you get created but you’re not sure if you’re Jon or Jack. What odds should you give to the coin having come up heads? SIA answers: 1/3. Other views answer: 1/2.
The argument Elga gave for SIA’s verdict—well, he was talking about a different problem called the sleeping beauty problem, but the same point applies—is that the probability that you’re Jon and the coin came up heads = the probability that you’re Jon and the coin came up tails = the probability that you’re Jack and the coin came up tails. Thus, because there are three equally probable outcomes and two of them require that the coin came up tails, tails must be twice as likely as heads.
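Spelled out, the three cells get equal probability:

\[
P(\text{Jon} \wedge \text{heads}) = P(\text{Jon} \wedge \text{tails}) = P(\text{Jack} \wedge \text{tails}) = \tfrac{1}{3}, \qquad \text{so } P(\text{heads}) = \tfrac{1}{3},\; P(\text{tails}) = \tfrac{2}{3}.
\]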
It’s obvious that the odds that you’re Jon and the coin came up tails = the odds that you’re Jack and the coin came up tails. You have no special evidence either way, so you’re equally likely to be either of them. The controversial bit is that the probability that the coin came up heads and you’re Jon = the probability that the coin came up tails and you’re Jon.
But this part is pretty intuitive. Here’s one way to see it—suppose that it’s likelier that the coin came up heads and you’re Jon than that the coin came up tails and you’re Jon. That would mean that if you find out that you’re Jon, you should think the coin came up heads, with 2/3 probability. But how could this be—whether the coin comes up heads or tails, Jon exists, so it’s hard to see why finding out that you’re Jon should make you think heads is probably true.
Here’s another way to see it: imagine that Jon gets created on day one. Then, at the end of the day, a coin gets flipped. If it comes up tails Jack gets created. You wake up, not sure either what day it is or whether you’re Jack or Jon. If you reject SIA, prior to finding out what day it is, you should think there’s a 50% chance that you’re Jon and the coin, whenever it’s flipped, comes up heads, 25% chance that the coin comes up tails and you’re Jon, and 25% chance that the coin comes up tails and you’re Jack.
Then, suppose you find out that it’s the first day. The odds that it would now be the first day are 50% conditional on the coin coming up tails, and 100% conditional on it coming up heads. So now, if you reject Elga’s logic, you should think that there’s a 2/3 chance that the fair coin that hasn’t been flipped yet will come up heads. And then, weirdly, if you learn that the coin won’t create Jack if it comes up tails, you should be back to 50/50. This is very strange.
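The offending update, written out:

\[
P(\text{heads} \mid \text{day 1}) = \frac{1 \cdot \tfrac{1}{2}}{1 \cdot \tfrac{1}{2} + \tfrac{1}{2} \cdot \tfrac{1}{2}} = \frac{2}{3},
\]

a greater-than-half credence that a fair coin that hasn’t yet been flipped will land heads.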
This same point can be applied to vindicate SIA more broadly. Anytime a theory says there are more people, by this reasoning, that theory will be favored. For example, suppose that a coin gets flipped—a million people will be created if it comes up tails, one person if it comes up heads. Elga’s reasoning vindicates SIA’s verdict that tails is a million times likelier than heads: the odds the coin came up heads and you’re the first person = the odds the coin came up tails and you’re the first person = the odds the coin came up tails and you’re the second person = … = the odds the coin came up tails and you’re the millionth person. So from this basic, correct reasoning pattern, SIA is vindicated.
4 Objections
No doubt at this point you’re so compelled by the reasons presented that you’re shocked—shocked—that anyone might object to the argument. You’ll be disappointed to know that people do, in fact, object to the argument. So, let’s see what’s wrong with these objections. (If you want me to address the most popular objections to SIA, skip to section 4.7).
4.1 Would God do a lot of creating?
Here’s a first objection you might have: maybe God wouldn’t create at all! God alone is utterly perfect, and a world with him alone has no deficiencies. Thus, he’d have no reason to create. (If you want a very lengthy discussion of this and other related arguments, see here and here).
I think this objection is both wrong and not responsive. It’s wrong because, as I’ve argued in a published paper and various other places, it’s good to create. This is pretty intuitive; we normally think that so long as I have a good life, filled with love, joy, and achievement, my parents did something good in creating me.
It’s true that a world with God alone has the single best thing, but it’s not the best possible world. A world with God and other good stuff is better than a world with God alone because it has more good stuff. Now, if you count a world with God and other good stuff as a thing, then that thing is better than God alone, but God remains the best non-composite thing—the best single thing that doesn’t involve stitching together a bunch of different things.
Second, I think this objection is irrelevant to the argument. The anthropic argument claims that anthropic reasoning—specifically, SIA—raises the likelihood of theism. It’s not about whether God would create in the first place; it’s about how many people we should expect to exist given theism vs. given atheism. Thus, talking about whether God would create at all is changing the subject.
When evaluating an argument in a vacuum, you’re supposed to keep lots of stuff in the background. Specifically, you’re supposed to keep fixed the stuff irrelevant to the argument, and ask how much the stuff the argument is about raises the likelihood of one view. I claim we should keep in the background that there are some creatures like us, and have the update be from the number of creatures like us, as that’s the focus of the argument. This is for several reasons:
1) If you’re going to consider the arguments for why God wouldn’t create creatures like us, then you should also consider the arguments for why creatures like us wouldn’t be created if atheism is true. But that means that before figuring out if the anthropic argument is successful, you have to decide whether each of the following arguments is successful: the fine-tuning argument, nomological harmony, the argument from consciousness, psychophysical harmony, evil, the existence of the physical world, and much more! That’s totally unwieldy and makes it impossible to ever conclude that an argument is good. When evaluating an argument, ideally you ought to limit the scope of your focus to what is specific to the argument.
2) The number of creatures is specifically the focus of the anthropic argument. Just as it’s cheating to respond to the problem of evil by saying “our world’s evils are contingent and finely-tuned,” and then bring up the contingency and fine-tuning arguments, it’s cheating to bring in arguments about whether God would create at all when addressing an argument that is specifically about how many creatures God would create.
3) When evaluating an argument, and whether it raises the likelihood of a view, you should take as background the stuff that you ought to have already considered even if you’d never heard of the argument. This is why it’s cheating to bring up the existence of contingent stuff in response to the problem of evil; perhaps contingency favors theism, but it’s a separate consideration that you ought to have considered even if you’d never heard of the problem of evil. But even before considering the anthropic argument, when evaluating theism vs. atheism, you ought to consider the odds that God would create.
4) What determines if an argument is successful is whether it would convince a reasonable agnostic. If an argument would convince someone who was on the fence, or even a person who thought your view was probably false but wasn’t sure, it’s a pretty good argument. Sure, the problem of evil isn’t going to convince someone who is otherwise certain of theism, but it’s a good argument because it should convince someone who was on the fence and had never heard of evil. But crucially, a person who was on the fence should already have considered the various other arguments about whether or not God would create at all. Thus, these considerations don’t affect the argument’s success—whether it would convince a reasonable agnostic.
5) When evaluating the success of an argument, you should look at how many times likelier a person would think the view that the argument supports is after hearing the argument than before. This is called its Bayes factor. An argument from evil might be successful, even if it doesn’t ultimately convince someone, if it makes them think theism is less likely than before. But once again, before considering the anthropic argument, you should already have thought about whether God would create beings like us. Given that the theistic hypothesis you ought to have considered before encountering the anthropic argument should already have included the supposition that God would create, none of these considerations affect how much you should update on the anthropic argument.
4.2 What if I don’t buy SIA?
I’ve explained earlier why I think SIA is by far the best view. But even if you’re not sure about SIA, anthropic reasoning should favor theism.
The main rival of SIA is the self-sampling assumption (SSA). The idea behind SSA is that you should reason as if you’re randomly drawn from the people in your reference class. Your reference class is the collection of people enough like you that you reason as if you’re randomly drawn from them.
Let’s illustrate this with an example. Suppose a coin is flipped. If it comes up heads, 1 person with a red shirt gets created. If it comes up tails, 1 person with a red shirt and 9 people with blue shirts get created. If you get created with a red shirt, SSA says that tails is 10 times less likely than heads, because conditional on tails, there’s only a 1 in 10 chance you’d be the one with the red shirt.
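In Bayesian terms, SSA treats “I’m the red-shirted one” as evidence:

\[
\frac{P(\text{heads} \mid \text{red shirt})}{P(\text{tails} \mid \text{red shirt})} = \frac{P(\text{red shirt} \mid \text{heads})}{P(\text{red shirt} \mid \text{tails})} = \frac{1}{1/10} = 10.
\]

(SIA, by contrast, says heads and tails are equally likely here, since the two hypotheses predict equal numbers of red-shirted people.)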
The problem is, on SSA we’d expect the world to look very different. SSA majorly favors solipsism and other theories on which there are few people. If the world is just you, there’s no one else in your reference class. Thus, on SSA, it’s weird that you exist in a world with so many other people. In contrast, as I’ve argued above, given SIA, theism is the most likely worldview. If two views fit together really well, this should raise your credence in the conjunction of them.
For this reason, given that theism + SIA makes your existence likely, while your existence is unlikely on SSA, your existence gives you a big update in favor of theism + SIA. Therefore, if you’re not sure which anthropic theory is correct, because theism and SIA fit together so well, you should believe the pair of them.
But we can imagine the situation is worse. Maybe you’re not just uncertain about SIA—you positively reject it. You’re confident that it’s false. If this is so, I still think the anthropic situation favors theism.
Theories of anthropics will generally favor either big or small worlds—either worlds with huge infinities worth of people or with few. SIA favors a big world—it gives you infinitely strong evidence that there are infinite people. SSA favors a small world—it tells you to think your reference class is small. The third most popular anthropic view, compartmentalized conditionalization (I won’t bore you with the details) also favors big worlds.
Theism is compatible with either big or small worlds. As I explained earlier, I think theism naturally predicts a big world, but most theologians seem to think we’re alone. Let’s say there’s a 1% chance they’re right. That means there’s a 1% chance conditional on theism that there’d be a small world.
Thus, on every theory of anthropics theism does at least okay. It predicts high odds of a huge world and decent odds of a tiny world. Whether anthropic reasoning favors big or tiny worlds, theism does okay—though better if it favors big worlds.
In contrast, if naturalism has people at all, it predicts a medium-sized world—one with either finitely many or aleph null people. But that’s an unhappy medium—the worst of both worlds. It’s too big to be viable on SSA—it bloats the reference class too much—but too small to be viable on SIA and compartmentalized conditionalization. For this reason, with few exceptions, whatever view of anthropics one has, theism is good news.
4.3 Maybe there are only aleph null possible people
Content warning: technical.
I discussed earlier that some infinities are bigger than others. If the universe is infinite in size, it would only have aleph null people—that’s the smallest infinity. But maybe you think that there are only aleph null possible people, so an infinite universe could have the most people that there could be. Atheism doesn’t have too much trouble with an infinitely big universe!
Let me note first of all that to hold such a view, you absolutely have to believe in souls. If you think people are just arrangements of stuff, the number of possible arrangements of stuff is at least Beth 2: there are Beth 1 spacetime points, and the number of different arrangements over Beth 1 points is Beth 2. So long as you think a field that takes a value at each spacetime point is possible, there are Beth 2 possible distinct worlds with fields, which could contain duplicates of the same agents.
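The counting step, made explicit: even restricting attention to a field that takes just two values at each point,

\[
\left|\{0,1\}^{\text{spacetime points}}\right| = 2^{\beth_1} = \beth_2,
\]

so there are at least Beth 2 possible total arrangements.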
Then, after you believe in souls, you have to think that there’s a weird arbitrary limit on the number of souls. For some reason, the number of souls just runs out when you get to aleph null. Why would this be? Unclear.
This view is really weird. It implies that it would be possible to just run out of souls in an infinitely big universe. On such a picture, if the universe is infinite in size, there might be people who try to have a baby but can’t because each of the souls is taken. Such a view should be rejected.
The view has another big problem—as I’ll argue, there are strong grounds for thinking that the number of possible people is not just aleph null or Beth 2 but unsetly many—too big to be a set. The infinities mathematicians talk about are infinite sets—but I think the number of possible people is a collection too large to be a set.
To explain what this means, I’ll have to introduce a bit of terminology. A set is a specific kind of collection. For example, if I have an apple and a banana, there’s a set of an apple and a banana. If you take the power set of a set, you get a bigger set—the power set is the set of all subsets of the original set (a subset is a set containing only members of the original set). For example, the set {1, 2} has 4 subsets—the empty set containing nothing, the set with just 1, the set with just 2, and the set with both 1 and 2.
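A quick sketch of the counting fact (an n-element set has 2^n subsets, which is why power sets are always strictly bigger):

```python
from itertools import combinations

def power_set(s):
    """All subsets of s, from the empty set up to s itself."""
    items = list(s)
    return [set(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

print(power_set({1, 2}))       # [set(), {1}, {2}, {1, 2}]
print(len(power_set({1, 2})))  # 4 == 2 ** 2
```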
This point about the power set of a set being bigger than the set also applies to infinities. Aleph null—also called Beth 0—is the smallest infinity. It’s the number of natural numbers: 1, 2, 3…. Beth 1 is the size of the power set of a Beth 0 set, Beth 2 is the size of the power set of a Beth 1 set, and so on. Incidentally, Beth 1 is the number of real numbers—those include the integers like 1, 2, and 3, as well as all the fractions and infinite non-repeating decimals.
My claim is that the collection of possible people isn’t just aleph null—in fact, it’s too big to be any set at all. What this means is that for any set, of any size, one can make a bigger set using only part of the collection of all possible people.
Talking about things too big to be a set might sound a bit weird, but some things are like this. There’s no set of all truths, for example. It’s too big to be a set for the following reason. Suppose the truths formed a set with, for example, cardinality Beth 10 (cardinality measures the size of a set). Because if A is a truth and B is a truth then A-and-B is a truth, every subset of the Beth 10 truths would correspond to a distinct truth—the conjunction of its members—so there’d have to be at least Beth 11 truths. The same reasoning works for any set size, so the number of truths is more than any set can hold. I claim possible people are like truths—the collection of them is too big to be a set.
(If this is true, it’s bad news for naturalism, because there’s no plausible story of how the number of people that exist, on naturalism, is too big to be a set).
There are two pretty good arguments for there being no set of all minds and one overwhelmingly decisive argument.
1) There is no set of all truths. But it seems like the truths and the possible minds can be put into one-to-one correspondence: for every truth (with the exception of truths that reference their own unknown status), there is a distinct possible mind that thinks of that truth. Therefore, there must be no set of all possible minds.
2) As Pruss argues, it’s very plausible that any world can be duplicated any number of times. Surely you could simply copy this world Beth 1 times. But if, for any infinite set, you could make that many copies of the world, then there must be no set of all possible people. If the set of all possible people had cardinality Beth 10, then by duplicating this world Beth 11 times, you’d get more people than are possible.
These arguments are pretty good. But there’s an even more decisive argument that appeals to SIA itself. There are two ways of formulating SIA. One says that you should favor a theory insofar as it says there are more people (this is called the unrestricted self-indication assumption, or USIA); the other says you should favor a theory insofar as it says a greater share of possible people are created (the restricted self-indication assumption, or RSIA).
To see the difference, suppose that there are two otherwise equally good theories. If one of them is true, there are Beth 1 possible people and they all exist. If the other is true, there are Beth 2 possible people and they all exist. On RSIA the theories are equal—they both say that every possible person is created. On USIA, the second theory is way better, because it says there are many more people.
If USIA is true, then you shouldn’t think there are only aleph null people—the theory that there are unsetly many people is infinitely better than any theory on which the number of people is merely some set’s worth. Fortunately, we have three strong arguments for USIA.
First, Elga’s argument from before supports USIA. Suppose that we’re considering two theories—one says there’s one possible person, who exists; the other says there are two possible people, who both exist. Assume they’re equally credible on other grounds. By Elga’s reasoning, the odds you’re the first person and only one person exists = the odds you’re the first person and two people exist = the odds you’re the second person and two people exist—so the two-person theory is twice as likely. By the same reasoning, a theory on which there’s a bigger infinity of people is infinitely more likely than a theory on which there’s a smaller infinity.
Second, RSIA has crazy implications. For example, suppose that one is a necessitarian: one thinks that nothing other than what actually happens is possible (so every possible person is created). On such a picture, the fact that you exist would give you infinitely strong evidence for necessitarianism, even over a theory on which there are many more people but a smaller share of possible people. This is clearly absurd.
Third, suppose we discover that there are Beth 1 of a kind of creature called a Zorg. Suppose additionally that you are created and you don’t know whether you’re a Zorg or a human. There are two theories: on the first, there are aleph null possible and actual humans; on the second, there are Beth 1 possible and actual humans. You reason: “if there are only aleph null humans, it’s infinitely unlikely that I’d be a human rather than a Zorg.”
Then you learn that you are a human! This gives you an infinitely big update toward the second theory. Thus, so long as your credence in being a human in this case doesn’t depend on the number of things other than humans (in this case, Zorgs), USIA inevitably follows.
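In Bayes-factor form (my gloss; the zero-or-infinitesimal probabilities are, as elsewhere in this section, used loosely):

```latex
P(\text{I'm a human} \mid \aleph_0 \text{ humans}, \beth_1 \text{ Zorgs}) \approx 0,
\qquad
P(\text{I'm a human} \mid \beth_1 \text{ humans}, \beth_1 \text{ Zorgs}) > 0.
\text{So learning that you're a human yields }
\frac{P(\text{human} \mid \text{theory 2})}{P(\text{human} \mid \text{theory 1})} \to \infty.
```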
4.4 Why apply SIA to infinities?
(Credit to Robin Collins for the objections in this section).
A common worry is that SIA shouldn’t be applied to infinities. Perhaps it only applies to finite collections, or perhaps it gives you no reason to think the collection of people is a bigger infinity rather than a smaller one. Perhaps, after you conclude that there are infinitely many people, SIA doesn’t care how big that infinity is.
But note: the arguments given for SIA in section 3 are all reasons why you should think the number of people is a bigger rather than a smaller infinity. I’ll just walk through one of them so that you see the picture (as this article is already 10,000 words), but the same points apply to all the other arguments for SIA.
Suppose that aleph null people get created and then a coin gets flipped that creates Beth 1 more people if it comes up tails (Beth 1, remember, is infinitely bigger than aleph null, though both are infinite). Upon being created, you should think tails is infinitely likelier than heads because:
The odds you're one of the first aleph null people and the coin will come up heads = the odds you're one of the first aleph null people and the coin will come up tails.
The odds you're one of the first aleph null people given that the coin comes up tails are 0 or infinitesimal.
The only possible options are: you're one of the first aleph null people and the coin comes up heads; you're one of the first aleph null people and the coin comes up tails; or you're one of the later Beth 1 people.
Therefore, tails is infinitely likelier than heads (all of heads' probability space is equal in probability to an infinitely small slice of the possible tails outcomes).
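Chaining the steps (writing F for “I’m one of the first aleph null people,” H for heads, T for tails):

```latex
P(H) \;=\; P(F \wedge H)             % on heads, only the first \aleph_0 people exist
     \;=\; P(F \wedge T)             % the equality from the first step
     \;=\; P(F \mid T)\,P(T) \;\le\; P(F \mid T) \;\approx\; 0.
\text{So virtually all of the probability lies with tails.}
```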
The above shows what’s wrong with the common worry that the odds of your existence are zero or infinitesimal. It’s not obvious that the odds really are zero or infinitesimal—maybe God creates every possible person, and thus your existence isn’t infinitely unlikely. But even if he doesn’t, you should hold in the background that you’re one of the created people. So long as you do that, the above argument will show that you should be certain that infinite people were created.
Analogy: imagine a person who committed a crime. Their blood was found at the crime scene and their DNA was found on the murder weapon. When testifying (before a courtroom of weird Bayesians, no doubt) they say to the courtroom: “that my DNA was found on the murder weapon isn’t evidence that I did it, because if we take into account the total evidence, which includes the infinitely surprising fact that pi is as it is, and that you exist, it’s no evidence either way.” This would be a bad argument, because you’re supposed to ignore bits of irrelevant data that don’t point in either direction. So long as we do that, ignoring the people who won’t exist on either theory, the above argument leaves us certain that a bigger infinity’s worth of people was created.
4.5 Can every possible person be created?
You might worry that God can’t create every possible person, as there are always more people who could be created: the collection of possible people is inexhaustible. I don’t think this is obvious; it might be that he makes every soul, so there are no more he could make. But even if it’s right, it’s irrelevant to the argument. The anthropic argument doesn’t need the assumption that God makes every possible person. It only requires that theism predicts more people existing than atheism does. In the coinflip case, where tails creates ten people and heads creates one, tails is ten times likelier than heads, even though neither outcome involves the creation of every possible person.
4.6 What about modal realism? And the Tegmark view?
Of all the theories ever proposed by atheists, there are, to the best of my knowledge, only two that naturally predict the existence of the most people that there could be. They are modal realism, according to which every possible world exists, and Tegmark’s view, according to which every possible mathematical structure is physically instantiated.
The problem is that both of these views undermine induction. I’ve already explained in section 2 why I think any atheistic view will inevitably undermine induction. However, these views even more clearly undermine induction.
Let’s start with modal realism. If modal realism is right, and every possible world exists, then for every world where the laws continue functioning smoothly, there are infinite worlds where they chaotically break down. For instance, there could be a law that will make the world become a puff pastry one second from now, or a head of cabbage, or an octopus, or a rock, or infinitely many other things.
Now, the infinite sets might have the same cardinality, but at the very least, there are no more inductive worlds than non-inductive worlds. Thus, you have no reason to trust induction. (Modal realism also has other problems, like potentially leading to complete moral nihilism, implying that there are more worlds than there could be, and poorly explaining modality; the first two of these criticisms likely also apply to the Tegmark view.)
The induction worry also applies to the Tegmark view. For every world where the mathematical laws continue working normally, there are a bunch where they chaotically break down. There are more complicated and chaotic mathematical laws than orderly ones, so the odds of order persisting are infinitesimal or undefined (the cardinalities might well be the same, but then the odds of induction are undefined, and you can’t even hold to obvious truths like that the sun will probably rise tomorrow).
Additionally, even if somehow these views can get the right number of people without undermining induction, the anthropic argument still succeeds because the odds of modal realism or the Tegmark view are super low if atheism is true. They’re highly specific and face many objections. The fact that you can explain some phenomenon on a theory doesn’t mean the phenomenon isn’t evidence against the theory—especially if the assumptions needed to explain it are specific and unlikely.
Thus, the existing theories that are supposed to explain the anthropic data face incurable difficulties. Only theism adequately explains your existence.
4.7 Does SIA imply you should think there are infinite chairs, tables, and so on?
Here’s a common objection that people give when they first hear about SIA: if you should think there are infinite people, because that makes your existence more likely, shouldn’t you think there are infinite tables and chairs, on the grounds that that makes any table or chair you see more likely to exist? Or, to use an example from a recent critic, isn’t there a parallel argument for God from your pet rock’s existence, since it’s likelier to exist if more rocks are created? (My comments here are a slightly edited version of the comments that I left beneath that article.)
This is the parody everyone immediately thinks of; the disanalogy has been explained in various places, including here and here. I feel as though I spend about a third of my life, by email, in person, and by phone, explaining why this parody fails: “if SIA is true, why not think there are loads of other things, like chairs and tables and rocks?”
Let me first be clear on the extent to which your existence does give you evidence for infinite pet rocks. From SIA, your existence gives you evidence that there are more people that you might presently be. Thus, you should favor theories on which there are more people with your experiences—which will include your experience of interacting with rocks. After all, you know that you have your experiences. Therefore, directly from SIA, you get some evidence for many rocks—specifically, you get evidence that there are copies of you that have the experience of seeing rocks.
However, what SIA does not license you to do is conclude that there are more rocks that aren’t seen by observers you might presently be. SIA has no preference for hypotheses on which, say, space is filled with rocks, so that one of them might be your pet rock.
It’s true, on such an assumption, that the odds of your pet rock existing are higher if the world has more rocks. But the odds are no higher that your pet rock would exist and be seen by you. If there are more rocks, assuming you only see one of them, then while any rock is likelier to exist, it's no likelier to be seen by you.
(This is assuming that your pet rock might have been one of the rocks in space. If we think part of what’s required for it to be that particular rock is for it to be created on earth, then the number of space rocks doesn’t affect the odds of your rock existing).
This is crucially different from your existence: no matter where you exist, you'd be the first to know about it.
While SIA and principles on which your pet rock’s existence gives you evidence for many rocks sound similar, none of the motivations are the same. Look at the arguments for SIA in section 3: not a single one of them can be made for the conclusion that there are infinite rocks.
For instance, the Elga argument (the core probabilistic basis for SIA) goes as follows. Suppose a coin is flipped that creates Bob if it comes up heads, and both Bob and Todd if it comes up tails. The argument goes:
The odds you’re Bob and the coin came up heads = the odds that you’re Bob and the coin came up tails.
The odds you’re Bob and the coin came up tails = the odds that you’re Todd and the coin came up tails.
Therefore, tails is twice as likely as heads.
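If it helps, here is a minimal Monte Carlo sketch of that argument. The model (mine, not Elga’s) renders SIA as “treat yourself as a uniformly random possible person, then condition on having been created”; it recovers the two-to-one ratio:

```python
import random

# Heads: only Bob is created. Tails: Bob and Todd are both created.
# You are a uniformly random POSSIBLE person; we keep only the runs
# in which that person actually exists, then count how often the coin
# came up tails among the kept runs.

N = 1_000_000
tails_and_exist = 0
exist = 0

for _ in range(N):
    tails = random.random() < 0.5          # fair coin flip
    me = random.choice(["Bob", "Todd"])    # which possible person you are
    created = {"Bob", "Todd"} if tails else {"Bob"}
    if me in created:                      # condition on existing at all
        exist += 1
        tails_and_exist += tails

print(f"P(tails | I exist) ~= {tails_and_exist / exist:.3f}")  # ~0.667
```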
None of these arguments can be made for the principle that, say, your pet rock’s existence gives you evidence for infinite pet rocks.
The SIA wasn’t invented by me; it has been defended and criticized by some extremely smart people, and such people, in the academic community, do not make this criticism. If you find yourself thinking that a philosophical view held by smart and competent people is obviously stupid for a simple reason that has strangely yet to show up among its academic critics, you should generally reconsider.
4.8 Isn’t it counterintuitive that SIA leads you to think there are infinite people? Isn’t that presumptuous?
The most common objection to SIA raised by serious people is that it’s quite counterintuitive that your existence gives you infinitely strong evidence that there are infinite people. Hopefully the things I’ve said in previous sections head this off. As I’ve shown, views other than SIA imply that you should be presumptuous about the outcomes of future events: that, for instance, you should be certain you’ll get ten royal flushes in a game of poker, so long as you learn both your birthrank and that many people will be created if you don’t get ten royal flushes.
In fact, this is a much worse kind of presumptuousness! It’s much worse for a view to strongly presume that future events will turn out in some highly improbable way, when those future events are genuinely indeterministic, than that past events have turned out some way. We often think unlikely things happened because they explain things about the world (e.g. I think the improbable event of Alexander the Great’s existing and conquering many surrounding countries happened). What makes SIA’s rivals so bad is that they result in near certainty that future improbable things will happen, when nothing about the world so far gives you evidence that they will. It’s much worse to think your existence gives you evidence that in a game of poker that hasn’t yet been played, you’ll get a hundred royal flushes, than that your existence gives you evidence the universe is big—for a bigger universe is likelier to have you.
In addition, SIA doesn’t actually tell you to be certain that the universe is infinite. You shouldn’t be certain of SIA, and if you’re not certain of SIA, you won’t be certain in an infinite universe.
This result doesn’t seem so bad. It’s intuitive that your existence is likelier if more people exist. One reason solipsism is improbable is that if there was only one person, it’s unlikely they’d be you! But if your existence is likelier if there are more people, it’s not unreasonable to think you’re infinitely likelier to exist in an infinite universe.
Lastly, alternatives to SIA imply similar kinds of absurd presumptuousness. For instance, as I’ve shown in a recent paper (section 5), any view other than SIA that follows Bayesian conditionalization will imply that you should be certain that the early Earth didn’t contain many humans, even if the archeological evidence says that it did. I’ll quote my explanation of this in a footnote.[2] All theories of anthropics are presumptuous; the question is simply what they presume. Better for your existence to give you evidence that there’s a bigger universe, likelier to house you, than for it to give you evidence that fair coins will come up heads with certainty, that fair games of poker will contain long chains of royal flushes, and that the early Earth wasn’t populated with many Neanderthals.
5. Conclusion
Here, I’ve explained and defended the anthropic argument. Even at nearly 14,000 words, this article is far from comprehensive. I have much more to say about why SIA is right, though most of it is explained at the link here, and about why the alternative theories are very bad (for some problems with its main rivals, see here, here, here, and here). Nonetheless, I have responded to the main objections, and argued relatively comprehensively that SIA is correct.
As I’ve explained, the only bit of the argument that is controversial is SIA. If SIA is right, there is an overwhelmingly powerful argument for theism. Fortunately, SIA is about as well-attested as any view in philosophy, being supported by a wealth of strong arguments, only a few of which I’ve been able to list. Were I still an atheist, I haven’t the faintest clue what I’d say about this argument. While one can always reject an argument, I take this one to have overwhelming force, and do not think anything plausible can be said in reply.
[1] It does not really count as a collection, for technical reasons, but I’ll just call it that for simplicity.
[2] Quoting my article:
More can be said in favor of the presumptuous philosopher. In fact, SSA, the main rival of SIA, is similarly presumptuous (Carlsmith, 2022, p.44–45). To see this, consider the following case (not from Carlsmith, but broadly similar to his examples):
Presumptuous Archeologist: archeologists discover that there was a type of prehistoric humans that was very numerous—numbering in the quadrillions. Over time, they uncovered strong empirical evidence that these beings were exactly like modern humans—certainly similar enough to be part of the same reference class. The archeologists give a talk about their findings. At the end of the talk a philosopher gets up and declares “your data must be wrong, for if there were that many prehistoric humans then it would be unlikely we’d be so late (for most people in your reference class have already existed). Thus, they can’t be in our reference class and therefore the data must be wrong.”
Clearly here the philosopher is being irrational. Yet this follows from SSA. SSA gives one reason to think that their reference class is small, and thus one must reject archeological evidence that conflicts with this belief. You might worry that this assumes some controversial notion about there being a symmetry, on SSA, between past and future observers. However, I think this problem will apply to every view that updates on one’s existence and rejects SIA. If a universe with more prehistoric humans is no more likely to contain me, but a world with more prehistoric humans makes it less likely that I’d exist so late, then one has strong reason to reject the existence of many prehistoric humans. This can be seen once again in the same way: if I didn’t know when I existed, and the existence of more people like me doesn’t make my existence more likely, then I would initially assign equal probability to the hypothesis that there are many prehistoric humans like me and that there are few. Upon finding out that I’m not a prehistoric human, however, I’d get extremely strong evidence that there are few prehistoric humans. Thus, just as SIA proponents must disagree with the cosmologists, so too must those who reject SIA disagree with the archeologists!
In addition, attempts to avoid the presumptuous philosopher result have severe problems (for a helpful collection of them, see Olum, 2002). Thus, even if the presumptuous philosopher result is as bad as the results surveyed in this paper, other considerations provide one strong reason to adopt SIA.
An additional argument can be given for the presumptuous philosopher result. The presumptuous philosopher is relevantly analogous to:
Presumptuous Hatcher: Suppose that there are googolplex eggs. There are two hypotheses: first that they all hatch and become people, second that only a few million of them are to hatch and become people. The presumptuous hatcher thinks that their existence confirms hypothesis one quite strongly.
This case is relevantly like the presumptuous philosopher case. Both the presumptuous philosopher and the presumptuous hatcher infer that there were many people that existed. The only difference is that in the Presumptuous Hatcher, both existent and non-existent people are paired with eggs, which only existent people hatch from.
Yet in Presumptuous Hatcher, the reasoning seems perfectly sound. Specifically, upon emerging from one’s egg, one can safely reason in the following way: the probability that my egg would hatch is much higher on the first hypothesis than on the second. Given that it did hatch, I have very strong evidence that the first hypothesis is true.
We can give an additional argument for the correctness of the presumptuous hatcher’s reasoning. Suppose that one discovered that when the eggs were created, the people who the eggs might have become had a few seconds to ponder the anthropic situation. Thus, each of those people thought “it’s very unlikely that my egg will hatch if the second hypothesis is true but it’s likely if the first hypothesis is true.” If this were so, then the presumptuous hatcher’s reasoning would clearly be correct, for then they straightforwardly update on their hatching and thus coming to exist! But surely finding out that prior to birth, one pondered anthropics for a few seconds, does not affect how one should reason about anthropics. Whether SIA is the right way to reason about one’s existence surely doesn’t depend on whether, at the dawn of creation, all possible people were able to spend a few seconds pondering their anthropic situation. Thus, we can argue as follows:
The presumptuous hatcher’s reasoning, in the scenario where they pondered anthropics while in the egg, is correct.
If the presumptuous hatcher’s reasoning, in the scenario where they pondered anthropics while in the egg, is correct, then the ordinary presumptuous hatcher’s reasoning is correct.
If the ordinary presumptuous hatcher’s reasoning is correct then the presumptuous philosopher’s reasoning is correct.
Therefore, the presumptuous philosopher’s reasoning is correct.
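To put rough numbers on the hatcher’s update in the quoted passage (my arithmetic, reading “a few million” as 10^6 and a googolplex as 10^(10^100)):

```latex
\frac{P(\text{my egg hatches} \mid H_1)}{P(\text{my egg hatches} \mid H_2)}
\;=\; \frac{1}{\,10^{6} / 10^{10^{100}}\,}
\;=\; 10^{\,10^{100} - 6},
\text{an overwhelming Bayes factor in favor of } H_1.
```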