# 1. Explaining how the argument works

I’ve written a lot about the anthropic argument for the existence of God. There were a few months where most of what I wrote seemed to be in support of the anthropic argument, albeit scattered across lots of different articles. The argument has been mostly ignored—with one notable exception—and part of that may be my fault: I don’t have one place where I lay it out comprehensively. I thought this would be a good place to do it.

The argument is, by my estimation, *overwhelmingly powerful*. I think it has force like almost nothing else in the God debate—with the only arguments of comparable force being the problem of evil, fine-tuning, and psychophysical harmony. Even if I were pretty sure theism was false prior to considering the argument, after considering it, I’d be pretty sure theism is true. Here, I want to convince you to share my judgment, that learning about the anthropic argument should *dramatically boost* your credence in theism.

The argument is surprisingly simple to lay out, though analyzing it in detail gets into some tricky territory. It begins with the assumption that something called the self-indication assumption (SIA) is true. That’s the view that a theory that predicts more people existing explains your existence better. More specifically, suppose one theory says that N times as many people exist as another theory. SIA says—and maybe there are weird caveats which we’ll get to later—the first theory explains your existence N times as well as the second one.

Imagine thinking of your existence as being drawn randomly from the collection of possible people. If a theory predicts 10 times as many people exist, then it’s 10 times as likely you would come to exist. That’s how SIA instructs you to reason. I think this should be clear, but just in case, let’s give an example.

God’s extreme coin toss: You wake up alone in a white room. There’s a message written on the wall: “I, God, tossed a fair coin. If it came up heads, I created one person in a room like this. If it came up tails, I created a million people, also in rooms like this.” Conditional on the message being true, what should your credence be that the coin landed heads?

If one is convinced of SIA, they should think that heads is 1 million times less likely than tails, because if the coin comes up tails, 1 million times as many people exist.

Now, let me get one important thing out of the way: SIA says that you should think that there are more people *who you currently might be*. That’s the relevant group of people. So if, for example, you have a red shirt, and God will create 1 person with a red shirt if a coin comes up heads, and 10 people with red shirts if the coin comes up tails, then tails is 10 times likelier than heads, because right now, for all you know, you could be any of the people with red shirts. In contrast, if God will make 1 person with a red shirt if a coin comes up heads, and 1 with a red shirt + 9 with a blue shirt if it comes up tails, and I get created by this process with a red shirt, heads and tails are equally likely.
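This weighting by the people you might currently be is easy to make concrete. Here’s a toy Bayes calculation in Python covering both shirt cases above (the function and its names are my own illustration, not anything standard in the anthropics literature):

```python
from fractions import Fraction

def sia_posterior_heads(might_be_heads, might_be_tails, prior_heads=Fraction(1, 2)):
    # SIA: weight each world's prior by the number of people you
    # currently might be in that world, then renormalize.
    w_heads = prior_heads * might_be_heads
    w_tails = (1 - prior_heads) * might_be_tails
    return w_heads / (w_heads + w_tails)

# Heads -> 1 red-shirted person; tails -> 10 red-shirted people.
# You have a red shirt, so you might be any of them.
print(sia_posterior_heads(1, 10))  # 1/11, i.e. tails is 10 times likelier

# Heads -> 1 red; tails -> 1 red + 9 blue. You have a red shirt, so in
# each world there is exactly one person you might be.
print(sia_posterior_heads(1, 1))   # 1/2, i.e. heads and tails equally likely
```

The second call shows why the reference class matters: the nine blue-shirted people never enter the calculation, so they do nothing to favor tails.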

You might wonder: why is this what SIA tells us to care about? Why people who we might currently be? Well, it would be weird if SIA told you to think from the armchair that there would be a lot of shrimp. The existence of more other people different from you doesn’t increase the probability that *you* would exist.

Here’s another way of thinking about it. Suppose we’re back to the case where God will make one red-shirted fellow, and then he’ll also make nine blue-shirted fellows if a fair coin comes up tails. If the coin comes up tails, even if we think there’s a sense in which you could have existed as one of the blue-shirted fellows—so that tails makes it 10 times likelier that you’d exist—it’s 1/10 as likely that you’d have a red shirt. So adding more people other than you, even if it raises the probability of your existence, doesn’t actually make it more likely that you’d exist *and have the properties that you have*. If you could have been a shrimp, for instance, then a theory that says there are more shrimp makes it more likely that you’d exist, but proportionally less likely that you’d exist as a non-shrimp. The two probabilities exactly cancel out.

(This is one cool feature of SIA—it has the elegant property that it doesn’t matter how you conceptualize who you could have been, something there’s probably not even a fact of the matter about.)

Anyway, granting SIA, the case for theism is pretty straightforward. You should think that more people exist. But theism predicts that way more people exist than atheism. It’s good to create a person and give them a good life, so God would create, in his infinite power and goodness, some ungodly (Godly?) number of people. In contrast, on naturalism, one would expect there to be many fewer people.

Now you might think: isn’t infinite people enough? If an atheist believes in an infinite multiverse, or an infinitely big universe, then their theory predicts the existence of infinite people. How can theism have the advantage then?

The answer: because there are bigger and smaller infinities. This sounds weird but it’s uncontroversial in math. If the universe were infinite in size, it would have aleph null people—that’s the smallest infinity. But the number of possible people is at least Beth 2—an infinity far bigger than aleph null. In fact, Beth 2 is infinitely bigger than something that’s infinitely bigger than aleph null. Beth 2 is more than the number of real numbers that exist—and there’s no plausible atheistic account of reality on which Beth 2 people come to exist.
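For readers who want the standard notation: these are the beth numbers, defined by repeated powersets.

```latex
\beth_0 = \aleph_0    \quad \text{(the number of natural numbers: the smallest infinity)}
\beth_1 = 2^{\beth_0} \quad \text{(the number of real numbers)}
\beth_2 = 2^{\beth_1} \quad \text{(the number of sets of real numbers)}
```

By Cantor’s theorem, each powerset is strictly bigger than the set it is taken from, so Beth 2 strictly exceeds the number of real numbers, which in turn strictly exceeds aleph null.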

So even if the universe is infinite, it doesn’t have enough people. If SIA tells you to think that a theory with more people is better, a theory with infinitely more people is infinitely better. That means that the only thing we should look at in terms of a theory is which theory best predicts that the number of people is *the most that it could be*. Theories on which the number of people is the most it could be infinitely outperform other theories.

But theism has a much better explanation of that than naturalism. On naturalism, there’s no plausible account of why Beth 2 people—if not more—would exist. On theism, there is such an account: God in his perfection would create all possible people who he could give a good life to (there might be weird exceptions, but they don’t matter, for the number of people that aren’t weird exceptions is at least Beth 2).

So the basic idea is: theories that predict more people are better. Theories that predict infinitely more people are infinitely better. Given this, if L is the biggest number of people that there could be, whichever theory best predicts L people existing infinitely outperforms alternative theories. Theism best predicts L people existing, because God would create all possible people who he could give good lives. So, therefore, theism gets a big probabilistic boost. Or, to make things more formal (don’t worry if you don’t follow this bit, it’s a bit technical and not super relevant to the essentials, and this is one of those arguments like fine-tuning where things go awry sometimes if you try to formalize it, rather than keeping it as an abductive argument):

1. If SIA is true and the collection of all people has some magnitude M (where a magnitude is either a cardinality, or the size of the collection if it has no cardinality), then if one theory predicts that M people exist with probability N times greater than another theory does, your existence favors the first theory over the second by a factor of N.

2. Theism predicts that, if there is a collection of all people of some magnitude, M people would exist with much higher probability than atheism does.

3. SIA is true.

4. Therefore, if the collection of possible people has some magnitude, your existence favors theism by many times.

5. If the collection of possible people has no magnitude, then if one theory predicts a much higher probability than another that the collection of actual people would be some very large infinity, and SIA is true, your existence favors the first theory by many times.

6. If the collection of possible people has no magnitude, theism predicts a much higher probability than atheism that the collection of actual people would be a very large infinity.

7. Therefore, your existence favors theism by many times.

Or, to summarize things in one sentence: SIA says theories on which there are more people are better; theism says there are more people than naturalism does, because God would want to create all possible people; so theism’s better.

# 2. Four strong arguments for the self-indication assumption

Now that we know what SIA is and why it supports theism, why think it’s true? I think there are a bunch of good arguments for it—too many to cover here—but I’ll lay out four of the strongest.

## 2.1 Doomsdayish arguments

Suppose at the dawn of history, there is to be one person. They will flip a coin. If it comes up heads, they’ll die off without having any offspring. If it comes up tails, they’ll have 999 offspring. None of the people knows their birth rank for the first part of their lives—they don’t know if they’re the first person or one of the later people. However, after a while—say, halfway through their lives—they’re told their birth rank.

Consider the situation before you’ve found out your birth rank. If you reject SIA, you don’t think it’s more likely that the coin—whenever it’s flipped—comes up tails than that it comes up heads. Tails is only more likely if you accept SIA and think that your existence is more likely if more people exist (because if it comes up tails, 999 extra people get created). So one who rejects SIA should think, prior to finding out their birth rank, that the odds of tails are 50% and the odds of heads are 50%.

Let H = the hypothesis that the coin, whenever it’s flipped, comes up heads. ~H is the hypothesis that it comes up tails. Notice that I say “comes up” rather than “has come up” or “will come up,” because the person who doesn’t know their birth rank doesn’t know whether the coin has been flipped yet or will be flipped in the future—they don’t know if they’re the first person or one of the later people.

Conditional on H, one is guaranteed to be the first person. Conditional on ~H, one has a 1/1000 chance of being the first person. So one should reason: if I find out that I’m the first person, H will be 1,000 times likelier than ~H.
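It’s worth checking that the 1,000-to-1 figure really is what Bayes’ theorem delivers. A quick sketch in Python (my own toy formalization, with made-up variable names):

```python
from fractions import Fraction

half = Fraction(1, 2)                 # prior: fair coin, 50/50 between H and ~H
p_first_given_H  = Fraction(1)        # on H, you're guaranteed to be the first person
p_first_given_nH = Fraction(1, 1000)  # on ~H, you're one of 1,000 people

# Bayes' theorem on the evidence "I'm the first person":
posterior_H = (half * p_first_given_H) / (
    half * p_first_given_H + half * p_first_given_nH
)
print(posterior_H)                      # 1000/1001
print(posterior_H / (1 - posterior_H))  # 1000, i.e. H is 1,000 times likelier than ~H
```

So anyone who rejects SIA and updates in the ordinary Bayesian way really is committed to 1,000-to-1 odds on H after learning they’re first.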

Now suppose that one finds out that they’re the first person. If they reject SIA, they should think H is now 1,000 times likelier than ~H. But wait—H is the proposition that the coin, when it’s flipped, comes up heads. The person now knows the coin hasn’t been flipped yet, because they’re the first person. So if one rejects SIA, they must think that the odds of the coin coming up heads are 1,000 times greater than the odds of it coming up tails—even before the coin is flipped—just because they’ll have a bunch of kids if it comes up tails.

This shouldn’t be possible. Before flipping the coin in that scenario, you shouldn’t think the coin is 1,000 times likelier to come up heads than tails. It clearly has a 50% chance of coming up heads.

Here’s one way to see this: imagine there’s an event with an objective probability of N%. That means that if the process is repeated, the event will happen N% of the time. For example, a coin has an objective probability of coming up heads of 50%, because if you keep flipping it, it will come up heads half the time. In such a case, if the only difference between the event happening and it not happening will be in the future, then your credence in it happening should be N%.

More precisely: if some event has an objective probability of N%, and the only difference between it happening and it not happening will be in terms of what will happen in the future, your credence in the event happening should be N%.

For example, suppose someone is going to flip a coin. If it comes up heads, he’ll blow up a city. I should think that there’s a 50% chance it will come up heads—because the city blowup will be in the future, it shouldn’t influence my judgments, so I should just revert to the default 50% odds.

This principle rules out thinking the odds of the coin coming up heads, in the original scenario described, are more than half. Because the only differences between it coming up heads and tails are in terms of how many future people will be created, one’s credence in it coming up heads should be 50%.

There are two main ways to go if one rejects SIA. The first is to bite the bullet and just say that the odds of the coin coming up heads really are 1,000 times the odds of it coming up tails. But this implies even crazier things. Consider two more results that follow from it.

First experiment, Serpent’s Advice: Eve and Adam, the first two humans, knew that if they gratified their flesh, Eve might bear a child, and if she did, they would be expelled from Eden and would go on to spawn billions of progeny that would cover the Earth with misery. One day a serpent approached the couple and spoke thus: “Pssst! If you embrace each other, then either Eve will have a child or she won’t. If she has a child then you will have been among the first two out of billions of people. Your conditional probability of having such early positions in the human species given this hypothesis is extremely small. If, on the other hand, Eve doesn’t become pregnant then the conditional probability, given this, of you being among the first two humans is equal to one. By Bayes’s theorem, the risk that she will have a child is less than one in a billion. Go forth, indulge, and worry not about the consequences!”

If they discovered their birth rank, on such a picture, they’d get extremely strong evidence that they won’t have kids, because the theory according to which humanity doesn’t have many people makes it likelier that they’d be so early. For this reason, then, people who bite the bullet in the coin case will also have to bite the bullet in the Adam and Eve case. This gets even crazier in the second case, called Lazy Adam:

Assume as before that Adam and Eve were once the only people and that they know for certain that if they have a child they will be driven out of Eden and will have billions of descendants. But this time they have a foolproof way of generating a child, perhaps using advanced in vitro fertilization. Adam is tired of getting up every morning to go hunting. Together with Eve, he devises the following scheme: They form the firm intention that unless a wounded deer limps by their cave, they will have a child. Adam can then put his feet up and rationally expect with near certainty that a wounded deer – an easy target for his spear – will soon stroll by.

Once again, assume that they didn’t know that they were early and found it out. On such a picture, they get ridiculously strong evidence that they won’t have many kids, so they can be confident that whenever they form the firm intention to reliably procreate unless a wounded deer limps by, a wounded deer really will limp by.

Such a position is, I think, absurd. One doesn’t get magical powers to make whatever they want to happen happen by discovering that they’re early and then agreeing to have a bunch of kids unless the thing happens.

The second way to go is to reject the probabilistic principle behind the argument. On this picture, prior to finding out one’s birth rank, one should think the odds of being the first person are 1/1,000 conditional on ~H and 100% conditional on H, yet hold that once one discovers that one is the first person, H doesn’t get confirmed. This has two problems.

First, it just straightforwardly violates Bayes’ theorem and similar kinds of probabilistic reasoning. If some event is more strongly predicted by a hypothesis than by its negation, then the event happening gives evidence for the hypothesis. My being at the scene of the crime raises the likelihood that I committed the crime, because it’s more likely I’d be at the scene of the crime if I committed the crime than if I didn’t. This view is thus a complete perversion of probability.

Second, it’s a strange kind of heads-I-win-tails-we’re-even reasoning. Upon finding out one’s birth rank, one’s credence in ~H can only go up. If one learns they’re the first person, then neither theory becomes more likely. If they learn they’re not, then ~H is confirmed. This violates something called the conservation of expected evidence—you shouldn’t expect that observing some evidence will, on average, make you more confident in some proposition. If you know that in the future you’ll on average be more confident in some proposition after being exposed to more information, then you should be more confident now, because you know there’s convincing evidence that you’ll see in the future.
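The conservation point can be checked directly: with proper Bayesian updating in the coin case, your expected posterior in ~H equals your prior, whereas the heads-I-win-tails-we’re-even rule expects its credence in ~H to rise. A toy check in Python (my own variable names):

```python
from fractions import Fraction

half = Fraction(1, 2)
# On H there's 1 person (certainly first); on ~H there are 1,000 people.
p_first = half * 1 + half * Fraction(1, 1000)   # P(you're the first person)

# Proper Bayesian posteriors in ~H:
post_nH_first     = (half * Fraction(1, 1000)) / p_first          # after learning "first"
post_nH_not_first = (half * Fraction(999, 1000)) / (1 - p_first)  # after "not first" (= 1)

# Conservation of expected evidence: expected posterior equals the prior.
expected = p_first * post_nH_first + (1 - p_first) * post_nH_not_first
print(expected == half)  # True

# The rejected rule stays at 1/2 on "first" but jumps to 1 on "not first",
# so its expected posterior in ~H exceeds the prior:
expected_bad = p_first * half + (1 - p_first) * Fraction(1)
print(expected_bad > half)  # True
```

The ordinary Bayesian passes the conservation test exactly; the rule that refuses to confirm H on learning “first” fails it.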

I think this argument is a pretty decisive proof of SIA. If you want to read more about it, I have a paper coming out in Synthese, probably in a few weeks, where I defend it in more detail. I also smash the main objection to SIA—the dreaded presumptuous philosopher result.

## 2.2 Birth control

Remember, SIA is the view that says that more people existing makes your existence more likely. Here’s one case where that’s a bit weird—coming from Joe Carlsmith’s Ph.D. thesis:

God’s extreme coin toss: You wake up alone in a white room. There’s a message written on the wall: “I, God, tossed a fair coin. If it came up heads, I created one person in a room like this. If it came up tails, I created a million people, also in rooms like this.” Conditional on the message being true, what should your credence be that the coin landed heads?

SIA answers that tails is a million times likelier than heads because it predicts a million times more people existing. Other views disagree, for they deny that more people existing makes your existence more likely. Given that SIA is the only view that says tails is a million times more likely than heads in God’s extreme coin toss—and is the only one that could say so in principle, for only it says that if there are more people, your existence is more likely—if this judgment is correct, then SIA is proved. Fortunately, I think we can show that this judgment is correct (I have a paper under review at *Analysis* arguing this). Consider first:

Single-case contraception: You wake up alone in a white room. There’s a message written on the wall saying “I am your father. Your mother and I were the only two people in existence prior to you. I flipped a coin before having sex. If the coin came up heads, I used extremely effective contraception with a failure rate of one in one million. If it came up tails I used no contraception. All sex acts without contraception produce children.” Conditional on the message being true, what should your credence be that the coin landed heads?

The obvious answer is that heads is a million times less likely than tails. From the fact that you exist, you get very strong evidence that your parents didn’t use super-effective contraception. If the contraception has a failure rate of only 1 in 1 million, then you get million-to-one evidence against them having used it.

Here’s another way to see this: *you know* that your mom got pregnant. The odds of that are 1 million times higher conditional on the coin coming up tails than heads. So you get million-to-one evidence for tails over heads.
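To sanity-check the arithmetic, here’s the update as a toy Bayes calculation in Python (my own illustration; the variable names are made up):

```python
from fractions import Fraction

half = Fraction(1, 2)
p_born_given_heads = Fraction(1, 10**6)  # super-effective contraception: 1-in-a-million failure
p_born_given_tails = Fraction(1)         # no contraception: a child is guaranteed

# Bayes' theorem on the evidence "I was born":
posterior_heads = (half * p_born_given_heads) / (
    half * p_born_given_heads + half * p_born_given_tails
)
print(posterior_heads)                          # 1/1000001
print((1 - posterior_heads) / posterior_heads)  # 1000000, i.e. million-to-one for tails
```

Note that this is perfectly ordinary probabilistic reasoning; nothing specifically anthropic is doing any work yet.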

Next consider:

Contraception worldwide: You wake up alone in a white room. There’s a message written on the wall saying “The generation before you had one million people. A fair coin was flipped. If the coin came up heads, they all used extremely effective contraception with a failure rate of one in one million and had sex. If it came up tails they all used no contraception. All sex acts without contraception produce children.” Conditional on the message being true, what should your credence be that the coin landed heads?

Intuitively, this one is the same as the one before. The odds you’d be born are a million times greater conditional on tails than conditional on heads. The odds your mom would get pregnant are, likewise, a million times greater conditional on tails than conditional on heads. This one is like the previous case—the odds of you being born from parents who use contraception doesn’t depend on whether *other people* are using contraception. But the only difference between this and the last case relates to whether other people use contraception. Next, consider:

Single prolific contraception: You wake up alone in a white room. There’s a message written on the wall saying “The generation before you had two people, including me, who had sex a million times. A fair coin was flipped. If the coin came up heads, for each sex act, we used effective contraception with a failure rate of one in one million and had sex. If it came up tails we used no contraception. All sex acts without contraception produce children.” Conditional on the message being true, what should your credence be that the coin landed heads?

This case is importantly like the last case. In both cases, a million sex acts happen either with or without contraception. The only difference is that in this case, one pair has all the sex, while in the last case, the sex was split across a million people. But surely that shouldn’t be relevant to probabilistic reasoning. Additionally, one can reason in the same way as they did in the last case: let sex act N denote whichever sex act produced the person in the room. For example, if they were produced by the 150th sex act, N is 150. The probability of one’s mom being impregnated by sex act N is 1 in 1 million if the coin came up heads, while it’s 100% if the coin came up tails. So one thereby gets 1 million to 1 evidence for tails.

Finally, consider:

God’s extreme coin toss: You wake up alone in a white room. There’s a message written on the wall: “I, God, tossed a fair coin. If it came up heads, I created one person in a room like this. If it came up tails, I created a million people, also in rooms like this.” Conditional on the message being true, what should your credence be that the coin landed heads?

This case is relevantly like the last one. In both cases, the first people will flip a coin and then create a million people if the coin comes up tails and one person if the coin comes up heads. They only differ in how the people are made—but surely that’s irrelevant to probabilistic reasoning.

Thus, it’s rational in this case to, like in the previous cases, think tails is 1 million times likelier than heads and that SIA is confirmed.

## 2.3 Elga’s argument

In the first major paper on anthropic reasoning, Adam Elga presented an argument for SIA that is, I think, decisive. Here’s a simplified example, though the same point applies to every case where SIA is applied: imagine a coin gets flipped and if it comes up heads, only Jon gets created. If it comes up tails, Jon and Jack get created. Suppose you get created but you’re not sure if you’re Jon or Jack. What odds should you give to the coin having come up heads? SIA answers: 1/3. Other views answer: 1/2.

The argument Elga gave for SIA’s verdict—well, he was talking about a different problem, called the sleeping beauty problem, but the same point applies mutatis mutandis—is that the probability that you’re Jon and the coin came up heads = the probability that you’re Jon and the coin came up tails = the probability that you’re Jack and the coin came up tails. Thus, because there are three equally probable outcomes and two of them require that the coin came up tails, tails must be twice as likely as heads.
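Elga’s three equally probable possibilities can be written down mechanically: give each (world, person-you-might-be) pair a weight equal to that world’s prior, then renormalize. A toy sketch in Python (my own formalization of the reasoning, not Elga’s):

```python
from fractions import Fraction

worlds = {"heads": ["Jon"], "tails": ["Jon", "Jack"]}
prior  = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}

# SIA-style weighting: every (world, person-you-might-be) cell gets a
# weight equal to that world's prior; then everything is renormalized.
cells = {(w, person): prior[w] for w, people in worlds.items() for person in people}
total = sum(cells.values())
posterior = {cell: weight / total for cell, weight in cells.items()}

print(posterior[("heads", "Jon")])   # 1/3
print(posterior[("tails", "Jon")])   # 1/3
print(posterior[("tails", "Jack")])  # 1/3
```

Summing the two tails cells gives 2/3 for tails, which is exactly SIA’s verdict.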

It’s obvious that the odds that you’re Jon and the coin came up tails = the odds that you’re Jack and the coin came up tails. You have no special evidence either way, so you’re equally likely to be either of them. The controversial bit is that the probability that the coin came up heads and you’re Jon = the probability that the coin came up tails and you’re Jon.

But this part is pretty intuitive. Here’s one way to see it—suppose that heads-and-you’re-Jon is twice as likely as tails-and-you’re-Jon, as SIA’s opponents hold. That would mean that if you find out that you’re Jon, you should think the coin came up heads with 2/3 probability. But how could this be? Whether the coin comes up heads or tails, Jon exists, so it’s hard to see why finding out that you’re Jon should make you think heads is probably true.

Here’s another way to see it: imagine that Jon gets created on day one. Then, at the end of the day, a coin gets flipped. If it comes up tails Jack gets created. You wake up, not sure either what day it is or whether you’re Jack or Jon. If you reject SIA, prior to finding out what day it is, you should think there’s a 50% chance that you’re Jon and the coin, whenever it’s flipped, comes up heads, 25% chance that the coin comes up tails and you’re Jon, and 25% chance that the coin comes up tails and you’re Jack.

Then, suppose you find out that it’s the first day. Well, the odds that now would be the first day conditional on the hypothesis that the coin comes up tails are 50%, while they’re 100% conditional on heads. So now, if you reject Elga’s logic, you should think that there’s a 2/3 chance that the fair coin that hasn’t been flipped yet will come up heads. But then, weirdly, if you learn that the coin won’t create Jack if it comes up tails, you should be back to 50/50. This is very strange.

This same point can be applied to vindicate SIA more broadly. Anytime there’s a theory that says there are more people, by this reasoning, that theory will be favored. For example, suppose that a coin gets flipped—a million people will be created if it comes up tails, one person if it comes up heads. Elga’s reasoning vindicates SIA’s verdict that tails is 1 million times likelier than heads—the odds the coin came up heads and you’re the first person = the odds the coin came up tails and you’re the first person = the odds the coin came up tails and you’re the second person… = the odds that the coin came up tails and you’re the millionth person. So from this basic, correct reasoning pattern, SIA is vindicated.

## 2.4 Hypersensitivity

Here are two principles that together entail SIA:

1. If there are a bunch of people who are not people you might be, you should still think, all else equal, that there are more people you might be.

2. The presence of other people who are not like you doesn’t affect whether, all else equal, you should think that there are more people like you.

Here’s the basic idea: suppose that there are 10 people with red shirts. A coin gets flipped. If it comes up heads, 1 person with a blue shirt gets created. If it comes up tails, 2 people with blue shirts get created. I get created with a blue shirt. It’s intuitive that tails is twice as likely as heads. So if I can show that the presence of the other 10 people with red shirts doesn’t matter to how I should reason about this, then SIA would be vindicated.

You might reject the claim that in this case tails is twice as likely as heads. But I think it’s pretty intuitive—it’s for the same basic reason that if you find out that you’re 5’8” but have no idea how tall other people in the population are (maybe you’ve been in a room your entire life), all else equal you’ll get evidence that more people are 5’8”.

Here’s one other way to see that this is right: imagine that in the coinflip case you’re in a dark room, so you haven’t seen your shirt color. If the coin came up heads, then your being in a blue shirt is 10 times less likely than your being in a red shirt, while it’s only 5 times less likely if the coin came up tails. So, because your being in a blue shirt is likelier if the coin came up tails, if you discover that you’re in a blue shirt, you’ll get evidence that it came up tails. But surely how you should reason about this case shouldn’t depend on whether, before finding out your shirt color, you spent time in a room where you didn’t know your shirt color.

The other premise that needs to be defended is that the presence of other people doesn’t affect how you should reason about this, that, for instance, the presence of other red-shirted people doesn’t affect how you should reason about the odds of you being blue-shirted if various numbers of blue-shirted people are created.

The argument here is one that I laid out in this recent article. I won’t repeat it because I just made it, but I think it’s pretty decisive.

# 3. Objections

At this point, we’ve established that the self-indication assumption is right. From the fact that you exist, you get infinitely strong evidence that the number of people that exist is the most that there could be. Theism predicts this, because God would create all possible people, while atheism doesn’t.

## 3.1 Would God create all people?

Here’s a first worry: would God make all people? Maybe you think that God wouldn’t create at all. Or maybe he’d only make the best kinds of beings. These objections are all subtly different, and worth a response, but there are two meta points that I think gut all of them.

First, suppose you’re 99% sure that God wouldn’t create beings like us. This seems wildly overconfident, as it hinges on various contentious philosophical issues. But even if you’re this confident, because the odds—as we’ll see—of your existence conditional on naturalism are so astronomically low—very near zero—even a 1% chance that God would create beings like us is enough for your existence to massively favor theism. Given that there is *no naturalistic model of reality* that predicts a big enough infinity of people without undermining induction, and anthropic reasoning gives you *infinitely strong evidence* that a very large infinity of people exists, if SIA is true, then naturalism is toast.

Second, remember my claim in this article—suppose that you hadn’t thought about SIA, but were pretty sure atheism was right. I claim that thinking about SIA should make you into a theist, providing massively strong evidence for the existence of God. If God wouldn’t create you, then theism is already false, so this wouldn’t undermine the force of the argument—rather, it would just be a separate argument.

If we’re going to count the reasons you might not exist under theism as part of the argument, then we should also count the reasons you might not exist under atheism as part of the argument. But then we’ll be thinking about all sorts of other ancillary issues—like the contingency argument, psychophysical harmony, and fine-tuning. Instead, we should take things one by one and address the implications that SIA would have if you’d already considered the rest of the evidence, so that we’re not led down random other rabbit trails.

This reminds me of when people respond to the problem of evil by saying things like “evil doesn’t actually favor atheism because evil is contingent—and contingency points to God.” Maybe it’s right, maybe it’s not, but it’s a separate enough point that when considering evil, you should consider the facts of evil while having in the background the rest of the evidence for theism. When considering an argument, one should try to consider it in a vacuum, rather than introducing all sorts of other arguments, even if they tangentially bear on the data point.

When thinking about the problem of evil, for instance, in evaluating its force, you should imagine that you had considered most of the evidence for and against theism already. Then you remember—“oh wait, shit sucks.” Evil is a big problem for theism if remembering that shit sucks after you’d considered the rest of the evidence for and against theism would make you a lot less confident in theism. Similarly, the anthropic situation pertaining to SIA is good evidence for theism if, after considering most of the evidence both for and against theism, learning about the truth of the self-indication assumption would massively boost your credence in theism.

Here’s a good indication of whether a response is changing the subject and just raising a new argument or whether it’s a legitimate response: if the argument succeeding would mean the other view was false, then it’s changing the subject. For example, bringing up the contingency argument on the grounds that evil is contingent would be changing the subject, because if the contingency argument works, theism would have to be true.

Okay, now that we have those out of the way, let’s address the specific reasons you might think God wouldn’t create us.

First, you might think that God wouldn’t create at all. God is a maximally great thing already, so why would he need to create? Well the answer is that even if there’s already a perfect being, adding more good beings makes things even better. God might be the best single thing, but the world is improved by adding more good things.

Here’s a way to see this—imagine that there are infinite happy people. There would already be infinite goodness. But nevertheless, having a happy kid would be a good thing (if, as I’ll argue, having a kid in normal circumstances is good—the goodness of an extra person being created doesn’t depend on whether there are already infinite people). Even after there’s infinite goodness, adding more goodness without taking any away is still good.

Here’s another way to see it: imagine that there’s a maximally horrible being, as bad as any being could possibly be. This guy *sucks*! It would be worse if he added new beings to torture, as that would introduce additional badness—even though things are already infinitely bad.

Second, you might think creating people isn’t good. A popular slogan is the following: it’s good to make people happy, not to make happy people. If this is so, then God creating wouldn’t make things better, so he’d have no reason to do it.

The problem here is that the procreation asymmetry—the idea that creating happy people isn’t good—is ridiculously implausible. There are a bunch of arguments against it which are, in my view, completely decisive. There are like 20 problems for the view—including an original one I raise in a paper under review at Utilitas—which are enough to make the view totally crazy. That’s one reason it’s widely rejected by philosophers.

Third, you might think that God wouldn’t make beings like us. Why would he, when he could make really amazing beings instead? God might make amazing angels, for instance, and beings with immense glory and moral goodness—but why would he make beings as bad as us?

This is the only one of these arguments that I think poses any sort of a challenge, but it’s nowhere near enough to move me.

First of all, this is just the problem of evil. The problem of evil is that it’s hard to see why so much sucks if there’s a God. But this also applies to creatures like us—if there’s a God, why do we suck so much, rather than being awesome? Whatever explanation there is for why there’s evil will explain why we are not better.

For example, my explanation of evil is that making temporarily imperfect creatures serves a greater good—say, providing free will, or putting us in a broadly indifferent universe to strengthen our relationship with God. If this is so then it would explain why we’re not better. Perhaps, as many Eastern Orthodox suppose, we eventually ascend to become godlike beings, but our tribulations on Earth help this process.

Second, there might be a value in diversity. Perhaps God wants to make both really awesome beings and less awesome beings, just as you might want to have kids and a dog. Humans are better than dogs, but the diversity might have some value.

Third, perhaps God creates all possible angels. Then, when he gets through creating those beings, he creates us. He only creates us after he’s run out of beings better than us to make.

For these reasons, I think the odds of God creating beings like us are decently high. In fact, I think they’re much higher than on atheism, because if there is no God, while maybe some creatures would exist, it’s unlikely that humans would ever come to exist. The arguments for why God wouldn’t create beings like us both change the subject and fail!

# 3.2 Can atheism explain the anthropic data?

Suppose you buy the reasoning up until this point. Is there a naturalistic explanation of there being Beth 2—if not more, as we’ll see in the next section—people? I think the answer is clearly no.

Of all the theories ever proposed by atheists, there are, to the best of my knowledge, only two that naturally predict the existence of every possible person. They are modal realism, according to which every possible world exists, and Tegmark’s view, according to which every possible mathematical structure is physically instantiated.

The problem—and this is a more general problem for atheistic views—is that both of these views undermine induction. Induction is the drawing of inferences about the future from the past. For example, I expect the laws of physics to continue working because of induction—they’ve worked in the past.

But if modal realism is true, then for every world where the laws of physics keep working, there are an infinite number of ones where people exactly like me have induction stop working in violent ways—where the world gets replaced by a cucumber or a giraffe or a ball of fluff. The same point applies to Tegmark’s view—there are an infinite number of more complicated mathematical structures where the laws of our physics stop working one second from now.

This is a general problem. If there’s a God, our inductive processes are reliable—God would make sure there are stable laws and that everyone’s reasoning is mostly reliable over the course of their lives when they’re in a situation like ours. But if there is no God, then there are infinite people in worlds where induction doesn’t work—as many at least as in worlds where induction does work—so we shouldn’t trust induction.

Other than these two, there’s no viable naturalistic explanation of why at least Beth 2 people would exist. And there’s a good reason for that: you need really weird and creative mathematics to get Beth 2 people to exist. To have more people than there are numbers, shenanigans have to be done—there’s no multiverse model, for instance, proposed by physicists that could do that.

You could, of course, posit that there are just Beth 2 universes that exist for no reason. But that’s a terrible theory, not at all parsimonious. If you posit Beth 2 different things exist *for no reason*, then you don’t have a good theory!

# 3.3 What if I don’t buy SIA?

I’ve explained earlier why I think SIA is by far the best view. I’ve elsewhere discussed the objections to it—the main presumptuous philosopher objection is totally bunk. But even if you’re not sure about SIA, anthropic reasoning should favor theism.

The main rival of SIA is the self-sampling assumption (SSA). The idea behind SSA is that you should reason as if you’re randomly drawn from the people in your reference class. Your reference class is the collection of people enough like you that you reason as if you’re randomly drawn from them.

Let’s illustrate this with an example. Suppose a coin is flipped. If it comes up heads, 1 person with a red shirt gets created. If it comes up tails, 1 person with a red shirt and 9 people with blue shirts get created. If you get created with a red shirt, SSA says that tails is 10x less likely than heads, because if the coin came up tails, there’d be only a 1 in 10 chance you’d be the one with the red shirt.
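
The SSA verdict in this example is just a Bayesian update, treating yourself as a random draw from the created people. Here’s a minimal sketch (the variable names are mine, not part of any anthropics formalism):

```python
# SSA credence for the shirt example: under SSA you reason as if you're a
# random draw from the people created under each hypothesis.
# Heads: 1 person, red shirt.  Tails: 10 people, 1 red and 9 blue.

prior_heads = 0.5
prior_tails = 0.5

# Probability of observing "I have a red shirt" under each hypothesis,
# if I'm a random draw from the people that hypothesis creates.
p_red_given_heads = 1.0     # the only person is red-shirted
p_red_given_tails = 1 / 10  # 1 of the 10 people is red-shirted

# Bayes' theorem.
evidence = prior_heads * p_red_given_heads + prior_tails * p_red_given_tails
posterior_heads = prior_heads * p_red_given_heads / evidence
posterior_tails = prior_tails * p_red_given_tails / evidence

print(round(posterior_heads / posterior_tails, 6))  # 10.0: heads is ten
                                                    # times likelier than tails
```

SIA would add a second factor weighting tails by its tenfold population, which is exactly where the two views come apart.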

The problem is, on SSA we’d expect the world to look very different. SSA majorly favors solipsism and other theories on which there are few people. If the world is just you, there’s no one else in your reference class. Thus, on SSA, it’s weird that you exist in a world with so many other people.

For this reason, given that theism + SIA makes your existence likely, while your existence is unlikely on SSA, your existence gives you a big update in favor of theism + SIA. Therefore, if you’re not sure which anthropic theory is correct, because theism and SIA fit together so well, you should believe the pair of them.

But we can imagine the situation is worse. Maybe you’re not just uncertain about SIA—you positively reject it. You’re confident that it’s false. If this is so, I still think the anthropic situation favors theism.

Theories of anthropics will generally favor either big or small worlds. SIA favors a big world—it gives you infinitely strong evidence that there are infinite people. SSA favors a small world—it tells you to think your reference class is small. The third most popular anthropic view, compartmentalized conditionalization (I won’t bore you with the details) also favors big worlds.

Theism is compatible with either big or small worlds. As I explained earlier, I think theism naturally predicts a big world. But most theologians seem to think we’re alone. Let’s say there’s a 1% chance they’re right. That means there’s a 1% chance, conditional on theism, that there’d be a small world.

Thus, on every theory of anthropics theism does at least okay. It predicts decent odds of a huge world and decent odds of a tiny world. Whether anthropic reasoning favors big or tiny worlds, theism does okay—though better if it favors big worlds.

In contrast, if naturalism has people at all, it predicts a medium-sized world: a world with either aleph null or finitely many people. But that’s an unhappy medium—the worst of both worlds. It’s too big to be viable on SSA—it bloats the reference class too much—but too small to be viable on SIA and compartmentalized conditionalization. For this reason, with little exception, whatever view of anthropics one has, theism is good news.

# 3.4 Maybe there are only aleph null people

Content warning: technical.

I discussed earlier that some infinites are bigger than others. If the universe is infinite in size, it would only have aleph null people—that’s the smallest infinite. But maybe you think that there are only aleph null possible people, so every possible person would come to exist if the universe is infinitely big, or if there’s an infinite multiverse. Atheism can account for an infinitely big universe!

Let me note first of all that to have such a view, you absolutely have to believe in souls. If you think people are just arrangements of stuff, the number of possible arrangements of stuff is at least Beth 2 (there are Beth 1 points, and the number of different arrangements of Beth 1 points is Beth 2, so the number of possible arrangements of stuff is Beth 2).

Then, after you believe in souls, you have to think that there’s a weird arbitrary limit on the number of souls. For some reason, the number of souls just runs out when you get to aleph null. Why would this be? Unclear.

This view is really weird. It implies that it would be possible to just run out of souls in an infinitely big universe. On such a picture, if the universe is infinite in size, there might be people who try to have a baby but can’t because each of the souls is taken. Such a view should be rejected.

The view has another big problem—as I’ll argue, there are strong grounds for thinking that the number of possible people is not just aleph null or Beth 2 but unsetly many—too big to be a set. The infinites mathematicians talk about are infinite sets—but I think the number of possible people is a collection too large to be a set.

To explain what this means, I’ll have to introduce a bit of terminology. A set is a specific kind of collection. For example, if I have an apple and a banana, there’s a set of an apple and a banana. If you take the power set of a set, you get a bigger set—the power set is the set of all subsets of the original set (a subset is a set containing only members of the original set). For example, the set {1, 2} has 4 subsets—the empty set containing nothing, the set with just 1, the set with just 2, and the set with 1 and 2.
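
To make the power set concrete, here’s a short sketch that enumerates the subsets of {1, 2} (the function and its name are mine, for illustration only):

```python
from itertools import chain, combinations

def power_set(s):
    """Return all subsets of s, from the empty set up to s itself."""
    items = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

subsets = power_set({1, 2})
print(subsets)       # [set(), {1}, {2}, {1, 2}]
print(len(subsets))  # 4, i.e. 2**2: a set of n members has 2**n subsets,
                     # so the power set is always strictly bigger
```

The same strict-growth fact, extended to infinite sets, is Cantor’s theorem, which is what the Beth hierarchy below rides on.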

This point about the power set of a set being bigger than the set also applies to infinites. Aleph null—also called Beth 0—is the smallest infinite cardinal. It’s the number of natural numbers: 1, 2, 3…. Beth 1 is the cardinality of the power set of an aleph null set, Beth 2 is the cardinality of the power set of a Beth 1 set, etc. Incidentally, Beth 1 is the number of real numbers—that includes the integers like 1, 2, and 3, as well as all the fractions and infinite non-repeating decimals.

My claim is that the collection of possible people isn’t just aleph null—in fact, it’s too big to be any kind of set. What this means is that for any set of any size, one can make a bigger set using only members of the collection of possible people.

Talking about a collection too big to be a set might sound a bit weird, but there are some collections like this. Take the collection of all truths, for example. It’s too big to be a set for the following reason. Suppose it were a set with, say, cardinality Beth 10 (cardinality measures the size of a set). Because if A is a truth and B is a truth then “A and B” is a truth, every subset of the Beth 10 truths would yield a distinct truth (the conjunction of its members), so there’d have to be Beth 11 truths. Thus, the number of truths is bigger than any set of any size. I claim possible people are like truths—the collection of them is too big to be a set.

(If this is true, it’s bad news for naturalism, because there’s no plausible story of how the number of people that exist, on naturalism, is too big to be a set).

There are two pretty good arguments for there being no set of all minds and one overwhelmingly decisive argument (I’ve given the arguments before, so I’m going to plagiarize myself and repeat the explanations).

There is no set of all truths. But it seems like the truths and the minds can be put into 1 to 1 correspondence: for every truth, there is a distinct possible mind that thinks that truth. Therefore, there must not be a set of all possible people.

Suppose there were a set of all minds of cardinality N. It’s a principle of mathematics that for any set of any cardinality, its power set (the set of all its subsets) has a strictly higher cardinality. But if there were a set of all minds, then for each subset of that set, there could be a distinct disembodied mind thinking about exactly the minds in that subset. The number of such minds would equal the cardinality of the power set of the set of all minds, which would mean there are more minds than there are. Thus, a contradiction ensues when one assumes that there’s a set of all minds!

These arguments are pretty good. But there’s an even more decisive argument that appeals to SIA. You see, there are two ways of formulating the SIA. One of them says that you should favor a theory if it says there are more people (this is called the unrestricted self-indication assumption—or USIA), the other says you should favor a theory if it says a greater share of possible people are created (this is called the restricted self-indication assumption or RSIA).

To see the difference, suppose that there are two otherwise equally good theories. If one of them is true, there are Beth 1 possible people and they all exist. If the other is true, there are Beth 2 possible people and they all exist. On RSIA the theories are equal—they both say that every possible person is created. On USIA, the second theory is way better, because it says there are many more people.

If USIA is true, then you shouldn’t think there are only aleph null people—the theory that there are unsetly many people is infinitely better than any theory on which the number of possible people is only a set. Fortunately, we have two strong arguments for USIA.

First, Elga’s argument from before supports USIA. Suppose that we’re considering two theories—one of them says there’s one possible person, who exists; the other says there are two possible people, who both exist. Assume they’re equally externally credible. By Elga’s reasoning, the odds you’re the first person and only one person exists = the odds you’re the first person and two people exist = the odds you’re the second person and two people exist. Thus, for the same reason, the theory on which there is a bigger infinite of people is infinitely more likely than the theory on which there’s a smaller infinite of people.
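
The finite version of this reasoning can be sketched numerically: spread credence evenly over (theory, person-slot) pairs, as Elga-style indifference requires, then sum by theory. A toy sketch (the names and structure are mine):

```python
# Two equally credible theories: T1 creates 1 person, T2 creates 2 people.
# Elga's reasoning treats each (theory, person-slot) pair as equally likely.

slots = [("T1", "person 1"), ("T2", "person 1"), ("T2", "person 2")]
credence = {pair: 1 / len(slots) for pair in slots}

# Your credence in each theory is the sum over its person-slots.
p_t1 = sum(c for (theory, _), c in credence.items() if theory == "T1")
p_t2 = sum(c for (theory, _), c in credence.items() if theory == "T2")

print(p_t2 / p_t1)  # 2.0: the theory with more people is favored in
                    # proportion to how many people it says exist
```

Scaling the slot counts scales the favoring in lockstep, which is the pattern USIA extends to infinite cardinalities.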

Second, RSIA has crazy implications. For example, suppose that one is a necessitarian—one thinks that nothing other than what actually happens is possible. On such a picture, from the fact that you exist, you’d get infinitely strong evidence for necessitarianism—even over a theory on which there are many more people but a smaller share of possible people. This is clearly absurd.

# 4. Conclusion

This article has been *long*! Sorry about that. It’s a complicated subject that requires a lot of thought. Hopefully I’ve explained why I think the SIA is true and why its truth provides overwhelmingly strong evidence for theism. I think if one accepts SIA, there isn’t really a way out—theism follows pretty inevitably.

Help me understand this.

Suppose I notice I am a human on Earth in America. I consider two hypotheses. One is that everything is as it seems. The other is that there is a vast conspiracy to hide the fact that America is much bigger than I think - it actually contains one trillion trillion people. It seems like SIA should prefer the conspiracy theory (if the conspiracy is too implausible, just increase the posited number of people until it cancels out).

You can get around this by saying that infinity is too big for it to matter - since there are an infinite number of Americans (across all possible worlds), I'm no more likely to live in the conspiracy universe than the normal universe. But I think you can't do weird stuff with infinities like this. Consider 1000 rare classes of person (dictator, billionaire, dwarf, polar explorer, etc). I notice I'm not in most of these classes. But it seems like if there are an infinite number of Earths, then there are an infinite number of polar explorers and an infinite number of non-polar-explorers, and these aren't two different levels of infinity (there are about 1 million times more non-explorers than explorers). So I should be exactly as likely to be an explorer as a non-explorer. But the fact that I'm not in more than a handful of these rare classes proves that this isn't true. Therefore, you can't sweep the first problem under the rug by playing with infinities.

What am I missing?

I might be wrong about this, but I don't find this argument convincing. I have reservations about the SIA and theism's predictive ability, but I mostly disagree because I don't think atheism does such a bad job of predicting the existence of large numbers of people. To illustrate this, I've tried to list out some hypotheses that are alternatives to theism and still predict the existence of large numbers of people.

1. Tegmark's multiverse and modal realism

1.1. I don't think induction is a big problem for these worldviews, certainly relative to theism.

1.1.1. Suppose modal realism is true. The mere fact that there are infinitely many concretely existing possible worlds where induction fails does not imply that I am probably in such a world, or that induction fails in nearly all worlds. I'm not even sure it's possible to make such statements in a meaningful way - you would need to have a measure over all possible worlds, which is impossible because there is no set of all possible worlds.

1.1.2. Whatever the best way of thinking about uncertainty over possible worlds is, the idea that it would change if those possible worlds existed concretely is exceedingly strange.

1.1.3. Theism doesn't explain why induction works any better than atheism. The morally sufficient reasons God has for allowing the massive amount of evil that has existed throughout history are surely sufficient to allow him to create a world where induction doesn't work.

2. Alternative theisms: for any property that people existing can entail, just postulate a God who values that property rather than goodness. There are infinitely many of these, including

2.1. Evil God

2.2. God who just likes qualia, in general

2.3. God who mostly cares about aesthetic (but not necessarily moral) value

2.4. God who really likes Luxembourgers (maybe there's a reason they have such high GDP per capita)

2.5. and so on.

3. Generalized value selection hypotheses: take theism or any of the 'alternative theisms' and postulate a mindless force which tends to instantiate the corresponding values

4. Physical multiverse theories

4.1. I don't think it would be that hard to create mathematical models of multiverses containing Beth-2 or more people. It might be that those models wouldn't be well-motivated by empirical data in the conventional sense, but maybe we shouldn't expect them to be if most of the other universes are causally isolated from ours. And they would at least be motivated by anthropic considerations, which are empirical! (at least if SIA is correct).

4.2. Specifying the exact nature of this model shouldn't even be necessary! It seems *extremely* unfair to complain that physicists haven't proposed a mathematical multiverse model that would predict enough universes when philosophers and theologians haven't proposed a mathematical model of God and his act of creation.

5. Other multiverse theories: Suppose, for example, that there's a simple universe-generating monad which generates lots and lots of universes.