ℵ0
The self-indication assumption says that you should think there are more people like you based on the fact that you exist. The idea is roughly this: if there were lots of people, the odds that you’d be one of them would be higher than if there were only a few. Just as drawing more numbers from a hat increases the odds of any particular number being drawn, and the existence of many planets makes it more likely that there would be a planet like Neptune, so too does the existence of more people make it more likely that you’d be one of the people who exist.
Some don’t adopt the self-indication assumption (SIA). The most common alternative is the self-sampling assumption (SSA), according to which one should, upon discovering the splendid fact that they exist, reason as if they’re randomly selected from all the people like them. Thus, you should think there probably aren’t lots of people like you on Neptune, for instance, because if there were then it would be quite odd that you ended up on Earth rather than Neptune.
I’ve elsewhere argued for SIA. One who rejects SIA must believe some quite extraordinary things: for instance, that the world is likely to end rather soon, that one can guarantee that they won’t get pregnant by making sure that their pregnancy, were it to ensue, would cause a great civilization to arise, and that one can guarantee a royal flush by committing to create many clones of oneself only if one doesn’t get a good poker hand. Rejecting SIA is also simply not the way to reason about probability. When one discovers some piece of evidence, in determining whether it supports or opposes a hypothesis, one should ask whether that evidence is likelier to occur if the hypothesis is true than if it is false. There’s simply no reason to imagine being randomly selected from among the actual people, and SIA simply follows the patterns of probabilistic reasoning that are widely accepted in non-anthropic contexts.
The truth of SIA would be a very fortunate thing. It would mean, contrary to the claims of proponents of the doomsday argument, that we’re not doomed. In fact, SIA, if true, provides quite strong evidence for God. And regardless of whether one thinks God truly exists, everyone should agree that if he does, that would be quite a nice thing indeed, for it would mean that everyone’s life will go infinitely well on the whole.
There are, however, two different ways of thinking about SIA. The first we might call the restricted self-indication assumption (RSIA). On this view, one should think there are more people because that would mean that a larger share of possible people exist. The other is the unrestricted self-indication assumption (USIA), which says that one should simply think there are more people, period, not merely because that means a larger share of possible people are created.
This is a rather confusing distinction, and it must be illustrated with a slightly complicated example. Let’s begin by noting that some infinities are bigger than others. The smallest infinity is aleph null; Beth 1 is bigger than that, and Beth 2 is bigger still. Suppose that there are two theories that are otherwise supported by identical evidence. The first claims that there are aleph null possible people and all of them exist. The second claims that there are Beth 2 possible people and all of them exist.
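For readers who want the notation pinned down, here is the standard picture (nothing here is original to this post): aleph null is the size of the natural numbers, and each Beth number is the size of the power set of the previous level, which Cantor’s theorem guarantees is strictly bigger.

```latex
\aleph_0 = \beth_0 = |\mathbb{N}|, \qquad \beth_{n+1} = 2^{\beth_n} = |\mathcal{P}(S)| \text{ for any } S \text{ with } |S| = \beth_n
```

```latex
\text{Cantor's theorem: } |S| < |\mathcal{P}(S)| \text{ for every set } S
\;\Longrightarrow\; \aleph_0 = \beth_0 < \beth_1 < \beth_2 < \beth_3 < \cdots
```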
RSIA would be indifferent between these two views. After all, both of them predict that every possible person exists. USIA, in contrast, implies that, because the second hypothesis means more people exist, my existence gives me very strong evidence that there are Beth 2 people.
For a while I adopted RSIA. But recently, I’ve come to think that there is quite a convincing case for USIA. If this is right, then Huemer’s objection (that there may be only aleph null people, or even a finite number of people) is wrong, for upon discovering that I exist, I should think that a very large number of people exist, far more than even aleph null.
2^ℵ0
If some view gives one reason to expect something to be the case, and that thing independently turns out to be the case, that is evidence for the view. It is stronger evidence if that thing would be quite unlikely were the view false but quite likely were it true. For example, if a detective suspects John committed a murder, and then later coincidentally discovers that John had bloodstains on his car, this is strong evidence that John is the murderer, for bloodstains on his car are quite likely if John is a murderer and quite unlikely otherwise.
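This is just the odds form of Bayes’ theorem. As a sketch, with H the hypothesis and E the evidence:

```latex
\underbrace{\frac{P(H \mid E)}{P(\lnot H \mid E)}}_{\text{posterior odds}}
= \underbrace{\frac{P(H)}{P(\lnot H)}}_{\text{prior odds}}
\times
\underbrace{\frac{P(E \mid H)}{P(E \mid \lnot H)}}_{\text{likelihood ratio}}
```

E supports H exactly when the likelihood ratio exceeds 1, and the support is stronger the larger the ratio: that is the detective’s situation, since bloodstains are likely if John is the murderer and unlikely otherwise.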
USIA gives one some reason to think that the number of possible people is very large. Remember what I said before about Beth 1 being bigger than aleph null, Beth 2 bigger than Beth 1, Beth 3 bigger than Beth 2, and so on. USIA gives one reason to think that the existence, and hence the possibility, of Beth 3 people is more likely than that of Beth 2 people, which is more likely than that of Beth 1 people, and so on. Thus, if there were a largest infinity, USIA would give one strong reason to think that that is the number of possible people.
For a while I was puzzled by this verdict. How could there be a largest infinity? One can always make an infinite set bigger by a procedure called taking its power set. But strangely, I discovered something consistent with this prediction of USIA: probably the number of possible people is the largest infinity.
It’s true that you can always make an infinite set larger by taking its power set. But what if the possible people don’t comprise a set? What if there are too many of them to form a set? This is just what one would expect on USIA, for a number of people too large to be a set is indeed larger than any other infinity: the infinities mathematicians talk about are all cardinalities of sets. I have given two arguments for this thesis in this article of mine:
There is no set of all truths. But it seems that the truths and the possible minds can be put into one-to-one correspondence: for every truth, there is a distinct possible mind that thinks that truth. Therefore, there must not be a set of all possible minds.
Suppose there were a set of all possible minds, of cardinality N. It’s a theorem of mathematics (Cantor’s theorem) that for any set, the collection of its subsets has a strictly higher cardinality. A subset is a set formed from some, none, or all of a set’s elements; for example, the set {1, 2} has 4 subsets: the empty set, the set with just 1, the set with just 2, and the set with both 1 and 2. Now, if there were a set of all minds, it seems that for each collection of minds in that set, there could be a distinct disembodied mind thinking about exactly that collection. The number of such minds would then equal the number of subsets of the set of all minds, that is, the size of its power set, which would mean there are more minds than there are minds. Thus, a contradiction ensues when one assumes that there’s a set of all minds!
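To make the structure of that second argument explicit, here is a sketch in standard notation (M is the supposed set of all possible minds; the “mind thinking about exactly that collection” move is what builds the injection):

```latex
\text{Suppose } M \text{ is the set of all possible minds.} \\
\text{For each } A \subseteq M, \text{ let } m_A \in M \text{ be a possible mind thinking about exactly the minds in } A. \\
\text{Distinct subsets get distinct thinkers, so } A \mapsto m_A \text{ is an injection } \mathcal{P}(M) \to M, \\
\text{hence } |\mathcal{P}(M)| \le |M|. \text{ But Cantor's theorem says } |M| < |\mathcal{P}(M)|. \text{ Contradiction.}
```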
So we have some reason to think that the number of minds is exactly what we’d expect it to be if USIA were true. Coincidence? Perhaps. Still, this is strong evidence for USIA: independent arguments ended up “confirming” one of its predictions.
I realize that at this point a reader may be lost. USIA just says you should think that there are quite a lot of people who really, truly exist, so what’s all this talk about possible people? Well, for one to exist, they must be possible. USIA gives one extremely strong reason to think that a number of people too large to be a set exists, which in turn gives one extremely strong reason to think that the number of possible people is too large to be a set.
2^2^ℵ0
Here’s a plausible principle: suppose there are several different possible beings you might be. One of them has a 50% chance of existing, a second has a 25% chance of existing, and the last also has a 25% chance of existing. It seems that in such a case, you should split your credence about which of them you are in proportion to their odds of existing. You should think you’re twice as likely to be the first as the third, and as likely to be the first as the second and third combined.
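As a formula (a sketch of the principle, nothing deeper): if person i has probability p_i of existing, then

```latex
P(\text{I am person } i) = \frac{p_i}{\sum_j p_j},
\qquad\text{e.g.}\qquad
\frac{0.5}{0.5 + 0.25 + 0.25} = \tfrac{1}{2},
\quad
\frac{0.25}{0.5 + 0.25 + 0.25} = \tfrac{1}{4},
```

which gives exactly the verdicts above: 1/2 is twice 1/4, and 1/2 equals 1/4 plus 1/4.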
But if one applies this principle, they can deduce USIA. For if there are many more possible people I might be on one theory, then that theory gets a big boost, for I should think I’m probably one of them, and consequently that the greater number of possible people exist.
To illustrate this, and to keep things simple, suppose you’re 50% sure that exactly 1 person exists (and is the only possible person) and 50% sure that exactly 2 people exist (and are the only possible people). Each of these three possible people has a 50% chance of existing, so I split my credence evenly across them and conclude that there’s a 2/3 chance that I’m one of the two people on the second hypothesis, and thus a 2/3 chance that the second hypothesis is true.
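Here is a minimal sketch of that calculation in Python (the hypothesis names and the 50/50 prior are just the toy numbers from the example):

```python
# Toy version of the example: two hypotheses about who the possible people are.
# H1: exactly 1 possible person, who exists (prior 0.5).
# H2: exactly 2 possible people, who both exist (prior 0.5).
priors = {"H1": 0.5, "H2": 0.5}
num_people = {"H1": 1, "H2": 2}

# Each candidate person I might be exists with probability equal to the prior
# of the hypothesis containing them, so my credence in being a given person is
# proportional to that prior; summing over a hypothesis's people weights it
# by how many people it posits.
weights = {h: priors[h] * num_people[h] for h in priors}
total = sum(weights.values())
posterior = {h: w / total for h, w in weights.items()}

print(posterior)  # {'H1': 0.333..., 'H2': 0.666...} -- the 2/3 from the text
```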
This might seem slightly suspicious. But I think we can see that this is perfectly kosher reasoning. To see this, we must consider a bad argument for necessitarianism. Necessitarianism, for those who don’t know, is the view that everything that actually happens is necessary. Right now, I am typing this sentence. Necessitarians claim that this couldn’t have been otherwise: my not typing it is impossible.
Necessitarianism is, I think, a very improbable view. But suppose one were to support it in the following way. Suppose that whatever actually happens is necessary. Then it would be guaranteed that I’d exist, for on necessitarianism every possible person exists. So the fact that I exist is quite strong evidence for necessitarianism.
Something, I take it, has gone awry. The necessitarian has cheated. How? Well, even if necessitarianism is true, and whatever happens to exist can’t be otherwise, it still seems quite remarkable that I’m one of the people who has to exist rather than one of the infinite number of people who don’t exist. Because there are so many ways that we could conceive of the world working on necessitarianism, the odds that I’d be one of the guaranteed people are very low.
But then it can’t merely be that theories predicting the creation of a large share of possible people get anthropic support. For necessitarianism predicts the real existence of all possible people, yet it is not supported on anthropic grounds. In a similar way, just as necessitarianism makes it unlikely that I’d be one of the lucky few, so too, if the possible people comprise merely some infinite set, it’s odd that I’d be one of the lucky few. The moves that defenders of RSIA would make here are the same as those made by necessitarians, and in both cases, such moves are wrong.
2^2^2^ℵ0
Elsewhere I’ve argued that alternatives to SIA are wholly unworkable. I won’t repeat those proofs, for I’ve provided them at the beginning of this article. But if this is right, then establishing USIA requires little more than refuting RSIA. And RSIA seems to have a quite significant problem.
Remember, RSIA implores one to think there are more people because then a larger share of possible people will be created. But then RSIA either runs into a mathematical tangle or implies the doomsday argument succeeds.
The doomsday argument, and in fact certain even nastier implications, follow, for reasons I’ve explained here, unless one thinks that a world with twice as many people is twice as likely to contain me. But the number of possible people is infinite, and 1/infinity equals 2/infinity, if it is even defined. So to avoid the doomsday argument, defenders of RSIA must employ creative mathematics to explain why 1/infinity is less than 2/infinity. I am not sure such a project can succeed; trying to make sense of percentages of infinity gets quite complicated quite fast. But for RSIA to succeed, some creative mathematics must be employed.
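In symbols, the tangle looks like this (a sketch; N is the number of possible people and n the number who exist on a given hypothesis):

```latex
P(\text{I exist} \mid n \text{ of the } N \text{ possible people exist}) = \frac{n}{N},
\qquad\text{but if } N \text{ is infinite,}\quad
\frac{n}{N} = \frac{2n}{N} = 0 \ \ (\text{or undefined}),
```

so doubling the population no longer doubles the probability of containing me, and with it goes the usual SIA-style answer to doomsday.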
2^2^2^2^ℵ0
This one’s a doozy, so strap in. This is also the most technical section, so feel free to skip it. Note that the notation ~USIA just means that something other than USIA is correct. The basic idea here is that if you bet in accordance with USIA, you’ll win, because there will be more opportunities to get money. It’s thus a bit similar to the betting argument I provide in this post. The argument runs as follows:
1. If one should bet in accordance with USIA rather than ~USIA, USIA is correct.
2. ~USIA instructs one to, if they’re 50% sure that the number of people both actual and possible is Beth 1 and 50% sure that they’re both Beth 2, and if they’re given a choice between getting 2 dollars if there are Beth 1 people or 1 dollar if there are Beth 2 people, accept the offer that gets them 2 dollars if there are Beth 1 people.
3. If there are two possible events each with some probability, your betting between them shouldn’t depend on whether they’re impossible or merely non-actual if that doesn’t affect your payouts.
4. Therefore, ~USIA instructs one to, if they’re 50% sure that the number of actual people is Beth 1 and 50% sure it’s Beth 2, and if they’re given a choice between getting 2 dollars if there are Beth 1 people or 1 dollar if there are Beth 2 people, accept the offer that gets them 2 dollars if there are Beth 1 people.
5. If the case described in premise 4 were repeated over and over again, people would get less total money by accepting the offer that gets them 2 dollars if there are Beth 1 people than accepting the other offer.
6. Following the right betting advice would not result in getting less money, if the situation were iterated, than following the wrong betting advice.
7. If ~USIA is true then one accepting the offer that gets them 1 dollar if there are Beth 2 people in the scenario described in premise 4 is following the wrong betting advice.
8. So USIA is true.
The first premise says “If one should bet in accordance with USIA rather than ~USIA, USIA is correct.” The idea here is that the true view will give you correct betting advice. If it’s true that getting a 1 through 5 on a die is likelier than getting a 6, then bets that get payouts if you get a 1 through 5 are better than bets that get payouts if you get a 6.
The second premise says “~USIA instructs one to, if they’re 50% sure that the number of people both actual and possible is Beth 1 and 50% sure that they’re both Beth 2, and if they’re given a choice between getting 2 dollars if there are Beth 1 people or 1 dollar if there are Beth 2 people, accept the offer that gets them 2 dollars if there are Beth 1 people.” So suppose USIA is false and you’re deciding between two theories: one says all possible people exist and there are Beth 1 of them; the other says there are Beth 2 possible people and all of them exist. Because ~USIA makes you indifferent between the two theories, you should assign them equal probability. So if there’s one deal that gives you two dollars if the first is true and another that gives you one dollar if the second is true, you should take the deal that gives you two dollars if the first is true, because you think the theories are equally likely and two dollars is more than one.
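In expected-value terms, with the equal credences ~USIA assigns (a sketch):

```latex
EV(\$2 \text{ if Beth}_1) = 0.5 \times \$2 = \$1.00
\;>\;
EV(\$1 \text{ if Beth}_2) = 0.5 \times \$1 = \$0.50.
```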
The third premise says “if there are two possible events each with some probability, your betting between them shouldn’t depend on whether they’re impossible or merely non-actual if that doesn’t affect your payouts.” The basic idea here is that, if you’re 50% sure that, say, Fermat’s Last Theorem is false, your betting shouldn’t be affected by whether you think that, if it’s false, it’s necessarily false or contingently false.
Premise 4 says “Therefore, ~USIA instructs one to, if they’re 50% sure that the number of actual people is Beth 1 and 50% sure it’s Beth 2, and if they’re given a choice between getting 2 dollars if there are Beth 1 people or 1 dollar if there are Beth 2 people, accept the offer that gets them 2 dollars if there are Beth 1 people.” This follows from premises 2 and 3.
Premise 5 says “If the case described in premise 4 were repeated over and over again, people would get less total money by accepting the offer that gets them 2 dollars if there are Beth 1 people than accepting the other offer.” The reason for this is that, if the experiment were repeated, one would find oneself in the world with Beth 2 people far more often than in the other one.
Premise 6 says “Following the right betting advice would not result in getting less money, if the situation were iterated, than following the wrong betting advice.” This is straightforward: if you follow the right advice over and over again, you’ll gain money rather than losing it.
Premise 7 says “If ~USIA is true then one accepting the offer that gets them 1 dollar if there are Beth 2 people in the scenario described in premise 4 is following the wrong betting advice.” This follows from premise 4: if ~USIA is true, then the advice it gives is the right advice, and it instructs one to take the other offer.
I think there might be an easier way to get at the core intuition, but I’m not sure what it would be. The basic idea is that if USIA is true, then across the epistemically possible worlds where one could be placed, one who follows USIA would get richer, because most of the epistemically possible people are in worlds where there are a lot of bettors.
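No computer can represent Beth 1 versus Beth 2, so here is a minimal finite toy model of the iterated bet, with populations of 1 and 1,000 standing in for the two cardinalities (the specific numbers are illustrative assumptions, not part of the argument):

```python
import random

# Finite stand-ins for the two world sizes; Beth 1 vs Beth 2 can't be
# simulated, so 1 vs 1000 plays their role here (purely illustrative).
SMALL, BIG = 1, 1000
TRIALS = 100_000

usia_total = 0      # money won by people who bet "$1 if the big world is actual"
not_usia_total = 0  # money won by people who bet "$2 if the small world is actual"

for _ in range(TRIALS):
    population = random.choice([SMALL, BIG])  # a fair coin picks which world is actual
    if population == BIG:
        usia_total += 1 * population          # each person in the big world wins $1
    else:
        not_usia_total += 2 * population      # each person in the small world wins $2

print(f"USIA-style bettors won  ${usia_total}")
print(f"~USIA-style bettors won ${not_usia_total}")
```

Per coin flip the $2 bet looks fine, but summed over everyone who gets to bet, the USIA-style bettors collect far more, because almost all bettors live in the big world. That is premise 5 in miniature.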
2^2^2^2^2^ℵ0
In my view, the most convincing argument against alternatives to SIA is that they imply something very much like the doomsday argument. I’ve pressed this charge here. But RSIA implies something somewhat similar, though more benign.
Suppose that we think there are probably only Beth 1 possible people and that Beth 1 people exist. Now suppose that there’s some device that could create Beth 2 people, but only if that’s possible. Suppose we get very strong evidence that it is possible. RSIA would claim that I have very strong reason to think that the machine can’t work, for if it could, it would be quite odd that I’d be so early, given that I could have been one of the possible people the machine could make.
Even more counterintuitively, suppose that there are Beth 1 people and we think that all possible people have been created. A machine, however, will be constructed that will create Beth 1 more beings. On RSIA, we have very strong reason to think that those beings are not like us, for if they were, then it would be odd that I’d be so early. I find this very counterintuitive: RSIA gives one reason to think, no matter how strong the evidence that those beings are like us, that they are not like us.
2^2^2^2^2^2^ℵ0
Sorry, I know this has been a long and rather technical one. But hopefully it has convincingly laid out why I think USIA is the best view of anthropics. If it’s right, then the anthropic argument for theism is quite convincing!