36 Comments
Sep 2 · Liked by Bentham's Bulldog

To really make this argument you need a narratological Bayesian framework

But it seems "more people = better theory" may have some inherent problems. Suppose Theory A predicted there would be exactly one person, and suppose Theory B predicted millions of people (along with a 0.0001% chance there would be only one person). If you looked around and found you were the only person on earth, which theory would you favor?

Also, regarding the prior probabilities of the coin flip: you mentioned elsewhere an example where Heads = "Your parents had sex" and Tails = "Your parents did not have sex". You suggested Heads would be a pretty safe bet, even if you had good reason to believe the coin is biased towards Tails. However, suppose it could be shown that the coin's bias is so strong that a Virgin Birth is more likely than flipping Heads. Would you still favor Heads?

author

I'm not sure I get the scenario. Do you know there are no other people, or do you just not see any others? If you know there are no other people, A is better; if you don't, B is better, because probably there are many other people.

In the second case, no: then you should favor Tails.

Regarding the first scenario, I'm saying you just look around and see no other people in your particular world. Are you saying B is still better in this case? I mean, if A's a nice, clean, intuitive theory that predicts exactly one world and exactly one person (which fits nicely with your observation in this scenario), then why jump the gun and say, "Well, B's gotta be true; there must just be a bunch of different worlds to account for all these millions of extra people B predicts"?

author

Yes, B is still better. The reason to think that B is better is exactly the argument I gave in the above article--as well as a bunch of other arguments. (If you want to know why it's not enough to say "well, both theories predict a world like ours," see section 2.3.)

I mean, you're basically left with two options in this scenario:

1. A, a nice clean theory that explains why there is one world and one observer

2. B, a curious theory which, if it is to be accepted as the theory of your origins, requires you to assume there are other worlds that you can probably never observe

That is to say, both theories predict your current situation, but B shovels in a bunch of unfalsifiable extras. Furthermore, given how clean and intuitive A (hypothetically) is, it's likely to give you not only some sense of your past but also of your future. It therefore seems more relevant to your experiential life. But what do we do with a theory like B? It's so heavily laden with fudge-factor worlds and people that it's likely too unwieldy for any practical use.

author

Well as I explain in the linked post in section 2.3, they don't both predict your existence!

To put this another way, suppose you're a mobster who woke up gagged and tied up in a warehouse somewhere. You don't remember how you ended up there, but you've narrowed things down to two theories:

A: You've ticked off a rival, and he's tied you up here to extract information and probably kill you afterwards

B: There's a magical goose that takes 1,000,000 random people to warehouses while they're sleeping

Theory A would probably make good intuitive sense to a mobster. Furthermore, as a good intuitive theory, it has good relevance to his experiential life. That is, it says he should hurry up and escape ASAP. Who knows what to say about this magical goose and alternative-world warehouses?

In other words, A is useful; B is "abstract nonsense."

What do you mean? Both theories predict "people", and "your existence" is just an instantiation of this prediction.

Under the SIA framework, one might expect that if God exists and desires a universe with conscious beings, the number of such beings would be vastly larger than what we observe. If God’s intention were to maximize the number of observers, the current number of people in the universe seems surprisingly low.

If the universe were truly designed with a purpose to maximize conscious life, it might be reasonable to expect a far greater number of observers, either across space (with many populated planets) or across time (with a much longer history of sentient life).

How is the above argument flawed?

I’m not sure I totally get the link between acceptance of the SIA and proof of God. Can this be explained in simple terms, please?

I am not sure I follow: how does theism predict a multiverse? And if so, how is its prediction better than that of the string theorists (in other words, naturalism), who've been banging away about the multiverse to no effect?

Following on from that, I am still not sure that using what's more likely, in a strictly probabilistic manner, is the correct way to form theories. Surely it can be the case that something that is less likely by your logic turns out to be what is actual. The only way to know is to demonstrate it through observation, experiment, or both. Otherwise, ideas like the multiverse are pure theory, no matter how much math and logic are used to show how probable they are.

Good that you are broadening your horizons beyond SIA and SSA! I wish you'd finally engage with less-bad anthropic theories and/or not strawman them while forgiving SIA every absurdity it implies, but hopefully we will get there eventually.

> If, in the real world, you didn’t know for a while that you were the ~110 billionth human, and then learned that you were the 110 billionth human at some point, then you should be confident humanity would die out soon, because otherwise it would be unlikely you’d be so early.

Not at all. The fact that anthropic reasoning attempts to treat completely different scenarios the same way, such as being born according to the rules of the real world and participating in a thought experiment that involves the creation of people or memory loss, is the source of lots of problems.

Here I'm leaving a link about how one should reason about their existence in the real world, for anyone interested:

https://www.lesswrong.com/posts/YgSKfAG2iY5Sxw7Xd/doomsday-argument-and-the-false-dilemma-of-anthropic

> You might reject 3A by thinking that the odds that you’re various people depends, for instance, on how simply you can describe a person

The actual reason to reject 3A is that the odds of having any particular birth rank depend on the setup of the probability experiment. 3A is correct if ranks are assigned to people randomly. But that doesn't have to be the case.

The most obvious case is the real world, where you couldn't have been born before your parents, who couldn't have been born before their parents, and so on. There is therefore a strict order among humans, and so you couldn't have a different birth rank than you currently have, regardless of how many people there will be in the future.

> Most concerningly, this doesn’t even really avoid the challenge. Let’s assume you’re equally likely to be any of the people whose experiences are consistent with your evidence

Here you've slipped back to arguing against SSA. CC doesn't assume that you are equally likely to be any of the people whose experiences are consistent with your evidence. It just claims that this experience was observed, without any speculation about "who you are".

>The first problem is that it’s just intuitively crazy. If you learn that you’re the 110 billionth human, and then discover that there will be infinite more people unless you get 100,000 consecutive royal flushes in poker, you should not expect to get 100,000 consecutive royal flushes.

I don't really see why being able to predict future events based on anthropic arguments is intrinsically crazier than being able to predict past events based on them, and SIA is happy doing the latter. What exactly is the asymmetry between past and future here? We wouldn't automatically look favorably on, say, a theory of cosmology that boasts of its ability to infer information from present-day clues about galactic evolution in the past but not in the future.

The SIA is narrowly useful as a rebuttal to solipsism. Or maybe the universe is a factory farm of brains in vats. That's also consistent with the SIA.

author

It's not just useful for that--it has other important implications.

You posted a wojak meme about coins coming up heads half the time, yet you’re a thirder.

Curious!

(No I did not read any other part of this article)

author

The meme was about unflipped coins!

In Sleeping Beauty, the first awakening may happen before the coin is flipped, yet thirder Beauty is already confident that the Tails outcome is more likely.

Also, consider this:

A fair coin is tossed until the first Heads outcome. Beauty is awakened 2^n times, where n is the number of Tails outcomes the coin produced.

H: One awakening

TH: Two awakenings

TTH: Four awakenings

TTTH: Eight awakenings

TTTTH: Sixteen awakenings

and so on.

On awakening, Beauty is asked for her credence that the result of the k-th coin toss in this experiment is Tails. According to SIA, this probability approaches 1 for *any* k.
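To make the arithmetic behind this claim explicit, here is a minimal sketch (truncating the experiment at a maximum of N Tails so the sums are finite is my assumption, not part of the setup above). Each possible history gets the same SIA weight:

```latex
% SIA weight of the history with n Tails before the first Heads:
% (probability of that history) x (number of awakenings it produces)
w_n = \Pr(\underbrace{T\cdots T}_{n}\,H)\cdot 2^{n}
    = 2^{-(n+1)}\cdot 2^{n} = \tfrac{1}{2}
    \qquad \text{for every } n \ge 0.

% Toss k is Tails exactly when n >= k, so with the truncation at N:
\Pr(\text{toss } k = T \mid \text{awake})
  = \frac{\sum_{n=k}^{N} w_n}{\sum_{n=0}^{N} w_n}
  = \frac{N-k+1}{N+1} \longrightarrow 1
  \quad \text{as } N \to \infty.
```

Since every history carries equal weight, the infinitely many long histories swamp the finitely many histories with n < k, which is why the credence tends to 1 for any fixed k.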

author

Yes, but Beauty doesn't know that the coin hasn't been flipped yet. It only applies if you know that a given coin hasn't been flipped yet.

Yes, SIA implies that, but any plausible view will imply something similar. Suppose infinitely many people are put to sleep. A coin is flipped until it comes up tails. If it comes up heads once, 2 people wake up; twice, 4 people; etc.

Same result: if you wake up, then for any number N, you're certain that more than N coins have come up heads.
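The parallel bookkeeping for this scenario, under the same truncation assumption as the sketch above: the history with n heads before the first tails has probability 2^{-(n+1)} and wakes 2^n people (the zero-heads case is left unspecified here), so again every history gets equal weight and the long histories dominate:

```latex
w_n = 2^{-(n+1)} \cdot 2^{n} = \tfrac{1}{2} \quad \text{for every } n \ge 1,
\qquad\text{hence}\qquad
\Pr(\text{more than } N \text{ heads} \mid \text{awake}) \longrightarrow 1 .
```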

Sep 3 · edited Sep 3

> Yes, but Beauty doesn't know that the coin hasn't been flipped yet. It only applies if you know that a given coin hasn't been flipped yet.

So it's not the simple and elegant "Coins come up Heads half the time".

It's not even "Unflipped coins are equally likely to come up Heads and Tails when flipped".

It's: "Unflipped coins are equally likely to come up Heads and Tails when flipped, unless I'm unsure whether the coin was already flipped or not, in which case I can be arbitrarily confident that it's Tails." Right?

> Yes, SIA implies that, but any plausible view will imply something similar.

There is nothing plausible here. It's not even some impossible-to-implement scenario requiring an actual infinity of people. This is an experiment that can more or less already be performed, and if you bite the SIA bullet you will lose all your money in crazy wagers.

> Suppose infinitely many people are put to sleep. A coin is flipped until it comes up tails. If it comes up heads once, 2 people wake up; twice, 4 people; etc.

There is no particular reason why reasoning under these two scenarios has to work according to the same rules.

In this case, granted that you were not guaranteed to be awakened regardless of the outcomes of the tosses, you indeed should be extremely confident that more than N tosses have happened, conditional on being awakened. This is because under such a setting there is only an infinitesimal prior probability of being awakened at all in any particular instance of the experiment; therefore being awakened is extremely strong evidence.

This is the key difference between the two scenarios. In mine, Beauty is always certain that she will be awakened at least once in any instance of the experiment, and so no update has to be made when she indeed notices herself awake.

Therefore, a plausible view should not update in my case and should update in yours.

author

//So it's not the simple and elegant "Coins come up Heads half the time".//

Well, SIA agrees with this. On average, coins come up heads half the time. But it doesn't follow from this that your credence in any particular coin coming up heads should be 1/2. That would be crazy--if a coin is flipped and I'll be killed unless it comes up heads, then, finding myself still alive, I shouldn't think there's a probability of .5 that it came up heads.

//It's not even "Unflipped coins are equally likely to come up Heads and Tails when flipped".//

Well, SIA agrees with this as well. But if you don't know that a coin is unflipped then you won't always have a credence of .5 in it coming up heads. The principles you have given don't have anything to do with credences. It's only in cases where you know the coin is unflipped that it makes any sense to talk about a universal rule surrounding what your credence should be.

Well, the thing that seems weird about the scenarios is that no matter how many flips come up heads, you predict with probability 1 that it will be more than that. But the case I give is the same in that respect! Of course, you might still think the SIA result is weird, but if it's weird in *exactly the same way as the scenario I gave* then it's not a very decisive counterexample.

Sep 4 · edited Sep 4

> The principles you have given don't have anything to do with credences. It's only in cases where you know the coin is unflipped that it makes any sense to talk about a universal rule surrounding what your credence should be.

I blame your meme format, which is not optimized for clarity. It's understandable why you use it while arguing against neoreactionaries whose level of discussion is already in the dumpster. But I'd advise you against using it here, as it makes you the one who lowers the sanity waterline.

Let's explicitly describe what this has to do with credences: credence is probability. When you have no additional information about the event, your credence is the prior probability of the event. When you have some additional information, it's the probability of the event conditional on that additional information.

Probabilities have a frequentist meaning, captured by the law of large numbers: the sample mean converges to the mean, and the frequency of an event converges to its probability. So if, on repetition of a probability experiment, an event happens half the time, your credence has to be 1/2. This is true for both conditional and unconditional probabilities. Consider:

A coin is tossed and the outcome is not shown to you. On Heads you are given a red ball; on Tails you are given either a red or a blue ball. 1) What should your credence be that the coin will be Heads before it's tossed? 2) What should your credence be that the coin is Heads after it's tossed but before you were given a ball? 3) What should your credence be that the coin is Heads after you were given a red ball?

1) On a repetition of the experiment, half the time the outcome of the not-yet-tossed coin will be Heads, so your credence has to be 1/2.

2) On a repetition of the experiment, half the time, after the toss but before you are given a ball, the coin is Heads, so your credence has to be 1/2.

3) On a repetition of the experiment, in 2/3 of the cases where you received a red ball the coin is Heads, so your credence has to be 2/3.

As you can see, the actual rule isn't really about whether the coin is flipped or unflipped at the moment of asking. The rule is about what information you have about the outcome of the coin toss, be it in the future or in the past.
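A quick Monte Carlo check of these three frequencies, as a minimal sketch in Python (it assumes the red/blue split on Tails is 50/50, which the setup above leaves open):

```python
import random

def trial():
    """One run: toss a fair coin, then hand out a ball per the rules above."""
    heads = random.random() < 0.5
    if heads:
        ball = "red"  # Heads: always a red ball
    else:
        ball = random.choice(["red", "blue"])  # Tails: red or blue (assumed 50/50)
    return heads, ball

N = 1_000_000
results = [trial() for _ in range(N)]

# (1) and (2): unconditional frequency of Heads, with no ball information used.
p_heads = sum(h for h, _ in results) / N

# (3): frequency of Heads among the runs where a red ball was received.
red = [h for h, b in results if b == "red"]
p_heads_given_red = sum(red) / len(red)

print(f"P(Heads)            ~ {p_heads:.3f}  (expect 0.500)")
print(f"P(Heads | red ball) ~ {p_heads_given_red:.3f}  (expect 0.667)")
```

The conditional frequency comes out near 2/3 because Heads always yields a red ball while Tails yields one only half the time, exactly as claimed in 3) above.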

> Well, SIA agrees with this.

So does everyone else. The issue is that some theories have weird loopholes, and SIA is no exception, as it allows you to manipulate your own credence in any way you want by precommitting to give yourself extra awakenings with memory loss based on the result of the coin toss. And if your theory has such a weird loophole, sorry, but you do not deserve a simple slogan on your flag.

> Well, the thing that seems weird about the scenarios is that no matter how many flips come up heads, you predict with probability 1 that it will be more than that.

It's a standard thing with infinities, which is only weird if you haven't engaged with calculus at all. There is a well-known way in which such situations add back up to normality. You are *supposed to* have an infinitely strong update on evidence which you have an infinitesimal chance of observing. As a result, this *weird* update simply doesn't happen.

In your version that's exactly what is going on. You absolutely do not expect to be awakened during this experiment. No matter how many times you participate in it, in all likelihood you will not be awakened and will not find yourself in this weird situation where you are certain that every coin toss happened to be Heads. This is an impossible counterfactual world, which you do not need to worry about. You are not going to lose all your money this way. Additionally, there is also a whole extra level of impossibility, because no human will be able to perform such a probability experiment on you, as it would require an actual infinity of people.

On the other hand, in my version, you are absolutely certain that you will be awakened in the experiment at least once. Even a single participation in this probability experiment will, with absolute certainty, put you in a state where you are confident that all the coin tosses are Tails, agree to absolutely ridiculous bets, and lose all your money. You are *not supposed* to have infinitely strong updates when receiving evidence that you had a non-infinitesimal probability of receiving, never mind certainty, but this is exactly what SIA recommends you do. And this is a type of experiment that can actually be performed in the real world, as it doesn't require actual infinities.

> Of course, you might still think the SIA result is weird, but if it's weird in *exactly the same way as the scenario I gave* then it's not a very decisive counterexample.

I think I explained very explicitly how it's weird in a completely different way. It's not some subtle change, either. In one case you are completely fine while following SIA; in the other, you've lost all your money. It's not even a Dutch book; it's literally "become broke with one single trick." I'm not sure what, in principle, could be more decisive than that.

Sep 2

Comment deleted
author

Right, we know how many people there are on our planet earth, narrowly speaking. But theism, I argue, predicts a multiverse with many different earths. My claim is that theories that predict more versions of you are more likely.

Here's a quick way to see this. Let's say a coin is flipped, and if it comes up tails, earth is cloned (the cloned people know about the setup but don't know they're clones). I claim:

1) The odds you're on the original earth and the coin comes up heads = the odds you're on the original earth and the coin comes up tails

2) The odds you're on the original earth and the coin comes up tails = the odds you're on the cloned earth and the coin comes up tails

But from these it follows that you should think the hypothesis that there are two duplicate earths is twice as likely as the hypothesis that there's only one earth.

By the same logic, the hypothesis that there are infinite earths is infinitely more likely than the hypothesis that there are finite earths, and theism is the best explanation of that!
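Spelling out the arithmetic behind the two claims above, as a minimal sketch (the labels p1, p2, p3 for the three centered possibilities are mine):

```latex
p_1 = \Pr(\text{original} \wedge \text{heads}), \quad
p_2 = \Pr(\text{original} \wedge \text{tails}), \quad
p_3 = \Pr(\text{clone} \wedge \text{tails}).

% Claims 1) and 2) give p_1 = p_2 and p_2 = p_3; the three possibilities
% are exhaustive, so
p_1 = p_2 = p_3 = \tfrac{1}{3}
\;\Longrightarrow\;
\Pr(\text{tails}) = p_2 + p_3 = \tfrac{2}{3} = 2\,\Pr(\text{heads}).
```

That factor of two is the sense in which the two-earth hypothesis is twice as likely as the one-earth hypothesis, and iterating the same step is what drives the limit toward infinitely many earths.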

I claim that this is the right way to reason about probability involving multiple people. As the article I've given shows, we're going off more than intuition--if you reason in a different way, your view ends up utterly absurd.
