56 Comments
Jul 4

Distinguished Professor of Anthropics when?

Jul 4

Congratulations, Matthew!

author

Thank you!


The coin flip example made me realize that the entire argument for halferism in the SB problem is self-refuting. The halfer says that the probability of the coin having landed heads after you wake up is 1/2 because you haven't gained any new information since before the coin was flipped. Their reasoning for why you haven't gained any new information is that you would've been woken up regardless of whether the coin landed heads or tails. But when the halfer is told what day it is upon waking on Monday, they update their probability of heads to 2/3 even though, by their own lights, they haven't gained any information: they knew they would be woken on Monday no matter what.
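To make the arithmetic explicit, here's the Lewisian halfer's own update as a quick Python sketch (the half/quarter/quarter split over awakenings is the standard Lewisian one):

```python
# The Lewisian halfer's credences over (coin, day) upon waking:
# P(Heads) = 1/2, and the Tails half is split across the two awakenings.
credence = {("Heads", "Mon"): 1/2, ("Tails", "Mon"): 1/4, ("Tails", "Tue"): 1/4}

# Learning "today is Monday" means conditioning on day == "Mon":
monday = {k: v for k, v in credence.items() if k[1] == "Mon"}
total = sum(monday.values())
posterior = {k: v / total for k, v in monday.items()}

print(posterior[("Heads", "Mon")])  # 0.666... -> P(Heads | Monday) = 2/3
```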

author

Some people are double halfers--thinking that even after being told it's Monday you should still be at 1/2.


Yeah, I know about that position, but I think if you get to the point where you have to abandon the rules of probability just to protect halferism, you're getting pretty desperate.


Congratulations, Matthew.

author

Thank you!


Can you explain why anthropic reasoning does not rely on personal identity not being an illusion?

author

Well, every view will have to have some way of assigning probabilities in anthropic scenarios. For example, the fact that you exist gives you evidence your mom got pregnant.


Sorry, I should have been more specific. So in God's coin flip, when people use phrases like "I could have not existed" or "I could have woken up with a red suit instead of a white one", it seems like that reasoning posits that there is an "I" that exists independent of your situation (something like a soul). Am I just being tricked by the language, or does SIA actually rely on there being a me that isn't just an illusion? These cases seem fundamentally different from the "I can assume my mother was pregnant" scenario.

author

You can have a reductionist or revisionary account of what the "you" is that we're referring to. But we'll all have to do some kind of anthropic reasoning. For example, if I see I have a red shirt, and a coin was flipped that would create one red-shirted person if heads, or one red-shirted person plus ten blue-shirted people if tails, I'll need some way to reason about the probability that it came up tails.
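A minimal sketch of how the two standard anthropic rules would score this toy case, using just the numbers above:

```python
# Toy case: Heads -> 1 red-shirted person; Tails -> 1 red-shirted + 10 blue-shirted.
# Evidence: I see a red shirt. Prior on the fair coin is 1/2 each way.
prior = {"Heads": 0.5, "Tails": 0.5}
red = {"Heads": 1, "Tails": 1}       # red-shirted observers per world
all_obs = {"Heads": 1, "Tails": 11}  # all observers per world

# SIA: weight each world by (prior x number of observers in my evidential situation).
sia = {w: prior[w] * red[w] for w in prior}
sia = {w: v / sum(sia.values()) for w, v in sia.items()}

# SSA: weight by (prior x chance of being a red-shirted member of the reference class).
ssa = {w: prior[w] * red[w] / all_obs[w] for w in prior}
ssa = {w: v / sum(ssa.values()) for w, v in ssa.items()}

print(sia["Tails"])  # 0.5       -> SIA: seeing red tells you nothing here
print(ssa["Tails"])  # 0.0833... -> SSA: the red shirt is strong evidence for Heads
```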


However, if I'm a reductionist, I can't reason about possible cases where I have a blue shirt, because "I" am contingent on my epistemic situation. On reductionism, my existence implies that I have a red shirt, because I am emergent from the red shirt (among other things). So I don't think you are able to reason about the probability that it comes up tails. Either way, there is a 100 percent chance that "I" have a red shirt. Btw, I'm very new to anthropics, so I'm probably wrong--just trying to figure out why.

author

But now you're giving a theory of anthropics! I don't know exactly what the theory is, but you're committed to a theory of some sort. The point is, you can't avoid the puzzles just by saying that selves don't really exist.


Congratulations on your publication.

Incidentally, I've just finished a post that shows how to deal with the Doomsday Argument without SIA.

https://www.lesswrong.com/posts/YgSKfAG2iY5Sxw7Xd/doomsday-argument-and-the-false-dilemma-of-anthropic

author

Thanks, I might check it out!


You give a pretty good argument against SSA here, though I don't know enough about alternatives besides SIA to know whether things like the Adam examples are an issue for other views of anthropics; any reading recommendations on that are appreciated.

In the modification to the sleeping beauty problem starting with "Let's modify the numbers", I think you are modifying the problem in an extra, unmentioned way. In the original sleeping beauty problem, it's my understanding that the coin is flipped before the first waking, but in your example it's after the first. This seems like a fine modification to make, but I was left scratching my head for a couple of minutes before realizing what you'd done. Still, it's a good example, like the Adam problems, of how SSA can cause you to have unintuitive (and I think unreasonable) credences in future events.

Looking at your wojak meme, it strikes me that SSA gives unusual credences to things that will happen, and SIA gives unusual credences to things that have happened. Just as your meme mocks SSA for predicting things about a future toss, someone could mock SIA for predicting things about a prior toss, as in this image: https://i.imgur.com/VyJpOt5.png But it seems much less weird to me for anthropics to tell us about the past than about the future, so I agree with your arguments in that section. But then later, you argue the following:

> They think that from the fact that you’re alive now, you can be confident that there weren’t many prehistoric people. I think that’s crazy—the number of people other than you shouldn’t be relevant to probabilistic reasoning.

But I just convinced myself in the previous section that making inferences about the past seems okay. So why is being confident in the past coin flip in the modified sleeping beauty problem okay, but being confident about the number of past humans not?

author

The argument I give is an argument against any alternative to SIA, not just SSA. It's also an argument against halfing!


> I think you are modifying the problem in an extra, unmentioned way. In the original sleeping beauty problem, it's my understanding that the coin is flipped before the first waking, but in your example it's after the first. This seems like a fine modification to make, but I was left scratching my head for a couple of minutes before realizing what you'd done.

It doesn't really matter for the Sleeping Beauty experiment whether the coin is tossed before the first awakening or after. Nothing really changes; in any case the awakening routine is determined by the state of the coin. If it's Heads, there is only a Monday awakening. If it's Tails, there are both Monday and Tuesday awakenings. Ensuring that the coin is tossed after the first awakening simply highlights that the SSA claim that P(Heads|Monday) = 2/3 is wrong. Of course, as you notice yourself, SIA has a similar failure, claiming that P(Tails|Awake) = 2/3, despite the fact that an awakening happens in every iteration of the experiment.
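A quick Monte Carlo sketch of the experiment shows where each camp's number comes from, namely counting per run versus counting per awakening:

```python
import random

runs = tails_runs = 0
awakenings = tails_awakenings = 0

for _ in range(100_000):
    coin = random.choice(["Heads", "Tails"])
    runs += 1
    tails_runs += coin == "Tails"
    # Heads -> one awakening (Monday); Tails -> two (Monday and Tuesday).
    n_awake = 1 if coin == "Heads" else 2
    awakenings += n_awake
    tails_awakenings += n_awake if coin == "Tails" else 0

print(tails_runs / runs)              # ~0.5  : per-run frequency of Tails
print(tails_awakenings / awakenings)  # ~0.667: per-awakening frequency of Tails
```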

> Looking at your wojak meme, it strikes me that SSA gives unusual credences to things that will happen, and SIA gives unusual credences to things that have happened.

Not *have happened* but *is happening*. Under SIA, Beauty believes that the coin is likelier to be Tails just from the fact that she has awakened. So when the coin is thrown after the first awakening and an SIA-following Beauty awakens for the first time and is not told that it's Monday, she also has unreasonable confidence about a future coin toss. She knows perfectly well that her awakening is happening either before the coin toss or after it, and yet she is already more certain than chance about the result of that toss.

But it's even worse than that. There is also a fun way to retrocausally manipulate probabilities under SIA. For example, if Beauty conspires with one of the workers at the lab to give her some extra number of awakenings after the experiment has already ended, she can make herself arbitrarily confident in any state of the coin during the experiment.

The actual solution to the Sleeping Beauty problem evades the presumptuousness of both SSA and SIA. According to the correct model:

P(Monday) = 1

P(Tuesday) = 1/2

P(Heads) = P(Heads|Monday) = P(Heads&Monday) = 1/2

If you are interested, here are a couple of posts exploring the problem in great detail and showing how to arrive at the correct model:

https://www.lesswrong.com/posts/SjoPCwmNKtFvQ3f2J/lessons-from-failed-attempts-to-model-sleeping-beauty

https://www.lesswrong.com/posts/gwfgFwrrYnDpcF4JP/the-solution-to-sleeping-beauty


From "No Country For Old Men"

Carla and Anton:

"The coin don't have no say, it's just you," he laughs and says, "I got here the same way the coin did" ...


I disagree with a few things here, but will only highlight one thing for brevity. I think a better objection to SIA than the ones you address is that it's not continuous in the priors, and this makes most realistic anthropic reasoning pretty much impossible. Because when we try to reason probabilistically, it's rare that the numbers we plug in are supposed to be exactly correct; rather, they're just estimates. The hope is that if the estimates of our priors aren't too far off from what a more optimal agent would assign, the posteriors we end up with won't diverge too severely from what that agent would conclude. But there's no such hope with SIA. The tiniest positive probability of, for example, a sufficiently heavy-tailed (in terms of the numbers of hypothetical people with your evidence) outcome will completely dominate everything else, no matter how fanciful the outcome is described to be.
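A minimal numerical sketch of that discontinuity, with entirely made-up numbers: under SIA's prior-times-population weighting, an arbitrarily small prior on a vast-population hypothesis swamps everything else.

```python
# SIA weights each hypothesis by (prior x number of observers with your evidence).
# The specific numbers here are invented purely for illustration.
priors = {"mundane": 1 - 1e-9, "vast-world": 1e-9}
observers = {"mundane": 1.0, "vast-world": 1e15}

weights = {h: priors[h] * observers[h] for h in priors}
total = sum(weights.values())
posterior = {h: w / total for h, w in weights.items()}

print(posterior["vast-world"])  # ~0.999999: a 1e-9 prior ends up dominating
```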

I think you bite this bullet, but it seems like the upshot is that we should just not reason anthropically whenever we're not in a toy thought experiment where only a small handful of mathematically modest hypotheses occupy literally 100% of the probability mass. And, more ambitiously, induction should probably be regarded as problematic. I don't think divine hypotheses come to the rescue here, and if they do then they come to the rescue *too* much and render inductive inference probabilistically certain (since the cardinality of observer-moments with your evidence who are inducing reliably will be larger than those who induce unreliably).

author

I think we should give infinitesimal probabilities to being the first person. But I don't think this is so problematic. Imagine that infinitely many people were put to sleep and then various numbers were woken up: SIA says that reasoning about creation is like reasoning about being woken up in that scenario.


I'm not 100% sure I understand what this is in response to. My point was that, in a real anthropic situation, you shouldn't waste any of your thought wondering about (say) the physical odds that God tossed heads based on some cute setup that God wrote on the wall. It doesn't matter what they are, because all of the probabilistic air is going to be sucked out of the room by tiny-probability insane-sounding hypotheses that posit infinite-in-expectation numbers of people with your evidence, and which have nothing to do with the setup, because you haven't truly learned the setup with 100% certainty (or even 100 - infinitesimal percent certainty).

author

I agree with that, but in the thought experiment setups we're imagining that the only options you have any credence in are finite cases. My point was that even though the math is weird, we'll all have to have something to say about an analogous case, so SIA is no worse off in terms of having to think there's some weird way to do probabilistic reasoning.

I think the argument you're giving about certainty of an infinite world just is the presumptuous philosopher argument inflated, and what I have to say about it is similar.

Jul 4

>I agree with that, but in the thought experiment setups we're imagining that the only options you have any credence in are finite cases.

I had that in mind as well. The problem comes up for SIA in strictly finite cases, as long as the relevant expectations are infinite (which is different from any of the individual outcomes themselves being infinite).

SSA, unlike SIA, doesn't suffer from this problem in finite cases, because it's continuous in one's priors and thus isn't ever massively derailed by arbitrarily tiny changes thereto. No matter what thought experiment you come up with, as long as every outcome is finite, a small enough change to your priors will yield a correspondingly small change to your posteriors under SSA.
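For contrast with the SIA sketch above, again with made-up numbers: if every observer in each hypothesis shares your evidence, SSA's posterior simply tracks the prior, so small prior perturbations yield small posterior changes.

```python
# SSA: weight by (prior x fraction of the reference class in your situation).
# If everyone in each world shares your evidence, that fraction is 1 in every
# world, so the posterior equals the prior; tiny prior changes stay tiny.
def ssa_posterior(priors: dict[str, float]) -> dict[str, float]:
    weights = {h: p * 1.0 for h, p in priors.items()}  # fraction = 1 per world
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

print(ssa_posterior({"mundane": 1 - 1e-9, "vast-world": 1e-9}))
print(ssa_posterior({"mundane": 1 - 2e-9, "vast-world": 2e-9}))  # barely moves
```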

>I think the argument you're giving about certainty of an infinite world just is the presumptuous philosopher argument inflated, and what I have to say about it is similar.

I definitely agree that this is similar to the presumptuous philosopher issue. I would say it's a little sharpened, in that SIA implies that you shouldn't actually use SIA in practice for anything in any conceivably realistic scenario other than perhaps concluding there's a large number of people with your evidence. That's basically the one and only thing it licenses you to conclude outside of toy thought experiments where we can assume any mathematically inconvenient hypothesis is false with fully 100% certainty. So this seems a bit unnerving. We generally dislike it when our principles consistently recommend against themselves. The problem isn't just "SIA says there's lots of people," it's "SIA almost never lets you use SIA except for this one, singular claim about there being lots of people."

And if you try to insist we can still charge ahead and use SIA but stick with infinitesimals for just about everything empirical (because they've been swamped by tiny probabilities of lots of people being deceived by a Cartesian demon or whatever), that's also really bad! Most people don't want to say they're infinitely less certain of the sky being blue than (say) the Riemann Hypothesis, for example.

author

In practice you should have some anthropic uncertainty and so not have your credences precisely track SIA.

Jul 5

Sure, but that's even more in accordance with "never using SIA in practice, except to say the world is large."
