Contra Philosophy Bear and Aaron Bergman on Sleeping Beauty
1/3>1/2 (this is not true as a purely mathematical point)
I tend not to have especially strong views about the various philosophical paradoxes that draw extreme controversy. I lean towards two-boxing in Newcomb's problem, for example, but I'm not so sure about that; I still often feel the pull of one-boxing. But when it comes to Sleeping Beauty, I feel extremely confident that thirding is the correct answer. This is one of the philosophical views I hold very strongly, one I'd be pretty surprised to find out I'm wrong about. For those who don't know what I mean by thirding in Sleeping Beauty, see here.
Here’s an argument for why one should third:
1. If one should bet in accordance with thirding rather than halving, one should third.
2. If betting in accordance with thirding rather than halving results in one reliably gaining money when iterated, then one should bet in accordance with thirding rather than halving.
3. Betting in accordance with thirding rather than halving results in one reliably gaining money when iterated.
4. Therefore, one should third.
To illustrate the betting disagreement, suppose that upon waking up, one is given the following bet: if the coin turned up heads, one has to pay six dollars; if it turned up tails, one receives four dollars. Should one take the bet? Halvers would say no, thirders would say yes, for upon waking up you should have 2 to 1 odds that you got tails. Thus, thirders think there's a 2/3 chance that you'll get four dollars and a 1/3 chance that you'll pay six dollars, for an expected gain of (2/3)·4 − (1/3)·6 = +2/3 dollars per awakening, whereas halvers compute (1/2)·4 − (1/2)·6 = −1 dollar and decline.
Premise 1 is pretty trivial. It would be silly to think that X is true with some probability but that it would be irrational to bet in accordance with that belief.
Premise 2 is also very plausible. If one who repeatedly bets in accordance with halving loses money over time, then halving is clearly not the right way to bet. Where the relevant events are probabilistically independent, betting on the correct credence wins money consistently, and one who follows the correct advice will win infinitely often in infinite trials. Notably, the only reason one would win by betting in accordance with thirding over halving is that in iterated trials one would be right most of the time, which seems to give us good reason to accept that thirding is rational.
Premise 3 is trivial: if you keep betting in accordance with thirding, you'll win over time. There will be two awakenings at which you collect if the coin lands tails and only one at which you pay if it lands heads.
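To make premise 3 concrete, here is a minimal simulation of the iterated bet (the sketch is mine; the dollar amounts are from the example above):

```python
import random

def iterated_bet(n_experiments: int) -> float:
    """Beauty takes the example bet at every awakening: she pays $6 at
    each heads awakening and collects $4 at each tails awakening."""
    total = 0
    for _ in range(n_experiments):
        if random.random() < 0.5:
            total -= 6      # heads: one awakening, one $6 loss
        else:
            total += 4 + 4  # tails: two awakenings, two $4 wins
    return total

random.seed(0)
n = 100_000
print(iterated_bet(n) / n)  # ~ +1.0: about a dollar gained per experiment
```

The halver, whose per-awakening expected value is negative, declines the bet every time and forgoes this reliable profit.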
A second argument is well-summarized here:
The Principle of Indifference states that, when we have n different possibilities and no reason to expect any one of these possibilities over any other, we should assign a credence of 1/n to the claim that any one possibility will occur. Think about a fair die: we have no evidence that any one of the six identical die faces is more likely to result from a die roll. Thus, we should assign a credence of 1/6 to the claim “the die will land six on the next roll.”
The Principle of Indifference at first seems to favor the halfer since we have no reason to believe that the coin is any more likely to land heads than tails or vice-versa. But, thirders argue, what we should really be indifferent over is Beauty’s three indistinguishable possible waking events (waking Monday when the coin landed heads, waking Monday when the coin landed tails, and waking Tuesday when the coin landed tails), and so we should assign a credence of 1/3 to H.[5] The thirder’s way of dividing up the world into indistinguishable states is more detailed and fine-grained than the halfer’s and so, they argue, represents a better application of the Principle of Indifference.[6]
I'll probably write a bunch of articles on the subject, but here I'll just briefly reply to two posts by my fellow Substackers. The first comes from my friend Aaron Bergman, whose blog is well worth checking out, and also, less notably, from some obscure philosopher named David Lewis:
1. By assumption, P(Heads) = P(Tails) = 1/2, so Sleeping Beauty would answer "1/2" before being put to sleep for the first time.
2. Sleeping Beauty gains no new information upon waking.
3. Therefore, she should not change her answer upon waking.
Aaron defends premise 2 by noting that "waking is not evidence that the coin landed tails." This is true: the mere fact that you woke up at some point is not evidence that the coin came up tails. But when evaluating evidence, one should take into account the most specific version of the evidence. If you saw someone holding a butter knife, it would be irrational to count this as evidence that they committed a murder on the grounds that they had a knife: even though the mere fact that they had a knife is more strongly predicted by the murder hypothesis, the fact that they had a butter knife is not.
In this case, the most specific version of the evidence that Beauty has is not "I'm awake at some point," but "I'm awake now." Her being awake now is more strongly predicted by the hypothesis that the coin came up tails, because now could be either Monday or Tuesday. And if it's Tuesday, it's guaranteed that the coin came up tails, since if the coin had come up heads she'd only be awake on Monday.
One way to see that Bergman’s reasoning is wrong is to imagine a scenario like the original where Beauty wakes up, looks outside, and infers from the weather that it’s a Monday. Upon finding this out, it seems she should be 50/50 by Bergman’s logic—she knew she’d be awake on a Monday so she learned nothing new. But if she should be 50/50 conditional on finding out that it’s a Monday and conditional on finding out that it’s a Tuesday she should be 100% sure that the coin came up tails, then being unsure which of the three days it is she should spill her credence three ways and thus third.
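Here is that bookkeeping made explicit (a small enumeration of my own; the credences are just the thirder's):

```python
from fractions import Fraction

# The three centered possibilities (coin, day), weighted equally,
# as the thirder's application of indifference prescribes.
credence = {("Heads", "Monday"):  Fraction(1, 3),
            ("Tails", "Monday"):  Fraction(1, 3),
            ("Tails", "Tuesday"): Fraction(1, 3)}

def conditional(event, given):
    """P(event | given), summing credences over the centered worlds."""
    joint = sum(p for w, p in credence.items() if event(w) and given(w))
    prior = sum(p for w, p in credence.items() if given(w))
    return joint / prior

print(conditional(lambda w: w[0] == "Heads", lambda w: w[1] == "Monday"))   # 1/2
print(conditional(lambda w: w[0] == "Tails", lambda w: w[1] == "Tuesday"))  # 1
print(sum(p for w, p in credence.items() if w[0] == "Tails"))               # 2/3
```

These are exactly the numbers in the argument above: 50/50 on learning it's Monday, certainty of tails on learning it's Tuesday, and 2/3 on tails while unsure of the day.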
You might reply that she learns something, namely that on the day she woke up it was Monday. But the halver can't make this reply in this dialectical context:
Both theories predict she'd wake up on Monday. Tails just predicts she'd also wake up on Tuesday. It's not that she wakes up on a random one of the days; it's that she wakes up on both days.
Pointing out that "she's awake now" is something she learns is exactly the line of reasoning employed by thirders, who point out that she updates on the new information that she's awake at the particular time that she's awake.
Aaron has another intuition pump. Imagine that if the coin lands tails, Beauty will be woken up infinitely many times, while if it lands heads she'll be woken up once. Aaron claims it's unintuitive that upon being woken up she should give infinity-to-one odds that the coin came up tails. But I don't agree.
For one, I don't find this unintuitive. She'll be awake infinitely more often if the coin came up tails, so upon finding out that she's awake now she gets very strong evidence that the coin came up tails. If she bets in accordance with this and assigns 100% probability to the coin having come up tails, she'll win infinitely more than if she doesn't, if the experiment is iterated.
Second, we need to distinguish between confidence from within an argument and confidence outside of an argument. I'm pretty sure that thirding is rational, but I'm not 100% sure, so it makes sense, given that uncertainty, not to assign 100% credence to my reasoning being right. As a result, in the scenario where Beauty is woken infinitely many times if the coin comes up tails, upon waking up I'd really only be about 90% confident that the coin came up tails.
Aaron isn't sure why people third, and consequently gives a similar case that is supposed to explain where the intuition comes from:
Suppose you’re one of the experimenters, and divide the days of Monday and Tuesday into half-hour chunks. You then randomly select one, and find out that during this block Sleeping Beauty happened to be woken by your colleague.
Since this time you could have observed otherwise (unlike Sleeping Beauty in the thought experiment discussed), you'd be correct to conclude that P(Tails)=2*P(Heads), which implies in this case P(Heads)=1/3.
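Aaron's quoted computation is easy to sanity-check with a simulation (mine; for simplicity I assume Beauty is awake during every half-hour block of a day on which she is woken):

```python
import random

random.seed(0)
heads_when_awake, awake_blocks = 0, 0
for _ in range(200_000):
    heads = random.random() < 0.5            # the fair coin
    block_on_monday = random.random() < 0.5  # a uniformly sampled half-hour block
    # Heads: Beauty is only around on Monday; tails: she's around both days.
    if block_on_monday or not heads:
        awake_blocks += 1
        heads_when_awake += heads
print(heads_when_awake / awake_blocks)       # ~ 1/3: P(Heads | awake in this block)
```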
I claim that this scenario is relevantly like the original Sleeping Beauty case. One other way to see this is the following. Halvers would say that upon finding out that it's Monday, you should think the odds you got heads are 1/2. But this means that what you should think on Monday depends on what could have happened on Tuesday, because if you would never have been born on Tuesday conditional on the coin coming up tails, then you should have a 50% credence in the coin having come up tails upon finding out that it's Monday. But surely after finding out that it's Monday, to decide upon your credences, you don't have to know what will happen on Tuesday conditional on the coin coming up tails! It hasn't happened yet, so what will happen conditional on the coin coming up tails can't be relevant evidence.
Here's another way to see that Bergman's diagnosis is wrong. Bergman claims that the relevant difference between the Sleeping Beauty case and the experimenter case is that the experimenter would have been around even if the time slice were unoccupied, while Beauty wouldn't be. But surely this can't be right. To see this, imagine the following scenario:
You're one of the experimenters, and divide the days of Monday and Tuesday into half-hour chunks. You then randomly select one, and find out that during this block Sleeping Beauty happened to be woken by your colleague. However, you find out that if you had picked a block in which Beauty wasn't around, you would have been quickly assassinated and thus unable to make the observation.
In this case, the person wouldn't have been able to make the observation had they chosen a block where Beauty wasn't around. Nonetheless, the observation gives them evidence that Beauty is around for longer. So it can't merely be the impossibility of being around to observe contradictory evidence that makes the difference.
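Running the same simulation with the non-observing branch turned into an assassination (again my construction) makes the point vivid: the conditional probability is unchanged.

```python
import random

random.seed(1)
surviving, surviving_heads = 0, 0
for _ in range(200_000):
    heads = random.random() < 0.5
    block_on_monday = random.random() < 0.5
    if block_on_monday or not heads:  # Beauty is awake: the experimenter lives
        surviving += 1
        surviving_heads += heads
    # else: the experimenter is assassinated and records nothing
print(surviving_heads / surviving)    # ~ 1/3, unchanged by the assassination clause
```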
The second article I’ll discuss comes from Philosophy Bear, whose blog is excellent, widely described as the third best philosophy blog on the internet after Good Thoughts and one I cannot name due to humility. PB says:
Suppose you think that, on the scientific evidence, there is a 50/50 chance that we are in a many-worlds interpretation universe. This is in some sense equivalent to God flipping a coin to decide whether the universe will be many worlds or single world. Now you wake up in the morning. You know that if you live in a many-worlds interpretation universe, an infinity of you will make this observation. You know that if you don’t, one of you will make this observation. There is of course nothing special about waking from sleep, any moment will do.
It seems to me that if you're a thirder in the Sleeping Beauty case, you should update your beliefs to be infinitely near certain that the many worlds interpretation is true if you started out thinking that the probability is anything greater than zero.
I agree. But I don't think this is much of an objection. Taking anthropics seriously means taking it seriously when it says weird things, which it does on every view of anthropics. The odds that this version of me would exist are higher conditional on many worlds being true, because there are more copies of me; this is just a straightforward probabilistic inference.
But this leads to interesting problems even if you believe the reasoning in the many worlds case. I’m not up on transfinite probabilities, but, for example, suppose there were some highly unlikely theory T (let us say one billionth as likely as the many worlds theory) which suggests there are even more worlds than in the many worlds interpretation, a higher cardinality of infinity. Now observe:
Guilty as charged. Again, this just seems like straightforward probabilistic reasoning—if there are more people it’s likelier that I’d be one of them. This gives us a good reason to think that every possible agent was created because that makes it more likely that I in particular would be created. There are at least Beth 2 possible agents, so we should think that there are at least Beth 2 actual agents too!
Philosophy Bear says that this can keep going, with the numbers getting ever bigger, such that the odds that there are Beth n−1 actual agents is always lower than the odds that there are Beth n actual agents. But I don't think so. The reason to third is that you're not sure which of the possible agents you are. But among the hypotheses on which all the possible agents are created, we don't have any reason to favor one over another.
You might worry that this entails strong necessitarianism, on which the number of possible agents would be small. But this will be ruled out on priors: the odds that there is a small number of necessary agents and that I'd be one of the lucky few are exceedingly low, given that I'm not special relative to the many possible clones and other agents.
Finally, PB says this results in concluding that there is eternal recurrence. This is a bit trickier, because the cardinality of the set of person moments under infinite recurrence is still Beth 2. But it's not out of the question that we should get a big update toward eternal recurrence. We might not update wholly in favor of eternal recurrence, because there's no possible scenario with a non-zero probability that entails that there are Beth 2 agents. You might think it's weird to assign zero probability to a scenario like this, but there are infinitely many possible scenarios, so the average prior has to be 1/Beth 2, which is still 0.
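For what it's worth, the zero-prior claim can be made precise. A minimal sketch, assuming the scenarios are mutually exclusive and all receive the same prior $p$: picking any finite number $n$ of them gives

$$
n \cdot p \le 1, \quad\text{so}\quad p \le \frac{1}{n} \text{ for every finite } n, \quad\text{hence}\quad p = 0.
$$

With Beth 2 many equiprobable scenarios, no positive real prior is available; that is the sense in which the average prior "has to be 1/Beth 2," i.e. 0.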
Accepting SIA yields really weird results. But so does accepting SSA and every other view of anthropics. It would be surprising if anthropics weren't weird and didn't push us in weird directions. But the bullets SIAers have to bite seem to me like good bullets that aren't too improbable. One who bets in accordance with SIA in any of these scenarios would win in expectation. When I think about these results, they strike me as plausible conclusions one has derived rather than costs to the theory.
> Here’s an argument for why one should third
I've just finished a post showing that thirders actually have more problems with betting in Sleeping Beauty and its derivatives than halfers who follow the correct (not Lewis's) model.
https://www.lesswrong.com/posts/cvCQgFFmELuyord7a/beauty-and-the-bets
> The Principle of Indifference
You can't appeal to indifference when you already have some knowledge about the events. The Beauty knows that her awakening routine is determined by a fair coin toss. That means she is no longer indifferent between the three awakenings. If there were no coin toss (just three possible awakenings, only one of which can happen in the experiment), or if the Beauty at least didn't know about the coin toss and the order between awakenings, then yes, she should follow indifference and be a thirder; this is the No-Coin-Toss problem, for which Elga's model applies.
> One way to see that Bergman’s reasoning is wrong is to imagine a scenario like the original where Beauty wakes up, looks outside, and infers from the weather that it’s a Monday. Upon finding this out, it seems she should be 50/50 by Bergman’s logic—she knew she’d be awake on a Monday so she learned nothing new. But if she should be 50/50 conditional on finding out that it’s a Monday and conditional on finding out that it’s a Tuesday she should be 100% sure that the coin came up tails
There is nothing wrong here. This is exactly how one is supposed to reason about the problem while sticking to probability theory. And in the comment below I explain why.
> then being unsure which of the three days it is she should spill her credence three ways and thus third.
What three days? You mean which of three awakenings? It would be true if the awakenings were mutually exclusive and therefore could be treated as outcomes for a sample space. But as there is order between them and the Beauty knows about it, she can't lawfully spill her credence between them.
> You might reply that she learns something
She didn't, and that's the whole point. She knew that she was to be awake on Monday regardless of the outcome of the coin, and thus she doesn't learn anything new when she is told that she was indeed awakened on Monday:
P(Heads) = P(Heads|Monday) = 1/2
> But this means that what you should think on Monday depends on what could have happened on Tuesday, because if you would never have been born on Tuesday conditional on the coin coming up tails, then you should have a 50% credence in the coin having come up tails upon finding out that it's Monday. But surely after finding out that it's Monday, to decide upon your credences, you don't have to know what will happen on Tuesday conditional on the coin coming up tails! It hasn't happened yet, so what will happen conditional on the coin coming up tails can't be relevant evidence.
Ehm... What? Sorry, I don't understand what you are talking about here. No one is being born in the experiment. Could you maybe rephrase your argument differently?
> Here’s another way to see that Bergman’s diagnosis is wrong. Bergman claims that the relevant difference between the sleeping beauty and the experimenter case is that the experimenter would have been around the time slice is unoccupied while beauty wouldn’t be. But surely this can’t be right.
Bergman might have formulated the actual principle poorly. The relevant difference is whether the person might expect not to observe some evidence. The experimenters on a random day might not observe that the Beauty is awake, either because they see her asleep or because they are killed. And therefore observing the Beauty awake is relevant evidence that updates them in favor of Tails. The Beauty always observes herself awake so it's not relevant evidence in favor of Tails. This is just how the conservation of expected evidence works.
> Accepting SIA gets really weird results. But so does accepting SSA and every other view of anthropics.
And that's why we should accept neither SIA nor SSA, nor any other anthropic theory that produces weird results, and keep looking for an approach that produces correct results in every case.
> In this case, the most specific version of the evidence that Beauty has is not “I’m awake at some point,” but “I’m awake now.”
No, the Beauty doesn't have such evidence. Probability theory doesn't allow one to deal with time moments unless they are randomly sampled in some manner. The Monday and Tuesday awakenings on Tails are not random; they happen in an ordered manner. So the Beauty can't lawfully reason about them separately.
The Beauty observes the event "I'm awakened during the experiment at least once." This is what she expected to happen, so she doesn't update her probability estimate for Heads in any way. Likewise, if she is told that it's Monday, she observes the event "I'm awake on Monday in this experiment," which is also something she expected regardless of the outcome of the coin toss, so she doesn't update:
P(Heads|Monday)=P(Heads&Monday)=P(Heads|Awake)=P(Heads&Awake)=P(Heads)=1/2
P(Monday)=P(Awake)=1
P(Tuesday)=P(Tails)=1/2