Weintraub Disproves The Main Halfer Argument
Sometimes your credence tomorrow should be different from today
Weintraub has by far the best three-page paper on the Sleeping Beauty problem. She summarizes the problem nicely:
> Sleeping Beauty (Elga 2001) is told that she will go to sleep and be woken up once if a fair coin lands heads, twice if it lands tails. After the first waking, she will be made to forget it. Upon waking up, what probability ought she to assign to the coin's having landed heads?
>
> According to the first line of reasoning, the rational probability is 1/2. Before going to sleep the probability is 1/2: the coin is fair. And no new information has been received: she knew all along she was going to be wakened. The correct probability is 1/3, goes the second response. For every heads-waking, there are two tails-wakings.
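To make those two counts concrete, here is a minimal simulation sketch (mine, not Weintraub's; the function name and trial count are just illustrative). It tallies awakenings rather than coin flips, which is exactly the quantity the thirder's count tracks:

```python
import random

def fraction_of_heads_awakenings(trials=100_000):
    """Tally awakenings (not coin flips): heads gives one waking, tails gives two."""
    heads_wakings = 0
    tails_wakings = 0
    for _ in range(trials):
        if random.random() < 0.5:   # coin lands heads
            heads_wakings += 1      # woken once (Monday)
        else:                       # coin lands tails
            tails_wakings += 2      # woken twice (Monday and Tuesday)
    return heads_wakings / (heads_wakings + tails_wakings)

print(fraction_of_heads_awakenings())  # ≈ 1/3: one heads-waking for every two tails-wakings
```

Whether the fraction of awakenings is the right thing to identify with Beauty's credence is, of course, the whole dispute; the sketch only shows that the thirder's count comes out as advertised.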
Weintraub, like me, thinks that the solution is to say that Beauty does learn something. Namely, she learns that she's awake now. Now might be Tuesday, in which case her current state is incompatible with heads. But this might seem suspicious: how can you take something to be evidence if you know you'll be in the same state whichever theory is right?
Weintraub gives a great example showing why this talking point is wrong:
> I propose to uphold the second response by rebutting the premiss of the first. Sleeping Beauty, I suggest, has received new relevant information. True, she knew all along that she would be wakened. But now she knows she is awake now.
>
> Clearly, two such statements, 'It will at some point be p' and 'It is p now' have different implications for action. For instance, the belief that it will rain sometime doesn't motivate me to take an umbrella, whereas the belief that it is raining now does.
>
> Of course, one's opponent (Lewis 2001) might question the relevance of this information to the case in hand. But I think we can meet the challenge by slightly altering the story. This time, Sleeping Beauty is told she will see three lights flashing (one after the other), being made to forget what she has seen after each flash. If the (fair) coin lands heads, one of the three flashes will be red and two will be green. If the coin lands tails, one will be green and two will be red.
>
> Upon seeing a red flash, she should obviously assign probability 1/3 to the coin's having landed heads. But here, too, we may be challenged to justify the change in probabilities. She knew all along she would see a red flash! Here, the argument isn't even tempting. She believes a red light is flashing now, and that clearly makes a difference.
>
> The cases are analogous. In both of them, what she knew all along would happen she now knows to be actual. That is the only new information. And if it makes (as everyone will agree) a difference in the first case, why not in the second?
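Weintraub's 1/3 figure for the flash case is ordinary conditioning, and it's easy to check. A minimal sketch (my code, not hers; the function name is illustrative, and a uniformly sampled flash stands in for "the flash she is seeing now"):

```python
import random

def p_heads_given_red(trials=100_000):
    """Estimate P(heads | the flash being seen now is red)."""
    red_and_heads = 0
    red_total = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        # Heads: one red flash and two green; tails: two red and one green.
        flashes = ['red', 'green', 'green'] if heads else ['red', 'red', 'green']
        seen = random.choice(flashes)   # the flash she happens to be seeing now
        if seen == 'red':
            red_total += 1
            red_and_heads += heads
    return red_and_heads / red_total

print(p_heads_given_red())  # ≈ 1/3 = (1/2 · 1/3) / (1/2 · 1/3 + 1/2 · 2/3)
```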
Halfers often claim that you shouldn't be in an epistemic state where you predict that your credence tomorrow will be lower than it is today, even though tomorrow you'll be aware of everything you know today. But we can modify the case slightly to show that this is wrong. Imagine that if the coin comes up heads she'll see a red flash tomorrow, a green flash the day after, and a red flash the day after that. If the coin comes up tails, she'll see a green flash tomorrow, a red flash the next day, and a green flash the day after that.
Tomorrow, if she sees a green flash, she should think the odds are 2/3 that the coin came up tails. This is so even though, at the time, she'll be able to remember that today she was indifferent between heads and tails. If one doesn't know what time it is and has possibly misleading evidence about the present, it still makes sense to update on it.
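For the modified sequences above, the arithmetic is the same Bayes update. A short exact sketch (my code and labels; `schedule`, `posterior_tails`, and the use of exact fractions are all just illustrative, with the colour assignment from the previous paragraph):

```python
from fractions import Fraction

# Colour schedule in the modified example: heads -> red, green, red; tails -> green, red, green.
schedule = {'heads': ['red', 'green', 'red'],
            'tails': ['green', 'red', 'green']}
prior = Fraction(1, 2)  # fair coin

def posterior_tails(seen_colour):
    """P(tails | Beauty sees `seen_colour` on a day she cannot identify)."""
    likelihood = {h: Fraction(days.count(seen_colour), 3) for h, days in schedule.items()}
    joint = {h: prior * likelihood[h] for h in schedule}
    return joint['tails'] / (joint['heads'] + joint['tails'])

print(posterior_tails('green'))  # 2/3
print(posterior_tails('red'))    # 1/3
```

Either colour pushes her away from 1/2, even though before the experiment she knew with certainty that she would see both colours at some point; that is the point against the reflection-style argument.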
I think this totally dismantles the main halfer talking point. And it came at the right time: I was trying to think of an example like this but couldn't, and then I just happened to stumble upon Weintraub's paper. Very clever stuff.
I don't really have an opinion on Sleeping Beauty (even if I'm highly skeptical of the various anthropic principles enlisted to support the two answers), but this doesn't seem like enough. It may be true that you have self-locating evidence once the experiment starts that you lacked beforehand. But you essentially always have different self-locating evidence of this sort from moment to moment, in any context. If my opinion on the probability of George Washington being the first President suddenly flipped from 99% to 90%, it obviously wouldn't do to appeal to this new "evidence" that I lacked yesterday. Of course, in this example, the problem is that there's no reason to expect my self-locating evidence to be at all relevant to the matter at hand, whereas it's much more intuitive in Sleeping Beauty-like cases. But much more ought to be said as to why it's relevant there over and above fairly ambiguous intuition.
> Clearly, two such statements, 'It will at some point be p' and 'It is p now' have different implications for action. For instance, the belief that it will rain sometime doesn't motivate me to take an umbrella, whereas the belief that it is raining now does.
This is just a semantic confusion. Suppose I know for sure that it will rain the whole day tomorrow. Then tomorrow comes and it does indeed rain the whole day. Was I surprised? Did I learn something new? No! I made a prediction and that prediction was completely correct.
My knowledge state about the weather on the particular day D we are talking about didn't change at all. What changed is purely semantic: at some moment D was called "tomorrow" and then it was called "today".
> This time, Sleeping Beauty is told she will see three lights flashing (one after the other), being made to forget what she has seen after each flash. If the (fair) coin lands heads, one of the three flashes will be red and two will be green. If the coin lands tails, one will be green and two will be red.
> Upon seeing a red flash, she should obviously assign probability 1/3 to the coin's having landed heads. But here, too, we may be challenged to justify the change in probabilities. She knew all along she would see a red flash! Here, the argument isn't even tempting. She believes a red light is flashing now, and that clearly makes a difference.
No! It doesn't make a difference! Just as it didn't make a difference in the initial version of Sleeping Beauty. Weintraub even spells out exactly why it doesn't make a difference, and then discards that reasoning without any argument.
It's just begging the question and appealing to the same intuition that makes people thirders in Sleeping Beauty to begin with. If you believe that Beauty learns something new on awakening - that she is awakened now - then you will likewise believe that she learns something new on seeing a light flash, and vice versa. But if you don't buy this for the initial version of Sleeping Beauty, you won't buy it for the version with flashing lights either. Is the flashing-lights version supposed to be more persuasive because the colors are different?
The core halfer point - that reasoning about "now" is unlawful in Sleeping Beauty, because there may be two different "nows" during the same iteration of the experiment - stays unaddressed. And when you try to approach it with mathematical rigor instead of imperfect human language, you see that it's actually true. Citing myself from here https://www.lesswrong.com/posts/gwfgFwrrYnDpcF4JP/the-solution-to-sleeping-beauty:
> Consider the assumption that on an awakening Sleeping Beauty learns that "she is awoken today". What does it actually mean? A natural interpretation is that Beauty is awoken on Monday xor Tuesday. It's easy to see why this is true for [various example] problems, [where] in every iteration of the experiment, if Beauty is awakened on Monday she is not awakened on Tuesday and vice versa.
>
> But it doesn't hold for the Sleeping Beauty problem, where the individual awakenings do not happen independently. On tails both the Monday and the Tuesday awakenings happen, so Beauty can't possibly learn that she is awoken on Monday xor Tuesday - this statement is wrong in 50% of cases. What Beauty actually learns is that "she is awoken at least once" - on Monday and (Tuesday or not Tuesday).
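To make the 50% figure concrete, here is a quick check of the two readings (a sketch of the frequencies the comment cites, not code from the linked post; the function name is illustrative):

```python
import random

def reading_frequencies(trials=100_000):
    """Per run of the experiment: how often each reading of "she is awoken today" holds."""
    xor_true = 0        # awakened on Monday xor Tuesday
    at_least_once = 0   # awakened on at least one of the two days
    for _ in range(trials):
        heads = random.random() < 0.5
        monday, tuesday = True, not heads   # heads: Monday only; tails: both days
        xor_true += (monday != tuesday)
        at_least_once += (monday or tuesday)
    return xor_true / trials, at_least_once / trials

print(reading_frequencies())  # ≈ (0.5, 1.0)
```

Whether the indexical "she is awoken today" must be cashed out as the exclusive-or statement, rather than as a claim about the current awakening, is exactly what thirders deny; the sketch only confirms the frequencies the comment appeals to.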