7 Comments

I don't really have an opinion on Sleeping Beauty (even if I'm highly skeptical of the various anthropic principles enlisted to support the two answers), but this doesn't seem like enough. It may be true that you have self-locating evidence once the experiment starts that you lacked beforehand. But you essentially always have different self-locating evidence of this sort from moment to moment, in any context. If my opinion on the probability of George Washington being the first President suddenly flipped from 99% to 90%, it obviously wouldn't do to appeal to this new "evidence" that I lacked yesterday. Of course, in this example, the problem is that there's no reason to expect my self-locating evidence to be at all relevant to the matter at hand, whereas it's much more intuitive in Sleeping Beauty-like cases. But much more ought to be said as to why it's relevant there over and above fairly ambiguous intuition.

author

No, you don't always have self-locating evidence. If it's day 2, you have evidence that you woke up on day 2, which is incompatible with heads. It works just like the case with the red and green lights.

Apr 4·edited Apr 4

I don't understand this response. I'm not denying that self-locating evidence can be relevant. I'm saying that it's not clear what the general principle is governing its relevance.

(Also, in the example in your comment, it's not really the self-location that's decisive. "Someone in the experiment woke up on day 2" is just as incompatible with heads as "I woke up on day 2.")


> Clearly, two such statements, 'It will at some point be p' and 'It is p now' have different implications for action. For instance, the belief that it will rain sometime doesn't motivate me to take an umbrella, whereas the belief that it is raining now does.

This is just a semantic confusion. Suppose I know for sure that it will rain the whole day tomorrow. Then tomorrow comes and it indeed rains the whole day. Was I surprised? Did I learn something new? No! I made a prediction and this prediction was completely correct.

My knowledge state about the weather on the particular day D we are talking about didn't change at all. What changed is purely semantic: at one moment D was called "tomorrow" and later it was called "today".

> This time, Sleeping Beauty is told she will see three lights flashing (one after the other), being made to forget what she has seen after each flash. If the (fair) coin lands heads, one of the three flashes will be red and two will be green. If the coin lands tails, one will be green and two will be red.

> Upon seeing a red flash, she should obviously assign probability 1/3 to the coin's having landed heads. But here, too, we may be challenged to justify the change in probabilities. She knew all along she would see a red flash! Here, the argument isn't even tempting. She believes a red light is flashing now, and that clearly makes a difference.

No! It doesn't make a difference! Just as it didn't make a difference in the initial version of Sleeping Beauty. Weintraub even spells out exactly why it doesn't make a difference, before discarding the point without any argument.

It's just begging the question and appealing to the same intuition that makes people third in Sleeping Beauty to begin with. If you believe that Beauty learns something new on awakening - that she is awakened now - you would likewise believe that she learns something new on a light flash, and vice versa. But if you don't buy this for the initial version of Sleeping Beauty, you won't buy it for the version with flashing lights either. Is the flashing-lights version supposed to be more persuasive because the colors are different?

The core halfer point - that reasoning about "now" is unlawful in Sleeping Beauty, because there may be two different "nows" during the same iteration of the experiment - stays unaddressed. And when you try to approach it with mathematical rigor instead of imperfect human language, you see that it's actually true. Quoting myself from https://www.lesswrong.com/posts/gwfgFwrrYnDpcF4JP/the-solution-to-sleeping-beauty:

Consider the assumption that on an awakening Sleeping Beauty learns that "she is awoken today". What does it actually mean? A natural interpretation is that Beauty is awoken on Monday xor Tuesday. It's easy to see why this is true for [other] problems, where in every iteration of the experiment, if Beauty is awakened on Monday she is not awakened on Tuesday, and vice versa.

But it doesn't hold for the Sleeping Beauty problem, where individual awakenings do not happen independently. On Tails, both the Monday and Tuesday awakenings happen, so Beauty can't possibly learn that she is awoken on Monday xor Tuesday - that statement is false in 50% of cases. What Beauty actually learns is that "she is awoken at least once" - on Monday and (Tuesday or not Tuesday).
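To make that concrete, here is a rough simulation sketch, assuming the standard setup (Beauty is always awakened on Monday, and on Tuesday only if the fair coin lands Tails); it just checks how often each reading of "she is awoken today" comes out true across iterations:

```python
import random

def simulate(n_iterations=100_000, seed=0):
    """Sketch: compare the two readings of "she is awoken today" per iteration."""
    rng = random.Random(seed)
    xor_holds = 0        # "awakened on Monday xor Tuesday"
    at_least_once = 0    # "awakened on at least one of Monday, Tuesday"
    for _ in range(n_iterations):
        heads = rng.random() < 0.5
        monday = True            # she is always awakened on Monday
        tuesday = not heads      # the Tuesday awakening happens only on Tails
        if monday != tuesday:
            xor_holds += 1
        if monday or tuesday:
            at_least_once += 1
    return xor_holds / n_iterations, at_least_once / n_iterations

print(simulate())  # roughly (0.5, 1.0): the xor reading fails in the Tails iterations
```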

author

//This is just a semantic confusion. Suppose I know for sure that it will rain the whole day tomorrow. Then tomorrow comes and it indeed rains the whole day. Was I surprised? Did I learn something new? No! I made a prediction and this prediction was completely correct.//

You're getting very hung up on something very simple. Her point is just that there's a difference between knowing that X is true now and knowing that X has been true at some point.

//No! It doesn't make a difference! Just as it didn't make a difference in the initial version of Sleeping Beauty. Weintraub even spells out exactly why it doesn't make a difference, before discarding the point without any argument.//

Well then your view is just crazy. The odds that she'd see a red flash now are higher if it comes up tails than heads. It's not begging the question--almost all halfers would agree with Weintraub.
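For concreteness, a rough Bayes sketch of that per-flash reading, assuming the current flash is equally likely to be any of the three:

```python
from fractions import Fraction

# Setup from the quoted variant: heads -> one red flash of three; tails -> two red of three.
p_heads = Fraction(1, 2)
p_red_now_given_heads = Fraction(1, 3)
p_red_now_given_tails = Fraction(2, 3)

p_red_now = p_heads * p_red_now_given_heads + (1 - p_heads) * p_red_now_given_tails
p_heads_given_red_now = p_heads * p_red_now_given_heads / p_red_now

print(p_heads_given_red_now)  # 1/3, matching the answer in the quoted passage
```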


> The odds that she'd see a red flash now are higher if it comes up tails than heads.

Once again, there is no coherent way to talk about "now" in Sleeping Beauty.

You can only talk about "seeing a red flash in this iteration of the experiment". And that probability is 100%, whether the coin is Heads or Tails.

> almost all halfers would agree with Weintraub.

I seriously doubt it. Not that it matters much, anyway.


The argument for 1/2, if taken seriously, does not actually support assigning a credence of 1/2 in any reasonable situation, nor does it support the self-sampling assumption or the strong self-sampling assumption. It actually supports full non-indexical conditioning, which has some really crazy implications and also violates all of the principles that halfers say thirders are violating (e.g. "Don't expect your future self to have a different credence").

For instance, imagine Beauty looks out her window and sees that it's raining. Now, the halfer can no longer say, "She has no new information because this would have happened either way." Beauty was more likely to see rain at some point if the coin landed tails, since then she would see rain if it rained on Monday or Tuesday, whereas, if the coin landed heads, she would only see rain if it rained on Monday. By the thirder's arguments, seeing the rain is irrelevant because she's just as likely to see rain *now*, given that she's awake now, whether the coin landed heads or tails. But the halfer rejects using indexical information about what's happening *now* to update their probabilities, so they must think that seeing the rain really does make it more likely that the coin landed tails. There's nothing else they can use to screen off the evidence. So the halfer must believe that observing something completely irrelevant, like rain, updates the probability towards tails. Even worse, observing that it's sunny outside would also update the probability towards tails. In fact, any observation that could have been different between Monday and Tuesday updates the probability, with P(heads) converging to 1/3 the more the "halfer" observes.
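Here is a rough sketch of that convergence, under the simplifying assumption that each day Beauty is awake she makes one incidental observation (rain, sun, whatever) drawn uniformly from N equally likely possibilities, independently across days, and she conditions only on the non-indexical fact "this exact observation occurred on at least one day I was awake":

```python
from fractions import Fraction

def p_heads_after_observation(n):
    # Heads: one awake day (Monday) on which the specific observation could occur.
    p_match_given_heads = Fraction(1, n)
    # Tails: two awake days, so two independent chances for the observation to occur.
    p_match_given_tails = 1 - (1 - Fraction(1, n)) ** 2
    prior = Fraction(1, 2)
    return prior * p_match_given_heads / (
        prior * p_match_given_heads + (1 - prior) * p_match_given_tails
    )

for n in (2, 10, 1_000, 10**6):
    print(n, float(p_heads_after_observation(n)))
# 0.4, ~0.345, ~0.3334, ~0.3333: approaches 1/3 as the observation gets more specific
```

The more detailed the incidental observation (larger N), the closer the "halfer" who reasons this way gets to the thirder's 1/3.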
