The Closed Eyes Argument For Thirding
"In her house at R'lyeh sleeping beauty waits dreaming."
Don't you know when your eyes are closed
You see the world from the clouds along with everybody else?
Don't you know when your eyes are closed
You see the world from the clouds along with everybody else?
—Close Your Eyes by The Midnight Club
The sleeping beauty problem is one of the most hotly debated topics in decision theory. Like Newcomb’s problem, it’s one of those topics where everyone seems to find their answer obvious, yet no one agrees. The first paper on it (which settled the issue) was by Adam Elga, who described it thus:
The Sleeping Beauty problem: Some researchers are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?
There are two main answers: 1/2 and 1/3. Halfers say that you should start out 50/50 before waking up, and that because you’ll wake up either way, you should remain 50/50 (this inference is false for a rather subtle reason). Thirders say that, because there are twice as many wakings if the coin comes up tails as if it comes up heads, upon waking you should think tails is twice as likely as heads.
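To make the disagreement concrete, here is a minimal simulation sketch (my illustration, not part of either camp’s official argument): tallying the coin once per experiment gives the halfer’s 1/2, while tallying it once per waking gives the thirder’s 1/3. The dispute is over which tally tracks rational credence upon waking.

```python
import random

def original_sleeping_beauty(trials=100_000):
    """Heads: one waking; Tails: two wakings, memory erased in between."""
    heads_runs = 0      # experiments where the coin landed Heads
    heads_wakings = 0   # wakings that occur inside a Heads experiment
    total_wakings = 0
    for _ in range(trials):
        heads = random.random() < 0.5   # fair coin
        wakings = 1 if heads else 2
        total_wakings += wakings
        if heads:
            heads_runs += 1
            heads_wakings += 1
    print("Heads frequency per experiment:", heads_runs / trials)           # ~1/2
    print("Heads frequency per waking:", heads_wakings / total_wakings)     # ~1/3

original_sleeping_beauty()
```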
Now imagine that we modify the scenario slightly. You’re put to sleep and a fair coin is flipped. If it comes up heads, you’ll be woken once in the laboratory, put back to sleep, moved to your bed at home, have your memory erased, and be woken there a second time. If it comes up tails, you’ll be woken in the lab, put back to sleep, have your memory erased, and be woken a second time in the lab. Suppose that you wake up in the lab: what should your credence be that the coin came up tails?
I submit that halfers in the original sleeping beauty problem should say 1/2 here too. After all, heads and tails both predict, with equal confidence, that you’ll wake up in the lab, so you haven’t learned anything new. Furthermore, in the original sleeping beauty problem, presumably after you aren’t woken up again you’ll simply be sent home. So halfers in the original problem already hold that if you will be sent home on the second day, then after waking up in lab conditions you should think there’s a 50% chance the coin came up tails. The only difference between that case and this one is that here, when you are sent home and go to sleep, your memory is erased. But surely that shouldn’t make a difference: whether you wake up in your bed with memories or without them on the second day, you still have experiences incompatible with the coin having come up tails. If heads means you’ll wake up twice, one of those times in a way incompatible with tails, it shouldn’t matter what that second waking looks like, so long as it remains incompatible with tails.
So from the halfer view it follows that, in the scenario where you’re put to sleep in your room without memories on the second day if the coin comes up heads, waking up in the lab should leave you thinking there’s a 50% chance the coin came up tails. Now let me show why you shouldn’t think that, and so why the halfer view must be false.
Imagine that when you wake up, before you know which room you’re in, you think about anthropics while your eyes are closed. You reason: heads and tails both predict that I’ll wake up twice. Awakening therefore gives me no evidence either way, and because my eyes are closed, I don’t know whether I’m in my room or not. So right now I should think there’s a 50% chance that the coin came up tails. However, if the coin came up tails, I must be in the lab room, while if it came up heads, there’s only a 50% chance I’m in the lab room now, so if I am in the lab room, I should think there’s a 2/3 chance that the coin came up tails.
Then you open your eyes and find yourself in the lab room. By the above reasoning, your credence that the coin came up tails should be 2/3. Therefore, in the case where you’re woken up twice in the lab room if the coin comes up tails, if you have time to think about anthropics before finding out which room you’re in, you should think the odds that the coin came up tails are 2/3.
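To spell out the arithmetic, here is that update written as a short Bayes calculation (a sketch of the reasoning above, using the eyes-closed 50/50 as the prior):

```python
# The eyes-closed reasoning above, as a Bayes update (sketch).
p_tails = 0.5                # prior before opening your eyes
p_lab_given_tails = 1.0      # Tails: both wakings happen in the lab
p_lab_given_heads = 0.5      # Heads: one waking in the lab, one at home

p_tails_given_lab = (p_lab_given_tails * p_tails) / (
    p_lab_given_tails * p_tails + p_lab_given_heads * (1 - p_tails)
)
print(p_tails_given_lab)  # 0.666..., i.e. 2/3
```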
Here’s my last claim: the odds you should give to the coin having come up tails in this scenario shouldn’t depend on whether you think about anthropics with your eyes closed! Surely whether you happened to think about anthropics when your eyes were closed isn’t relevant to the rational credence in some event given some anthropic evidence. Anthropic data that you update on shouldn’t be sensitive to whether you actually thought about the anthropic situation before observing that data.
But from this it follows that in the scenario where you’re awoken twice in the lab if the coin came up tails and once in the lab and once in your room if the coin came up heads, if you wake up in the lab, you should think that there’s a 2/3 chance the coin came up tails. But that’s the claim that halfers in sleeping beauty should deny, for the reasons I gave before. So halfing is the wrong answer in sleeping beauty.
This chain of reasoning is a bit tricky to spell out. I’ll model it with arrows, where A—>B means B follows from A.
Halfing being right in sleeping beauty —> halfing being right in the scenario where you’re awoken twice in the lab if the coin came up tails, and once in the lab and once in your room if it came up heads —> halfing being right in that same scenario when, before knowing which room you’re in, you think about anthropics with your eyes closed. But halfing is not right in that last scenario, so halfing in sleeping beauty is wrong.
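As a sanity check on the 2/3 figure, here is a simulation sketch of the modified scenario. Note the counting rule it encodes: each lab awakening is tallied separately, which is itself the per-awakening accounting that a committed halfer will dispute.

```python
import random

def modified_scenario(trials=100_000):
    """Heads: wake in lab, then at home (memory erased). Tails: wake in lab twice."""
    lab_wakings = 0
    tails_lab_wakings = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        rooms = ["lab", "home"] if heads else ["lab", "lab"]
        for room in rooms:
            if room == "lab":
                lab_wakings += 1
                if not heads:
                    tails_lab_wakings += 1
    print("Tails frequency among lab wakings:",
          tails_lab_wakings / lab_wakings)  # ~2/3

modified_scenario()
```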
> However, if the coin came up tails, I must be in the lab room, while if it came up heads, there’s only a 50% chance I’m in the lab room now, so if I am in the lab room, I should think there’s a 2/3 chance that the coin came up tails.
This is wrong.
As I've explained here: https://www.lesswrong.com/posts/gwfgFwrrYnDpcF4JP/the-solution-to-sleeping-beauty
Beauty can't lawfully reason about the problem while treating awakenings as individual outcomes. "This awakening" is not a "random awakening": awakenings in the experiment do not happen at random; they have an order. Tails&Monday is always followed by Tails&Tuesday. Nor can you say that "this awakening" is "any awakening," because the probability that the coin is Heads differs between the first and second awakenings.
To reason correctly about the problem, you need to talk about events that happen in this experiment, not about an ill-defined "this awakening." Applying the same principle here, we get:
P(Lab) = 1; P(Heads|Lab) = P(Tails|Lab) = 1/2. Regardless of the outcome of the toss, in every experiment you will be awakened in the lab, so finding yourself in the lab in this experiment doesn't tell you anything about the state of the coin.

P(Darkness) = 1; P(Heads|Darkness) = P(Tails|Darkness) = 1/2. Regardless of the outcome of the toss, in every experiment you find yourself with closed eyes thinking about anthropics, and it tells you nothing about the state of the coin.

P(Home) = 1/2; P(Heads|Home) = 1. You find yourself home after a memory loss only when the coin is Heads, so you update in favor of Heads.
And so everything adds up to normality.
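This per-experiment counting can be checked the same way as the per-awakening counting above; a sketch that tallies each event once per run (my illustration, assuming "finding yourself in X" means "X occurs at some point during the run"):

```python
import random

def per_experiment(trials=100_000):
    """Tally events once per experimental run, not once per awakening."""
    lab = lab_heads = 0
    home = home_heads = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        lab += 1                 # every run includes a lab awakening
        if heads:
            lab_heads += 1
            home += 1            # only Heads runs include a home awakening
            home_heads += 1
    print("P(Lab):", lab / trials)                 # 1.0
    print("P(Heads|Lab):", lab_heads / lab)        # ~0.5
    print("P(Home):", home / trials)               # ~0.5
    print("P(Heads|Home):", home_heads / home)     # 1.0

per_experiment()
```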
I [am a thirder, but] have flirted with halfing and can clearly see the attraction. Even so, I don’t quite see how the conclusions you reach in your version of the experiment transfer over to the original version.

If I understand your version, you are giving the subject an equal number of observations (wake-ups) regardless of the outcome of the coin flip? That seems to materially change the experiment (whether or not the subject’s eyes are open). To me at least, the thing that makes the problem strain my intuition is exactly the mismatch between the uneven number of samplings/observations and the 50/50 coin flip. Even it out, and there’s nothing more to solve. I guess that’s maybe the point: to create a scenario that’s easier to parse. But I think something more is lost along the way.
[EDIT: I deleted the rest of this comment, as I had my mind challenged / changed enough by Ape in the coat, below, that I didn’t want to leave it up.]