This is quite interesting; I was unfamiliar with CC. Like with other views in anthropics, I have no idea whether I should believe it. You are right that it has some intuitive appeal, though some of your arguments against it are also convincing.
- In section 0 you say "even if two theories both predict some event will be experienced at some point, one still might make it likelier that I’d experience it". But unless closed individualism is correct, there's no fundamental distinction between the experience existing and its being experienced by some given person, right? Or am I misinterpreting your statement?
- I don't see why the exact sequence of experiences is relevant in section 1. The experience I'm having right now is consistent with a multitude of possible past paths of experience. CC doesn't need to condition on the probability that my exact experience sequence would exist, just the probability that my present experience would exist. This might still change over time, causing the betting issue you highlighted, but it would change idiosyncratically based on the uniqueness of my current experience.
- Your reasoning in section 2 makes sense to me, but it makes me think of another concern. If there are an infinite number of possible experiences, will a finite world have any experiences whose probability of existing is greater than 0? The answer seems like it would depend on whether experience is discrete or continuous, or something like that (a quick note on this after the list).
- The critiques in points 3/4 seem very strong to me.
- I'm kinda lost in section 6. You say "Before 3, you reason that it being Monday today is twice as likely if the coin came up heads." But if the coin came up heads, it is definitely Monday! Did you mean "Before 3, you reason that the coin having come up heads is twice as likely if it is Monday"? (I sketch the computation I have in mind below.)
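On the discrete/continuous worry in the section 2 bullet, here's the picture I have in mind (just a sketch, assuming possible experiences can be indexed like values of a random variable): if the space of possible experiences is continuous, each exact experience behaves like a single point under a probability density, and

$$P(X = x) = \int_x^x f(t)\,dt = 0 \quad \text{for every individual point } x,$$

so a finite world could only assign positive probability to *sets* of experiences, never to any particular one. If experience is discrete, individual experiences can carry positive probability and the worry doesn't arise.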
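And to make the section 6 correction concrete, here's the computation I'm gesturing at (a sketch, assuming the standard Sleeping Beauty setup: a fair coin, where heads means a single Monday waking and tails means wakings on both Monday and Tuesday):

$$P(\text{Monday} \mid \text{Heads}) = 1, \qquad P(\text{Monday} \mid \text{Tails}) = \tfrac{1}{2},$$

so by Bayes' theorem,

$$P(\text{Heads} \mid \text{Monday}) = \frac{1 \cdot \tfrac{1}{2}}{1 \cdot \tfrac{1}{2} + \tfrac{1}{2} \cdot \tfrac{1}{2}} = \tfrac{2}{3},$$

i.e. given that it's Monday, heads is twice as likely as tails — which is the reading I'd have expected.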
The perspective-based reasoning theory of anthropics also seems interesting. https://www.sleepingbeautyproblem.com/
That’s just CC.
I could be wrong because I don't feel like I fully understand it, but my sense is it's saying the "probability a person is you" isn't a well-founded proposition, and so you can't meaningfully make probability distributions over which first-person perspective is yours. I'm not so comfortable with it, even if it seems to resolve some anthropic paradoxes, because it sounds like it's arguing that there are cases where Bayesianism breaks down. Taking your own existence as evidence feels like something you should be able to do, but IDK. Anthropics is confusing.
Yeah, that view is the one this article is about, I think.
I hadn't heard of CC before this post, so I haven't spent much time thinking about it. How does it answer this problem? Apparently perspective-based reasoning says it's impossible to give an answer, because it's not something you can assign a probability to.
"Imagine during tonight’s sleep, an advanced alien would split you into 2 halves right through the middle. He will then complete each part by accurately cloning the missing half onto it. By the end, there will be two copies of you with memories preserved, indiscernible to human cognition. After waking up from this experiment, and not knowing which physical copy you are, how should you reason about the probability that 'my left side is the same old part from yesterday?' "
There are obviously *some* cases where Bayesianism breaks down. What's your Bayesian credence that Bayes' Theorem is true?
It's one; Bayesianism doesn't break down here.
(https://offhandquibbles.substack.com/p/probability-for-philosophers)
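(For reference, the theorem itself: for any events $H$ and $E$ with $P(E) > 0$,

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)},$$

which follows in one line from the ratio definition of conditional probability — part of why it's natural to treat it as certain.)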
Thanks for the article. I probably didn't get much of it, but since BB liked it, I assume it's fairly good.
My follow-up questions to you are:
A. What's your credence in Bayes' Theorem being true, and
B. What's the evidential probability of Bayes' Theorem being true?
Additionally, you say that
> if we neglected to think about outright beliefs, it would be hard to understand how I could condition on a subjectively impossible proposition: the conditional credences are undefined, as they would require dividing by zero.
I'm not sure why you are treating belief as necessarily binary. Wouldn't a much simpler solution to the paradox be to acknowledge that you can't/shouldn't have beliefs that anything is 100% or 0% likely to be true?
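To spell out the division-by-zero point being quoted: conditional credence is standardly defined by the ratio formula

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)},$$

which is undefined whenever $P(B) = 0$. My suggestion is just that if no contingent proposition ever gets credence 0 or 1, the ratio is always defined and the paradox doesn't arise.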
(A) and (B) are both one, since Bayes’ theorem is part of (and so entailed by) our knowledge. Of course, we can provisionally set aside our knowledge of Bayes’ theorem, and ask how likely it is given our other knowledge; but since it’s a logical truth, this still ends up being one.
Here’s one way to get around this: what’s my credence in the thing (de dicto) which looks and smells like Bayes’ theorem? I would guess that this is below one, but high enough that any number I try to put on it won’t be meaningful.
For the second part of your question: treating beliefs as binary does end up being simpler! If you aren’t willing to give substantive claims probability one, you can’t update on anything (since the probability of anything conditional on itself is one, if defined). This leaves you with no well-motivated way to revise your beliefs, which was what we wanted out of the formalism in the first place! (If you’re absolutely committed to putting confidence on a 0–1 scale where each end means infinite dogmatism, you may want to look into Jeffrey conditionalization; but I think it’s not well-motivated and ultimately gives a worse theory.)
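To spell out why: learning $E$ by conditionalization means adopting

$$P_{\text{new}}(A) = P(A \mid E) \quad \text{for all } A,$$

which immediately gives $P_{\text{new}}(E) = P(E \mid E) = 1$ — so any update by conditionalization assigns probability one to whatever was learned. Jeffrey conditionalization weakens this to

$$P_{\text{new}}(A) = P(A \mid E)\,P_{\text{new}}(E) + P(A \mid \neg E)\,P_{\text{new}}(\neg E),$$

which lets you shift $P_{\text{new}}(E)$ short of one, but, as I said, I don’t think the rule is well-motivated.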
I mean, the concept of a thing called "you" with a thing called "priors on anticipated future world states" might just not be a coherent concept, and all this talk about theories of anthropics might just be averting our eyes from this fact. I don't know if I agree with this, but I think more people should consider it. I'm pretty sure that SIA is better than SSA, though.
This doesn't rebut the overall thrust of your argument, but in example 1, I think the universe does become larger over time, if I understand how "large" is being used here. This is trivially true because of cosmic expansion; but also, since we're talking about the universe, we can talk about size in spacetime, and that would get larger as the time dimension extends.
Even if the universe doesn't expand, the CCer becomes more confident in a bigger universe just because of the passing of time.
The size of the universe in spacetime doesn't increase over time! That assumes the growing block theory of temporal ontology, which is definitely false. (It is actually logically impossible, but even ignoring that, it's inconsistent with modern physics.)
Further evidence for the Distinguished Professor of Anthropics hypothesis