Compartmentalized Conditionalization Considered Confused
It might be better than SSA, but still...
0
There are two main theories of anthropic reasoning: one called the self-indication assumption, the other called the self-sampling assumption. In the interest of non-partisanship, I won’t tell you that the self-indication assumption is obviously correct and infinitely more plausible than the self-sampling assumption. But there’s a third somewhat popular theory that has so far escaped my fire—it’s called compartmentalized conditionalization.
Compartmentalized conditionalization says that when doing anthropic reasoning, you should treat the relevant data, when you have some experiences, as simply “X was experienced at some point by someone.” So if, for example, you see a red wall for ten minutes, and that’s your only experience, theories are made more probable in proportion to the probability that, if they were true, someone would see a red wall. In other words, theories get a boost in probability proportional to the odds that your exact experiences would be had by someone at some point if they were true.
Compartmentalized conditionalization has a certain intuitive allure. There’s something attractive about it. When I was first thinking about anthropics, I think I was sympathetic to it. But I now think that it is utterly, absurdly, off-the-wall crazy. It might even be worse than SSA, and it relies, I think, on a conceptual confusion: a failure to understand de se evidence. Even if two theories both predict some event will be experienced at some point, one still might make it likelier that I’d experience it.
1
Like SIA, compartmentalized conditionalization loves big universes, albeit not as much. If the universe is very large, it becomes increasingly likely that someone would have my exact sequence of experiences. More troublingly, it likes them more and more over time. As you have more experiences, CC gives you reason to think that the universe is bigger, because only a bigger universe will likely have your extra experiences.
Suppose that a universe googol meters across is likely to contain the exact experiences of a one-year-old, but has only a 1 in googol chance of containing the exact experiences of a two-year-old. A compartmentalized conditionalizer who is one year old will have a decent credence in a universe that’s only googol meters across (this is a very precocious one-year-old). After two years, however, they’ll have a credence of near zero in it—there’s only a 1 in googol chance that it would have their richer combination of experiences.
But this is super weird. It shouldn’t be that an inevitable consequence of getting older is that you come to think the universe is bigger! What? It also leaves the compartmentalized conditionalizer open to a money pump—assume their non-anthropic prior in the universe that’s only googol meters across is 1/2. You sell them, for 30 cents, a ticket that pays out a dollar if the universe is only googol meters across. Then, after a year passes, you buy it back for one cent. Both of those deals are good according to their credences, because their credences predictably change! But this means—and this is a money pump one could actually set up with a bit of math—you could, without too much difficulty, offer a series of bets to a CCer that are guaranteed to result in them losing money.
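To make the arithmetic explicit, here is a minimal sketch of that pump in Python; the near-zero later credence is just a stand-in for the 1-in-googol figure, and all that matters is that it falls below one cent on the dollar.

```python
# Minimal sketch of the money pump against a compartmentalized conditionalizer.
# Illustrative numbers: the later credence stands in for "1 in googol".

credence_small_now = 0.5       # CCer's current credence that the universe is small
credence_small_later = 1e-100  # their credence after a year of extra experiences

# Deal 1: they buy, for $0.30, a ticket paying $1 if the universe is small.
# Expected value at their current credence is $0.50, so they accept.
assert credence_small_now * 1.00 > 0.30

# Deal 2 (a year later): they sell the same ticket back for $0.01.
# Expected value at their new credence is ~$0.00, so they accept again.
assert credence_small_later * 1.00 < 0.01

# Whatever the universe is actually like, they paid $0.30 and got back $0.01
# for the same ticket: a guaranteed loss of $0.29.
print(f"Guaranteed loss: ${0.30 - 0.01:.2f}")
```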
You might say that you should be a compartmentalized conditionalizer in terms of what you expect your life to be like in the future. Thus, if you expect to live to 100, you should favor theories that make it likelier that the universe would have people with your exact 100 years of experience. This view, however, is problematic. It implies that after being diagnosed with a terminal illness, you should think the universe is smaller than before, because it means you’ll likely have fewer experiences. It’s also circular—suppose that someone will execute you unless the universe is sufficiently big. In this case, you believe that you’ll survive only if you believe the universe is very big, but you believe the universe is very big only if you believe you’ll survive. The view thus results in judgments being underdetermined and circular in a problematic way.
We’re going to see a lot of this. CCers burn through money like they’re gambling at Vegas. Only in this case, unlike the Vegas gambler, they have no hope of winning.
2
Suppose that there are two theories. On the first, the universe is googolplex meters across. On the second, the universe is Rayo’s number meters across (Rayo’s number is a stupendously large number, the largest of the numbers involved in this thought experiment, and arguably the biggest sort of number any human has ever named (obviously you can add one to it, but that would be the same sort of number)). Up until this point, you’ve had experiences that are very likely to be had whether the universe is googolplex meters across or Rayo’s number meters across. However, you have a device in your room that will generate a random number between 1 and TREE(3) (TREE(3) is way bigger than googolplex but much smaller than Rayo’s number).
Suppose that you start out with a prior of 1 in a trillion on the theory that the universe is Rayo’s number meters across. Then you look at the random number generator. Because it’s very unlikely that someone would have the experience of seeing that particular number if the universe were only googolplex meters across, after viewing the random number generator produce its number, your credence in the Rayo’s-number universe should go up by a factor of roughly TREE(3) over googolplex, which is still roughly TREE(3). Because TREE(3) is vastly greater than googolplex, whichever number you see is unlikely to be seen by anyone in the smaller universe.
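Spelling the update out in odds form, where n is the number the device displays, and assuming (roughly) that the smaller universe contains about a googolplex observers, so the chance that anyone in it sees n is about googolplex/TREE(3), while the Rayo-sized universe is big enough that the chance is about 1:

$$
\frac{P(\text{Rayo} \mid n)}{P(\text{googolplex} \mid n)} = \frac{P(\text{Rayo})}{P(\text{googolplex})}\cdot\frac{P(n \mid \text{Rayo})}{P(n \mid \text{googolplex})} \approx \frac{1}{10^{12}}\cdot\frac{1}{\text{googolplex}/\mathrm{TREE}(3)} = \frac{\mathrm{TREE}(3)}{10^{12}\cdot\text{googolplex}},
$$

which is an overwhelmingly large number, so the CCer becomes all but certain of the Rayo-sized universe the moment they glance at the device.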
This is very weird! It shouldn’t be that your credences change wildly just because you looked at a random number generator. The view implies that had you not looked at it, your credence in the bigger universe should be lower by a factor of well over a googolplex. A rational person shouldn’t be able to change their credences simply by looking at a number produced by a random number generator. A rational agent shouldn’t be almost certain—at roughly trillion-to-one odds—that the universe is only googolplex meters across if they know that in two minutes, when they look at a very powerful random number generator, they’ll be almost completely certain that it isn’t!!
This is, of course, vulnerable to the same sorts of money pumps as before.
Note that this doesn’t just apply in this weird far-off case. If you somehow found a genuinely random number generator that could output any of infinitely many numbers, then after using it you should expect to become all but certain that there’s an infinite multiverse, even if you previously thought the odds of that were only 1 in Rayo’s number.
3
Suppose that there are infinite people in Hilbert’s hotel (that’s an infinitely big hotel—if you don’t like infinity, replace it with a number big enough to do the trick). Each of them will roll a die with a million sides (that’s the singular of dice, for the record—no need to worry, everyone’s fine).
There are two hypotheses with equal prior probability. The first is that every die is rigged to definitely get 438,947. The second is that every die is fair. You roll a die and it comes up 438,947. What should your credence be that the die is fair?
CC answers: still 1/2. After all, whether or not the dice are rigged, someone will have your exact sequence of experiences culminating in rolling a 438,947. Before you’ve rolled the die, on CC, you reason that the chance you’d get a 438,947 if the dice are rigged is 100%, while it’s one in a million if the dice aren’t rigged. But then, after getting 438,947, CC says (crazily!) that your credence shouldn’t change.
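For contrast, here is a minimal sketch of the ordinary Bayesian update that the CCer refuses to make, using the numbers from the setup:

```python
# Ordinary conditionalization on "I rolled 438,947", starting from equal priors.

prior_rigged = prior_fair = 0.5
p_roll_given_rigged = 1.0          # rigged dice always land on 438,947
p_roll_given_fair = 1 / 1_000_000  # a fair million-sided die

posterior_rigged = prior_rigged * p_roll_given_rigged / (
    prior_rigged * p_roll_given_rigged + prior_fair * p_roll_given_fair
)
print(posterior_rigged)  # ~0.999999: the roll is overwhelming evidence of rigging

# CC instead conditions on "someone, somewhere, rolls 438,947", which both
# hypotheses guarantee in Hilbert's hotel, so it leaves the credence at 0.5.
```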
CC gives you strong reason to think the universe is big so that it has every possible experience. But once you think that, it tells you nothing else about the universe. Theories on which a greater share of people have some particular experience aren’t favored at all! Once you’re in Hilbert’s hotel, getting an improbable dice roll gives you no evidence that the die was rigged to get that value. This is very strange!
4
Ruth Weintraub has a very convincing article that argues for thirding in sleeping beauty. We can make a similar argument against CC. Suppose that a coin is flipped. If it comes up heads, a person wakes up once in a red room and then an infinite number of times in a green room. Assume that each day in the green room, their experiences are identical. If the coin comes up tails, they wake up once in a green room and then an infinite number of days in a red room, on each day having the exact same experience.
On CC, upon waking up in a red room, you should remain indifferent. Both hypotheses predict that experience would be had at some point, so waking up in a red room doesn’t give you any evidence either way. This is crazy.
It gets even worse. Suppose that prior to seeing your room color, you’re in a dark room. Upon waking up, when it’s dark, a CCer will reason that if the coin came up tails the odds are zero or infinitesimal that they’re currently in a green room and 100% that they’re in a red room; if the coin came up heads, the odds are zero or infinitesimal that they’re in a red room and 100% that they’re in a green room (for tails means infinite red wakeups and heads means infinite green wakeups).
Then, after the light turns on and you see you’re in a green room, on CC, though your being in a green room is infinitely more strongly predicted on heads than on tails, you should remain 50/50. This is very crazy!!!! (!!!!)^5↑↑↑↑5 (REALLY, IT IS VERY CRAZY!!!)
The same thing can be applied across lives, not just within a single life. Suppose a coin is flipped. If it comes up heads, one copy of me wakes up in a red room and infinite copies wake up in a green room, all with identical experiences. If it comes up tails, one copy wakes up in a green room and infinite copies wake up in a red room. If they’re in the dark, they start out indifferent, and then remain indifferent after they learn that they’re in a green room. This is very crazy!
It’s also vulnerable to a money pump. If you sell each person, for 30 cents, a ticket that gives them a dollar if their room color is not the one that has infinite copies, you’ll pay out a single dollar and collect infinitely many 30-cent payments. Pretty good deal!
5
I think CCers should probably be modal realists. Modal realists think that every possible world concretely exists. Modal realism, according to CC, always becomes more probable as one gets older, for modal realism guarantees that someone will have your exact experiences. This is especially problematic because modal realism undermines induction.
6
CC is quite vulnerable to a money pump. To see this, let’s consider the sleeping beauty problem. In it, a person is put to sleep, and then a coin is flipped. If it comes up tails, they wake up twice, each time with no memories. If it comes up heads, they wake up once with no memories. Assume that, up until 3:00 pm, their experiences will be exactly the same on every awakening.
Then, at 3:00 pm, they’ll learn what the present day is. They’ll either learn that it’s day 1 (Monday) or day 2 (Tuesday). CC recommends being at 1/2 on the coin coming up heads before 3:00, because both theories predict your experiences would be had (on Monday at least). Then, at 3:00, if you find out that it’s day 2, you’ll be certain of tails, while if it’s Monday, you’ll remain indifferent.
Now, this is weird enough on its own. For one, it egregiously violates Bayes’ theorem (and not in some weird nitpicky way—in a quite flagrant way). Before 3, you reason that it being Monday today is twice as likely if the coin came up heads. However, after finding out that it’s Monday, your credence doesn’t change. This violates Bayes’ theorem—if some event occurs that’s twice as likely on some theory than another, you should get a 2 to 1 update in favor of the first theory.
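Here is the calculation CC is flouting, using the CCer’s own pre-3:00 numbers (heads makes it certain that today is Monday, while tails makes it a 50/50 matter which day it is):

$$
P(\text{heads}\mid\text{Monday}) = \frac{P(\text{Monday}\mid\text{heads})\,P(\text{heads})}{P(\text{Monday}\mid\text{heads})\,P(\text{heads}) + P(\text{Monday}\mid\text{tails})\,P(\text{tails})} = \frac{1\cdot\tfrac{1}{2}}{1\cdot\tfrac{1}{2} + \tfrac{1}{2}\cdot\tfrac{1}{2}} = \frac{2}{3}.
$$

So learning that it’s Monday should move the CCer to 2/3 on heads; CC instead leaves them at 1/2.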
Second, it violates conservation of evidence. You shouldn’t expect your credences to predictably change. You shouldn’t, before observing some evidence, expect your credence afterward to be higher on average than it is now—for if you know you’ll have good future evidence for P, you should already be moved by that evidence to think P is likelier. This view violates that constraint—your credence in the coin having come up tails either goes up or stays the same. It can never go down.
Third, it’s vulnerable to money pumps. Suppose that each time the person wakes up before 3:00 pm, you sell them, for 49 cents, a ticket that pays out a dollar if the coin came up heads. By their lights that’s a good deal, since at a credence of 1/2 the ticket is worth 50 cents. Then, at 3:00, if it’s Monday, you buy the ticket back for 51 cents, which they also accept, since they’re still at 1/2. If it’s Tuesday, the coin came up tails, so the ticket they’re holding pays nothing. If the coin came up heads, you’re out 2 cents; if it came up tails, you’re up about 47 cents (down 2 on Monday, up 49 on Tuesday). So the CCer hands you money on average, and your expected winnings grow arbitrarily great as the process is repeated.
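Here is a minimal sketch of the bookie’s ledger on that reading of the pump (0.49 to sell the ticket, 0.51 to buy it back on Monday):

```python
# Bookie's cash flow per round against the CCer, on the reading above.

SELL = 0.49     # CCer buys a heads-ticket before 3:00 on each waking day
BUYBACK = 0.51  # bookie repurchases the ticket at 3:00 if it's Monday

def bookie_profit(coin: str) -> float:
    profit = 0.0
    days = ["Monday"] if coin == "heads" else ["Monday", "Tuesday"]
    for day in days:
        profit += SELL         # CCer buys the ticket (worth $0.50 by their credences)
        if day == "Monday":
            profit -= BUYBACK  # bookie buys it back; the CCer is still at 1/2
        # On Tuesday the coin was tails, so the ticket the CCer keeps pays nothing.
    return profit

print(round(bookie_profit("heads"), 2))  # -0.02
print(round(bookie_profit("tails"), 2))  #  0.47
print(round(0.5 * bookie_profit("heads") + 0.5 * bookie_profit("tails"), 3))  # 0.225 expected per round
```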
7
Suppose physicists discover two things. First, the universe is infinitely large (or big enough to do the trick, if you don’t like infinities). Second, there’s a theory according to which, every second, 99.999% of galaxies are destroyed by powerful gamma ray bursts. That theory has a prior of 50%, before considering the anthropic evidence.
It seems reasonable to think the theory is almost certainly false. After all, if 99.999% of galaxies are destroyed every second, it’s almost guaranteed that your galaxy would have been destroyed by now. But on CC, given that your experiences would still be experienced somewhere, you get no evidence against it. Similarly, if a theory says that in Hilbert’s hotel 99.999% of people die every second, the fact that you’ve lived 1 billion years without dying gives you precisely no evidence that the theory that almost everyone constantly dies is false.
This isn’t unique to dying, for the record. If one theory says that, every second, 99.9999% of people in an infinitely large universe start seeing purple bunnies in their visual fields, then on CC, the fact that a billion years have passed without you seeing a purple bunny gives you no evidence at all that the theory is false. This is absurd!
8
As I said before, CC has a certain intuitive appeal. But it leads to ridiculous swings in judgment, violations of conservation of evidence, violations of Bayesian reasoning, insane claims about credences, and insane claims about the sorts of things that shouldn’t shift one’s credences. While there’s something to like about it, it’s infinitely less plausible than its cousin, the self-indication assumption. While it’s defective in ways quite different from SSA, it’s still quite defective, and it fails to be a remotely plausible alternative to SIA.
This is a good argument for SIA. Even if you find it implausible in certain ways, given how outrageously defective the alternatives are, there’s strong reason to accept it.
This is quite interesting; I was unfamiliar with CC. Like with other views in anthropics, I have no idea whether I should believe it. You are right that it has some intuitive appeal, though some of your arguments against it are also convincing.
- In section 0 you say "even if two theories both predict some event will be experienced at some point, one still might make it likelier that I’d experience it". But unless closed individualism is correct there's no fundamental distinction between the experience existing and it being experienced by some given person, right? Or am I misinterpreting your statement?
- I don't see why the exact sequence of experiences is relevant in section 1. The experience I'm having right now is consistent with a multitude of possible past paths of experience. CC doesn't need to condition on the probability that my exact experience sequence would exist, just the probability that my present experience would exist. This might still change over time causing the betting issue you highlighted, but it would change idiosyncratically based on the uniqueness of your current experience.
- Your reasoning in section 2 makes sense to me, but it makes me think of another concern. If there are an infinite number of possible experiences, then will a finite world have any experiences that should be expected to exist with probability greater than 0? The answer seems like it would depend on whether experience is discrete or continuous, or something like that.
- The critiques in points 3/4 seem very strong to me.
- I'm kinda lost in section 6. You say "Before 3, you reason that it being Monday today is twice as likely if the coin came up heads." But if the coin came up heads, it is definitely Monday! Did you mean "Before 3, you reason that the coin having come up heads is twice as likely if it is Monday"?
The perspective-based reasoning theory of anthropics also seems interesting. https://www.sleepingbeautyproblem.com/