You Need Self-Locating Evidence!
Probabilities aren't just in the world!
People often think probabilities and beliefs merely concern how the world is.
I think this is wrong.
Self-locating probabilities are probabilities that concern one’s place in the world, rather than what the world is like. For example, imagine that there is one clone of me in a dark and murky bunker in California and another in Paris (what rotten luck). I have no special evidence concerning which one I am (for example, there are no nearby croissants or people surrendering). I should think there’s a 50% chance that I am the one in California.
This evidence is self-locating because it’s not about what the world is like. I already know what the world is like. I know that there is one copy of me in California and another in Paris. What I’m uncertain about is which one I am. That’s what self-locating information concerns: which of the people you are, not what the world is like.
But a number of people have suggested that self-locating evidence is sort of fake. They claim that it doesn’t make any sense to wonder who I am once I know what the world is like. After all, there’s a copy of me in each situation. What can I possibly be wondering about if not what the world is like?
In this article, I’ll explain why self-locating evidence is real.
1 The core intuition
Return to the earlier example with California and Paris. But now let’s add a twist. Imagine that the copy in California will be brutally tortured at the end of the day, and the copy in Paris will be given cookies.
It seems perfectly reasonable to wonder whether I’ll be the person who is tortured! Even though I know that one copy of me will be tortured, and another copy won’t, I don’t know whether I am the one who will be tortured or not. It just seems like there’s clearly a fact of the matter, in this case, about whether I’ll be tortured later today. That’s something I can sensibly wonder about without being conceptually confused.
Now, the self-locating evidence skeptics will deny that this is a coherent question. But why? It’s never been clear to me what’s supposed to be off about wondering which person I am, even after I know what the world is like. To give an example from Lewis, suppose that there are two gods in the world: one on the left and one on the right. They each know every objective fact about the world. It seems they can coherently wonder: am I the one on the left or the one on the right?
It makes sense to wonder about something if the thing is capable of being true or false. It makes sense to wonder whether God exists because there is a fact of the matter. But there’s also a fact of the matter about which person you are. Either you’re the guy in California or the guy in Paris!
Similarly, it is sensible to assign credences to things that might be true. It is sensible to have a credence attached to the proposition “Vance will be the next president,” because he might be or might not be. But it makes no sense to have a credence attached to “shut the door,” because that can’t be true or false. But self-locating facts have truth values. So why can’t you assign credences to them?
2 Devastating examples
Vincent Conitzer has a paper called “A Devastating Example for the Halfer Rule” (though really he’s only talking about the version of the halfer rule that doesn’t assign self-locating credences). I think this example pretty clearly settles the debate and shows you have to take self-locating evidence into account.
Here’s the idea: suppose that you’re woken up on two consecutive days. Each day, a coin gets flipped and you see its result; on the second day, you have no memory of the first. After waking up, you see the coinflip (let’s say it is heads). How likely should you think it is that the coin came up heads once and tails once?
The obvious answer is 1/2. If you flip two fair coins, the odds of getting heads once and tails once are 1/2. Seeing the result of today’s coinflip doesn’t tell you anything new about that, because you knew you had to see either heads or tails, and neither one was special. So, having seen heads, it seems you should think there’s a 1/2 chance the coin came up heads twice, a 1/4 chance it came up tails then heads, and a 1/4 chance it came up heads then tails.
The problem is that thinking this requires self-locating evidence.
As a self-locating evidence affirmer, I have a very straightforward explanation of what happened. If the coin comes up heads twice, the odds that you’d see it come up heads today are 100%. In contrast, if it comes up heads only once, the odds are 50%. So HH started out with probability 1/4, and then you learned something (that today’s flip is heads) which is twice as likely given two heads as given one, so HH is now probability 1/2. On this account, seeing today’s coin come up heads raises the likelihood that both coins came up heads by a factor of 2 and lowers the likelihood that both came up tails to zero. Thus, we can naturally hold on to the obvious answer of 1/2.
But you can’t do this if you reject self-locating evidence. The four possibilities are: HH, HT, TH, TT. They each have the same prior probabilities. Seeing today’s coin came up heads eliminates TT. On the view that rejects self-locating evidence, it simply makes no sense to say that today’s coinflip is likelier to be heads if both coins come up heads. After all, it makes no sense to attach probabilities to today’s coinflip coming up some way, if you already know what the world is like—if, in other words, you already know that the coin comes up heads at least once.
Thus, absent self-locating evidence, TT is eliminated and each of the three remaining options gets 1/3 of the probability. That means the probability of HH is 1/3, not 1/2. Because the view that rejects self-locating evidence can’t update in favor of HH based on today’s flip being heads, it has to fall back on the priors, and on the priors most of the worlds containing at least one heads are worlds with one heads and one tails.
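To check this arithmetic, here’s a quick Monte Carlo sketch (my own illustration in Python, not from Conitzer’s paper): sample both flips plus a uniformly random “today,” condition on today’s flip coming up heads, and tally the outcomes.

```python
import random

random.seed(0)
trials = 200_000
hh = mixed = total = 0

for _ in range(trials):
    flips = (random.choice("HT"), random.choice("HT"))  # day 1, day 2
    today = random.randrange(2)   # which waking this observation is, chosen uniformly
    if flips[today] == "H":       # condition on seeing heads today
        total += 1
        if flips == ("H", "H"):
            hh += 1
        else:                     # one heads, one tails (TT is impossible here)
            mixed += 1

print(f"P(two heads | saw heads today) = {hh / total:.3f}")    # ≈ 0.5
print(f"P(one each  | saw heads today) = {mixed / total:.3f}") # ≈ 0.5
```

Conditioning on the self-locating observation gives HH half the probability, matching the Bayesian update described above; merely crossing off TT and splitting the remainder evenly would give it 1/3.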
Now, maybe you just bite the bullet and say that the odds really are 1/3. What’s wrong with that?
The first big problem is that it’s totally crazy. It shouldn’t be that, in the above case, you are guaranteed, after seeing the result of the coinflip, to think at 2/3 odds that the coin came up heads once and tails once. That means you are guaranteed to believe, at 2/3 odds, in something that you know happens only half the time.
It also violates a principle called reflection. Reflection says that your credence shouldn’t predictably change: it shouldn’t be, for example, that your credence is 1/2 now while you know that later it will be 3/4. If you know there’s evidence out there that will raise it to 3/4, it should be 3/4 now, in anticipation of that evidence.
Figuring out exactly how to formalize reflection is a bit tricky, but this example seems pretty egregious. The person is guaranteed after waking up, no matter what day it is, to adopt a credence of 2/3 in the coin coming up heads once and tails once.
It also means that one can be money-pumped. Because your credences after waking up will inevitably differ from your credences beforehand, after waking up you have reason to pay to cancel a bet you made earlier at 1/2 credence (I’ll leave working out the exact contours as an exercise for the reader).
This also leads to the following even more insane result that Conitzer describes:
To make matters yet worse for the Halfer Rule, consider the following twist to the two-coins example. On both Monday and Tuesday, after Beauty has observed the coin toss outcome and been awake for a little while longer, the experimenter tells her what day it is. Say she observed Heads and was then told (a bit later) that it is Monday. Now only two worlds survive elimination: HH and HT. The Halfer Rule will assign each of them credence 1/2, resulting in a credence of 1/2 that both coins came up the same. But this is yet another violation of the Reflection Principle: after seeing the outcome of the coin toss but before learning what day it is, Beauty, if she follows the Halfer Rule, places credence 1/3 in the event that the coins came up the same, but she also knows that once she is told what day it is, in either case, she will shift her credence to 1/2. This is perhaps the most egregious violation of the Reflection Principle that we have encountered, because in this case she is not put to sleep and does not have memories erased as she transitions from one credence to another.
And there are other similar problems that are, if anything, even worse. Suppose that you wake up a million times. Each day, you roll a million-sided die. There are two theories (with equal initial likelihood):
1. The die is rigged so that it always comes up the same number (though the hypothesis doesn’t specify which number that is).
2. The die isn’t rigged, so it’s really random.
Suppose you roll the die and it comes up 498,312. On the view that rejects self-locating evidence, you should now think theory 2 is roughly a million times likelier than theory 1. After all, given theory 1, the odds are one in a million that the die would be rigged to always come up 498,312. In contrast, on theory 2, it’s quite likely (about 1 − 1/e ≈ 63%) that on at least one of the million days it comes up 498,312. So after rolling the die, no matter what number comes up, you go from being about equally confident in the two theories to being nearly certain of theory 2.
Believing in self-locating evidence gives you an extremely simple way around this. It’s true that the probability that the die is rigged to always come up 498,312 has a prior of one in a million given theory 1. But if it’s rigged to always come up 498,312, then the odds are 100% that today you’d see the die come up 498,312, rather than one in a million. So the probabilities exactly cancel out, and you’re left with a credence in theory 1 of 1/2.
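The cancellation can be verified in a few lines of Python (a sketch I’ve added, using the numbers from the example above):

```python
N = 1_000_000  # faces on the die, and also the number of wakings

# With self-locating evidence: compare how likely TODAY'S roll is to show
# the specific number you saw (say, 498,312), under each theory.
lik_rigged = (1 / N) * 1.0  # 1/N chance it's rigged to that number; then today shows it for sure
lik_fair = 1 / N            # a fair roll shows any given number with chance 1/N
posterior_rigged = lik_rigged / (lik_rigged + lik_fair)  # equal priors cancel out
print(posterior_rigged)  # 0.5 -- the 1/N factors cancel exactly

# Without self-locating evidence: you can only condition on the world-level
# fact that SOME waking shows that number.
p_world_rigged = 1 / N                # the die is rigged to that exact number
p_world_fair = 1 - (1 - 1 / N) ** N   # a fair die shows it at least once: about 1 - 1/e
print(p_world_fair / p_world_rigged)  # roughly 632,000 : 1 in favor of "fair"
```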
For more examples like this, see the examples I give here plus the skepticism one I give here. They all show that you are in extremely serious trouble absent self-locating evidence. These are very serious costs to pay for a view that, as best I can tell, has almost no upside. For this reason, I think assigning self-locating credences is rational.
Appendix: Rejecting center indifference
There’s a principle called center indifference, which says you should assign equal probability to being each of two people you might be. You get a similar problem if you reject it. To see this, imagine that there are two people, person 1 and person 2, each an exact clone of you. However, after five minutes, person 1 has an 80% chance of getting a red shirt and a 20% chance of getting a blue shirt, while person 2 has a 100% chance of getting a blue shirt.
Suppose, to make the math simple, that each person, on account of rejecting center indifference, assigns a 90% prior probability to being person 1. Now let’s consider the following possible events:
They both have blue shirts (this has a 20% chance of happening). Each person, upon seeing their own blue shirt, thinks there’s a 9/14 chance that they’re person 1 (they started at 90%, then got a 5:1 update against, since a blue shirt is 5x likelier given that they’re person 2). Conditional on being person 1, there’s a 100% chance both have blue shirts; conditional on being person 2, there’s a 20% chance. So each person ends up with credence 9/14 × 100% + 5/14 × 20% = 5/7 ≈ 71.4% that both people have blue shirts.
One has a red shirt and the other a blue shirt (this has an 80% chance of happening). The one with a red shirt assigns probability 0 to both having blue shirts. The other, by the earlier logic, thinks with probability 9/14 that they’re person 1 (in which case both have blue shirts) and with probability 5/14 that they’re person 2 (in which case there’s only a 20% chance both have blue shirts), for a credence of 5/7. Averaging the two people, they think at odds of 5/14 ≈ 35.7% that they both have blue shirts.
Now average their final credences. 80% of the time, the average credence that they both have blue shirts is ≈35.7%, and 20% of the time it’s ≈71.4%. Thus, on average, they conclude with credence 3/7 ≈ 42.9% that they both have a blue shirt, despite the fact that they both have a blue shirt only 20% of the time. In fact, even when one has a red shirt and one has a blue shirt, they on average assign more than 20% odds to both having blue shirts.
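To double-check the appendix arithmetic, here’s an exact computation in Python (my own sketch, using the numbers stipulated above):

```python
from fractions import Fraction as F

prior_p1 = F(9, 10)   # stipulated non-indifferent prior of being person 1
blue_if_p1 = F(1, 5)  # person 1: 80% red, 20% blue
blue_if_p2 = F(1, 1)  # person 2: always blue

# A blue-shirted clone's posterior that they are person 1:
post_p1 = (prior_p1 * blue_if_p1) / (prior_p1 * blue_if_p1 + (1 - prior_p1) * blue_if_p2)

# That clone's credence that BOTH shirts are blue:
cred_both = post_p1 * 1 + (1 - post_p1) * blue_if_p1

# Average final credence that both shirts are blue, across the two cases:
#   both blue (prob 1/5):  both clones answer cred_both
#   red/blue  (prob 4/5):  red clone answers 0, blue clone answers cred_both
avg = F(1, 5) * cred_both + F(4, 5) * (0 + cred_both) / 2

print(post_p1, cred_both, avg)  # 9/14 5/7 3/7
```

The average credence of 3/7 ≈ 42.9% is more than double the 20% frequency with which both shirts are actually blue.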
Thus, you get similarly bad calibration if you reject center indifference.


Everything you're saying about the practical epistemological matters makes sense to me, but I'm curious as to whether or not you think there's some essential metaphysical dividing line between "facts about the world" and "self-locating evidence," or if it's just a conventional designation given our limited understanding of consciousness and personal identity. Like, presumably, there are facts about who is who, and those facts are facts about both minds and bodies in the world - do you think there's any way self-locating evidence, even in theory, could be recapitulated as evidence regarding the world? It seems weird to me to think that consciousnesses could be so property-free that there'd be no way to distinguish them externally, but I also have no idea what you could say to identify *this soul* versus that one, or whatever.
I'm not sure if this is sufficient to prove that probabilities aren't in the world; I think it just shows that people who don't believe in self-locating uncertainty have a poor model of personal identity.
Basically, I think in your examples one ought to be able to identify third-person describable facts about the world that pick out which version of "you" you are--basically the idea behind fully non-indexical conditioning: "am I the one whose immediate environment looks like (long tedious description of minute details), in California or in Paris?"
You might argue that even if the immediate environments are fully identical in every measurable detail, the question of "which one am I?" can still arise... But I'm not sure about that. The natural counterexamples ("am I the one who'll be tortured?") involve future divergence between the two versions of you... But I think a natural idea is this: given that regions of spacetime are entangled with neighbouring regions in a way that scales with spacetime distance, if two regions have different spacetime futures (for example, at one location a clone is tortured in the future light-cone and at the other location the clone isn't tortured), those future differences ought to show up as slight differences in measurement outcomes if you were to make a whole bunch of random measurements now.
That is: what makes it meaningful to say things like "*this* clone is in California but *that* clone is in Paris", or "*this* clone will be tortured but *that* one won't", is that there is something by which to define *this* and *that*--and there clearly must be such a thing, otherwise one is asserting that someone can be in Paris and in California simultaneously, or that their future both will and won't contain torture. But whatever it is that lets us say that one clone is different from the other ought to be a physical difference, and IMO there are good reasons to think those physical differences should be revealable by performing random measurements--any failure of correlation between the two then turns up a difference that can be used as a third-person description to disambiguate them.
And if there are no uncorrelated measurements they can make, I begin to doubt whether it's physically possible for them to have different futures, or for a description of one of them being in one place and the other somewhere else to be accurate, in which case I'm not sure your counterarguments are available.