Disproving The Self-Sampling Assumption Using Horrifying Human-Lobster Combos
It is, indeed, disproved
Anthropic reasoning is the study of how one reasons about one’s own existence. From the fact that I exist, I get evidence that my mom got pregnant, for example. Similarly, from the fact that I exist in California, all else equal, I get evidence that a larger share of the world’s population is in California—it would be a big coincidence if there were only a few thousand people in California and I happened to be one of the lucky (or, well, unlucky) ones. All of this is within the purview of anthropics, though these are the easy cases: some cases get off-the-charts weird.
There are a few major views in anthropics. One of them, which I’ve talked about at such great length that it bored all of my readers, is called the self-indication assumption (SIA), on which your existence gives you evidence that there are more people like you—for example, if I wake up with a red shirt, a theory that says there are a billion red-shirted people predicts my observation a billion times as well as a theory that says there’s only a single red-shirted guy.
There’s another popular view called the self-sampling assumption (SSA—as Carlsmith has ably pointed out, this is ass backwards). On this view, you should think of yourself as in some way randomly drawn from the people that exist. For example, suppose that a coin gets flipped—if it comes up heads, 1 person gets created with a red shirt and 10 with blue shirts. If it comes up tails, just one person gets created with a red shirt. If I get created with a red shirt, SSAers reason: I should think there are 11:1 odds that the coin came up tails, because under heads I’d only have a 1-in-11 chance of being the red-shirted one, while under tails I’d be guaranteed to be. They think that I am, in some sense, randomly drawn from the actual people—and so I should favor theories on which a greater share of people are like me.
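For the arithmetic-inclined, here’s a minimal sketch of both updates in code, assuming a fair coin and taking the reference class to be everyone created (the setup is just the toy case above):

```python
# Bayes math for the coin case above, under SSA and SIA.
# Heads world: 1 red-shirted person + 10 blue-shirted (11 total).
# Tails world: 1 red-shirted person.

prior_heads = prior_tails = 0.5  # fair coin

# SSA: treat yourself as a random draw from your world's people, so the
# probability of observing a red shirt is (red people) / (total people).
ssa_like_heads = 1 / 11
ssa_like_tails = 1 / 1
ssa_odds_tails = (prior_tails * ssa_like_tails) / (prior_heads * ssa_like_heads)
print(f"SSA odds, tails:heads = {ssa_odds_tails:.0f}:1")  # 11:1

# SIA: weight each world by how many people like you (red-shirted) it
# contains. Both worlds contain exactly one, so SIA is indifferent here.
sia_like_heads = 1
sia_like_tails = 1
sia_odds_tails = (prior_tails * sia_like_tails) / (prior_heads * sia_like_heads)
print(f"SIA odds, tails:heads = {sia_odds_tails:.0f}:1")  # 1:1
```

The contrast is the whole dispute in miniature: SSA cares about your share of the people who actually exist, while SIA cares about how many people like you exist.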
Now, I said that on SSA one should reason as if they’re randomly drawn from the actual people, but that’s a bit oversimplified. Really it says you should reason as if you’re randomly drawn from your reference class, where your reference class is the set of entities that are enough like you to count for your anthropic reasoning (read: made-up bullshit). You need a reference class to figure out which beings you’re acting as if you’re randomly drawn from. For example, does my existence give me strong evidence that there aren’t many insects, because if there were I’d probably be an insect, the way SSAers say, in the above case, that my existence with a red shirt gives me strong evidence that there aren’t a bunch of people with blue shirts? Does it give me strong evidence that there aren’t a bunch of chimpanzees? Neanderthals?
Anyways, it’s old news that SSA’s entire theory is built on this completely artificial, made-up-from-whole-cloth reference class thing, and that there’s no principled way to decide upon a reference class. Here I want to highlight a bigger problem: SSA implies a bizarre sort of sensitivity to small changes.
First, consider the lobster. Now, you know how there’s this thing called evolution by natural selection, where things that are very different from us eventually, over the course of many generations, became us. From this, we learn that a sufficient number of small changes can make a lobster into a person. Let’s say it takes 1 billion changes.
Let L(0) be the lobster, L(1) be the lobster after one change made it more like us, L(2) be the lobster after another change, L(500,000,000) be a horrifying half-human half-lobster, and L(1,000,000,000) be a human. The idea is that the number denotes how many changes have been made to the lobster, where each change makes it more like us, and after a billion changes it becomes exactly like us.
What is the minimum L at which it’s in our reference class? I don’t expect SSAers to have a specific answer, but let’s just say L(700 million). Well now, suppose we discover, deep underground, a bunch of entities somewhere between us and lobsters. Imagine they number in the sextillions—they’re very numerous (a sextillion is a 1 followed by 21 zeroes).
We think, in fact, that these creatures’ development is around L(700 million), though maybe not exactly. Scientists conclude, based on the scientific evidence, that there’s a 99.9% chance that they’re at L(700 million) and a 0.1% chance that they’re at L(699,999,999). Now, I think you should think the scientists are right about this—trust the science, right?
But SSAers must disagree with this. If the creatures are L(700 million), then they’re in our reference class. But then it’s vanishingly unlikely that we’d be humans rather than these lobster hybrids, just as it’s unlikely we’d have red shirts if almost everyone has a blue shirt. So the SSAers end up almost certain that these creatures are L(699,999,999) rather than L(700 million).
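To see just how lopsided the numbers get, here’s a minimal sketch of the SSA update, assuming roughly 8 billion humans, a sextillion hybrids, and the scientists’ 99.9%/0.1% verdict (all figures purely illustrative):

```python
# SSA's update in the hybrid case. If the hybrids are L(700 million), they
# join our reference class, and finding ourselves human becomes a miracle.

humans = 8e9     # roughly the human population (illustrative)
hybrids = 1e21   # a sextillion hybrids

prior_in = 0.999   # scientists: hybrids are L(700 million), in the class
prior_out = 0.001  # hybrids are L(699,999,999), outside the class

# Likelihood of finding yourself human, as a random draw from the class:
like_in = humans / (humans + hybrids)  # ~8e-12 if hybrids are included
like_out = 1.0                         # the class is just humans otherwise

posterior_in = (prior_in * like_in) / (prior_in * like_in + prior_out * like_out)
print(f"SSA's posterior that they're L(700 million): {posterior_in:.1e}")
# ~8.0e-09: SSA flips the scientists' 999:1 verdict into odds of more than
# a hundred million to one the other way, from the armchair.
```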
But this is absurd. The L(700 million) creatures aren’t very different from the L(699,999,999) creatures; they’re practically indistinguishable. We can imagine that they differ by literally one neuron. It’s one thing to think anthropics can tell you something from the armchair, and quite another to confidently proclaim, contrary to the scientific evidence, that some group has 8,584,342 neurons rather than 8,584,343, because if they had 8,584,343 then they’d be in your reference class. Anthropic reasoning shouldn’t give you such confidence that other groups have N neurons rather than N+1 neurons. If you find yourself arguing with the scientists about how many neurons some creatures have, on the grounds that if they had one more neuron then you might have been them, so clearly they have one fewer, something has gone badly wrong!
Now, maybe the SSAers could modify their theory a bit. Perhaps the idea would be that reference-class membership isn’t binary but comes in degrees. Maybe the L(700 million) people are only a bit more in our reference class than the L(699,999,999) people. Now, I have three problems with this.
First of all, it just flatly doesn’t avoid the problems. If L(700 million) people are more in your reference class than L(699,999,999) people, then if there are enough people who are either L(700 million) or L(699,999,999), you should be super confident that they’re L(699,999,999).
Second, we can reconstruct the argument by appealing only to the boundary between those not in the reference class at all and those just barely in it. Clearly lobsters aren’t in our reference class—your existence doesn’t give you evidence that there aren’t many lobsters (and if you think they are, rerun the argument replacing lobsters with something definitely not in our reference class, like amoebas). Suppose L(300 million) things are the lowest group that’s in our reference class to any degree. Well, now if there are enough things near L(300 million), you get extreme evidence that they’re L(299,999,999) rather than L(300 million).
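The arithmetic at the boundary is just as brutal, degrees or no degrees. Here’s a minimal sketch, assuming (purely for illustration) that the borderline creatures count at some small positive degree w while humans count at degree 1:

```python
# The boundary version of the argument, with graded reference classes.
# Creatures at L(300 million) are in the class at some small degree w > 0;
# creatures at L(299,999,999) are out entirely (degree 0).

humans = 8e9       # humans count at full degree 1 (illustrative figure)
creatures = 1e21   # a sextillion borderline creatures

prior_in = 0.999   # scientists: they're L(300 million), barely in the class
prior_out = 0.001  # they're L(299,999,999), outside the class

for w in (1.0, 0.01, 1e-6):  # even a tiny membership degree suffices
    like_in = humans / (humans + w * creatures)  # chance of being human
    like_out = 1.0                               # class is just humans
    posterior_in = (prior_in * like_in) / (prior_in * like_in + prior_out * like_out)
    print(f"degree w={w:g}: posterior they're in-class = {posterior_in:.1e}")

# Prints posteriors of roughly 8e-09, 8e-07, and 8e-03. As long as w > 0,
# enough creatures will swamp the scientists' 999:1 prior.
```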
Third, once you’re doing this, SSA just starts to seem really contrived and made up. It was bad enough that the reference classes were just invented out of whole cloth, with no principled justification, simply because they made sense of our intuitions. But now you’re saying that there are different degrees to which I could have been other people? I can sort of understand saying something like “you could have been the one with a blue shirt.” But how does it make any sense to say I could have been a chimp, but only 5% as much as I could have been Prince Andrew? When you start saying this, SSA starts to look very clearly like made-up nonsense.
I think this argument is pretty much decisive. Beyond biting the bullet, I have no idea what an SSAer would say about it. If anyone wants to coauthor a paper on this, and do things like read boring articles about Borin & Gass’s (2012) account of vagueness, let me know!
I fell asleep on my beach towel and when I woke I was a 6-foot-long half-lobster man.
Maybe I'm missing something here, but it just seems intuitive to me that the reference class should be all relevant beings who I could possibly be if I knew nothing more than the fact that I existed. And then in that case, isn't it just an empirical question as to whether or not a certain being like a chimp or a lobster or a lobster-human hybrid is a candidate for that? I guess it depends on your larger views about personal identity and relative sorts of cognition between species, but I definitely feel like, if I were to somehow wake up with no memories in a sensory deprivation tank that erased all knowledge of the outside world beyond my bare self-awareness, I would think it was possible that I was a woman, or an old man, or really any human being on the planet - whereas I'm pretty confident I could still be sure I wasn't, like, a fish. Couldn't a sufficiently advanced neurobiological model give us at least some guidelines for determining who is and isn't in our reference class in that case?