Freddie deBoer's Temporal Copernican Principle Implies Early Humans Had Psychic Powers
Plus the doomsday argument, the possibility of coming up with a nice reference class, and that you should ignore scientific evidence if it tells you there were a lot of early humans
Freddie deBoer endorses something called the temporal Copernican principle. It's the anthropic version of "nothing ever happens."
The idea, in short, is that humans have been around for a very long time (this part of the argument is right). With luck, we’ll be around for a long time more. If there are tons and tons of generations of humans, then what are the odds that we’d find ourselves in the generation that is the most important? This would be, by Freddie’s logic, a bit like guessing that you’re the first person in human history.
Scott Alexander wrote a reply to Freddie that was, in my view, fairly decisive. He noted that Freddie’s argument loses most of its force when you are more precise about the numbers. Furthermore, it relies on fairly controversial anthropic reasoning. Scott also generously mentioned my blog, in a hilarious passage:
…Freddie is re-inventing anthropic reasoning, a well-known philosophical concept. The reason why the hundreds of academics who have written books and papers about anthropics have never noticed that it disproves transhumanism and the singularity is because Freddie’s version has obvious mistakes that a sophomore philosophy student would know better than to make.
(local Substacker Bentham’s Bulldog is a sophomore philosophy student, and his anthropics mistakes are much more interesting.)
(I'm currently a rising junior, but close enough.)
(I'm not sure if Scott actually disagrees with the self-indication assumption. He seems sympathetic to it, and when we discussed my argument for God at Manifest, he argued that you could get enough people if you adopt Tegmark's view that every mathematical structure is instantiated. I disagree. If this is his only objection (I'm not sure it is), then he wouldn't be disagreeing with the anthropic reasoning itself, but simply about the right non-anthropic conclusion to draw from it.)
(I’ll never get used to having some of my favorite writers write about my writing. It would be a bit like hearing the President mention you in a press conference!)
Scott’s view is that Freddie is, while acting like he’s just applying commonsense, relying on a very controversial bit of anthropic reasoning. It’s well encapsulated by the following meme:
I think it’s worse than that! My position, represented in the form of the same meme, is:
Freddie isn’t just assuming controversial bits of anthropic reasoning: no, it’s much worse than that. The truth is horrifying, appalling, grotesque, an affront to God and to beauty, to the good and the true, to the pure and saintly; a horrifying crime against being itself. Get your kids out of the room before I reveal it.
He’s assuming the self-sampling assumption.
(To new readers who migrated over here from Scott’s generous shoutout, over at Bentham’s newsletter, we treat accusations of a person adopting the self-sampling assumption a bit like how McCarthy treated accusations of communism.)
The self-sampling assumption says that you should reason as if you were randomly selected from among the people that exist. So, if three people get created with a green shirt and one with a blue shirt, and I can’t see my shirt color, I should think there’s a 3/4 chance that I have a green shirt (for I’m equally likely to be any of the people).
This sounds uncontroversial enough, and its implication in the scenario I just described is obvious. But the self-sampling assumption has all sorts of horrifying and counterintuitive consequences. Freddie’s argument inherits all of these problems!
A first nasty implication of SSA is the Doomsday argument. Suppose that humanity continues for a lot of generations, having many trillions of people. If I treat myself as randomly selected from the people that exist, then I’m very early. If there are 11 trillion people, there’s only a 1% chance that I’d find myself in the first 110 billion people—if there are 110 quadrillion people, there’s only a one in a million chance that I’d be in the first 110 billion (remember, SSA says you’re treating yourself as a randomly selected human). So then merely from the fact that I’m alive now, SSA says I should be pretty confident that humanity won’t last too much longer.
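The arithmetic behind these figures is just the chance of a uniform draw landing in an early slice; a minimal sketch, using the post's own population figures:

```python
# SSA treats you as a random draw from all humans who ever live.
# Probability of landing among the first `early` people out of `total`:
def ssa_prob_early(early, total):
    return early / total

# ~110 billion humans have existed so far (the post's figure).
EARLY = 110e9

# If humanity runs to 11 trillion people, being this early is a 1% event.
assert ssa_prob_early(EARLY, 11e12) == 0.01

# If it runs to 110 quadrillion, it's a one-in-a-million event.
assert ssa_prob_early(EARLY, 110e15) == 1e-6
```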
Suppose, for instance, that the end of the world would be determined by a coinflip. If the coin comes up heads, the world would be immediately destroyed. If it comes up tails, powerful aliens would safeguard us, making sure we don’t go extinct until there are at least 110 quadrillion people. SSA holds that before the coin is flipped, you should think, at 1 million to one odds, that it will come up heads. This is nuts!
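Spelled out as a Bayesian update, here is a sketch of the SSA reasoning in the coinflip scenario (the fifty-fifty prior and the population figures are the ones from the setup above):

```python
# Under SSA, the evidence is "I'm among the first ~110 billion humans."
# Heads: the world is destroyed now, so everyone is among the first 110 billion.
# Tails: 110 quadrillion people exist, so being this early has chance 1e-6.
prior_heads, prior_tails = 0.5, 0.5
likelihood_heads = 1.0              # P(this early | heads)
likelihood_tails = 110e9 / 110e15   # P(this early | tails) = 1e-6

posterior_heads = (prior_heads * likelihood_heads) / (
    prior_heads * likelihood_heads + prior_tails * likelihood_tails
)

# The posterior odds of heads over tails come out a million to one.
odds_heads = (prior_heads * likelihood_heads) / (prior_tails * likelihood_tails)
assert round(odds_heads) == 1_000_000
```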
Freddie’s view implies this as well! If you think that as long as there are a lot of generations then it’s surprising we’re in the most important generation, then Freddie’s view would give you a reason to think that heads is likelier than tails—only if the coin comes up heads would this be the most important century.
You might object that the mere existence of the coinflip makes this the most important century. Thus, as to the importance of the world, it doesn't matter whether the coin comes up heads or tails; all that matters is whether the century is one in which important things can happen. This objection has three problems.
First, it doesn't save Freddie's argument. Freddie argues that it's unlikely that our century would be so important given how many centuries there are. But in that sense, a lot of centuries have been important. Things could have gone wrong at the dawn of democracy, and during the last century we, you know, almost blew up the world several times.
Second, it implies the same sort of presumptuousness. It implies that even if you got really strong evidence that the coin would have the effects described, you should stick your fingers in your ears and be confident that it wouldn’t do that—because it doing that would make this the most important century.
Third, this doesn't actually affect the argument. Even if the mere existence of the coin guarantees that this is the most important century, by Freddie's logic it's likelier that we'd find ourselves in the most important century if there are fewer total centuries, so we still get strong evidence that there aren't many future centuries.
But Freddie’s view doesn’t just imply the doomsday argument. Oh no, the doomsday argument has some much, much more horrifying cousins involving psychic powers, maximally effective contraception, and willing deer to drop dead.
To see this, imagine that Adam and Eve are in the garden of Eden. The snake has been making some of the same points that Freddie makes. If there are a bunch of future generations, then Adam and Eve end up in the most important (and earliest) generation, which grows more unlikely the more generations there are. Adam and Eve therefore reason that they won't have many descendants. In fact, they get a ~55-billion-to-one update against the hypothesis that there are 110 billion descendants, for it's so unlikely that they'd be one of the first two people!
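The ~55 billion figure is just the SSA likelihood ratio between the two hypotheses; a quick sketch:

```python
# SSA likelihood of finding yourself among the first 2 people:
lik_only_two = 2 / 2       # if Adam and Eve are the only people ever
lik_many = 2 / 110e9       # if 110 billion descendants exist

# Bayes factor against the 110-billion-descendants hypothesis:
bayes_factor = lik_only_two / lik_many
assert round(bayes_factor / 1e9) == 55   # ~55 billion to one
```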
Adam and Eve thus become extremely confident that they won’t have tons of offspring. They become confident that they won’t give birth to a great and prosperous nation.
One day, they hear a prophecy from God that if they have children, they'll have many, many offspring: God states that he will "multiply [their] offspring as the stars of heaven and will give to [their] offspring all these lands." Now Adam and Eve are really, really confident that they'll never have kids, merely from the fact that if they have kids, there will be a lot of them, and they'll be the most important generation, running afoul of Freddie's temporal Copernican principle.
This means—counterintuitively—that if you’re one of the first humans, you have a very safe form of birth control: just make sure that if you have offspring there will be a lot of them. If you do this, then you can be extremely confident that there won’t be many future generations—for that would make you an anomalously important generation. Furthermore, this could allow you to have psychic powers—as long as you agree to have a huge enough number of offspring unless a boulder rolls, or a deer drops dead at your feet, you could be confident that the boulder would roll and the deer would drop dead.
Note that this is exactly the same reasoning as Freddie’s. If you’re surprised that you’re the most important generation as long as there are a lot of generations, then you should be confident there won’t be many generations. By ensuring there will be many generations unless some event happens, you can guarantee that the event will happen.
(These points aren't original to me; they were, I believe, first made by Bostrom. I also have a published paper on them.)
A third bad implication of Freddie’s view is that it implies that you should doubt that there were a lot of humans in the past.
Suppose archeologists uncover extremely convincing archeological evidence that there were quadrillions of prehistoric humans. Suppose additionally that we get extremely convincing evidence that we’re the most important generation—perhaps we start a nuclear war that obliterates all except 20 people, so that after the apocalypse, it’s just me and Freddie arguing about anthropics. Freddie’s view implies that we should be extremely confident that the archeologists are wrong.
If the archeologists are right, then out of quadrillions of people, we end up being one of the minuscule portion that is in the most important generation. This means that it’s very unlikely that we’re the most important generation and there are a lot of prehistoric humans. But we know we’re the most important generation, so we should come to doubt that there are a lot of prehistoric humans.
Something has gone deeply wrong! If you find yourself concluding that the fact that a nuclear war happened means that the archeologists must be wrong because it makes you unduly important, your anthropic reasoning has gone completely off the rails.
Finally, Freddie's view requires a reference class. The problem: reference classes are made-up bullshit epicycles. The theory of ether is rolling over in its grave at how desperate an attempt reference classes are to salvage a theory.
Freddie notes that there have been a lot of generations of humans, and so it's unlikely we'd be in the most important generation. But why humans as a whole?
Hominids as a whole have been around for much longer. Why am I reasoning as if I'm equally likely to be any of the humans but not any of the hominids? Why not humans after 1950, or conscious beings as a whole? Does Homo erectus count? Where the firetruck are we getting the answer at all?
The Earth has been around for about 4.5 billion years. Should we, by Freddie's logic, doubt that we're the smartest beings in the history of the Earth? (What are the odds that, out of 4.5 billion years of Earth's history, the 300,000 during which we've been around would contain the smartest creatures? If you naively treat every year as equally likely, the answer is one in fifteen thousand.)
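The one-in-fifteen-thousand figure is just a ratio of timespans:

```python
EARTH_AGE_YEARS = 4.5e9   # rough age of the Earth
HUMAN_YEARS = 300_000     # rough time humans have been around

# Naively treating each year as equally likely to host the smartest
# creatures, the chance the smartest fall in our window:
prob = HUMAN_YEARS / EARTH_AGE_YEARS
assert round(1 / prob) == 15_000
```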
These puzzles all evaporate if we adopt the self-indication assumption, according to which your existence is more likely if there are more people. While it's true that, given that you exist, it's less likely that yours will be the most important century if there are more centuries, it's likelier that you'd exist at all in worlds with more people. As a result, you get no overall update in favor of theories on which civilization ends either sooner or later.
Consider, for instance, the scenario where the coin will be flipped that will either destroy the world or make the world continue until there are 110 quadrillion people. It’s true that given that I exist, it’s likelier that I’d be in the most important century (the one where everything is destroyed) if there are fewer people, but it’s less likely that I’d exist at all. As a result, there’s no overall update!
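A minimal sketch of why the two updates cancel under SIA (the numbers are schematic; the point is the cancellation, not the absolute values):

```python
# Under SIA, the probability that you exist at all scales with the total
# population N; given that you exist, the probability of being among the
# first `early` people is early / N. The product is the same for every N,
# so being early gives no net update about how long humanity lasts.
def sia_weight(early, total, base_pop=110e9):
    p_exist = total / base_pop           # proportional to total population
    p_early_given_exist = early / total  # SSA-style conditional
    return p_exist * p_early_given_exist

w_doom = sia_weight(110e9, 110e9)       # heads: humanity ends at 110 billion
w_flourish = sia_weight(110e9, 110e15)  # tails: 110 quadrillion people

# The weights agree (up to floating-point rounding): no overall update.
assert abs(w_doom - w_flourish) < 1e-9
```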
(There are some more complicated scenarios where there might be an update, depending on the degree of correlation between the events of the various centuries—if you think, for instance, that the odds we’d go extinct this century correlate with the odds we’d go extinct in previous centuries, the fact that we haven’t gone extinct gives us evidence that we won’t go extinct now, but that’s not the point that Freddie was making. Also, because the property of being the most important century depends on facts about the other centuries, the fact that you’re here for a small slice of history gives you some evidence against this being the most important century but shouldn’t make you think this century is less likely to be important than it might otherwise be (compare: discovering a bunch of giants might influence your judgment about whether you’re the tallest person in the world, but it won’t cause your beliefs about how tall you are to change)).
Thus, Freddie’s argument assumes self-sampling style reasoning that is extremely controversial. Worse, it’s almost certainly wrongheaded! If you assume, without argument, an extremely tendentious theory without considering any of the objections to it, it’s not you who should be claiming others are making basic errors.
I really like the scenario with the aliens who will safeguard us if the coin comes up tails and let us be destroyed if it comes up heads, because it does away with any nagging worry that retrocausation or knowledge of the future is what gets the result.