Beings and Time
Scope neglect and the aestivation hypothesis
The aestivation hypothesis is the idea that if you want to maximize value from space resources, you should wait till the very end of the universe, when the cosmic microwave background will have cooled enough that computation, and hence creating happy minds, becomes far more efficient. I don’t know if the physics bit is true, but it raises an interesting ethical question: if we could create more happy minds by waiting until the very end of the universe than by expending resources early, should we wait? A lot of people’s intuitions say no.
I think they’re (~provably) wrong.
Let’s consider a simpler case: which of the following two worlds is better?
World 1: The universe lasts 10 billion years. For all this period, until the last 100 years, there are zero people. Then, in the last 100 years, a quadrillion planets full of people spring up. Each planet has people who live a hundred years and then die. There are no interactions between any two planets.
World 2: The universe likewise lasts 10 billion years. For this entire period, 10 million planets full of people spring up every century. Each planet has people who live for 100 years and then die. There are no interactions between any two planets.
So in total, across the two worlds, there are the same number of people, living lives of the same length. But in world 1, all the people are concentrated at the very end of the universe. No one exists until 100 years before the universe ends. World 2 has consistent flourishing, while world 1 has a brief flare-up before the fading of the light.
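If it helps to see the bookkeeping, here is a quick sanity check in Python, assuming the reading of world 2 above (10 million new planets springing up every century):

```python
# Sanity check: both worlds contain the same number of planet-lifetimes.

UNIVERSE_YEARS = 10**10    # the universe lasts 10 billion years
LIFESPAN_YEARS = 100       # everyone lives 100 years

# World 1: a quadrillion planets, all packed into the final century.
world_1_planets = 10**15

# World 2: 10 million planets per century, across all of history.
centuries = UNIVERSE_YEARS // LIFESPAN_YEARS   # 10^8 centuries
world_2_planets = 10**7 * centuries            # 10^15

assert world_1_planets == world_2_planets
print(f"{world_1_planets:e} planet-lifetimes in each world")
```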
I think most people would be inclined to prefer world 2 to world 1. In fact, even if we changed around the numbers, so that world 1 had ten times as many total people as world 2, most people would still prefer world 2. I asked some AI models and they mostly agreed with this.
I think it’s pretty clearly wrong though.
I mean, for one, the two worlds have the same number of people at the same level of welfare. The only difference is when in time they exist. If two worlds have all the same people, surely when the people exist shouldn’t affect the goodness of the worlds? We can even imagine that the people across the two worlds have all the same experiences. None of the planets ever interact. Literally the only difference is that they’re spaced out differently in time.
Suppose one thinks that world 2 is still better than world 1 even if world 1 has ten times as many people. That would mean that moving the relevant planets earlier in time—even though when in time they exist has absolutely no impact on anyone’s welfare—would be worth a 90% reduction in the population of happy people. That is very hard to believe.
That is, suppose we consider two trajectories for a planet. On the first trajectory, the planet hosts life earlier in time. On the second, it hosts life only near the end of the universe, but 10x as much total life. Intuitively it just seems so clear that it is better for the planet to follow the second trajectory. If we learned that life on Earth existed just barely before the end of the universe, it wouldn’t be better to move it earlier in time at the cost of a 90% reduction in the population.
Another odd result of this view: it would seem to imply that it would be very good to time-travel planets to earlier points in time, even if doing so affects no one’s welfare. Suppose that in world 1—where the planets of life only spring up at the very end—you can use a time travel device to move the planets earlier in time. No one would notice the shift. It has no effect on anyone’s conscious experience. On this view, doing so would be a very good thing to do. But that is very implausible.
For a final consideration: the physicists tell us that time passes at different rates depending on how fast you are moving. Thus, if when in time the planets exist were morally important, there might be very strong reasons to speed up or slow down the planets—so as to change when in time they exist. But again, it just doesn’t seem possible that there could be strong moral reasons to make planets move more quickly, even if this benefits literally no one.
So what’s going on? Why do we have this intuition? I claim the answer is scope neglect.
Scope neglect is a bias that people have where their brains aren’t good at tracking big numbers. People pay similar amounts to save 2,000 birds and 200,000 birds. Both register in their brains simply as “BIG NUMBER of birds.” Many of our moral intuitions are driven by attraction or aversion to the scenario under consideration. We are reluctant to accept the repugnant conclusion because when we vividly imagine the scenario, it doesn’t strike us as very good.
But if our imagination of the goodness of a scenario doesn’t scale proportionally with the number of beings in it, then we’re likely to underestimate the goodness of scenarios with big numbers at play. Scope neglect explains why we think world 2 is better than world 1. In both worlds, the number of people alive at any given time registers to us simply as a big number, so world 1’s far larger momentary population doesn’t move us. Meanwhile, the fact that people are around for longer inflates our assessment of world 2. There are thus two psychological forces pushing us toward world 2.
Now, it’s true that the claim that scope neglect infects our moral judgments in this way doesn’t automatically follow from the psychological experiments about scope neglect.1 While our intuitions about how much to pay to save birds don’t scale with mere increases in the number of birds, this bias doesn’t survive reflection. If people reflect carefully, they see that it is better to save many birds rather than few. Their reflective judgments, considering both scenarios, no longer display the same bias.
But this is analogous to other biases. We know, for instance, that people disproportionately blame victims of random misfortune (the just-world bias). We also know that people are, in general, weirdly positive about very long-standing features of the world, like suffering in nature and human death, and their stated reasons for these attitudes tend not to make sense. So we infer: perhaps the just-world bias explains their attitudes in these other cases, even if there are some differences between the simplified lab experiments and the real-world cases.
We should do the same in the case of scope neglect. From the lab experiments we should infer that people’s intuitive assessment of the badness of a scenario is often wildly out of line with its actual badness when large numbers are involved. Then, in the real world, we notice a lot of cases where people have puzzling and inconsistent judgments that can be explained by underestimating the goodness of worlds in which big numbers are at play. So we should infer: maybe the same thing is going on here.
And this hypothesis explains a lot of different surprising moral judgments. For example, lots of people judge that saving one human’s life is better than freeing any number of chickens from a cage. People also judge that extending a human’s life by one second is less good than freeing 100 chickens from a cage for a year. Yet you can’t hold both. Saving a life just extends it by some number of seconds. So if, for each of those seconds of life extension, it is better to free some number of chickens, then freeing a sufficiently vast number of chickens is better than saving the life.
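To get a feel for the sizes involved, here is a back-of-the-envelope sketch in Python; the assumption that saving a life adds 30 years is purely illustrative:

```python
# If each second of life extension is outweighed by freeing 100 chickens
# for a year, how many chicken-years outweigh saving a whole life?

SECONDS_PER_YEAR = 365 * 24 * 60 * 60    # ~3.15 * 10^7 seconds
years_gained = 30                        # illustrative gain from saving a life
seconds_gained = years_gained * SECONDS_PER_YEAR

chicken_years_per_second = 100           # from the second judgment above

chicken_years = seconds_gained * chicken_years_per_second
print(f"{chicken_years:.1e} chicken-years outweigh saving the life")
# -> about 9.5e+10, i.e. roughly 95 billion chicken-years
```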
The scope neglect hypothesis gives us a nice explanation of our intuitions about many cases. Most people think that no number of mild pains is worse than a single torture. However, imagine taking a torture, making it slightly less painful, and giving it 100,000 times more victims. It seems it has gotten worse. Yet if you keep doing this, lowering the intensity and multiplying the victims, eventually the pain gets down to being as mild as a dust speck. So by transitivity, at the end of the process, you have some vast collection of mild pains that is worse than a torture. We will have to give up one of the following:
1. No number of dust specks is worse than a torture.
2. If you take an item of pain, make it only a tiny bit less intense, and then inflict it on 100,000 times more people, it gets worse.
3. If A is worse than B and B is worse than C, then A is worse than C.
If you buy the scope neglect hypothesis, then you have a nice way out: you can simply give up 1. Our intuitions about how bad dust specks are don’t scale with their numerosity. That is why hearing about a quadrillion dust specks elicits the same intuitive response as hearing about 100 billion dust specks—even though a quadrillion dust specks is 10,000 times worse. If you really intuitively felt in your bones that 1 quadrillion dust specks is 10,000 times worse than 100 billion dust specks, it is much less clear that you’d have the intuition that no number of dust specks is worse than a torture.
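If you want to watch the sequence run, here is a minimal sketch in Python; the 1% per-step drop in intensity and the dust-speck threshold (a millionth of a torture) are both illustrative assumptions on my part:

```python
# The sequence from the argument: at each step, make the pain slightly
# less intense and inflict it on 100,000 times more victims.

intensity = 1.0          # start with one full torture
victims = 1
steps = 0

while intensity > 1e-6:  # stop once we reach dust-speck intensity
    intensity *= 0.99    # a tiny bit less intense...
    victims *= 100_000   # ...for vastly more people
    steps += 1

print(steps)                  # 1375 steps
print(len(str(victims)) - 1)  # victims = 10^6875 dust-speck recipients
```

By premise 2, each pass through the loop makes things worse, so by transitivity the final 10^6875 dust specks are worse than the original torture.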
This can also help to explain why people are so unsympathetic to strong longtermism. If there were only going to be 1 trillion future people, it would seem intuitive that making the future go well is very important. But in fact, the number of future people could be on the order of 10^58, if not more. That is a difference by a factor of 10^46, which is roughly the number of water molecules in all of Earth’s oceans. You should expect our intuitions to be screwed up if the intuitive salience of an event doesn’t increase even when its value goes up by 46 orders of magnitude.
I can’t describe just how screwed up this means our intuitive judgments of salience are. To compare: imagine you thought a single alien was planning to invade Earth. Then you learned that there was actually a giant invading alien army, so big that if, for every ant on Earth, you made a group of ants equal to the number of ants on Earth, and then, for every ant in that giant super-group, you made another group equal to the number of ants on Earth, the resulting swarm would still only be around the size of the invading army. And after learning this news, your intuitive assessment of the alien threat doesn’t change at all.
That is the scale of the error.
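For the skeptical, the ant arithmetic roughly checks out. Here is the calculation in Python; the figure of 10^15 ants on Earth is an assumption on my part (published estimates run from about 10^15 up to 2 x 10^16):

```python
# Checking the ant analogy against the 10^46 gap discussed above.

ants_on_earth = 10**15

# One Earth-sized group of ants for every ant on Earth...
super_group = ants_on_earth * ants_on_earth    # 10^30 ants
# ...then one Earth-sized group for every ant in that super-group.
army = super_group * ants_on_earth             # 10^45 ants

factor = 10**46  # the gap between 10^12 and 10^58 future people
print(f"army ~ {army:.0e}, factor ~ {factor:.0e}")  # same ballpark
```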
Why don’t people regard the far future as the biggest deal in the world? I think a plausible answer is: our intuitions just don’t track scale. We don’t readily grok just how big it could be, and so underestimate to a literally incomprehensible degree the stakes of our actions.
This can also nicely explain our views about diminishing marginal value. Intuitively, it seems to most people like value reaches a bound, so ten copies of planet Earth isn’t 10 times better than one. Yet if we think about how good it is that our planet exists, the number of distant Earths doesn’t seem relevant to that assessment. This can be explained if our aggregative intuitions are inaccurate due to scope neglect, leading us to underestimate how good many flourishing planets are, while our most trustworthy intuitions are about the value added by particular planets.
All this is to say that while I don’t know if the aestivation hypothesis is true—that the best way to bring about value is to wait till the end of the universe—if it is, then we should wait. We ought to do what brings about the most value. The alternative judgment is not at all plausible and is likely a byproduct of scope neglect. Once one seriously accounts for scope neglect, it becomes apparent how many of our intuitions on the most important issues go wrong.
1. I don’t actually have access to the paper, so I’m going based on my memory of what it says.

