There are some radical deontologists who hold that you shouldn’t kill one person even to save several thousand. However, most people hold that when the utility considerations are substantial enough, they overpower deontic considerations. If we accept some moderate form of threshold deontology, then the conclusion that we should care overwhelmingly about the far future follows, because the stakes, in terms of utility, are so high.
As we’ve already seen, the future could contain a vast number of future people, perhaps infinitely many, with vast amounts of average utility. Even if we assume no significant changes to physics as we know it, there could be 10^54 years of future life, with unfathomable utility per year. This means that, if we give any weight to utility, it will dominate other considerations. Beckstead quotes Rawls as saying:
All ethical doctrines worth our attention take consequences into account in judging rightness. One which did not would simply be irrational, crazy.
A view which ignores utility would have to hold that stealing to increase global utility a thousandfold would be bad. Given how vast the future could be, even if we stipulate that caring about the future entails violating some rights (it obviously doesn’t), the ratio of rights violated to utility gained would be such that any plausible non-absolutist deontological view would hold that we should violate rights to reduce existential threats, and to improve the quality of the future in other ways.
I’ve argued for utilitarianism elsewhere in great detail. But one doesn’t have to accept utilitarianism to accept that we should care overwhelmingly about the future. As long as one holds that millions of centuries of unfathomable bliss for unfathomable numbers of people is a good thing, and that it’s really important to bring about good things, the longtermist conclusion follows.
This means that, for example, existential threats should wholly determine one’s political decisions. After all, if a politician had even a low chance of making the world 10 quadrillion times better, it would be worth voting for them. The same is true of considerations based on existential threats. The utility at stake is too great for anything else to matter in practice.
Thus, contrary to what’s commonly believed, it’s non-longtermists who have to take the extreme view. The reason longtermism seems extreme is the empirical details. However, when confronted with extreme factual considerations, a good theory should deliver extreme results. If there were a reasonable chance that everyone would be infinitely tortured unless some action was taken, then that action would be worth taking, even if it seemed otherwise undesirable. Longtermism is just a response to an extreme factual situation: a future vaster than anything we could imagine.
Compare this to the view that the primary fact worth considering when deciding upon policy is the impact of that policy upon atoms. This seems like an extreme view: atoms are probably not sentient, and if they are, we don’t know what effect our actions have on them. However, now imagine that we somehow did know what impact our actions would have on atoms. Not only that, it turns out that our actions are currently causing immense harm to 100,000,000,000,000,000,000,000,000,000,000 atoms. Well, at that point, while the whole “caring primarily about atoms” thing seems a bit extreme, if our current policy is bad for 100,000,000,000,000,000,000,000,000,000,000 atoms, which are similarly sentient to us, a good theory should say that this is the primary consideration of relevance.
The general heuristic of “don’t care too much about atoms” works pretty well most of the time. But this all changes when our choices start fucking over 100,000,000,000,000,000,000,000,000,000,000 of them, a number far vaster than the number of milliseconds in the history of the universe so far. The same is true of the future. Even if you’re not a utilitarian, even if you think future people matter much less, when our choices harm 10^52 of them (which is conservative!), this becomes the most important thing. Written out, that number is 10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000. It defies comprehension: if each person who had ever lived were a world containing as many people as have ever lived, that would still be fewer people than 10^52.
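To make these magnitudes a bit more concrete, here’s a quick back-of-the-envelope check. This is only a sketch: the ~13.8-billion-year age of the universe and the ~110 billion people ever born are rough outside estimates I’m assuming for illustration, not figures from the argument itself.

```python
# Rough sanity check of the magnitude comparisons above.
# The age of the universe (~13.8 billion years) and the number of people who
# have ever lived (~1.1e11) are approximate outside estimates, assumed here
# purely for illustration.

SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60             # ~3.16e7 seconds

universe_age_years = 13.8e9
milliseconds_so_far = universe_age_years * SECONDS_PER_YEAR * 1000
print(f"Milliseconds elapsed in the universe: ~{milliseconds_so_far:.1e}")         # ~4.4e20
print(f"10^32 atoms / milliseconds elapsed:   ~{1e32 / milliseconds_so_far:.1e}")  # ~2e11

people_ever_lived = 1.1e11
nested_worlds = people_ever_lived ** 2                # each past person as a world of past people
print(f"Nested-worlds population:             ~{nested_worlds:.1e}")               # ~1.2e22
print(f"10^52 / nested-worlds population:     ~{1e52 / nested_worlds:.1e}")        # ~8e29
```

Even with very generous rounding, both comparisons hold by many orders of magnitude.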
If there are going to be a billion centuries better than this one, the notion that we should mostly care about this one starts to seem absurd. Much like it would be absurd to hold that the first humans’ primary moral concerns should have been their immediate offspring, it would be similarly ridiculous to hold that we should care more about the few billion people around today than the 10^52 future people.
This also provides a powerful debunking account of contrary views. Of course the current billions seem more important than the future to us today. People in the year 700 CE probably thought that the people alive at that time were more important than the future. However, when considered impartially, from “the point of view of the universe,” this century is revealed to be obviously less important than the entire future.
Imagine a war that happens in the year 10,000 and kills 1 quadrillion people. However, after the war, society bounces back and rebuilds, and the war is just a tiny blip on the cosmic timescale. This war would clearly be worse than a nuclear war that happened today and killed billions of people. However, a nuclear war today would likewise be a tiny occurrence, barely worth mentioning by historians in the year 700,000 and entirely ignored in the year 5 million.
Two things are worth mentioning about this.
First, the future would be worthwhile overall even if this war happened. Second, this war is worse than a global war that would kill a billion people now. Thus, by transitivity, maintaining the future would be worth a global war that killed a billion people. And if it would be worth killing a billion people in a global nuclear war, it’s really, really important.
It becomes quite obvious how insignificant we are when we consider just how many centuries there could be. Much like we (correctly) recognize that events that happened to 30 people in the year 810 CE aren’t “globally” significant, we should similarly recognize that what happens to us is far less significant than the effect we have on the future. Imagine explaining the neartermist view to a child in the year 5 million: explaining how raising taxes a little bit, or causing a bit of death by slowing down medicine slightly, was so important that it was worth risking fifty thousand centuries of prosperity, plus all of the value the universe could have after the year 5 million, with billions more centuries to come!
As I’ve said elsewhere: “The black plague very plausibly led to the end of feudalism. Let’s stipulate that absent the black plague, feudalism would still be the dominant system, and the average income for the world would be 1% of what it currently is. Average lifespan would be half of what it currently is. In such a scenario, it seems obvious that the world is better because of the black plague. It wouldn’t have seemed that way to people living through it, however, because it’s hard to see from the perspective of the future.”
The people suffering through the black plague would have found this view absurd. However, to us it now seems obvious, and it would seem even more obvious a thousand centuries from now. The sacrifices longtermism currently demands aren’t even really sacrifices: reducing existential threats, which are expected to kill millions of people, is better for current people even ignoring the future.
So our case for prioritizing the present is far weaker than the case that those living through the black plague could have made for eradicating the black plague, even if doing so would have locked in eternal feudalism. And if our position is weaker than an argument for eternal feudalism, something has gone awry in our thinking.
Thus, even if we had to make immense sacrifices to reduce existential risks, it would be unambiguously worth it. And we don’t! The actions required to reduce existential risks will be better for both current and future people. They are thus no-brainers.