Emile Torres has written an article criticizing longtermism. Like most articles criticizing longtermism, it does so very poorly, making arguments that rely on some combination of question begging and conceptual confusion. One impressive feature of the article is that the cover image managed to make Nick Bostrom and Will MacAskill look intimidating—so props for that I guess.
So-called rationalists have created a disturbing secular religion that looks like it addresses humanity’s deepest problems, but actually justifies pursuing the social preferences of elites.
This is the article’s subtitle. Let’s see whether, over the course of the article, Torres manages to substantiate such claims.
The religion claim is particularly absurd. Religion in such contexts is just used as a term of abuse—it has no meaning beyond “X is a group doing things we don’t like.” As Huemer points out, to be a religion an organization needs to have many of the following: faith, supernaturalism, a worldview, a source of meaning, self support, religious emotions, ingroup identification, a source of identity, and organization. Longtermism has very few of these—far fewer than the Democratic Party.
I won’t quote Torres’ full article for word count purposes—I’ll just quote the relevant parts.
In a late-2020 interview with CNBC, Skype cofounder Jaan Tallinn made a perplexing statement. “Climate change,” he said, “is not going to be an existential risk unless there’s a runaway scenario.”
So why does Tallinn think that climate change isn’t an existential risk? Intuitively, if anything should count as an existential risk it’s climate change, right?
Cynical readers might suspect that, given Tallinn’s immense fortune of an estimated $900 million, this might be just another case of a super-wealthy tech guy dismissing or minimizing threats that probably won’t directly harm him personally. Despite being disproportionately responsible for the climate catastrophe, the super-rich will be the least affected by it.
But I think there’s a deeper reason for Tallinn’s comments. It concerns an increasingly influential moral worldview called longtermism.
The reason Tallinn thinks climate change isn’t an existential risk is that climate change isn’t an existential risk—at least not a significant one. To quote FLI:
An existential risk is any risk that has the potential to eliminate all of humanity or, at the very least, kill large swaths of the global population, leaving the survivors without sufficient means to rebuild society to current standards of living.
Global warming is likely to be very bad, but it will not do that. It is thus not an existential risk. This is not to downplay it—crime, disease, and poverty are all also very bad but are not existential risks. Thus, Torres’ first objection can be summarized as follows:
Premise 1: Longtermism claims that climate change isn’t an existential risk absent a runaway scenario
Premise 2: Climate change is an existential risk absent a runaway scenario
Therefore, longtermism is wrong.
However, both premises are suspect. One longtermist saying something doesn’t make it the position accepted by all longtermists. I accept that climate change will likely increase international instability, somewhat raising other existential risks even absent a cataclysm. However, it is not likely to end the world absent a runaway scenario.
Next, Torres says:
At the heart of this worldview, as delineated by Bostrom, is the idea that what matters most is for “Earth-originating intelligent life” to fulfill its potential in the cosmos. What exactly is “our potential”? As I have noted elsewhere, it involves subjugating nature, maximizing economic productivity, replacing humanity with a superior “posthuman” species, colonizing the universe, and ultimately creating an unfathomably huge population of conscious beings living what Bostrom describes as “rich and happy lives” inside high-resolution computer simulations.
Torres’ points about space colonization are wrong. A vast number of beings living rich and happy lives would be good—happiness is good, so unfathomable happiness would be unfathomably good. Torres’ concerns about space colonization have been devastatingly taken down here. Putting rich and happy lives in scare quotes doesn’t undermine the greatness of rich and happy lives—ones unfathomably better than any we can currently imagine.
An existential risk, then, is any event that would destroy this “vast and glorious” potential, as Toby Ord, a philosopher at the Future of Humanity Institute, writes in his 2020 book The Precipice, which draws heavily from earlier work in outlining the longtermist paradigm.
Torres has correctly described existential risks.
The point is that when one takes the cosmic view, it becomes clear that our civilization could persist for an incredibly long time and there could come to be an unfathomably large number of people in the future. Longtermists thus reason that the far future could contain way more value than exists today, or has existed so far in human history, which stretches back some 300,000 years. So, imagine a situation in which you could either lift 1 billion present people out of extreme poverty or benefit 0.00000000001 percent of the 10^23 biological humans who Bostrom calculates could exist if we were to colonize our cosmic neighborhood, the Virgo Supercluster. Which option should you pick? For longtermists, the answer is obvious: you should pick the latter. Why? Well, just crunch the numbers: 0.00000000001 percent of 10^23 people is 10 billion people, which is ten times greater than 1 billion people. This means that if you want to do the most good, you should focus on these far-future people rather than on helping those in extreme poverty today. As the FHI longtermists Hilary Greaves and Will MacAskill—the latter of whom is said to have cofounded the Effective Altruism movement with Toby Ord—write, “for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1,000) years, focussing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.”
A few points are worth making.
The correct view will often go against our intuitions. Pointing out that this seems weird isn’t especially relevant.
The ratio of future to current humans is greater than 1 trillion to one, if we accept Bostrom’s assumptions (I check the arithmetic in the sketch below). If there were 2 people who knew that whether 2 trillion people would exist depended on what they did, it would seem pretty intuitive that their primary obligation would be making sure those 2 trillion people had good lives. Presentism and status quo bias, combined with our inability to reason about large numbers, undermine our intuitions here.
The way to improve the future is often to improve the present. A world with war, poverty, disease, and violence will be worse at solving existential threats.
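For anyone who wants to check the arithmetic behind the quoted figures and the trillion-to-one ratio above, here’s a minimal sketch. The 10^23 figure is Bostrom’s, as quoted; the 8 billion current-population figure is my own rough assumption.

```python
# Sanity check of the quoted figures. The 10^23 estimate is Bostrom's;
# the 8 billion current population is my own rough assumption.

future_people = 10**23                 # Bostrom's Virgo Supercluster estimate
tiny_fraction = 0.00000000001 / 100    # "0.00000000001 percent" as a fraction

beneficiaries = future_people * tiny_fraction
print(f"beneficiaries: {beneficiaries:.0e}")     # 1e+10, i.e. 10 billion people

current_people = 8 * 10**9             # rough present world population (assumption)
ratio = future_people / current_people
print(f"future-to-current ratio: {ratio:.2e}")   # 1.25e+13, over ten trillion to one
```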
This brings us back to climate change, which is expected to cause serious harms over precisely this time period: the next few decades and centuries. If what matters most is the very far future—thousands, millions, billions, and trillions of years from now—then climate change isn’t going to be high up on the list of global priorities unless there’s a runaway scenario.
This is true, yet hard to see from our present vantage point. Let’s consider past historical events to see if this is really unintuitive when considered rationally. The Black Plague very plausibly led to the end of feudalism. Let’s stipulate that absent the Black Plague, feudalism would still be the dominant system, average world income would be 1% of what it currently is, and average lifespans would be half of what they currently are. In such a scenario, it seems obvious that the world is better off because of the Black Plague. It wouldn’t have seemed that way to people living through it, however, because it’s hard to adopt the perspective of the future.
In the same paper, Bostrom declares that even “a non-existential disaster causing the breakdown of global civilization is, from the perspective of humanity as a whole, a potentially recoverable setback,” describing this as “a giant massacre for man, a small misstep for mankind.” That’s of course cold comfort for those in the crosshairs of climate change—the residents of the Maldives who will lose their homeland, the South Asians facing lethal heat waves above the 95-degree F wet-bulb threshold of survivability, and the 18 million people in Bangladesh who may be displaced by 2050. But, once again, when these losses are juxtaposed with the apparent immensity of our longterm “potential,” this suffering will hardly be a footnote to a footnote within humanity’s epic biography.
Several facts are worth noting.
EAs are working on combatting climate change, largely for the reasons Torres describes.
There are robust philosophical arguments for caring overwhelmingly about the far future. Just because future people don’t exist yet doesn’t mean they shouldn’t factor into our moral consideration. The preference for present people over future people is as irrational as the preference for those who are geographically close over those who are far away.
Consider one such argument. (Note: I’ll use utility, happiness, and well-being interchangeably, and disutility, suffering, and unpleasantness interchangeably.)
1. It is morally better to create a person with 100 units of utility than one with 50. This seems obvious—pressing a button that would make your child only half as happy would be morally bad.
2. Creating a person with 150 units of utility and 40 units of suffering is better than creating a person with 100 units of utility and no suffering. After all, the person is better off and no one is worse off. This follows from a few steps.
A) The expected value of creating a person with utility of 100 is greater than that of creating one with utility zero.
B) The expected value of creating a person with utility zero is the same as the expected value of creating no one. A person with utility zero gets no value or disvalue from their life—they have no valenced mental states.
C) Thus, since creating no one has zero expected value, the expected value of creating a person with utility of 100 is positive.
D) If you are going to create a person with a utility of 100, it is good to increase their utility by 50 at the cost of 40 units of suffering. After all, 1 unit of suffering’s badness is equal to the goodness of 1 unit of utility, so they are made better off. They would rationally prefer 150 units of utility and 40 units of suffering to 100 units of utility and no suffering.
E) If one action is good and another action is good given the first action, then the conjunction of those actions is good.
These are sufficient to prove the conclusion. C shows that creating a person with a utility of 100 is good; D shows that, given that creation, moving them to 150 units of utility and 40 units of suffering is a further improvement; so by E, creating a person with 150 units of utility and 40 units of suffering is good, and better than creating one with 100 units of utility and no suffering.
This broadly establishes the logic for caring overwhelmingly about the future. If creating a person with any positive utility and no disutility is good, and increasing their utility and disutility such that their utility increases by more than their disutility does is also good, then you should create a person whenever their net utility is positive. This shows that creating a person with 50 units of utility and 49.9 units of disutility would be good. After all, creating a person with 0.05 units of utility would be good, and increasing their utility by 49.95 at the cost of 49.9 units of disutility would also be good, so creating a person with 50 units of utility and 49.9 units of disutility is good. Thus, the moral value of increasing the utility of a future person by N is greater than the moral disvalue of increasing the disutility of a future person by any amount less than N.
Now, let’s add one more stipulation: the moral badness of causing M units of suffering to a current person is equal to that of causing M units of suffering to a future person. This is very intuitive. When someone lives shouldn’t affect how bad it is to make them suffer. If you could either torture a current person or a future person, it wouldn’t be better to torture the future person merely in virtue of the date of their birth. Landmines don’t get less bad the longer they’re in the ground.
From these we can get our proof that we should care overwhelmingly about the future. We’ve established that increasing the utility of a future person by N is better than preventing any amount of disutility less than N for future people, and that preventing M units of disutility for future people is just as good as preventing M units of disutility for current people. This means that increasing the utility of a future person by N is better than preventing any amount of disutility less than N for current people. Thus, by transitivity, bringing about a person with a utility of 50 is morally better than sparing a current person 48 units of suffering.
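To make the chain of comparisons concrete, here’s a minimal sketch under the argument’s own stipulation that one unit of suffering exactly offsets one unit of utility, so the value of creating a person (relative to creating no one) is just utility minus suffering. The numbers are the ones used above; this is an illustration of the structure, not a substitute for the argument.

```python
# A minimal sketch of the chain of comparisons, under the stipulation that
# one unit of suffering exactly offsets one unit of utility. On that
# stipulation, the value of creating a person (relative to creating no one,
# which has value 0) is simply utility minus suffering.

def create(utility: float, suffering: float) -> float:
    """Value of bringing a person into existence, relative to creating no one."""
    return utility - suffering

# Steps A-E: creating someone at 100 utility / 0 suffering beats creating
# no one, and moving them to 150 / 40 is a further improvement.
assert create(100, 0) > 0                    # step C
assert create(150, 40) > create(100, 0)      # step D

# The 50 / 49.9 construction: start at 0.05 / 0, then add 49.95 utility at
# the cost of 49.9 disutility; the result is still (barely) positive.
assert create(0.05, 0) > 0
assert create(0.05 + 49.95, 49.9) > 0        # i.e. create(50, 49.9)

# The transitivity step: creating a future person at utility 50 outweighs
# sparing a current person 48 units of suffering, given that suffering
# counts equally whenever it occurs.
assert create(50, 0) > 48

print("all comparisons hold")
```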
The common refrain that it’s only good to make people happy, not to make happy people, is false. This can be shown in two ways.
It would imply that creating a person with a great life would be morally equal to creating a person with a mediocre life. This would imply that if given the choice between bringing about a future person with utility of 5 and no suffering or a future person with utility of 50,000 and no suffering, one should flip a coin.
It would say (ironically, given Torres’ pitch) that we shouldn’t care about the impacts of our climate actions on future people. For people who aren’t yet born, climate action will change whether or not they exist at all. If a climate policy changes when people have sex by even one second, it changes the identities of the future people who will exist. Thus, when we take climate action that helps the future, we don’t make particular future people better off; we instead bring about different future people who will be better off than the alternative batch of future people would have been (this is Parfit’s non-identity problem). If we shouldn’t care about making happy people, then there’s no reason to take climate action for the sake of the future.
These aren’t the only incendiary remarks from Bostrom, the Father of Longtermism. In a paper that founded one half of longtermist research program, he characterizes the most devastating disasters throughout human history, such as the two World Wars (including the Holocaust), Black Death, 1918 Spanish flu pandemic, major earthquakes, large volcanic eruptions, and so on, as “mere ripples” when viewed from “the perspective of humankind as a whole.” As he writes:
“Tragic as such events are to the people immediately affected, in the big picture of things … even the worst of these catastrophes are mere ripples on the surface of the great sea of life.”
In other words, 40 million civilian deaths during WWII was awful, we can all agree about that. But think about this in terms of the 10^58 simulated people who could someday exist in computer simulations if we colonize space. It would require trillions and trillions and trillions of WWIIs one after another to even approach the loss of these unborn people if an existential catastrophe were to happen. This is the case even on the lower estimates of how many future people there could be. Take Greaves and MacAskill’s figure of 10^18 expected biological and digital beings on Earth alone (meaning that we don’t colonize space). That’s still a way bigger number than 40 million—analogous to a single grain of sand next to Mount Everest.
This is true. It is only unintuitive if one considers things from one’s own perspective rather than from the standpoint of humanity broadly. From that broader standpoint, it becomes clear that whether we go extinct has a bigger impact on the future than the actions of the first humans have had on everyone who has lived since. It seems pretty intuitive that the first humans should have avoided getting wiped out, even if they didn’t desire to do so, given the vast positive potential of civilization.
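For concreteness, the scale comparison Torres quotes can be checked directly; the 40 million, 10^58, and 10^18 figures are all taken from the quoted passage.

```python
# Checking the scale comparison in the quoted passage. The 40 million WWII
# civilian deaths, the 10^58 simulated-people figure, and the 10^18
# Earth-bound figure are all taken from the quote above.

wwii_civilian_deaths = 4 * 10**7

print(f"{10**58 / wwii_civilian_deaths:.1e}")   # 2.5e+50 WWII-scale losses to match 10^58 people
print(f"{10**18 / wwii_civilian_deaths:.1e}")   # 2.5e+10 (25 billion) to match 10^18 people
```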
If pushed, the first would save the lives of 1 million living, breathing, actual people. The second would increase the probability that 10^14 currently unborn people come into existence in the far future by a teeny-tiny amount. Because, on their longtermist view, there is no fundamental moral difference between saving actual people and bringing new people into existence, these options are morally equivalent. In other words, they’d have to flip a coin to decide which button to push. (Would you? I certainly hope not.) In Bostrom’s example, the morally right thing is obviously to sacrifice billions of living human beings for the sake of even tinier reductions in existential risk, assuming a minuscule 1 percent chance of a larger future population: 10^54 people.
Torres just goes on and on about how unintuitive this is, without giving any arguments against it. Remember: we are but the tiniest specks from the standpoint of humanity. If humanity’s whole future were a person’s lifetime, we wouldn’t even be the first hour. So, much as it makes sense for a person to plan for the future, accepting an hour’s delay to reduce their risk of dying by a small amount, it makes sense for humanity to undergo sacrifices (if they were necessary to reduce existential risks, which they aren’t really!) to improve the quality of the far future.
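To see the expected-value logic behind the two-buttons case Torres quotes, here’s a toy comparison. The 10^14 figure comes from the quote; the size of the “teeny-tiny” probability increase is unspecified there, so the one-in-ten-million figure below is purely my own illustrative assumption.

```python
# A toy expected-value comparison for the two-buttons case quoted above.
# The 10^14 figure is from the quote; the size of the "teeny-tiny"
# probability increase is unspecified there, so the 1-in-10-million figure
# below is purely my own illustrative assumption.

lives_saved_now = 1_000_000
potential_future_people = 10**14
probability_increase = 1e-7               # illustrative assumption

expected_future_people = probability_increase * potential_future_people
print(expected_future_people / lives_saved_now)   # 10.0: ten times the first button
```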
Additionally, even a pure neartermist should still be largely on board with reducing existential threats. Ord argues in his book that the risk of extinction this century is 1 in 6, Bostrom concludes the risks are above 25%, Leslie puts them at 30%, and Rees at 50%. Even if they were only 1% (a dramatic underestimate), that would still mean existential risks kill, in expectation, around 79 million people, many times more than the Holocaust. Thus, existential risks are still terrible on neartermist grounds, so Torres should still be on board with the project.
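The expected-value arithmetic behind the 79 million figure is straightforward; here’s a quick check, assuming a present world population of about 7.9 billion (my assumption; the 1% risk is the deliberately lowballed figure from above).

```python
# Quick expected-value check of the 79 million figure, assuming a present
# world population of about 7.9 billion (my assumption); the 1% risk is the
# deliberately lowballed figure from above.

world_population = 7.9 * 10**9
extinction_probability = 0.01

expected_deaths = extinction_probability * world_population
print(f"{expected_deaths:,.0f}")   # 79,000,000
```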
If this sounds appalling, it’s because it is appalling. By reducing morality to an abstract numbers game, and by declaring that what’s most important is fulfilling “our potential” by becoming simulated posthumans among the stars, longtermists not only trivialize past atrocities like WWII (and the Holocaust) but give themselves a “moral excuse” to dismiss or minimize comparable atrocities in the future.
Longtermists don’t dismiss such atrocities. We merely say that the future is overwhelmingly important. Pointing out that there are things much bigger than the Earth doesn’t dismiss the size of the Earth—it just points out that there are other things that are bigger.
If future people matter at all (even if their well-being matters only 0.001% as much as current people’s), the far future would still dominate our moral considerations. We live in a morally weird world, one which makes our intuitions often unreliable. If our intuitions lead to the conclusion that we should ignore the billions of years of potential humans in favor of caring only about what affects current people, then our intuitions have gone wrong. It would be awfully suspicious if the correct morality happened to justify caring unfathomably more about what happens in the 21st century than in the 22nd, 23rd, 24th…through the 10 billionth century.
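As a toy illustration of why even heavy discounting of future people doesn’t change the verdict, here’s a sketch assuming Bostrom’s 10^23 figure from earlier and a current population of roughly 8 billion (my assumption).

```python
# A toy illustration of the discounting point: even if each future person's
# well-being counted only 0.001% as much as a current person's, the sheer
# number of possible future people would still dominate. The 10^23 figure is
# Bostrom's from earlier; the 8 billion current population is my assumption.

future_people = 10**23
discount = 0.001 / 100                        # 0.001 percent as a fraction

weighted_future = future_people * discount    # ~10^18 current-person equivalents
current_people = 8 * 10**9

print(f"{weighted_future / current_people:.2e}")   # ~1.25e+08: still hugely dominant
```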
Torres has no argument against this thesis—all they have is a series of cantankerous squawks of outrage. Their intuitions are unsurprising and unlikely to be truth-tracking. Americans care much more about domestic policy than about foreign policy, even though America’s effect on other countries is far more significant than the domestic impact of its policies. Given that we cannot talk to future people, it’s very easy to prioritize visible suffering. This is, however, unwise. There is no way to design a successful version of population ethics that does not care about the existence of 10^52 future people with excellent lives.
EAs don’t dismiss atrocities in the future. Every single historical atrocity has come from excluding beings from our moral circle. Caring about everyone is what has prevented atrocities—not ignoring 10^52 possible sentient beings. Utilitarians like Bentham and Mill have tended to oppose such atrocities; Bentham defended homosexuality in the 1700s. Thus, it’s Torres’ non-utilitarian nonsense—one which goes against every plausible view of population ethics—that is at the root of historical evils.
This is one reason that I’ve come to see longtermism as an immensely dangerous ideology. It is, indeed, akin to a secular religion built around the worship of “future value,” complete with its own “secularised doctrine of salvation,” as the Future of Humanity Institute historian Thomas Moynihan approvingly writes in his book X-Risk. The popularity of this religion among wealthy people in the West—especially the socioeconomic elite—makes sense because it tells them exactly what they want to hear: not only are you ethically excused from worrying too much about sub-existential threats like non-runaway climate change and global poverty, but you are actually a morally better person for focusing instead on more important things—risk that could permanently destroy “our potential” as a species of Earth-originating intelligent life.
Several points are worth making.
If people were looking for a philosophical excuse not to help others, they’d choose Objectivism. Who is so eager to take action on reducing existential threats that they’d invent a philosophical rationalization to justify it? People want to justify inaction, not action on a weird issue relating to the far future.
Longtermism is largely part of EA, a movement specifically built around using our resources to do as much good as we can. One has to be delusional to think Ord or MacAskill is using longtermism as an excuse for not helping others, particularly when they donate everything above roughly 35,000 dollars per year.
Even if one only cares about current people, reducing existential risks is still overwhelmingly important.
Calling it a religion is not an argument—it’s just invective.
To drive home the point, consider an argument from the longtermist Nick Beckstead, who has overseen tens of millions of dollars in funding for the Future of Humanity Institute. Since shaping the far future “over the coming millions, billions, and trillions of years” is of “overwhelming importance,” he claims, we should actually care more about people in rich countries than poor countries. This comes from a 2013 PhD dissertation that Ord describes as “one of the best texts on existential risk,” and it’s cited on numerous Effective Altruist websites, including some hosted by the Centre for Effective Altruism, which shares office space in Oxford with the Future of Humanity Institute. The passage is worth quoting in full:
Notice that Torres gives no argument against this conclusion. They just complain about unintuitive conclusions, particularly those that run afoul of social justice taboos, rather than offering real arguments. I’ve argued against Torres’ view here.
Additionally, EAs’ actions on global health and development are entirely about improving the quality of life in poor countries. Thus, even if saving the life of a person in a rich country were intrinsically more important than saving the life of a person in a poor country, the best actions to take in terms of saving lives in the short term would still be about saving lives in poor countries.
Never mind the fact that many countries in the Global South are relatively poor precisely because of the long and sordid histories of Western colonialism, imperialism, exploitation, political meddling, pollution, and so on. What hangs in the balance is astronomical amounts of “value.” What shouldn’t we do to achieve this magnificent end? Why not prioritize lives in rich countries over those in poor countries, even if gross historical injustices remain inadequately addressed? Beckstead isn’t the only longtermist who’s explicitly endorsed this view, either. As Hilary Greaves states in a 2020 interview with Theron Pummer, who co-edited the book Effective Altruism with her, if one’s “aim is doing the most good, improving the world by the most that I can,” then although “there’s a clear place for transferring resources from the affluent Western world to the global poor … longtermist thought suggests that something else may be better still.”
Torres pointing out irrelevant historical facts and putting scare quotes around value is, once again, not an argument. The Greaves quote describes why longtermists should focus mostly on longtermist causes rather than on combatting global poverty, which is plausible even if we accept short-termism. Thus, Torres lies about what Greaves believes. They additionally fail to pinpoint any action being taken toward this end that they’d actually disagree with.
The reference to AI, or “artificial intelligence,” here is important. Not only do many longtermists believe that superintelligent machines pose the greatest single hazard to human survival, but they seem convinced that if humanity were to create a “friendly” superintelligence whose goals are properly “aligned” with our “human goals,” then a new Utopian age of unprecedented security and flourishing would suddenly commence. This eschatological vision is sometimes associated with the “Singularity,” made famous by futurists like Ray Kurzweil, which critics have facetiously dubbed the “techno-rapture” or “rapture of the nerds” because of its obvious similarities to the Christian dispensationalist notion of the Rapture, when Jesus will swoop down to gather every believer on Earth and carry them back to heaven. As Bostrom writes in his Musk-endorsed book Superintelligence, not only would the various existential risks posed by nature, such as asteroid impacts and supervolcanic eruptions, “be virtually eliminated,” but a friendly superintelligence “would also eliminate or reduce many anthropogenic risks” like climate change. “One might believe,” he writes elsewhere, that “the new civilization would [thus] have vastly improved survival prospects since it would be guided by superintelligent foresight and planning.”
Once again, Torres has no argument—they just have slogans. Nothing they have said should change anyone’s assessment of AI existential risks. The experts who have considered the issue, rather than sneering at it, tend to be pretty worried.
Torres’ article levels bad critiques, takes things out of context from hundred-page books, and flagrantly misrepresents many points. It is not a serious objection to longtermism.
L + Ratio of Future Beings Doesn’t Matter + Don’t care about future people + Didn’t ask if they could live happy lives + Make people happy not make happy people
Bulldog: Destroyed