Critics of longtermism have a frustrating habit of misrepresenting what longtermism is. They often do this by pointing to quotes from prominent longtermists who, surprisingly, believe controversial things. This is no more legitimate a move than claiming that Democrats are bad because they supported the Iraq war, and then pointing to some Democrats who supported the Iraq war. The point is that to be a Democrat, one need not support the Iraq war. Likewise, to be a longtermist, you don’t need to agree with Bostrom’s every utterance.
Tim Andersen has an article proclaiming that “Longtermism is repackaged utilitarianism and just as bad.”
Longtermism, if you aren’t familiar with the term, is the philosophy, promoted by philosopher Nick Bostrom of Oxford University, that our primary ethical obligation as a species is to ensure the post-human future for countless sentient beings.
This is not true. For one, longtermism is not promoted only by Bostrom; defining it this way would be like claiming that vegetarianism is the philosophy, promoted by (insert disagreeable vegetarian)…. Additionally, longtermists don’t have to think that making the future go well is our primary ethical obligation, just that it’s important and that we should do more of it. The same practical obligations fall out of this modest claim, called weak longtermism, as fall out of the more extreme claim, called strong longtermism.
If you aren’t familiar with Bostrom’s work, he is also responsible for the Bayesian (probabilistic) argument that we are all living in a computer simulation. I don’t think much of this argument either, but at least it didn’t have powerful moral implications.
I find it so infuriating when people do this. They’ll point to something weird that someone thinks, not provide any of the arguments that the person gives, sneer at it, and then move on to discredit the person. If you’re going to mention that they believe something weird, at least explain what the argument is, for heaven’s sake!
Longtermism is part of Bostrom’s ethics which he calls effective altruism. Sadly, effective altruism is a special case of utilitarianism — the idea that right and wrong are determined by whatever does the most people the greatest good.
Andersen doesn’t even explain what EA is when he attacks longtermism for being part of EA. He just claims, falsely, that it’s a special case of utilitarianism. This isn’t true: to be a utilitarian, you have to deny that people have rights and accept various other controversial claims. To be an effective altruist, you just have to think that helping people is very important and that we should do more of it rather than less. You can’t claim that EA is just utilitarianism when a person could be an effective altruist while also thinking that rights are inalienable and should never be violated, so long as they thought that reducing the number of factory-farmed animals and of kids dying of malaria was a good thing.
This is particularly misleading when he presents EA as a Bostrom-specific idea. Bostrom isn’t even a utilitarian, as one article notes:
One common misconception is that he is a hardcore utilitarian. He is actually more of a pluralist, who takes moral uncertainty seriously, and sees utilitarianism as one among many useful frameworks for thinking about the future.
So we have four errors in the first few paragraphs. Impressive!
According to Bostrom’s predictions, humanity, if it survives the present epoch, will go on to a post-human future where conscious beings live lives of plenty and pleasure within elaborate computer simulations. If we successfully colonize our local cluster of the universe, this could amount to trillions upon trillions of conscious minds. Compared to those numbers, the Earth’s present number of human inhabitants at about 7.7 billion is a rounding error.
Bostrom doesn’t think that. He thinks that it might be true, but he isn’t especially confident in it. Thus, claiming that Bostrom believes this will happen if humanity survives the present epoch is misleading.
This is where the utilitarianism comes in. Our ethical obligation, according to longtermism, must be to those future people. In that case, altruism is “effective” because it relates to the greater good of benefiting them even at the expense of people living now.
All the criticisms of longtermism seem to follow a familiar template: claim that longtermism requires accepting some controversial claim like utilitarianism, point out that that claim is controversial, sneer, and move on. One doesn’t need to be a utilitarian to accept longtermism; at no point in my series defending longtermism did I appeal to utilitarianism. Utilitarianism is more controversial than longtermism. However, as many have noted, longtermism follows from various plausible principles.
If you think that it’s good to make happy people, then we should care a lot about the potentially vast number of future people who could have awesome lives. If you don’t, then there’s nothing good about creating a person with a happy life who lives to 40, and also nothing good about creating someone with a happier life who lives to 80. Thus, on this account, you’d have no reason to create someone who’d live a longer, healthier life rather than a shorter, more miserable one. Such a result is deeply implausible. (This is just one of many arguments for longtermism.)
Then it’s claimed that longtermists want to benefit the future at the expense of the present. This is false: longtermists tend to think that the best way to benefit the present is to reduce existential risks, which also benefits the future. Thus, the two goals converge.
This kind of moral theory, like Ayn Rand’s Objectivism of an earlier era, is popular with the billionaire class because it justifies their obscene wealth and hoarding. Unlike Rand, who saw selfishness as the great good and selflessness as an evil, dubiously redefining right and wrong, Bostrom simply redefines the economics of moral action. As long as they are doing something to ensure the technological future and the welfare of those future denizens of the cosmos, they are solidly ethical no matter what their current practices. Indeed, wealthy entrepreneurs such as Peter Thiel and Elon Musk have given money to Bostrom’s group at Oxford, perhaps in the hopes that his philosophy will justify their focus on technology at the expense of ordinary people who can’t afford their products and services or the ones they invest in.
This is, once again, false. The fact that some rich people agree with an idea doesn’t mean that the idea is just a justification for wealth. Longtermism doesn’t justify wealth; it justifies giving away wealth to reduce existential threats. Telling billionaires that they are risking the literal future when they hoard their billions isn’t an Ayn Rand-esque excuse to hoard wealth. It, along with EA broadly, is a call for dramatic action on the part of billionaires! Longtermists are explicitly worried about tech advances; many are worried about what Musk is doing to speed up AI development. The idea that longtermism is just an excuse for rich people to fund tech startups, when one of longtermists’ main concerns is future tech, is laughable and demonstrates utter ignorance of the subject.
Another practice of effective altruism is the idea that you can offset your moral failing by using money you obtain from your ventures to give to charity. Thus, rather than finding ethical ways to obtain wealth, it is morally acceptable to buy forgiveness for your sins. This dualistic attitude to good and evil, where it is simply a matter of balancing the scales, reduces atonement to a mathematical equation.
This is something that some EAs think. However, it is not a requirement of being an EA, and I don’t even know whether most EAs think it. 80,000 Hours advises against going into many unethical, well-paying industries.
Nineteenth century Russian author, Fyodor Dostoyevsky spoke to this mathematical attitude when he said,
men love abstract reasoning and neat systematization so much that they think nothing of distorting the truth, closing their eyes and ears to contrary evidence to preserve their logical constructions.
Moral arguments such as effective altruism and longtermism ignore their slippery slope which leads to the worst kinds of evil. It was precisely arguments such as these that led to the atrocities of the 20th century in Nazi Germany, Leninist-Stalinist Russia, Maoist China, and elsewhere, in the name of benefiting the most people in an imagined utopian future. It was precisely to counter such arguments that the United States constitution got its Bill of Rights. Rights are fundamentally opposed to utilitarian arguments because they guarantee an ethical obligation to a minority of people, even one person, at the expense of the majority.
Again, objections to utilitarianism are not objections to EA or longtermism. Bostrom isn’t a utilitarian, Huemer (I believe) accepts longtermism despite being a deontological anarcho-capitalist, and many longtermists aren’t utilitarians. But this is also a terrible objection to utilitarianism. The claim that utilitarian reasoning justified the worst atrocities of Nazi Germany and so on is false; it was exclusion from the moral circle that led to these atrocities. Utilitarianism explicitly includes everyone in the moral circle, which is why utilitarians were morally so far ahead of their time. As Yetter Chappell notes:
I’m not aware of any evidence suggesting that real-life dictators were actually influenced by Bentham, Mill, or other utilitarian thinkers. I don’t believe for a moment that Hitler et al. were honestly trying to impartially promote well-being, counting all people equally. And I’m sure that villains will find a way to rationalize their villainy no matter what moral philosophers might say. So I don’t think fears of a “utilitarianism → atrocities” causal pipeline have any credibility.
Additionally, the claim that legal rights are a non-utilitarian notion is false. While it’s true that utilitarians deny that we have natural, inalienable rights, there is an obvious utilitarian reason to oppose letting despots violate legal rights whenever they claim there’s a good reason to do so. There is more well-being when legal rights are protected.
Next, Andersen says:
Dostoyevsky criticized Longtermism, long before it was invented, in his masterpiece The Brothers Karamasov. One of the brothers, Ivan, argues that an innocent child should never suffer for the future harmony of the species:
If all must suffer to pay for the eternal harmony, what have children to do with it, tell me, please? It’s beyond all comprehension why they should suffer, and why they should pay for the harmony. Why should they, too, furnish material to enrich the soil for the harmony of the future? … too high a price is asked for harmony; it’s beyond our means to pay so much to enter on it. [emphasis added]
This argument came about because many Christians justified present suffering on Earth because God through Christ will make all well in some distant future. That is a caricature of Christian ethics, but it does ask an important question about how God can allow children to suffer. Setting that theological question aside, we can ask how we can allow a child to suffer for the sake of a future harmony.
But longtermists don’t justify torturing people to improve the future. That would, after all, not improve the future. What they tend to advocate is giving money to organizations focused on reducing existential risks and using careers to reduce existential risks.
If the sufferings of a single child are too high a price, what of the suffering of a billion children as climate change promises to unleash untold suffering on the world’s current and as yet unborn innocents?
There is no mathematical equation that can justify ignoring or putting off the current crisis, if only for their sake.
Once again, you don’t have to think that the future matters more than the present to be a longtermist. You just have to think it matters a lot and we should do more to protect it. Longtermists are definitely more concerned about climate change than the general population. I don’t think that climate change will cause a billion children to have “untold suffering.” But also, pointing out that other things matter doesn’t undermine the case for caring disproportionately about the future. Dostoyevsky being a deontologist doesn’t undermine the case for not being a deontologist — particularly when Andersen presents no arguments from Dostoyevsky. If the only way to avert climate change was to doom the future and prevent 10^52 future beings from ever flourishing, that would be bad.
As I pointed out elsewhere:
This is true, yet hard to see from our present location. Let’s consider past historical events to see if this is really unintuitive when considered rationally. The black plague very plausibly led to the end of feudalism. Let’s stipulate that, absent the black plague, feudalism would still be the dominant system, and the average income for the world would be 1% of what it currently is. Average lifespan would be half of what it currently is. In such a scenario, it seems obvious that the world is better because of the black plague. It wouldn’t have seemed that way to people living through it, however, because it’s hard to see from the perspective of the future.
Additionally, as I note in one of my articles defending longtermism, if we consider things from a grand timescale, the longtermist conclusion seems obvious.
If there are going to be a billion centuries better than this one, the notion that we should mostly care about this one starts to seem absurd. Much like it would be absurd to hold that the first humans’ primary moral concerns should have been their immediate offspring, it would be similarly ridiculous to hold that we should care more about the few billion people around today than the 10^52 future people.
This also provides a powerful debunking account of contrary views. Of course the current billions seem more important than the future to us today. People in the year 700 CE probably thought that the people alive at that time mattered more than the future. However, when considered impartially, from “the point of view of the universe,” this century is revealed to be obviously less important than the entire future.
Imagine a war that happens in the year 10,000 and kills 1 quadrillion people. However, after the war, society bounces back and rebuilds, and the war is just a tiny blip on the cosmic time-scale. This war would clearly be worse than a nuclear war that would happen today and kill billions of people. However, this too would be a tiny occurrence—barely worth mentioning by historians in the year 700,000—and entirely ignored in the year 5 million.
Two things are worth mentioning about this.
The future would be worthwhile overall even if this war happened. However, this war is worse than a global war that would kill a billion people now. Thus, by transitivity, maintaining the future would be worth a global war that would kill a billion people. And if it would be worth killing a billion people in a global nuclear war, it’s really, really important.
It becomes quite obvious how insignificant we are when we consider just how many centuries there could be. Much like we (correctly) recognize that events that happened to 30 people in the year 810 CE aren’t “globally” significant, we should similarly recognize that what happens to us is far less significant than the effect we have on the future. Imagine explaining the neartermist view to a child in the year 5 million—explaining how raising taxes a little bit or causing a bit of death by slowing down medicine slightly was so important that it was worth risking fifty thousand centuries of prosperity—combined with all of the value that the universe could have after the year 5 million—with billions more centuries to come!
(Just to clarify, I’m a strong longtermist, so I think the future matters much more than the present. However, you don’t have to think that to be a longtermist.)
Next, Andersen says:
If one cannot turn to some logical system such at utilitarianism, in its effective altruism package, to define morality, then what is the alternative?
By definition an illogical system! But you can be a non-utilitarian longtermist!
It all comes down to a misunderstanding of what morality is.
Psychiatrist and brain lateralization expert, Iain McGilchrist offers the analogy that morals are like colors. They are an irreducible part of the human experience. While the eye is, of course, stimulated by light of particular wavelengths, like the Zen koan of the tree falling in the forest, you cannot say that light of a particular wavelength is the full experience of color. Human consciousness must contribute to that experience as well, but colors are not a thought or something that we create. They are a fundamental, pre-representational experience, a meeting between the mind and physical reality. We don’t get to decide what they are.
If this is so, then morals, likewise, are a meeting between the ethical mind and physical reality, not something that we invent. Rather they are “written on our hearts”, driven by the perception of human action, empathy, and the awareness of suffering.
Moral theory is not about balancing scales but about acting on our perceptions — obeying truth and, by analogy, not saying something is blue when we see it is clearly red. It is denial or willful ignorance that causes the most harm because we not only hurt others, we hurt ourselves. We choose not to see what we already know because we don’t want to sacrifice our own comfort of believing that what we are doing or how we are living is wrong. The great failing of utilitarianism and its evolution in effective altruism is that they replace intrinsic, irreducible values with mathematical equations.
Well, I’m a robust realist, so I don’t think that morality is just about doing what seems right to us. If it seemed right to us to ignore those who are not part of our tribes — as has seemed right to many humans — we still shouldn’t do that. We have obligations even if we do not recognize them!
We can have values that we don’t endorse upon reflection. I intuitively sympathize more with cute animals than with ugly ones. But I shouldn’t act on this, because cute animals don’t matter any more than ugly ones. Morality isn’t just about our initial intuitions.
On top of this, even if it were about our intuitions, longtermism would still be justified. As I and others have argued, we have many intuitions that lead us necessarily to longtermism — for more on this, read the longtermism is correct series.
Moral action, also like color, is immediate to our surroundings. It is not about what is going to happen thousands of years in the future. It is about what we do today because we rarely know what is going to happen in the future or how our actions will play out for good or ill.
This is clearly false. Suppose you planted a land mine that would detonate in a thousand years. Would that be fine, because morality is “not about what is going to happen thousands of years in the future”?
The claim that morality is not about “how our actions will play out for good or ill,” is also false. If you knew with certainty that one of your actions would end the world, then that would clearly be immoral. Perhaps morality isn’t only concerned with consequences, but everyone with a brain agrees that consequences matter somewhat.
Even if we did know and all Bostrom says comes to pass, to Longtermists I ask what Ivan Karamazov asks Alyosha:
Imagine that you are creating a fabric of human destiny with the object of making men happy in the end, giving them peace and rest at last, but that it was essential and inevitable to torture to death only one tiny creature- that baby beating its breast with its fist, for instance- and to found that edifice on its unavenged tears, would you consent to be the architect on those conditions? Tell me, and tell the truth.
If a Longtermist answers yes, God help them out of their denial.
Yes, as I’ve argued here. But if you think the answer is no, you should lament the existence of the world and support ending it, for it contains much torture. This is not plausible. (This also, relevantly, doesn’t refute longtermism at all — it may be an objection to utilitarianism, but it is not, I believe, successful).
Whether you believe any of this, there is a choice between making moral decisions based on an imaginary future utopia or based on what we see in front of our faces.
And I find it bizarre that a little techno-babble and some dubious equations can resurrect a braindead moral theory. To see it, we only need to stop denying what is going on around us and recognize our obligation to this world, here and now.
It’s odd to claim that it’s imaginary without addressing any of the arguments for thinking that the future could be very good. Additionally, it’s not the equations that resurrect the moral theory. Utilitarianism is not dead, it needs no resurrection, and longtermism is not utilitarianism.
Thus, all the considerations are utterly unconvincing. A few objections are lobbed at utilitarianism, but no argument for longtermism is addressed, nor is any argument provided against the idea that we should do much more to safeguard the future. The article hasn’t proved what it intended to: it has given no one any reason to reject longtermism.
I think it'd be good to have some catchy, obviously-true phrases one can repeat that quickly address the biggest misconceptions about longtermism:
Longtermism doesn't entail utilitarianism.
Most longtermists aren't billionaires.
Ignoring tradeoffs doesn't make tradeoffs go away.
Planting bombs with delayed timers is still bad.