Longtermism Is Correct Part 1
Beginning a series of articles defending longtermism--hopefully this shall be a longterm project :)
Longtermism is, in my view, the most important project currently being undertaken by present people. This article series shall defend that view. A reasonable definition is the following.
'Longtermism' is the view that positively influencing the long-term future is a key moral priority of our time.
If we think that the well-being of future people matters, we should be longtermists.
Stage One Of The Argument
In a previous article, I presented a case for this position.
Here is that argument in full. (Note, I’ll use utility, happiness, and well-being interchangeably and disutility, suffering, and unpleasantness interchangeably).
1 It is morally better to create a person with 100 units of utility than one with 50. This seems obvious—pressing a button that would make your child only half as happy would be morally bad. If some environmental policy would halve future well-being, even if it would change the distribution of future people, it would be bad.
2 Creating a person with 150 units of utility and 40 units of suffering is better than creating a person with 100 units of utility and no suffering. After all, the one person is better off and no one is worse off. This follows from a few steps.
A) The expected value of creating a person with utility of 100 is greater than of creating one with utility zero.
B) The expected value of creating one person with utility zero is the same as the expected value of creating no one. One with utility zero has no value or disvalue to their life—they have no valenced mental states.
C) Thus, the expected value of creating a person with utility of 100 is positive.
D) If you are going to create a person with a utility of 100, it is good to increase their utility by 50 at the cost of 40 units of suffering. After all, 1 unit of suffering’s badness is equal to the goodness of 1 unit of utility, so they are made better off. They would rationally prefer 150 units of utility and 40 units of suffering to 100 units of utility and no suffering.
E) If one action is good and another action is good given the first action, then the conjunction of those actions is good.
These are sufficient to prove the conclusion. After all, C shows that creating a person with a utility of 100 is good, D shows that creating a person with utility of 150 and 40 units of suffering is better than that, so from E, creating a person with utility of 150 and 40 units of suffering is good.
This broadly establishes the logic for caring overwhelmingly about the future. If creating a person with any positive utility and no negative utility is good, and increasing their utility and disutility by any amounts such that their utility increases more than their disutility does is also good, then you should create a person whenever their net utility is positive. This shows that creating a person with 50 units of utility and 49.9 units of disutility would be good. After all, creating a person with 0.05 units of utility would be good, and increasing their utility by 49.95 at the cost of 49.9 units of disutility would also be good, so creating a person with 50 units of utility and 49.9 units of disutility is good. Thus, the moral value of increasing the utility of a future person by N is greater than the moral disvalue of increasing the disutility of a future person by any amount less than N.
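To make the arithmetic of these steps explicit, here is a minimal sketch in Python (the figures are just the illustrative numbers from the example above, and the one-for-one exchange rate between utility and suffering is the stipulation from step D):

```python
# A minimal sketch of the stepwise argument, using the example numbers above.
# Stipulation from step D: one unit of suffering exactly offsets one unit of utility.

def net_welfare(utility, disutility):
    """Net welfare of a created person under the one-for-one offset stipulation."""
    return utility - disutility

no_one = 0.0                                      # creating no one has zero value (step B)
baseline = net_welfare(0.05, 0.0)                 # person with 0.05 utility and no suffering
improved = net_welfare(0.05 + 49.95, 0.0 + 49.9)  # add 49.95 utility at a cost of 49.9 suffering

assert baseline > no_one    # creating the baseline person is good (steps A-C)
assert improved > baseline  # the increase leaves them better off (step D)
# So, by the conjunction principle (E), creating a person with 50 units of
# utility and 49.9 units of disutility is good overall.
```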
Now, let’s add one more stipulation. The moral value of causing M units of suffering to a current person is equal to that of causing a future person M units of suffering. This is very intuitive. When someone lives shouldn’t affect how bad it is to make them suffer. If you could either torture a current person or a future person, it wouldn’t be better to torture the future person merely in virtue of the date of their birth. Landmines don’t get less bad the longer they’re in the ground.
However, even if you reject this, the argument still goes through as long as you accept a more modest principle: namely, that the suffering of future people matters at least a little bit (say, at least 0.001% as much) as the suffering of current people. As we will show, there could be so many future people that these considerations will dominate if we care about future people at all.
From these premises we can get our proof that we should care overwhelmingly about the future. We’ve established that increasing the utility of a future person by N is better than preventing any amount of disutility for future people less than N, and that preventing M units of disutility for future people is just as good as preventing M units of disutility for current people. This means that increasing the utility of a future person by N is better than preventing any amount of disutility for current people less than N. Thus, by transitivity, bringing about a future person with a utility of 50 is morally better than preventing a current person from enduring 48 units of suffering.
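Schematically, the chain of comparisons looks like this (a rough formalization of the paragraph above; the value function V(·) is just shorthand I'm introducing, not notation from the original argument):

$$V(\text{give a future person } {+}N \text{ utility}) > V(\text{prevent } M \text{ disutility for a future person}) \quad \text{for any } M < N$$

$$V(\text{prevent } M \text{ disutility for a future person}) = V(\text{prevent } M \text{ disutility for a current person})$$

$$\therefore\ V(\text{give a future person } {+}N \text{ utility}) > V(\text{prevent } M \text{ disutility for a current person}) \quad \text{for any } M < N$$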
The common refrain that it’s only good to make people happy, not to make happy people, is false. This can be shown in two ways.
It would imply that creating a person with a great life would be morally equal to creating a person with a mediocre life. This would imply that if given the choice between bringing about a future person with utility of 5 and no suffering or a future person with utility of 50,000 and no suffering, one should flip a coin.
It would say that we shouldn’t care about the impacts of our climate actions on future people. For people who haven’t been born yet, climate action will certainly change whether or not they exist. If a climate action changes when people have sex by even one second, it will change the identities of the future people who will exist. Thus, when we take climate action that will help the future, we don’t make the future people better off. Instead, we bring about different future people who will be better off than the alternative batch of future people would have been. If we shouldn’t care about making happy people, then there’s no reason to take climate action for the sake of the future.
Stage Two Of The Argument
This logic has, so far, established that we ought to care a lot about bringing about lots of future people who will live good lives. However, we have not yet established either of the following:
We can affect future people a lot.
There could be lots of future people.
The first is relatively easy to show. There is a reasonably high chance that existential threats will end humanity in the next century. To quote my earlier article:
“Additionally, even if one is a neartermist, they should still be largely on board with reducing existential threats. Ord argues in his book that the risk of extinction is 1 in 6, Bostrom concludes risks are above 25%, Leslie concludes they’re 30%, and Rees says they’re 50%. Even if they’re only 1% (a dramatic underestimate), that still means existential risks will in expectation kill 79 million people, many times more than the Holocaust. Thus, existential risks are still terrible if one is a neartermist, so Torres should still be on board with the project.”
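To unpack the 79 million figure in that quote: it is just expected-value arithmetic, assuming a world population of roughly 7.9 billion (the approximate figure around the time of writing):

$$0.01 \times 7.9 \times 10^{9} \approx 7.9 \times 10^{7} = 79 \text{ million deaths in expectation}$$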
There are lots of ways to reduce these existential risks. Examples include research into AI alignment to reduce AI risk, working in biosecurity, and a variety of actions to reduce nuclear risk. Some plausible goals include:
Reducing arsenal sizes.
Removing particular destabilising weapons (or preventing their construction), such as nuclear-armed cruise missiles.
Committing to "No first use" policies.
Committing to not target communications networks, cities, or nuclear power stations.
Preventing proliferation of nuclear weapons or materials to additional countries.
Reducing stockpiles of fissile material.
Improving relations between nuclear powers.
There are lots of other options listed here. You can donate here to help reduce existential risks and take other actions to improve the long term future. Lots of smart people spending money and using their careers to try to prevent extinction and improve the future in other ways can plausibly reduce existential risks and improve the quality of the future.
How good could the future be? Very, very good. As I’ve argued here:
the scenario with the highest expected value could have truly immense expected value.
1) Number of people. The future could have lots of people. Bostrom calculated that, under reasonable assumptions, there could be 10^52 future people. This is such a vast number that even a 1 in 10 billion chance of there being an excellent future yields, in expectation, 10^42 people in the future.
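Spelled out, that expected-value calculation is simply:

$$10^{-10} \times 10^{52} = 10^{42} \text{ expected future people}$$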
Additionally, it seems like there's an even smaller, but far from zero, probability that it would be possible to bring about vastly greater numbers of sentient beings living very good lives. There are several reasons to think this.
1) Metaculus says as of 2/25/22 that the odds that the universe will end are about 85%. Even if we think that this is a major underestimate and the true figure is 99%, that still leaves a 1% chance of survival, so it seems eminently possible for us to have a civilization that survives either forever or nearly forever.
2) Given the large number of unsolved problems in physics, the correct model could be very different from what we believe.
3) Given our lack of understanding of consciousness, it's possible that there's a way to infinitely preserve consciousness.
4) As Inslor says on Metaculus "I personally subscribe to Everett Many Worlds interpretation of QM and it seems to me possible that one branch can result in infinitely many downstream branches with infinity many possible computations. But my confidence about that is basically none."
5) Predictions in general have a fairly poor track record. Claims of alleged certainty are wrong about 1/8th of the time. We thus can't be very confident about such matters relating to how we can affect the universe 10 billion years from now.
Sandberg and Manheim argue against this, writing "This criticism cannot be refuted, but there are two reasons to be at least somewhat skeptical. First, scientific progress is not typically revisionist, but rather aggregative. Even the scientific revolutions of Newton, then Einstein, did not eliminate gravity, but rather explained it further. While we should regard the scientific input to our argument as tentative, the fallibility argument merely shows that science will likely change. It does not show that it will change in the direction of allowing infinite storage."
It's not clear that this is quite right. Modern scientific theories have persuasively argued against previous notions of time, causality, substance dualism, and much else. Additionally, whether something is aggregative or revisionist seems like an ill-defined category; theories may have some aggregative components and other revisionist ones. Moreover, there might be interesting undiscovered laws of physics that allow us to do extra things that we currently can't.
While it's unlikely that we'll be able to go faster than light or open up wormholes, it's certainly far from impossible. And this is just one mechanism by which the survival of sentient beings could advance past the horizon imagined by Sandberg and Manheim. The inability of cavemen to predict what would go on in modern society should leave us deeply skeptical of claims relating to the possibilities of civilizations hundreds of millions of years down the line.
Sandberg and Manheim add, "Second, past results in physics have increasingly found strict bounds on the range of physical phenomena rather than unbounding them. Classical mechanics allow for far more forms of dynamics than relativistic mechanics, and quantum mechanics strongly constrain what can be known and manipulated on small scales." This is largely true, though not entirely: as the examples above suggest, more modern physics has also overturned earlier notions and could yet reveal laws that let us do things we currently take to be impossible.
Sandberg and Manheim finish, writing "While all of these arguments in defense of physics are strong evidence that it is correct, it is reasonable to assign a very small but non-zero value to the possibility that the laws of physics allow for infinities. In that case, any claimed infinities based on a claim of incorrect physics can only provide conditional infinities. And those conditional infinities may be irrelevant to our decisionmaking, for various reasons."
I'd generally agree with this assessment. I'd currently give about 6% credence to its being theoretically possible for a civilization to last forever. However, the upside is literally infinite, so even small probabilities matter a great deal.
One might be worried about the possibility of dealing with infinities. This is a legitimate worry. However, rather than thinking of it as infinity, for now we can just treat it as some unimaginably big number (say, Graham's number). This avoids the paradoxes of infinite value and is justified so long as we rightly think that an infinity of bliss is at least as good as Graham's number of years of bliss.
One might additionally worry that the odds are sufficiently low that this potential scenario can be ignored. This is, however, false, as can be shown with a very plausible principle called The Level Up Principle:
Let N be a number of years of good life.
Let M be a different number of years of good life, where M < N.
Let P be a probability that's less than 100%.
The principle states the following: for any state of the world with M, there are some values of P and N for which a probability P of getting N is overall better than certainty of M.
This is a very plausible principle.
Suppose that M is 10 trillion. For this principle to be true there would have to be some much greater amount of years of happy life for which a 99.999999999999999999% chance of it being realized is more choice worthy than certainty of 10 trillion years of happy life. This is obviously true. A 99.9999999999999999999999999999999999999999999999999999999999999999999999999999999% chance of 10^100^100^100 years of happy life is more choice worthy than certainty of 10 trillion years of happy life. However, if we take this principle seriously then we find that chances of infinity or inconceivably large numbers of years of happy life dominate all else. If we accept transitivity (as we should), then we would conclude that each state of the world has a slightly less probable state of the world that's more desirable because the number of years of good life is sufficiently greater. This would mean that we can keep diminishing the probability of the event, but increasing the number of years of good life, until we get to a low probability of some vast number of years of good life (say Graham's number) being better than a higher probability of trillions of years of happy life. This conclusion also follows straightforwardly if we shut up and multiply.
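To see what "shutting up and multiplying" yields here, a minimal expected-value sketch in Python (illustrative numbers only: Graham's number and 10^100^100^100 are far too large to write out, so 10^100 stands in for "some vast number of years"; the helper expected_years is mine):

```python
# Expected-value comparison behind the Level Up Principle, with illustrative numbers.
from fractions import Fraction

def expected_years(probability, years_of_good_life):
    """Expected years of good life from a gamble."""
    return probability * years_of_good_life

certain_option = expected_years(Fraction(1), 10**13)          # certainty of 10 trillion years
long_shot      = expected_years(Fraction(1, 10**9), 10**100)  # 1-in-a-billion shot at 10^100 years

# Even at one-in-a-billion odds, the long shot's expected value dwarfs
# certainty of 10 trillion years of good life.
assert long_shot > certain_option
print(float(long_shot / certain_option))  # roughly 1e78
```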
Other reasons to think that the far future could be very good.
2) The possibility of truly excellent states of consciousness.
We currently don't have a very well worked out theory of consciousness. There are lots of different scientific and philosophical views about consciousness. However, there are good reasons to be optimistic about the possibility of super desirable consciousness.
The immense malleability of consciousness. Our experiences are so strange and varied that it seems like conscious experience can take a wide range of forms. One would be a priori surprised to find that experiences as horrific as brutal torture, as good as certain pleasurable experiences, or as strange and captivating as those people have when taking psychedelic drugs are able to actually exist in the real world. All of these are extremely strange contours of conscious experience, showing that consciousness is at least very malleable. Additionally, all of these experiences were produced by the blind process of Darwinian evolution, meaning that the true possibilities of conscious experience opened up by AIs optimizing for good experiences are far beyond those which randomly emerged.
The fact that these experiences have emerged despite our relatively limited computational capacities. Consciousness probably has something to do with mental computation. The human brain is a relatively inefficient computational device. However, despite that, we can have very vivid experiences--ones that are extremely horrific. The experiences of being fried to death in an iron bull, being beaten to death, and many others discussed here, show that even with our fairly limited computational abilities, we have the ability to undergo intensely vivid experiences. It seems like it should be possible to--with far more advanced computation--create positive experiences with hedonic value that far surpasses even the most horrific of current experiences. We don't have good reason to believe that there's some computational asymmetry that makes it more difficult to produce immensely positive experiences than immensely negative experiences. Darwinian evolution provides a perfectly adequate account of why the worst experiences are far more horrific than the best experiences are good, based on their impact on our survival: dying in a fire hampers passing on one's genes more than having sex one time enables passing them on. This means that the current asymmetry between the best and worst experiences shouldn't lead us to conclude that there's some fundamental computational difference between the resources needed to produce very good experiences and the resources needed to produce very bad experiences.
Based on the reasons given here, including people's descriptions of intense feelings of pleasure and the evidence that pain and pleasure fall on a roughly logarithmic scale, it seems possible to create states of unfathomable bliss even with very limited human minds.
Even if we did have reason to think there was a computational asymmetry, there's no reason to think that the computational asymmetry is immense. No doubt the most intense pleasures for humans can be far better than the most horrific suffering is for insects.
I'd thus have about 93% credence that, if digital consciousness were possible, it would be possible to create pleasure that's more intense than the most horrific instances of suffering are bad. Thus, the value of a future utopia could be roughly as great as the disvalue of a dystopia would be bad. This gives us good reason to think that the far future could have immense value if there is successful digital sentience.
This all relies on the possibility of digital sentience. I have about 92% confidence in the possibility of digital sentience, for the following reasons.
1 The reason described in this article, "Imagine that you develop a brain disease like Alzheimer’s, but that a cutting-edge treatment has been developed. Doctors replace the damaged neurons in your brain with computer chips that are functionally identical to healthy neurons. After your first treatment that replaces just a few thousand neurons, you feel no different. As your condition deteriorates, the treatments proceed and, eventually, the final biological neuron in your brain is replaced. Still, you feel, think, and act exactly as you did before. It seems that you are as sentient as you were before. Your friends and family would probably still care about you, even though your brain is now entirely artificial.[1]
This thought experiment suggests that artificial sentience (AS) is possible[2] and that artificial entities, at least those as sophisticated as humans, could warrant moral consideration. Many scholars seem to agree.[3]"
2 Given that humans are conscious, unless one thinks that consciousness depends on arbitrary biological facts about the fleshy stuff in the brain, it should be possible, at least in theory, to make computers that are conscious. It would be parochial to assume that the possibility of being sentient merely relates to the specific biological lineage that led to our emergence, rather than to more fundamental computational features of consciousness.
3 Consider the following argument, roughly given by Eliezer Yudkowsky in this debate.
P1 Consciousness exerts a causally efficacious influence on information processing.
P2 If consciousness exerts a causally efficacious influence on information processing, copying human information processing would give rise to digital consciousness.
P3 It is possible to copy human information processing through digital neurons.
Therefore, it is possible to generate digital consciousness. All of the premises seem true.
P1 is supported here.
P2 is trivial.
P3 just states that there are digital neurons, which there are.

To the extent that we think that there are extreme tail ends to both the quality of experiences and the number of future people, this gives us good reason to expect the tail-end scenarios for the long-term future to dominate other considerations.
The inverse of these considerations obviously applies to a dystopia.
In later parts, we’ll explore more reasons for accepting the longtermist thesis.