> I disagree with this position for reasons Richard Y Chappell has explained very persuasively.
I find Chappell’s post unpersuasive. His take seems to be: We should adopt priors that favor intuitive-to-humans hypotheses about the consequences of our actions, such as “nuclear war would be bad for total welfare”. But:
1. I’m not convinced that we should adopt such priors, as opposed to priors based on more objective-looking principles like Occam’s razor and the principle of indifference.
2. Suppose we thought that we should adopt such priors, in principle. Still, we’re boundedly rational agents, and it’s extremely unclear how we should update on our evidence/arguments (including arguments for why nuclear war could be good). The appropriate response to this is still, plausibly, severely imprecise credences.
3. Finally, suppose we could press a magic button that got rid of the possibility of nuclear war without changing anything else, and we agreed it would improve total welfare to press it. It does not follow that any particular action aimed at reducing the chance of nuclear war that is actually available to us improves total welfare, because it will have many other consequences, too. See the distinction between "outcome robustness" and "implementation robustness" here [*].
(When we talked in person, it seemed like your rejection of cluelessness had to do with rejecting incomplete preferences/comparative beliefs in general. I think this is a more interesting line than Chappell’s, though still disagree, see e.g. here [**]).
[*] https://forum.effectivealtruism.org/posts/rec3E8JKa7iZPpXfD/3-why-impartial-altruists-should-suspend-judgment-under
[**] https://forum.effectivealtruism.org/posts/NKx8sHcAyCiKT723b/should-you-go-with-your-best-guess-against-precise
I majorly disagree--we'll have to discuss this more later.
Thanks for this comment, Jesse, I think it's quite good! I'd add that on a less meta level, we should come clean about how little we know about "total welfare": we don't know what order of magnitude of sentient beings exist on Earth (not to mention in the universe), the extent to which they suffer, what their specific internal preferences are, or how their population and experiences are affected by our actions... If we're so clueless about total welfare in the present, I find it very unlikely that we can make strong claims about welfare in the future.
"But this is problematic: it implies that people 5,000 years ago were on the order of 10^64 times more important than present people."
Importance isn't a free-floating property. If someone is important, then they are important to someone. 5,000 years ago, we all existed only as a vague and very distant potential, so to our Paleolithic forebears we really were, rightly, of no importance, while their unborn great-grandchildren (whom they had firmer reason to believe would come to exist) were of some importance, and the people they actually co-existed with were of great importance.
Longtermism about climate change makes a lot of sense: we might not be around to see the full catastrophes, but at least some of the people we have good grounds to believe will come to exist will be. We know how our actions will directly impact them (our ancestors 5,000 years ago were not in that position with respect to us). AI doomer longtermism is on comparatively shakier ground. And speculation about the quadrillions of distant future space people has almost no moral bearing: we have poor grounds for thinking these people will even come to be.
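(Aside on the arithmetic behind the quoted 10^64: a figure of that size is roughly what constant exponential discounting produces over 5,000 years. The 3% annual rate below is purely an illustrative assumption, not necessarily the one the post used.)

```python
import math

discount_rate = 0.03   # assumed annual discount rate, for illustration only
years = 5_000

# Relative weight of a person 5,000 years ago versus a person today
# under constant exponential discounting.
weight_ratio = (1 + discount_rate) ** years
print(f"weight ratio ~ 10^{math.log10(weight_ratio):.0f}")   # ~ 10^64
```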
I disagree obviously -- and I find it bizarre how people assert moral anti-realism with no argument as if they are merely noting some trivial truth.
But putting that aside, I think that if we have an ethical view that implies that one person being slightly happy tomorrow is 10^60 times more important than 10,000 people being happy in 5,000 years, something has gone badly wrong!
Moral antirealism is a trivial truth
Could you point to an action that would make me slightly less happy tomorrow but would result in 10,000 people being happy in 5,000 years?
I don't think I was asserting moral anti-realism (I don't have a firm meta-ethical position myself, and realism is possible); I was talking about your use of language. "Important" means "important for someone". Misuse of language is going to lead to weird conclusions.
Small clarity quibble: "Even if you think that the odds of an existential catastrophe are only 1/1,000 in the next century, that still means existential catastrophe will on average kill around 8 million people."
Shouldn't this really be "in expectation" and not "on average"? With a binary one-off event like an existential catastrophe, I think the term "will on average" is misleading. Either it will kill everyone or it won't.
Fixed!
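(For readers checking the number: the expectation is just probability times population, assuming a present population of roughly 8 billion.)

```python
population = 8_000_000_000    # assumed current world population (~8 billion)
p_catastrophe = 1 / 1_000     # stipulated chance of existential catastrophe this century

expected_deaths = p_catastrophe * population
print(f"{expected_deaths:,.0f} deaths in expectation")   # 8,000,000
```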
> If undergoing major sacrifices—say, reducing present welfare by half—would make the life of every future person 1% better, Strong Longtermists would say that would be an improvement.
What specific plans do you have which would accomplish this tradeoff?
Like a lot of your philosophical posts, this argument centers on multiplying some extremely large number of occurrences by a very speculative very small number. This has big problems:
Variance - what if your intervention is actually negative? What a waste of effort and political capital! Without a concrete proposal this is of course hard to argue, but very tiny effects (like the ones you claim will be super-dodeca-multiplied) tend to be very noisy and uncertain to estimate.
Practicality - it's impossible to get everyone to agree to wear a mask in a pandemic, let alone suffer a huge decrease in quality of life for the sake of the far future. Whatever route we take to the future, it will involve billions of sentient beings looking out for their own interests. If you can't align them well, this is just navel gazing.
Justification of bad actors - Sam Bankman-Fried justified his outrageous financial fraud by claiming that eventually, quadrillions of future humans would benefit from his speculative investments. Elon Musk is convinced that sending humans to Mars is the most significant advancement in human history and the fact that he's attempting this (he almost certainly will not succeed, but hey, tiny probabilities are A-OK) makes him a really good guy overall. Mao justified the deaths of millions with the promise of future prosperity for billions.
Good comment. Especially "Like a lot of your philosophical posts, this argument centers on multiplying some extremely large number of occurrences by a very speculative very small number."
Longtermism is partly obvious and partly very speculative or contentious, especially when expounded by someone who thinks that the greatest priority we should have is to reduce insect populations.
If I accept Strong Longtermism, and I believe that essentially all the moral consequences of my actions are in the far future and are completely unknowable, then why does it matter what I do now? I have no knowledge of what the consequences of kicking a puppy today will be a trillion years from now, so who cares?
People only exist in the present, the perceptual present, not some observer-independent present; there is no such thing. You can't have real obligations to imaginary people. You can apply hypothetical obligations to hypothetical people, but they don't suddenly turn real.
Suppose people existed in some town called Future, where 100k people are in need, and on the way there was a smaller town called Near-Future, where one person is in need of help; if you stop on the way, you can't help the 100k. What do you choose? I think either choice is morally permissible, but that's not important.
The point is that neither of these towns exists. What exists is what exists now; we won't go to the future, and the future won't arrive at us. There's only the current world: people, one's fallible beliefs and understanding, feelings, expectations, etc. Real obligations pertain to what is existing and real.
A perfect illustration of the counterintuitive consequences of agent-independent moral considerations: we only get 10^64 for the pharaoh, and longtermism, on the assumption that the truth value of moral judgments is not at all agent-relative. If we instead assume that truth value depends partly on something like proximity, say roughly logarithmic distance in time and space, then we give neither the ludicrous amount of moral significance to the pharaoh's cake (it's incredibly far from us, after all) nor to potential people.
So saying something like "other people are significant to me partly in proportion to how close they are to me" doesn't strike me as an obviously false statement; the inferred beliefs of plenty of people seem to include this one.
I'm not saying that I personally believe the truth value of moral statements is agent-relative, but I think treating it this way significantly reduces the variance of one's decisions (e.g. if you're convinced that killing a person X will improve future people's lives by 1% with probability of almost 0 but more than 1/10^35, naive expected value says you should do so, but you probably shouldn't, given that humans are incredibly bad at dealing with extreme probabilities), so maybe you get better EV by at least partially sticking to this rule.
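(A minimal sketch of the naive expected-value comparison that parenthetical gestures at. Every number here, including the 10^45 future people, is a hypothetical placeholder chosen only to show how tiny probabilities get swamped by huge populations.)

```python
# All numbers below are hypothetical placeholders for illustration.
p_success = 1e-35          # probability the act actually helps
future_people = 1e45       # assumed number of future people
benefit_per_person = 0.01  # each future life made 1% better, in "life-units"

expected_gain = p_success * future_people * benefit_per_person   # 1e8 life-equivalents
certain_cost = 1.0                                               # one present life lost

print(expected_gain, ">", certain_cost, "?", expected_gain > certain_cost)
```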
I think what you're saying is, strictly speaking, obviously correct, but it may not be obviously correct as a guide to lay behaviour.
Another way to put it is that correctly evaluating the marginal effect on "all future people" is extremely unlikely in the absence of a plausible effect at least two generations in advance.
"It’s hard to imagine a scenario where doing research into preventing extreme future suffering via unimaginably nightmarish dystopia is a bad thing"
Hmm, I don't find it that hard to imagine. All it takes is somebody hearing about that research and then deciding that it makes causing extreme future suffering via unimaginably nightmarish dystopia sound cool, and so deciding to bring it about. Or one of those researchers trying to get an AI to help in prevention, and then the AI flipping to doing the exact opposite. Or researchers fleshing out more plausible scenarios inspiring sadists or factory farmers or something like that.
Asteroid detection would probably be a better example of something robust to cluelessness-based objections; I feel like I've seen something along those lines out there. Yes, knowing about dangerous asteroids could inspire people to try to deflect them into Earth, but that's a) extremely difficult right now and b) not that useful to people far in the future with hypothetically more advanced technology. On the other hand, if we're talking about suffering digital minds or whatever, it seems possible to create ones and make them suffer given whatever far future tech is being used to prevent their suffering.
> It would be awfully convenient if after learning that the far future has nearly all the expected value in the world, it turned out that this had no significant normative implications.
This doesn't seem like an accurate shot against cluelessness? Most clueless people I know (including me) have seen their priorities significantly affected by cluelessness about the long term.
I actually have a very rough draft post somewhere making the opposite claim: if we believe that "suspicious convergence" is a thing, we should be very skeptical of longtermism insofar as it always recommends things that are very good in the short-term (by the lights of longtermists, at least), like better institutions and no nuclear war. Given how insanely complex the impacts of historical events on total welfare must have been, it would be very surprising if all long-term priorities matched short-term comfort so well. Though I don't think this is a sufficient argument against longtermism (I tend to think most arguments for anything based on concepts like "motivated reasoning" and "suspicious convergence" are insufficient).
I find myself thinking of an argument Tom Schelling liked to make, that we would/should not wish upon our forebears less good lives to make our lives better, because our lives are already so much better than theirs were. This presents no argument about avoiding extinction-level risk to future people. But it does offer an argument that we should expect our descendants to have lives that are so much better than ours (at least materially) that we should not be willing to give up much comfort to grant them all more comfort. It is not mere discounting - I suppose it is diminishing marginal returns in economic terms. But it is also just fairness or maximin.
I would recommend reading this critique of longtermist estimates of the value of existential risk mitigation to those interested in the topic:
https://globalprioritiesinstitute.org/wp-content/uploads/David-Thorstad-Three-mistakes-in-the-moral-mathematics-of-existential-risk.pdf
The most relevant part to this post is Thorstad's claim that there's actually a pretty good chance that not that many people will exist in the future.
Curious what you think about this post:
https://forum.effectivealtruism.org/posts/ycJiZxpSKEt4SYFug/extinction-is-probably-only-10-10-times-worse-than-one
Don't you believe there are already maximally infinite people or something?
Matt, what is your opinion of cryonics? It seems like a short logical leap from longtermism making sense to cryonics making sense. But I couldn't find anything you have written about it.
The sensible objection I've seen to longtermism is that reality is complex enough, and even chaotic in the mathematical sense, that there is *no way to tell* the long-term direction of the effects of your actions. You do something to push things in one direction, and given enough time (that's decades, not centuries), it's roughly 50% that you've helped your cause, or the opposite.
Hard to be long-distanceist when your eyes only see so far, and your car is weird enough that steering in one direction is 50% likely to make it veer towards the opposite side after a while.
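(A toy illustration of the "roughly 50% either way" worry, using the chaotic logistic map as a stand-in for a complex world; this is an assumption for illustration, not a model of actual history. A tiny nudge to the initial conditions is about as likely to raise the long-run state as to lower it.)

```python
import random

def final_state(x0, steps=1_000, r=3.9):
    """Iterate the chaotic logistic map and return the state after `steps` iterations."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

random.seed(0)
trials, helped = 10_000, 0
for _ in range(trials):
    x0 = random.uniform(0.01, 0.99)
    baseline = final_state(x0)
    nudged = final_state(x0 + 1e-9)   # a tiny "intervention" in the initial conditions
    if nudged > baseline:             # did the nudge leave the long-run state higher?
        helped += 1

print(f"nudge raised the long-run state in {helped / trials:.1%} of runs")   # ~50%
```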
I think I agree with strong longtermism, but it seems plausible to me that most of the classic 'short-term' interventions have quite robust expected long-term effects, on top of having highly certain positive short-term effects (e.g. shrimp stunning in the present may take root and influence food production in future space colonies). Do you think X-risk and S-risk work beats these out?