Discussion about this post

Jesse Clifton:

> I disagree with this position for reasons Richard Y Chappell has explained very persuasively.

I find Chappell’s post unpersuasive. His take seems to be: We should adopt priors that favor intuitive-to-humans hypotheses about the consequences of our actions, such as “nuclear war would be bad for total welfare”. But:

1. I’m not convinced that we should adopt such priors, as opposed to priors based on more objective-looking principles like Occam’s razor and the principle of indifference.

2. Suppose we thought that we should adopt such priors, in principle. Still, we’re boundedly rational agents, and it’s extremely unclear how we should update on our evidence/arguments (including arguments for why nuclear war could be good). The appropriate response to this is still, plausibly, severely imprecise credences.

3. Finally, suppose we could press a magic button that got rid of the possibility of nuclear war without changing anything else, and we agreed it would improve total welfare to press it. It does not follow that any particular action aimed at reducing the chance of nuclear war that is actually available to us improves total welfare, because it will have many other consequences, too. See the distinction between "outcome robustness" and "implementation robustness" here [*].

(When we talked in person, it seemed like your rejection of cluelessness had to do with rejecting incomplete preferences/comparative beliefs in general. I think this is a more interesting line than Chappell’s, though I still disagree; see e.g. here [**].)

[*] https://forum.effectivealtruism.org/posts/rec3E8JKa7iZPpXfD/3-why-impartial-altruists-should-suspend-judgment-under

[**] https://forum.effectivealtruism.org/posts/NKx8sHcAyCiKT723b/should-you-go-with-your-best-guess-against-precise

Pelorus:

"But this is problematic: it implies that people 5,000 years ago were on the order of 10^64 times more important than present people."

Importance isn't a free-floating property. If someone is important, then they are important to someone. 5,000 years ago we all existed only as a vague and very distant potential, so to our paleolithic forebears we really were, rightly, of no importance, while their unborn great-grandchildren (whom they had firmer reason to believe would come to exist) were of some importance, and the people they actually co-existed with were of great importance.

Longtermism about climate change makes a lot of sense: we might not be around to see the full catastrophes, but some of the people we have good grounds to believe will come to exist will be. We know how our actions will directly impact them (our ancestors 5,000 years ago were not in that position with respect to us). AI doomer longtermism is on comparatively shakier ground. And speculation about the quadrillions of distant future space people has almost no moral bearing: we have poor grounds for thinking these people will even come to be.
