Nice piece, but small correction: in the XPT, the median expert had the chance of extinction by 2100 at 6%, and the median superforecaster had the chance at 1%, so 10% seems on the higher end of expert forecasts. Probably you're referring to Ord, who has x-risk at 1/6, but I think he is much higher than most forecasters.
I admire your consistency and willingness to hammer down to a final logical conclusion, even if some of those conclusions appear absurd to a lot of people. Forcing skeptical listeners to think hard why they think a logically deduced conclusion is wrong and revisit their own logic makes us all smarter in the end, whoever “wins” the argument.
Agree with the vast majority of these, so will just point out some interesting points of disagreement or confusion.
21. I think philosophers are also quite confused a lot of the time about politics. I think it can seem like philosophers are _less_ confused to other philosophers, because in philosophy the notion of "disagreeing on certain factual claims, and thus arriving to different results" is well-respected, so long as your argument works. However, many philosophers who arrive at bad political conclusions do it by having insane opinions on factual claims that are just clearly wrong. This is another way to be very confused that philosophers can have blind-spots to. (Okay, but it's still better than the average person, that's a very low bar).
22. Non-desert seems very easily defensible. I guess it conflicts with people's intuitions strongly? Seems more obvious to me than some of the obvious stuff you listed. But I agree with all of it anyway.
23. Disagree. Think it's possible/plausible though, depends on persuasiveness of certain arguments to you. I could be moved by a conversation with a smart person here though.
24. Disagree, but think it's possible/plausible. We have never encountered anything that is not physical, I see little reason to think consciousness is different.
26. I just have high uncertainty on this.
Agree with everything else.
Finally, worth noting that I strongly agree with 29! Including the potentially ironic remark about substack (but I do think substack just _is_ better, so far at least).
Pillars of this worldview are, simultaneously:
29) that God probably exists,
4/20) that animal suffering outweighs human suffering, and animals in general live net-negative lives,
11) that existential danger is real,
17) that human intuition is fundamentally a broken method of assessing morality.
God in this worldview is a being who created a world that faces existential risk, gave humanity a broken moral intuition, and gave countless animals net-negative lives. This worldview accepts not just the problem of evil as you described it, but also a creator who designed the moral instincts of human beings in such a fundamentally broken way that they would never align with true morality, a morality he is not the arbiter of anyway.
Overall, this just feels like shoving God into an atheist-shaped moral hole.
A couple of push-backs on this. The first is that unless you have some foundational access to moral truth, you have to show that the Overton window moving is actual moral progress rather than just the Overton window moving. That is: do we judge past societies because we are right and they are wrong, or because judging other societies, on the belief that our morality is universal, is itself a particular feature of our morality, one that seems to originate in certain aspects of Christianity? How do we know it's progress, and that societies in the past didn't know better than we do?

Second, I think 3 is just way too simplistic; it's either that or tautological, like saying suffering is bad because you suffer. But I might suffer going to the gym and doing something hard. There are many examples where I might have a negative experience for something contextually good, and those experiences are arguably some of the most meaningful in life. So if context can change how something 'feels' or what an experience means, then its badness isn't intrinsic but the result of a much more complex conceptual nesting of value. When you say "suffering is bad because of how it feels," you are drawing attention to an implied particular kind of experience that the statement doesn't delineate. This is easily missed because the statement seems so self-evident, but I think a lot of your conclusions suffer from being drawn from overly simplistic axioms like this.
1. I think we've made lots of progress in many domains, so it's not that surprising we've made some in the moral domain. I don't think it's that hard for a person in the modern era to think through the arguments for slavery, say, and see that they are wrong and that slavery is immoral. But from that it follows that we've made progress on the slavery issue.
3. Suffering refers to the states that really, genuinely feel bad. It doesn't refer to the states that don't feel bad but sort of resemble things that feel bad and are meaningful. But in any case, as I explain here, none of the arguments require believing anything controversial about what makes suffering bad: https://benthams.substack.com/p/were-not-the-center-of-the-moral
"Really, genuinely feel bad" just adds some more framing or adverbs but it's still the same thing. It's just saying "we know what badness is because it's bad." I submit you would have trouble finding a clear line between those examples of genuine badness and things that resemble them.
On my preferred view, the throughline is that they feel bad. On alternative accounts, the throughline is that the sufferer doesn't like them.
Does 21 preclude support for democracy? I concur with 21 and hence I am anti-democracy as a concept. I much prefer an authoritarian aristocracy.
No. Democracy might still be less bad than alternatives.
I don’t see how it coheres with position 29 to have irrational people, who hold inconsistent and unreasonable positions, making decisions that impact intelligent people as well as everyone else.
Presumably it also matters that your system actually works in practice and doesn't lead to horrible corruption after one generation at most. Democracy is still - so far - the worst system aside from all the others.
How is it in conflict with position 29? I would have said something more like position 10, were I making the same argument.
How is it in conflict with support for Longtermism? If you think Longtermism is good but people won’t support it, stop giving the people an opportunity to have a say in what is good. Problem solved.
Yeah that’s what I mean, longtermism + democracy are in conflict in this way.
Surprised you didn't bring up MacAskill's Better Futures, given your expected value maximization! Saving ourselves from existential risk is likely not enough to capture most possible value.
That's another good one! Should have included it.
From point 10:
> The far future could have way more people than are around today
I agree with this, given it’s a “could” statement and there are plenty of things that could cause a population increase. Having said that, I’m skeptical that the global population will increase steadily over the next century: in only a few countries is the birth rate higher than the replacement rate.
Interesting points!
"When smart people disagree with you, you should rarely be confident that you’re right and they’re wrong, even if you feel like they didn’t rebut your arguments adequately. They obviously feel the same way about you."
I'm curious: how do you square this with your confidence that FDT is "false"?
Is no-one going to defend incomparability? Not even on theoretical grounds... it's just clearly a feature of human psychology! If you ask me to compare two imaginary worlds with enough qualitative differences between them, but in the same general ballpark of attractiveness, my mind is going to punt on the comparison. No clear winner, and making either of the two slightly better doesn't make it a clear winner either. In other words, incomparable.
Now, you can produce some reasonable-sounding axioms, apply some logic, bring in butterfly effects, and prove that you can push the logic to generate entire oceans of incomparability that would theoretically turn me into a nihilist.
But the next day, when you present the next set of imaginary worlds, my mind is just as likely to keep punting. Look at one, look at the other, and just say, "no clear preference".
And next, when you present me some clear and simple dilemma, I'll still have a positive non-nihilistic answer. I'll still want to call mom, or whatever.
What this proves, as far as I can tell, is that your logical system is a poor model for my moral psychology.
At this point, you'd probably want to say that because logic is right, my moral psychology is wrong. But... what kind of wrong is that?
It can't be descriptively wrong, because it accurately describes what I feel. And I don't see how it can be normatively wrong either; that would require some other higher moral authority to contradict my intuitive judgments. But logic cannot provide that, because logic adds no normative power of its own, it just reworks what is already there.
Interested to hear more why you think: "If your reasoning doesn’t approximate Bayesian reasoning, then something has gone wrong."
I disagree with many of your points, but I have to say thanks for how clearly you've been arguing them for years.
I don't find the top-down, explanatory God you argue for convincing, nor the thorough-going moral realism that goes together with it. And despite good intentions and a number of good results, I'm not on board with EA either. But your writings have been clarifying the contours of a number of possible views, and outlining the best arguments for them, so I feel clearer than before about exactly what I'm disagreeing with, and why.
Agree on 4 and 29 though. Factory farming and social media both suck, and they're more alike than one would think.
Suffering is bad because it feels bad? Why does it feel bad? Because they are suffering! C’mon mah boi
A few objections:
2. "Every metric" is objectively not true, provided we make our metrics granular enough. Pollution, (aspects of) urban planning, and probably aspects of mental health have worsened recently, to name a few. Also, social media exists now.
15/21. Most people are indeed extremely irrational, but sometimes I wonder if the moral conclusions of most people tend to occasionally be better than those of analytic philosophers, for reasons that are demonstrated very well in this post. There's a lot of generational wisdom contained in the average person's opinions, even if they have no idea as to what that wisdom is or why it's there. I greatly enjoy your articles, but I don't think I've ever seen you speak about second-order effects, or how human biological and psychological biases might affect expected utility (while it's ridiculous to dismiss ideas as unworkable just because they're new, we can probably expect that almost every ethical proposition in history has failed, most for reasons that can probably be deduced; what we see are the ones that didn't).

The average person also doesn't understand economics, urban planning, or medicine deeply, but this doesn't stop unplanned economies and unplanned cities from functioning far better than those designed by subject experts. Medicine, too, used to be really, really terrible - up until very recently, seeing a highly-trained doctor was worse than not! Sometimes I try to get into the thought process of philosophers and wonder if we aren't still in the bloodletting stage of analytic philosophy, where many relevant considerations are thrown out of the window for the simple reason that we have no good way of calculating them yet. Agree that most people do have really awful politics, though.
22. Defending the need to create well-off people is actually pretty damn hard, mostly because viewing pleasure and pain as being on the same spectrum is completely insane. There is a good reason that most people don't view giving X number of already happy people handjobs (no matter how high you make X) as being equivalent to saving a person from agonizing torture; the failure to formalize this in ethical systems is probably a result of the disdain for what conclusions would invariably follow from doing so, at least in naive utilitarian ethical systems.
23. None of your arguments against God seem to hold up once you stop making the assumption that universes are supposed to be any particular way (simpler, for example) other than the only way we have ever observed universes being, which is not a sensible assumption. But it's almost a moot point, I think, because the only type of God (one with a human heaven, or human hell, or both) we are both concerned with is vanishingly unlikely compared to a hands-off creator.
I mostly agree with the rest, aside from a few that I need to think more about.
22. I feel the opposite about some of the judgements you listed. In particular, I don’t think that partiality and desert are correct, regardless of utilitarianism. They might be more intuitive, but, imo, there are very strong arguments against them. Personally, I have many more reservations about aggregation and the non-existence of rights.
Are you anti pleasure?