Kieran Setiya has written a popular reply to Will MacAskill, claiming that the arguments he lays out in What We Owe The Future are “shaky.” While Setiya is generally a good philosopher and has lots of useful things to say, I think that in this piece he has dropped the ball to a significant degree. Here I’ll just address what was said about ethics; that is, after all, my forte.
Setiya first claims that MacAskill’s “outlook draws on utilitarian thinking about morality.” Now, I’ve defended utilitarianism at length and I think it’s correct. However, you don’t need to be a utilitarian to think that longtermism is true. As long as you think that well-being matters — which nearly everyone does; that’s why depression is bad, for example — then you’ll think that the unfathomable amount of future well-being is a top moral priority. Setiya seems to largely agree with this, so I won’t spend too much time belaboring the point.
An awful lot turns on the intuition of neutrality, then. MacAskill gives several arguments against it. One is about the ethics of procreation. If you are thinking of having a child, but you have a vitamin deficiency that means any child you conceive now will have a health condition—say, recurrent migraines—you should take vitamins to resolve the deficiency before you try to get pregnant. But then, MacAskill argues, “having a child cannot be a neutral matter.” The steps of his argument, a reductio ad absurdum, bear spelling out. Compare having no child with having a child who has migraines, but whose life is still worth living. “According to the intuition of neutrality,” MacAskill writes, “the world is equally good either way.” The same is true if we compare having no child with waiting to get pregnant in order to have a child who is migraine-free. From this it follows, MacAskill claims, that having a child with recurrent migraines is as good an outcome as having a child without. That’s absurd. In order to avoid this consequence, MacAskill concludes, we must reject neutrality.
But the argument is flawed. Neutrality says that having a child with a good enough life is on a par with staying childless, not that the outcome in which you have a child is equally good regardless of their well-being. Consider a frivolous analogy: being a philosopher is on a par with being a poet—neither is strictly better or worse—but it doesn’t follow that being a philosopher is equally good, regardless of the pay.
But if we accept that having a child with a very good life is just as good as not having a child, and that not having a child is just as good as having a child with a mediocre but still good life, then by transitivity, having a child with a very good life is just as good as having a child with a mediocre but still good life. If two things are each just as good as a third thing, then they must be equally good.
Denying this opens you up to being money pumped. Call Life 1 a very good life, Life 2 a good but less good life, and No Life the state in which no life is created at all.

You start with Life 1. Someone offers you 1 cent to switch to No Life. You take it, because No Life is as good as Life 1 and 1 cent is better than no cents. Then you’re offered a cent to switch to Life 2. You take that for the same reason. Then you’re offered a deal: pay 10 cents and you’ll go back to Life 1. This is clearly an improvement, since Life 1 is better than Life 2. But now you’re down 8 cents and you’re back where you started.
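To make the bookkeeping explicit, here is a minimal sketch of the money pump in Python. The trades and prices are from the story above; treating each swap as acceptable is the assumption the pump exploits.

```python
# Money pump against denying transitivity of "equally good".
# The agent treats No Life as on a par with both Life 1 and Life 2,
# so any 1-cent sweetener makes a swap look acceptable.

wallet = 0          # cents gained or lost
state = "Life 1"    # start with the very good life

# Trade 1: Life 1 -> No Life, paid 1 cent (on a par, plus a cent: accept)
wallet += 1
state = "No Life"

# Trade 2: No Life -> Life 2, paid 1 cent (on a par, plus a cent: accept)
wallet += 1
state = "Life 2"

# Trade 3: pay 10 cents to return to Life 1 (a strictly better life: accept)
wallet -= 10
state = "Life 1"

print(state, wallet)  # back where we started, but 8 cents poorer
```

The agent ends in the exact state it began in, having paid for the privilege, which is the standard diagnosis of irrational preferences.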
A striking fact about cases like the one MacAskill cites is that they are subject to a retrospective shift. If you are planning to have a child, you should wait until your vitamin deficiency is resolved. But if you don’t wait and you give birth to a child, migraines and all, you should love them and affirm their existence—not wish you had waited, so that they’d never been born. This shift explains what’s wrong with a second argument MacAskill makes against neutrality. Thinking of his nephew and two nieces, MacAskill is inclined to say that the world is “at least a little better” for their existence. “If so,” he argues, “the intuition of neutrality is wrong.” But again, the argument is flawed. Once someone is born, you should welcome their existence as a good thing. It doesn’t follow that you should have seen their coming to exist as an improvement in the world before they came into existence. Neutrality survives intact.
Well, the intuition that MacAskill has is that not only should they be loved, but that their lives are good overall. If a person lived a miserable life, you might have the intuition that it would have been better had they never been born. That doesn’t seem to be the case when we imagine a person with a good life.
If we imagine that the future could contain either extinction or utopia, it seems really unintuitive that the two would be equally good. The notion that God creating utopia would be no better than creating nothing is very surprising. Yet Setiya’s view entails this.
Finally, the neutrality intuition requires affirming a puzzling asymmetry. Presumably creating people with very bad lives is immoral. However, if so, it seems that conversely, creating people with good lives is moral. Justifications for why creating miserable people is bad would also spill over to apply to the positive case.
In rejecting neutrality, MacAskill leans toward the “total view” on which one population distribution is better than another if it has greater aggregate well-being. This is, in effect, a utilitarian approach to population ethics. The total view says that it’s always better to add an extra life, if the life is good enough. It thus supports the longtermist view of existential risks. But it also implies what is known as the Repugnant Conclusion: that you can make the world a better place by doubling the population while making people’s lives a little worse, a sequence of “improvements” that ends with an inconceivably vast population whose lives are only just worth living. Sufficient numbers make up for lower average well-being, so long as the level of well-being remains positive.
The total view doesn’t necessarily entail repugnance — see Nebel’s paper “Totalism Without Repugnance.”
Many regard the Repugnant Conclusion as a refutation of the total view. MacAskill does not. “In what was an unusual move in philosophy,” he reports, “a public statement was recently published, cosigned by twenty-nine philosophers, stating that the fact that a theory of population ethics entails the Repugnant Conclusion shouldn’t be a decisive reason to reject that theory. I was one of the cosignatories.” But you can’t outvote an objection. Imagine the worst life one could live without wishing one had never been born. Now imagine the kind of life you dream of living. For those who embrace the Repugnant Conclusion, a future in which trillions of us colonize planets so as to live the first sort of life is better than a future in which we survive on Earth in modest numbers, achieving the second.
As I argue here, and as Huemer argues, our intuitions about this are not very reliable. Additionally, there are lots of very plausible principles that require us to accept the repugnant conclusion. I lay out some of them in my article; many more can be given.
Ironically, Setiya’s view holds that the repugnant conclusion isn’t repugnant. After all, both the world where ten billion people have excellent lives and the world where vast numbers of people have lives barely worth living are just as good as no world — in both cases, they’re just as good as no one existing.
MacAskill has a final argument, drawing on work by Parfit and by economist-philosopher John Broome. “Though the Repugnant Conclusion is unintuitive,” he concedes, “it turns out that it follows from three other premises that I would regard as close to indisputable.” The details are technical, but the upshot is a paradox: the premises of the argument seem true, but the conclusion does not. As it happens, I am not convinced that the premises are compelling once we distinguish those who exist already from those who may or may not come into existence, as we did with MacAskill’s nephew and nieces. But the main thing to say is that basing one’s ethical outlook on the conclusion of a paradox is bad form. It’s a bit like concluding from the paradox of the heap—adding just one grain of sand is not enough to turn a non-heap into a heap; so, no matter how many grains we add, we can never make a heap of sand—that there are no heaps of sand. This is a far cry from MacAskill’s “simple” starting point.
The analogy goes the other way. The paradox of the heap shows that either there are no heaps, or a heap of sand can be one grain of sand removed from not being a heap. (There are other solutions, by accepting vagueness, for example, but I digress.) This shows something surprising: both propositions seem intuitively false, yet one of them must be true.
If we hold several positions and must reject one of them, we should reject the one whose rejection is least costly. If we see over and over again that one proposition causes dozens of paradoxes, then we should reject that one. This is especially true because we’d expect our intuitions to sometimes be wrong — we should expect to have to reject some deeply held intuitions.
Nor does MacAskill stop here; he goes well beyond the Repugnant Conclusion. Since it’s not just human well-being that counts, for him, he is open to the view that human extinction wouldn’t be so bad if we were replaced by another intelligent species, or a civilization of conscious AIs. What matters to the longtermist is aggregate well-being, not the survival of humanity.
Longtermists don’t have to be utilitarians — they can think that things other than well-being matter.
Nonhuman animals count, too. Though their capacity for well-being varies widely, “we could, as a very rough heuristic, weight animals’ interests by the number of neurons they have.” When we do this, “we get the conclusion that our overall views should be almost entirely driven by our views on fish.” By MacAskill’s estimate, we humans have fewer than a billion trillion neurons altogether, whereas wild fish have three billion trillion. In the total view, they matter three times as much as we do.
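MacAskill’s rough heuristic can be spelled out as a one-line calculation. The neuron figures below are the estimates quoted above; the heuristic itself (weight interests by aggregate neuron count) is his, not a settled fact.

```python
# Neuron-count heuristic for weighting animal interests (a very rough
# heuristic, per MacAskill). Figures from the text:
# humans: fewer than 1 billion trillion neurons in total;
# wild fish: about 3 billion trillion.

human_neurons = 1e21   # upper bound on total human neurons
fish_neurons = 3e21    # estimated total wild-fish neurons

# Under the heuristic, moral weight scales with aggregate neuron count.
ratio = fish_neurons / human_neurons
print(ratio)  # fish interests weigh about 3x human interests
```

This is why, on the total view plus the heuristic, our overall views end up “almost entirely driven by our views on fish.”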
This conclusion is hard to deny, as I argue here. Clearly beings don’t matter less just because they’re a different species — if it turned out that I was a different species from other humans despite having the same physical and mental abilities, I wouldn’t matter less. Thus, Bentham was right: “The question is not, Can they reason? nor, Can they talk? but, Can they suffer?”
The suffering of fish screaming in unfathomable agony, which in total amounts to three times the pain that we experience, seems to matter. That suffering is bad is near undeniable, so lots of it is very bad. This seems a little weird, but it doesn’t actually seem implausible when one suitably reflects — it’s just that we don’t think about fish very much.
Don’t worry, though. We shouldn’t put their lives before our own, since there is reason to believe their lives are terrible. “If we assess the lives of wild animals as being worse than nothing on average, which I think is plausible (though uncertain),” MacAskill writes, “we arrive at the dizzying conclusion that from the perspective of the wild animals themselves, the enormous growth and expansion of Homo sapiens has been a good thing.” That’s because human growth and expansion are sparing them from all that misery. From this perspective, the anoxic oceans of six-degree warming come as a merciful release.
I agree with this conclusion. For more on this, see Tomasik’s writing.
To his credit, MacAskill admits room for doubt, conceding that he may be wrong about the total view in population ethics. But he also has a view about what to do when you’re not sure of the moral truth: assign a probability to the truth of each moral view, “then take the action that is the best compromise between those views—the action with the highest expected value.” This raises problems of both theory and practice.
In practice, there is a threat that longtermist thinking will dominate expected value calculations in the same way as tiny risks of human extinction. If there is even a 1 percent chance of longtermism being true, and it tells us that reducing existential risks is many orders of magnitude more important than saving lives now, these numbers may swamp the prescriptions of more modest moral visions.
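The proposal, and the swamping worry, can be sketched as a tiny expected-value calculation. All credences and value magnitudes below are invented purely for illustration; only the 1 percent figure comes from the text.

```python
# Expected moral value across rival moral theories (illustrative numbers).
# The worry: even a 1% credence in longtermism can dominate the calculation
# if longtermism assigns vastly larger value to reducing existential risk.

credences = {"longtermism": 0.01, "common_sense": 0.99}

# Value each theory assigns to each action (made-up magnitudes):
values = {
    "reduce_x_risk": {"longtermism": 1e9, "common_sense": 1.0},
    "save_lives_now": {"longtermism": 10.0, "common_sense": 100.0},
}

def expected_value(action):
    """Probability-weighted value of an action across moral theories."""
    return sum(credences[t] * values[action][t] for t in credences)

best = max(values, key=expected_value)
print(best)  # the tiny longtermist credence swamps the calculation
```

Even though common-sense morality gets 99 percent of the credence, the astronomical value longtermism assigns to existential-risk reduction dictates the verdict, which is exactly the swamping Setiya describes.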
The theoretical problem is that we ought to be uncertain about this way of handling moral uncertainty. What should we do when uncertainty goes all the way down? At some point, we fall back on moral judgment and face what philosophers have called the problem of “moral luck.” What we ought to do, whatever our beliefs, is to act in accordance with the moral truth of how to act with those beliefs. There’s no way to insure ourselves against moral error—to guarantee that, while we may have made mistakes, at least we acted as we should, given what we believed. For we may be wrong about that, too.
There are profound divisions here, not just about the content of our moral obligations but about the nature of morality itself. For MacAskill, morality is a matter of detached, impersonal theorizing about the good. For others, it is a matter of principles by which we could reasonably agree to govern ourselves. For still others, it’s an expression of human nature.
I can’t do justice to MacAskill’s full research on moral uncertainty here; he has, however, written a book on it. If you think morality might be a matter of theorizing about the good, might be an expression of human nature, and so on, MacAskill gives ways to make decisions given that uncertainty.
Morality isn’t made by us—we can’t just decide on the moral truth—but it’s made for us: it rests on our common humanity, which AI cannot share.
Setiya gives no reason to think that AI couldn’t do the same type of moral reasoning we can do. We’ve seen AI gain extraordinary abilities — Setiya’s proclamations sound very much like those of early naysayers who claimed AI could never outplay humans in chess.
One caveat: if I recall correctly, MacAskill’s book on moral uncertainty doesn’t actually offer a solution to fanaticism. He notes that fanaticism is an unresolved problem for decision theories generally and more or less leaves it at that.