Utilitarianism Wins Outright Part 29: Clearing Up Confusion
The pattern of counterintuitive implications of utilitarianism
As I’ve documented in great detail throughout the other 28 parts of this series and my response to Huemer, utilitarianism has many implications that seem unintuitive at first. This is the core of the objections to utilitarianism—at least, those that aren’t conceptually confused in some way1. This isn’t terribly surprising. Economists often remark about the unintuitiveness of economics—the truth is unlikely to be what we expect.
Of course, these implications don’t provide good evidence against utilitarianism, both because they’re eviscerated by prolonged reflection and because they’re the type of unintuitiveness we’d expect on the hypothesis that utilitarianism is correct. But one can still ask whether there’s a common pattern to utilitarianism’s divergence from our intuitions—whether they all share something in common. And they do.
One thing that they have in common is that our non-utilitarian intuitions are false. That, however, should come as no shock to anyone who has read my writings on the subject.
Yet there’s something else fundamental that they have in common that, once fully grasped, makes the “objections” to utilitarianism lose all of their force. When you can see the algorithm behind your non-utilitarian judgments, it becomes obvious that they’re off track in very similar ways, and it’s easy to explain how they all go off track. This is not a single specific error, but a cluster of related cognitive errors.
Let’s begin with an example—the organ harvesting case. A doctor can kill a patient and harvest their organs to save five people. Should they? Our intuitions generally say no.
What’s going on in our brains—what’s the reason we oppose this? Well, we know that social factors and evolution dramatically shape our moral intuitions. So, if there’s some social factor that would result in strong pressure to hold the view that the doctor shouldn’t kill the person, it’s very obvious that this would affect our intuitions. Is there such a factor?
Well, of course. A society in which people went around killing other people for the greater good would be a much worse society. We have good reason to place strong prohibitions on murder, even murder for the allegedly greater good.
Additionally, it is a practical necessity that we, as a society, accept some doing/allowing distinction. Society would collapse if we treated murder as being only a little bit bad, so it’s super important that we treat murder as very bad. But given that doing the maximally good thing all the time would be far too demanding, we can’t treat failing to do something unfathomably demanding as horrendous—equivalent to murder. Thus, we have to treat there as being some distinction between doing and allowing.
After this distinction is in place, our intuitions about organ harvesting are very obviously explainable. If killing is treated as unfathomably evil, while not saving isn’t, then killing to save will be seen as horrendous.
To see this, imagine if things were the other way. Imagine we were living in a world in which every person would kill one person per day, in an alternative multiverse segment, unless they fasted during that day. Additionally, imagine that, in this world, each person would save dozens of people per day in an alternative multiverse segment unless they took drastic action to prevent it. In this world, it seems clear that failing to save would be seen as much worse than killing, given that saving is easy, but refraining from killing is very difficult. Additionally, imagine that these people saw those they were saving and felt empathy for them. Thus, not saving someone would provoke internal emotional reactions in that world similar to those killing provokes in ours.
So what do we learn from this? Well, to state it maximally bluntly and concisely, many of our non-utilitarian intuitions are the results of social norms that we design to have good consequences, which we then take to be significant independently of their good consequences. These distinctions are never derivable from plausible first principles, never have clear delineations, and always result in ridiculous reductios. They are mere epiphenomena—an unnecessary byproduct of correct moral reasoning. We correctly see that society needs to enshrine rights as a legal concept, and then incorrectly feel an attachment to them as an intrinsic feature of morality.
When we’re taught moral norms as children, we’re instructed with rigid rules like “don’t take other people’s things.” We try to reach reflective equilibrium with those intuitions, carefully reflecting until they form coherent networks of moral beliefs. Then, later in life, we take them as the moral truth, rather than as derivative heuristics.
Let’s take another example: desert. Many people think others intrinsically deserve things. Well, there’s a clear social and evolutionary benefit to thinking that. Punishment effectively deters crime and prevents people from harming others, assuming they’re locked up. The callousness of means-ends reasoning, and the complexity of a moral calculus that, if explicitly recognized as a mere calculus, might be undermined and self-defeating, make this intuition very strong.
Unreliable emotional reactions play a major role in our moral intuitions—particularly when they’re non-consequentialist. Many of our moral intuitions don’t rely on what seem like bad states of affairs, but on what makes a person seem like a bad person. Well, if we accept that a murderer is a bad person—which we surely need to—then it’s not at all surprising that we’d have the intuition that it’s bad to kill one to save five; after all, it turns you into a bad person. Good actions surely don’t make bad people!
But this is not the only thing behind our non-utilitarian intuitions. Many of them rely on a selective lack of empathy. It’s much harder to empathize with those who can’t talk—or whom we don’t listen to. If you heard the screams of the children as they withered away from malaria, children whose lives you could’ve saved by forgoing your vacation, it would seem much more intuitive that you should forgo it.
We know that humans have a specific moral circle—a limited range of entities that they care about. It was hard enough getting slave owners to include black people in the moral circle. Yet it’s much more difficult when the people are far away and can’t talk.
Some humans are mentally very similar to some non-human animals. There are severe mental disabilities that make people roughly similarly capable to cows, pigs, or chickens. And yet we all feel like Bree matters, while it’s much harder to empathize in the same way with a cow or a pig. The reason for this is simple: pigs can’t talk or advocate for themselves, and they don’t look like us. If pigs looked like people, we almost certainly wouldn’t eat bacon.
It’s hard to empathize with future people because they can’t talk to us. If we could talk with a merely possible person who described their life to us, we’d care much more about their interests. That’s why Bostrom’s Letter from Utopia was so compelling.
The ideal morality wouldn’t use our faulty system of empathy. Instead, when evaluating the importance of an entity, we’d ask whether we’d care about that entity’s interests if we were going to become that entity. If you were slowly going to turn into a pig, you’d care what happened to you after you became a pig. The same is not true of plants.
If you’re skeptical about this theory of empathy, consider the following question: which entities are part of our moral circle? Well, everyone who is in our moral circle either can speak up for themselves or looks like someone who can advocate for themselves2. Isn’t that funny? What are the odds that the beings that ultimately matter would happen to mostly look like us—or at least like beings that can reason with us?
Up until this point, the moral circle has only expanded to those who have advocates. Yet that obviously is a morally arbitrary factor. If chickens could speak up for themselves, we almost certainly wouldn’t eat them.
There are a few small exceptions here. One of them relates to retributivism. We think that people who commit brutal crimes deserve to suffer, even if they can advocate for themselves. However, in this case, we have a blinding hatred for these people. It’s unsurprising that our moral circle wouldn’t include those who we hate with a passion. Additionally, this is easily explained by the heuristics account presented before.
But what about cases like torture vs. dust specks, or the repugnant conclusion? In these cases, we have empathy for the people harmed. But nonetheless, our intuitions go off-track. What’s going on?
Well, for one, there’s an obvious reason that we have a social norm that involves treating torture as far worse than shutting off a sports game. Society would be worse if a careless utilities worker who cut off everyone’s sports game were treated worse than Jeffrey Dahmer.
Yet another important way we go wrong involves simple mathematical errors in reasoning. As Huemer points out:
When we try to imagine a billion years, our mental state is scarcely different, if at all, from what we have when we try to imagine a million years. If promised a billion years of some pleasure, most of us would react with little, if any, more enthusiasm than we would upon being promised a million years of the same pleasure. Intellectually, we know that one is a thousand times more pleasure than the other, but our emotions and felt desires will not reflect this.
He later notes:
In many cases, we make intuitive errors when it comes to compounding very small quantities. In one study, psychologists found that people express greater willingness to use seatbelts when the lifetime risk of being injured in a traffic accident is reported to them, rather than the risk per trip (Slovic, Fischhoff, and Lichtenstein 1978). This suggests that, when the very small risk per trip is presented, people fail to appreciate how large the risk becomes when compounded over a lifetime. They may see the risk per trip as ‘negligible’, and so they neglect it, forgetting that a ‘negligible’ risk can be large when compounded many times.
For an especially dramatic illustration of the hazards of trusting quantitative intuitions, imagine that there is a very large, very thin piece of paper, one thousandth of an inch thick. The paper is folded in half, making it two thousandths of an inch thick. Then it is folded in half again, making it four thousandths of an inch thick. And so on. The folding continues until the paper has been folded in half fifty times. About how thick would the resulting paper be? Most people will estimate that the answer is something less than a hundred feet. The actual answer is about 18 million miles.15
For a case closer to our present concern, consider the common intuition that a single death is worse than any number of mild headaches. If this view is correct, it seems that a single death must also be worse than any amount of inconvenience. As Norcross observes, this suggests that we should greatly lower the national speed limit, since doing so would save some number of lives, with (only) a great cost in convenience.16 Yet few support drastically lowering the speed limit. Indeed, one could imagine a great many changes in our society that would save at least one life at some cost in convenience, entertainment, or other similarly ‘minor’ values. The result of implementing all of these changes would be a society that few if any would want to live in, in which nearly all of life’s pleasures had been drained.
In all of these cases, we find a tendency to underestimate the effect of compounding a small quantity. Of particular interest is our failure to appreciate how a very small value, when compounded many times, can become a great value. The thought that no amount of headache-relief would be worth a human life is an extreme instance of this mistake—as is the thought that no number of low-utility lives would be worth as much as a million high-utility lives.
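To make the quoted arithmetic concrete, here is a quick back-of-the-envelope check (my own numbers, not Huemer’s presentation; the per-trip risk and lifetime trip count below are illustrative assumptions, not figures from the Slovic et al. study):

\[
2^{50} \times 10^{-3}\,\text{in} \approx 1.13 \times 10^{12}\,\text{in} \approx 9.4 \times 10^{10}\,\text{ft} \approx 1.8 \times 10^{7}\,\text{miles},
\]

which matches the roughly 18 million miles in the quote. A “negligible” per-trip risk compounds the same way: an assumed 1-in-100,000 injury risk per trip over an assumed 40,000 lifetime trips gives

\[
1 - \left(1 - 10^{-5}\right)^{40{,}000} \approx 1 - e^{-0.4} \approx 0.33,
\]

i.e. roughly a one-in-three lifetime chance.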
Thus, our intuitions about very big numbers are incredibly unreliable. When it comes to the utility monster, for example, we literally can’t imagine what it’s like to be a utility monster. When comparing a 1 in 1 billion chance of 100 quadrillion utility to a certainty of 10 utility, it’s obvious why we’d prefer the certain 10, even though the gamble has a vastly higher expected value. Given our inability to do precise intuitive calculations with such numbers, we just round the 1 in a billion chance to zero.
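For concreteness, here is the expected-value arithmetic behind that example, taking the stated utilities at face value:

\[
\underbrace{10^{-9}}_{\text{1 in 1 billion}} \times \underbrace{10^{17}}_{\text{100 quadrillion utility}} = 10^{8} \quad \text{vs.} \quad 10 \ \text{(the certain option)},
\]

so the gamble is worth ten million times more in expectation, yet our intuitions round the one-in-a-billion chance down to zero.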
Similarly, as Yetter-Chappell points out, there’s lots of status quo bias. A major reason why the repugnant conclusion seems repugnant, a world where the utility monster eats everyone seems wrong, and it seems wrong to push the guy off the bridge in the trolley problem is that each of these would deviate from the status quo. If our intuitions favor the status quo, then it’s no surprise that utilitarianism, which gives the status quo no special weight, would clash with them. Our aversion to loss also explains why we want to keep things similar to how they currently are.
The repugnant conclusion seems pretty unintuitive when we’re comparing 10 billion people with awesome lives to 100^100 people with lives barely worth living. However, if we compare 1 person with an awesome life to 100,000 people with lives barely worth living, the claim that the one person matters more seems much less intuitive. This shows that our judgments are dramatically shaped by status quo bias—when the number of people in the better-off world is similar to the current population, the conclusion seems much more unintuitive.
Another related bias is the egocentric bias. Much of morality comes from empathy—imagining ourselves in the other person’s position. However, it’s nearly impossible to imagine being in the position of many people. We may have a bias against caring about harms that befall people through their own immense stupidity, because we think we’d never be that stupid. Similarly, we may be biased against future humans because it’s hard to imagine being a person far in the future, and the same goes for non-human animals. People are also better at remembering things that may affect them in the future.
When we think about the repugnant conclusion, given the difficulty of imagining not existing, it’s very easy to instead ask which world we’d rather exist in. Our quality of life is higher in the world with 10 billion people, so it’s unsurprising that we’d rather exist in that world, as Huemer notes.
Similarly, people care much more about preventing harms from foes with faces than from foes without faces. So in the organ harvesting case, for example, the one murder has a foe with a face (namely, you, the one doing the killing), while the five deaths have no one behind them, so it’s entirely clear why the one murder begins to seem worse than the five deaths.
This is clearly a non-exhaustive list of cases in which our non-utilitarian judgments go awry. But once you feel the way the algorithm goes awry from the inside, the judgments fade.
The “can’t measure utility” objection, for example, is just confused—it doesn’t even rise to the level of being supported by bad intuitions.
There are a few tiny exceptions, which I’ll get to in a moment.