We Should Expect Our Intuitions to Misfire in Cases Where Heuristics Fail
Utilitarians who claim this aren’t just trying to worm out of uncomfortable scenarios—we should expect our intuitions to function the way these utilitarians claim they do.
There are lots of cases where utilitarianism diverges from our initial intuitions. Many of these cases involve scenarios wherein one can take some act that is nearly always wrong—say torture—and then stipulate a bunch of extra features that make the torture have good consequences. Then, the critic of utilitarianism will hold up the case triumphantly—and metaphorically—and declare “haha, utilitarianism has crazy, unintuitive implications. It’s even okay with torture. Checkmate!”
Here’s one example: a person can torture another, but they’d get so much pleasure from it that the pleasure would outweigh the suffering caused, even taking into account second- and third-order effects. Utilitarianism implies that the person has most reason to torture, and many people find that deeply unintuitive.
But we should expect our intuitions to misfire in a lot of these cases. Consider this chess position—what do you think is the best move?
It turns out that it is Qh6+, a queen sacrifice that forces mate. If Black takes with the king, Rh8 is mate; if Black takes with the pawn, Rxf7 is mate.
This was the dramatic end to the 2016 World Chess Championship tiebreak between Karjakin and Carlsen. When Carlsen sacrificed the queen, Karjakin resigned immediately.
Now, this is clearly the best move. But it’s far from obvious at first glance. The reason is that our heuristics guide us a lot in chess: it’s generally a bad idea to sacrifice your queen, so we don’t even consider the possibility that it might be a good idea. Analogously, we should expect our moral intuitions to misfire in cases where we have a moral heuristic that’s almost always reliable—like ‘don’t torture’—and then encounter some weird case where it fails.
Sunstein has a brilliant paper on this, “Moral Heuristics,” showing how prevalent such heuristics are. The abstract says:
With respect to questions of fact, people use heuristics – mental short-cuts, or rules of thumb, that generally work well, but that also lead to systematic errors. People use moral heuristics too – moral short-cuts, or rules of thumb, that lead to mistaken and even absurd moral judgments. These judgments are highly relevant not only to morality, but to law and politics as well. Examples are given from a number of domains, including risk regulation, punishment, reproduction and sexuality, and the act/omission distinction. In all of these contexts, rapid, intuitive judgments make a great deal of sense, but sometimes produce moral mistakes that are replicated in law and policy. One implication is that moral assessments ought not to be made by appealing to intuitions about exotic cases and problems; those intuitions are particularly unlikely to be reliable. Another implication is that some deeply held moral judgments are unsound if they are products of moral heuristics. The idea of error-prone heuristics is especially controversial in the moral domain, where agreement on the correct answer may be hard to elicit; but in many contexts, heuristics are at work and they do real damage. Moral framing effects, including those in the context of obligations to future generations, are also discussed.
He goes on to say:
The classic work on heuristics and biases deals not with moral questions but with issues of fact. In answering hard factual questions, those who lack accurate information use simple rules of thumb. How many words, in four pages of a novel, will have “ing” as the last three letters? How many words, in the same four pages, will have “n” as the second-to-last letter? Most people will give a higher number in response to the first question than in response to the second (Tversky & Kahneman 1984) – even though a moment’s reflection shows that this is a mistake. People err because they use an identifiable heuristic – the availability heuristic – to answer difficult questions about probability. When people use this heuristic, they answer a question of probability by asking whether examples come readily to mind. How likely is a flood, an airplane crash, a traffic jam, a terrorist attack, or a disaster at a nuclear power plant? Lacking statistical knowledge, people try to think of illustrations. For those without statistical knowledge, it is far from irrational to use the availability heuristic; the problem is that this heuristic can lead to serious errors of fact, in the form of excessive fear of small risks and neglect of large ones.
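It’s worth pausing on the “ing” example, because the mistake can be checked mechanically: every word ending in “ing” necessarily has “n” as its second-to-last letter, so the first class is a subset of the second and can never be larger. Here is a minimal Python sketch (my own illustration, not from Sunstein’s paper; the sample text is arbitrary) that makes the subset relation concrete:

```python
import re

# Arbitrary sample text; any text whatsoever yields the same inequality.
sample = """Morning came and the birds were singing; nothing in the garden
was stirring except a lone wren hunting along the fence."""

words = re.findall(r"[a-z]+", sample.lower())

ends_in_ing = [w for w in words if w.endswith("ing")]
n_second_to_last = [w for w in words if len(w) >= 2 and w[-2] == "n"]

print(len(ends_in_ing), len(n_second_to_last))  # -> 5 8 for this sample

# Every "ing" word appears in the second list, so the first count can
# never exceed the second; yet availability pushes intuition the other way.
assert set(ends_in_ing) <= set(n_second_to_last)
```

Sunstein continues: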
Or consider the representativeness heuristic, in accordance with which judgments of probability are influenced by assessments of resemblance (the extent to which A “looks like” B). The representativeness heuristic is famously exemplified by people’s answers to questions about the likely career of a hypothetical woman named Linda, described as follows: “Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice and also participated in antinuclear demonstrations” (see Kahneman & Frederick 2002; Mellers et al. 2001). People were asked to rank, in order of probability, eight possible futures for Linda. Six of these were fillers (such as psychiatric social worker, elementary school teacher); the two crucial ones were “bank teller” and “bank teller and active in the feminist movement.” More people said that Linda was less likely to be a bank teller than to be a bank teller and active in the feminist movement. This is an obvious mistake, a conjunction error, in which characteristics A and B are thought to be more likely than characteristic A alone. The error stems from the representativeness heuristic: Linda’s description seems to match “bank teller and active in the feminist movement” far better than “bank teller.” In an illuminating reflection on the example, Stephen Jay Gould observed that “I know [the right answer], yet a little homunculus in my head continues to jump up and down, shouting at me – ‘but she can’t just be a bank teller; read the description’” (Gould 1991, p. 469). Because Gould’s homunculus is especially inclined to squawk in the moral domain, I shall return to him on several occasions
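The conjunction rule that Gould’s homunculus wants to violate is just set inclusion: the people who are both bank tellers and feminists are a subset of the bank tellers, so the conjunction can never be more probable. A quick sketch (my own toy simulation; the 5% and 30% base rates are arbitrary assumptions) shows that the inequality cannot fail, no matter how the population is composed:

```python
import random

random.seed(0)

# Toy population with arbitrary, independent base rates (an assumption made
# purely for illustration; any joint distribution gives the same result).
population = [
    {"teller": random.random() < 0.05, "feminist": random.random() < 0.30}
    for _ in range(100_000)
]

p_teller = sum(p["teller"] for p in population) / len(population)
p_both = sum(p["teller"] and p["feminist"] for p in population) / len(population)

print(f"P(teller)            = {p_teller:.4f}")
print(f"P(teller & feminist) = {p_both:.4f}")

# Holds by set inclusion: the conjunction picks out a subset of the tellers.
assert p_both <= p_teller
```

No description of Linda can change this; it can only change how representative the conjunction feels.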
Providing more evidence for this, Sunstein writes:
In a finding closely related to their work on heuristics, Kahneman and Tversky find “moral framing” in the context of what has become known as “the Asian disease problem” (Kahneman & Tversky 1984). Framing effects do not involve heuristics, but because they raise obvious questions about the rationality of moral intuitions, they provide a valuable backdrop. Here is the first component of the problem:
Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences are as follows:
If Program A is adopted, 200 people will be saved.
If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved. Which of the two programs would you favor?

Most people choose Program A. But now consider the second component of the problem, in which the same situation is given, but followed by this description of the alternative programs:
If Program C is adopted, 400 people will die.
If Program D is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die.
Most people choose Program D. But a moment’s reflection should be sufficient to show that Program A and Program C are identical, and so too for Program B and Program D. These are merely different descriptions of the same programs. The purely semantic shift in framing is sufficient to produce different outcomes. Apparently, people’s moral judgments about appropriate programs depend on whether the results are described in terms of “lives saved” or in terms of “lives lost.” What accounts for the difference? The most sensible answer begins with the fact that human beings are pervasively averse to losses (hence the robust cognitive finding of loss aversion, Tversky & Kahneman 1991). With respect to either self-interested gambles or fundamental moral judgments, loss aversion plays a large role in people’s decisions. But what counts as a gain or a loss depends on the baseline from which measurements are made. Purely semantic reframing can alter the baseline and hence alter moral intuitions (for many examples involving fairness, see Kahneman et al. 1986).
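To see the equivalence Sunstein describes, it helps to do the arithmetic explicitly. Here is a short sketch (my own transcription of the four gambles from the quote above) computing expected lives saved and expected deaths for each program:

```python
TOTAL = 600  # people at risk

def expected(outcomes):
    """outcomes: list of (probability, lives_saved) pairs."""
    saved = sum(p * s for p, s in outcomes)
    return saved, TOTAL - saved  # (expected saved, expected deaths)

programs = {
    "A (200 saved for sure)":            [(1.0, 200)],
    "B (1/3 all saved, 2/3 none saved)": [(1/3, 600), (2/3, 0)],
    "C (400 die for sure)":              [(1.0, 200)],  # 400 dying = 200 saved
    "D (1/3 none die, 2/3 all die)":     [(1/3, 600), (2/3, 0)],
}

for name, outcomes in programs.items():
    saved, die = expected(outcomes)
    print(f"{name}: {saved:.0f} saved, {die:.0f} die in expectation")
```

Every program comes out to 200 saved and 400 dead in expectation; A is literally the same gamble as C, and B the same as D. Only the description changes.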
This shows that framing effects can deleteriously affect people’s moral judgments. Just as with the Affordable Care Act—where describing it as Obamacare dramatically decreases its popular support—the way one describes a moral scenario affects popular attitudes towards it.
So we have independent lines of evidence that should lead us to expect our intuitions to misfire in cases where heuristics tell us one thing and utilitarianism—even if correct—tells us another. This isn’t just a futile attempt to explain away our non-utilitarian intuitions—it’s what we should actively expect even if utilitarianism were correct.
Thus, we should be a bit more sympathetic to utilitarianism in cases where it conflicts with various moral heuristics. We shouldn’t treat the fact that it’s occasionally unintuitive as counting strongly against it.
I’ve invoked chess repeatedly in this article—I’ll do so once more. AlphaZero was the first AI to play chess without being trained on human strategy. It played chess like a strange alien—but it played better chess. We should expect the correct morality to seem strange and alien to us, just as the best chess does. This is so even if we’re intuitionists about morality.
This is why I’m so averse to hypotheticals. First, they contain implicit assumptions about the nature of the world, which can be radically different from our own; and second, they’re often constructed just to make the ethical proposal violate a moral axiom — something we’ve arbitrarily decided oughtn’t be violated.
Take, for example, the hypothetical “what if slavery were good for the slaves?” I say this is a bad hypothetical because, first, it makes a claim about the nature of the world that simply could not hold with humans the way they are and were; and second, it fails for the reasons you lay out here — if you stipulate that something is good, then it is good.
> There are lots of cases where utilitarianism diverges from our initial intuitions.
Isn't this fatal for utilitarianism? Why would we ever believe in utilitarianism except because of an intuition that said that pleasure was good?