A Bayesian Analysis Of When Utilitarianism Diverges From Our Intuitions
Why we shouldn't worry too much about the cases in which utilitarianism goes against our intuitions
“P(B|A) = P(B) × P(A|B) / P(A)”
—Words to live by
Lots of objections to utilitarianism, like the problem of measuring utility, rest on conceptual confusions. Of the ones that don’t, most rely on the notion that utilitarianism is unintuitive. Utilitarianism entails the desirability of organ harvesting, yet some people have strange intuitions that oppose killing people and harvesting their organs (I’ll never understand such nonsense!).
In this post, I will lay out some broad considerations about utilitarianism’s divergence from our intuitions and explain why these divergences are not very good evidence against it.
The fact that there are lots of cases where utilitarianism diverges from our intuitions is not surprising on the hypothesis that utilitarianism is correct. This is for two reasons.
First, there are enormous numbers of possible moral scenarios. Thus, even if the correct moral view corresponds to our intuitions in 99.99% of cases, it still wouldn’t be too hard to find a bunch of cases in which it doesn’t (see the rough calculation below).
Second, our moral intuitions are often wrong. They’re frequently affected by unreliable emotional processes, and we know from history that most people have held moral views we currently regard as horrendous.
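To make the first point concrete, here’s a rough back-of-the-envelope sketch in Python. The per-case reliability and the number of distinct moral scenarios are purely illustrative assumptions, not estimates:

```python
# Even near-perfect per-case agreement makes universal agreement vanishingly unlikely.
per_case_agreement = 0.9999   # assumed reliability of intuition in any single case
num_scenarios = 1_000_000     # illustrative count of possible moral scenarios

p_never_diverges = per_case_agreement ** num_scenarios
expected_divergences = (1 - per_case_agreement) * num_scenarios

print(f"P(correct view never diverges from intuition): {p_never_diverges:.2e}")  # ~3.7e-44
print(f"Expected number of divergent cases: {expected_divergences:.0f}")         # ~100
```

On these assumptions, we should positively expect the correct view to clash with our intuitions in a substantial number of cases even while agreeing with them almost everywhere.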
Because of these two factors, our moral intuitions are likely to diverge from the correct morality in lots of cases. The probability that the correct morality would always agree with our intuitions is vanishingly small. Thus, given that frequent divergence is exactly what we’d expect of the correct moral view, the fact that utilitarianism frequently diverges from our moral intuitions isn’t evidence against it. To see whether the divergences give any evidence against utilitarianism, let’s consider some features we’d expect the correct moral view to have.
The correct view would likely be provable from lots of independent, plausible axioms. This is true of utilitarianism.
We’d expect the correct view to make moral predictions far ahead of its time—for example, discerning the permissibility of homosexuality in the 1700s.
While our intuitions would diverge from the correct view in a lot of cases, we’d expect careful reflection on those cases to reveal that the judgments given by the correct moral theory are hard to resist without serious theoretical cost. We’ve seen this over and over again with utilitarianism: the repugnant conclusion, torture vs. dust specks, headaches vs. human lives, the utility monster, judgments about the far future, organ harvesting cases and other cases involving rights, and cases involving egalitarianism. This is very good evidence for utilitarianism. We’d expect incorrect theories to diverge from our intuitions, but we wouldn’t expect careful reflection to uncover compelling arguments for accepting the judgments of an incorrect theory. Thus, we’d expect the correct theory to be able to marshal a variety of considerations favoring its judgments, rather than just biting the bullet. That’s exactly what we see when it comes to utilitarianism.
We’d expect the correct theory to do better in terms of theoretical virtues, which is exactly what we find.
We’d expect the correct theory to be consistent across cases, while rival theories have to make post hoc modifications to escape problematic implications—which is exactly what we see.
There are also some things we’d expect to be true of the cases where the correct moral view diverges from our intuitions. Since our intuitions would be mistaken in those cases, we’d expect those cases to have features that make our intuitions likely to go wrong. Several such features are present where utilitarianism diverges from our intuitions.
A) Our judgments are often deeply affected by emotional bias.¹
B) Our judgments about the morality of an act often overlap with other morally laden features of a situation. For example, in the organ harvesting case, it’s very plausible that much of our judgment tracks the intuition that the doctor is vicious—this undermines the reliability of our judgment about the act itself.
C) Anti-utilitarian judgments get lots of weird results and frequently run into paradoxes. This is more evidence that they’re just rationalizations of unreflective seemings, rather than robust reflective judgments.
D) Lots of the cases where our intuitions lead us astray are ones in which a moral heuristic has an exception. For example, in the organ harvesting case, the heuristic "Don't kill people" has a rare exception. Intuitions formed by reflecting on the general rule against murder are thus likely to be unreliable there.
Conclusions
Suppose we had a device with a 90% accuracy rate at identifying the correct answers to math problems. Then suppose we were deciding whether the correct way to solve a math problem was to use equation 1 or equation 2. We use the first equation to solve 100 math problems, and its result matches the device’s 88 times. We then use equation 2 and find that its results match the device’s in all 100 cases.
We know one of them has to be wrong, so we look more carefully. We see that the 12 cases in which the first equation gets a result different from the second equation’s are really complex cases in which the general rules seem to have exceptions—so they’re the type of cases in which we’d expect an equation that’s merely a heuristic to get the wrong result. We also look at both equations and find that the first is much more plausible; it seems much more like the type of equation we’d expect to be correct.
Additionally, the second equation has lots of subjectivity—some of the values for its constants are chosen by the person applying it. Thus, there’s room for getting the wrong result based on that person’s biases and assumptions.
We then see a few plausible-seeming proofs of the first equation and notice that the second equation isn’t able to make sense of lots of different math problems, so it has to attach auxiliary methods to solve those. We then hear that previous students who used the first equation have been able to solve very difficult math problems—ones that the vast majority of people (most of whom use the second equation) have almost universally gotten wrong. We also see that equation 2 results in lots of paradoxical-seeming judgments—we have to call in our professional mathematician friend (coincidentally named Michael Huemer) to find a way to make them not straightforwardly paradoxical, and the resolution he arrives at requires lots of implausible stipulations to rescue equation 2 from paradox. Finally, we find that in all 12 of the cases in which equation 1’s answer diverges from the device’s, there are independent, pretty plausible proofs of the results given by equation 1, and they’re more difficult problems—the type we’d expect the device to be less likely to get right.
In this case, it’s safe to say that equation 1 is the equation we should go with. While it sometimes gets results that we have good prima facie reason to distrust (because they diverge from the method that’s accurate 90% of the time), that’s exactly what we would expect if it were correct, which means that, all things considered, it isn’t evidence against it. Additionally, the rational judgment in this case would be that equation 2—like many moral systems—is just an attempt to mirror our starting beliefs, rather than to figure out the right answer, which would sometimes force us to revise those beliefs.
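To see how strongly the agreement records alone favor equation 1, here’s a rough Bayesian sketch of the example above. It assumes, purely for illustration, that the device’s errors are independent across problems:

```python
from math import comb

def binomial_likelihood(matches, trials, p_match):
    """Probability of matching the device exactly `matches` times out of `trials`
    problems, if each problem independently matches with probability `p_match`."""
    return comb(trials, matches) * p_match**matches * (1 - p_match)**(trials - matches)

# If an equation is actually correct, it should match the 90%-accurate device
# only about 90% of the time, since the device itself errs on roughly 10% of problems.
device_accuracy = 0.90

print(binomial_likelihood(88, 100, device_accuracy))   # equation 1: 88/100 matches, an ordinary record (~0.1)
print(binomial_likelihood(100, 100, device_accuracy))  # equation 2: 100/100 matches, only ~2.7e-5 if merely correct
```

On these illustrative numbers, equation 1’s 88/100 record is roughly what a genuinely correct equation checked against a 90%-accurate device should produce, whereas a perfect 100/100 record is far more probable if equation 2 was built to mirror the device than if it independently tracks the right answers.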
Our moral intuitions are the same—they’re right most of the time, but even on the hypothesis that they’re usually right, we’d still expect there to be lots of cases in which they’re wrong. When we carefully reflect on what we’d expect to see if utilitarianism were correct, it tends to match exactly what we do see.