INTRODUCTION
Arjun has written his opening statement in our debate about utilitarianism (my opening statement can be found here).
As far as I know, all attempts to find this have proven flawed, but I think that weak deontology is less flawed, and more importantly that in light of moral uncertainty people should make choices that take into account the possibility that most reasonable moral theories could be correct.
I’d agree that given moral uncertainty, we shouldn’t act as strict utilitarians. However, this does nothing to show that utilitarianism is incorrect. This debate is about what one in fact has most reason to do — whether that is always the utilitarian act or something else — and pointing out what it’s reasonable to do given moral uncertainty (which functions much like factual uncertainty) only tells us how to reason practically under uncertainty, not which theory is actually correct.
Arjun next claims that part of utilitarianism involves believing
Hedonism or preferentism: The only intrinsic good is pleasure (for the “hedonistic utilitarian”) or desire-satisfaction (for the “preference utilitarian”).
Now, while I am a hedonist, this is a bad definition. One can, like Chappell, be a utilitarian objective list theorist.
Consequentialism suffers from well-known thought experiments in which intuitions lead toward decision-making based on factors other than consequences. These are well known so I won’t list them: I mentioned some in Just Say No to Utilitarianism and you can find some more in the first section of Michael Huemer’s blog post Why I Am Not a Utilitarian.
I wrote a ten-part blog series responding to Huemer’s Why I Am Not a Utilitarian—see parts 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10.
My opening statement provided an extensive defense of hedonism, so nothing more is worth saying on that subject. I’ll return to the Just Say No to Utilitarianism article in a moment.
PARTIALITY FAILS WHEN WE REALLY CONSIDER ITS IMPLICATIONS
The weakest of the three tenets is impartiality. It conflicts with the strong intuition that people have particular obligations to specific people as a result of their relationships to them. Parents have an obligation to consider the interests of their children over the interests of strangers, all else equal.
There are several objections to this point.
First, a strong form of partiality is clearly evolutionarily debunkable—there’s an obvious evolutionary reason for us to favor our close kin over strangers. However, as the moral circle expands, we should care more about others.
Second, it’s collectively self-defeating. After all, if we all do what’s best for our families at the expense of others, then—given that everyone is part of a family—every person doing what’s best for their own family will be bad for families as a whole. This is the basic logic of the prisoner’s dilemma — and it’s argued in much greater detail by Parfit.
Third, we can rig up scenarios where this ends up being arbitrarily bad. Suppose that you and I both have families that will each experience 100,000,000 units of suffering. However, we each have 50,000,000 opportunities either to decrease our own family’s suffering by one unit or to decrease the other family’s suffering by two units. If we both do what’s best overall, rather than just what’s best for our own families, neither family will suffer at all, while if we act on the partial maxim instead, the result is unfathomably morally bad.
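To make the arithmetic explicit, here is a minimal sketch of the scenario just described, using the numbers above (the function and variable names are purely illustrative):

```python
# Two families, each starting with 100,000,000 units of suffering.
# Each agent gets 50,000,000 opportunities: reduce their own family's
# suffering by 1 unit, or reduce the other family's suffering by 2 units.

BASE_SUFFERING = 100_000_000
OPPORTUNITIES = 50_000_000

def total_suffering(each_helps_other: bool) -> int:
    """Total suffering across both families after all opportunities are used."""
    if each_helps_other:
        # Impartial strategy: each reduces the *other* family's suffering by 2.
        per_family = BASE_SUFFERING - 2 * OPPORTUNITIES   # 0
    else:
        # Partial strategy: each reduces their *own* family's suffering by 1.
        per_family = BASE_SUFFERING - 1 * OPPORTUNITIES   # 50,000,000
    return 2 * per_family

print(total_suffering(each_helps_other=True))   # 0
print(total_suffering(each_helps_other=False))  # 100,000,000
```

Acting impartially eliminates the suffering entirely; acting partially leaves 100,000,000 units of it on the table.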
Fourth, it seems very clear that when we think in terms of what is truly important, our family members are not intrinsically more important than others, meaning that our special obligations to them are only practical. As Chappell says
As a general rule, when other theorists posit compelling agent-relative reasons (e.g. to care about one’s own children), I don’t want to deny them. I’d rather expand upon those reasons, and make them agent-neutral. You think your kid matters immensely? Me too! In fact, I think your kid matters so much that others (besides you) have similarly strong reasons to want your kid’s life to go well. And, yes, you have similarly strong reasons to want other kids’ lives to go well, too.
Fifth, there’s an obvious reason why, even if others are just as important as those close to us, we’d neglect their interests. We generally don’t spend very much time thinking about strangers who are far away—just as they don’t spend much time thinking about us—and so we don’t really think about our obligations to them. As Chappell says
We tend not to notice those latter reasons so much. So it might seem incredible to claim that you have equally strong reasons to want the best for other kids. (They sure don’t feel as important to us.) But reasons only get a grip on us insofar as we attend to them, and we tend not to think much about strangers—and even if we tried, we don’t know much about them, so their interests are apt to lack vivacity.
The better you get to know someone, the more you tend to (i) care about them, and (ii) appreciate the reasons to wish them well. Moreover, the reasons to wish them well don’t seem contingent on you or your relationship to them—what you discover is instead that there are intrinsic features of the other person that makes them awesome and worth caring about. Those reasons predate your awareness of them. So the best explanation of our initial indifference to strangers is not that there’s truly no (or little) reason to care about them (until, perhaps, we finally get to know them). Rather, the better explanation is simply that we don’t see the reasons (sufficiently clearly), and so can’t be emotionally gripped or moved by them, until we get to know the person better. But the reasons truly were there all along.
Sixth, utilitarianism can provide a good reason for us to have special obligations. There’s a good practical reason for us to care more about our families: we have much greater ability to influence our family and close friends than strangers, and creating tight-knit relationships that involve adopting special obligations, like marriage or friendship, is clearly a utilitarian good. This, combined with the previous arguments, is the best account of special obligations—they make sense on practical grounds, and where those practical grounds are absent, they don’t seem valuable.
Seventh, it seems that for something to count as morality, it must be impartial. Egoism is not a legitimate candidate for morality because it doesn’t consider what we have most impartial reason to do overall—it just considers what’s best for us.
ADDRESSING ARJUN’S COUNTEREXAMPLES
Arjun’s counterexamples come from Caplan, who described them in this article.
Grandma: Grandma is a kindly soul who has saved up tens of thousands of dollars in cash over the years. One fine day you see her stashing it away under her mattress, and come to think that with just a little nudge you could cause her to fall and most probably die. You could then take her money, which others don’t know about, and redistribute it to those more worthy, saving many lives in the process. No one will ever know. Left to her own devices, Grandma would probably live a few more years, and her money would be discovered by her unworthy heirs who would blow it on fancy cars and vacations. Liberated from primitive deontic impulses by a recent college philosophy course, you silently say your goodbyes and prepare to send Grandma into the beyond.
I think one should not kill Grandma in this case — after all, things go best when you’re the type of person who wouldn’t kill your grandmother to donate her money to save lives. The virtues of a good utilitarian would rule this out. As Chappell notes
While utilitarianism as a theory is fundamentally impartial, it does not recommend that we attempt to naively implement impartiality in our own lives and decision-making if this would prove counterproductive in practice. This allows plenty of scope for utilitarians to accommodate various kinds of partiality on practical grounds.
But we can change the case slightly and ask whether it would be good if a vicious person who had no other obligations did this and then gave away half of her wealth. My answer to that is yes, relative to doing nothing. After all, it would save a bunch of lives.
Chappell additionally notes
Finally, it is worth flagging that the history of partiality includes many examples of group discrimination, such as discrimination based on race, sex, or religion, that we now recognize as morally unacceptable. While this certainly does not prove that all forms of partiality are similarly problematic, it should at least give us pause, as we must consider the possibility that some of our presently-favored forms of partiality (or discrimination on the basis of perceived similarity or closeness) could ultimately prove indefensible.
Our intuitions about this case are caused by a few things. First, contemplation of how bad it is to die. However, in this case we’re saving a bunch of lives, so the badness of death cuts both ways. If, rather than killing grandma to save a bunch of other people, our grandma were the one being saved—or someone else close to us—at the expense of someone else’s grandmother, our intuitions about the case would flip. It’s only when you compare a real, flesh-and-blood person to nameless, faceless, far-away strangers that things seem unintuitive.
Given that this would save the most lives, it seems clear that a perfectly moral third party observer would hope that grandma would die at that moment. After all, that would bring about a better world. But if we have most reason to hope for something to happen, it seems we also would have reason to bring it about.
Additionally, for this case to be justified by utilitarianism, we’d have to have near total certainty — perhaps a declaration from god — that we wouldn’t be found out. Thus, biting this bullet doesn’t require accepting any unintuitive, real world results.
Additionally, grandma would choose to be killed from behind the veil of ignorance. If she didn’t know whether she was the grandmother or a person dying of malaria who would be saved, she would consent to this. Given that making her totally rational and impartial would make her favor the action, that gives us good reason to favor the action.
Additionally, comments I made about organ harvesting in this article clearly apply here—I’ll quote them in full.
What’s going on in our brains—what’s the reason we oppose this? Well, we know that social factors and evolution dramatically shape our moral intuitions. So, if there’s some social factor that would result in strong pressure to hold to the view that the doctor shouldn’t kill the person, it’s very obvious that this would affect our intuitions. Are there?
Well, of course. A society in which people went around killing other people for the greater good would be a much worse society. We have good reason to place strong prohibitions on murder, even for the allegedly greater good.
Additionally, it is a practical necessity that we accept, as a society, some doing/allowing distinction. Society would collapse if we treated murder as being only a little bit bad, so it’s very important that we treat murder as very bad. But given that doing the maximally good thing all the time would be far too demanding, we can’t treat failing to do something unfathomably demanding as horrendous—equivalent to murder—and so we have to treat there as being some distinction between doing and allowing.
After this distinction is in place, our intuitions about organ harvesting are very obviously explainable. If killing is treated as unfathomably evil, while not saving isn’t, then killing to save will be seen as horrendous.
To see this, imagine things were the other way around. Imagine we were living in a world in which every person would kill one person per day, in an alternative multiverse segment, unless they fasted during that day. Additionally, imagine that in this world each person saved dozens of people per day in an alternative multiverse segment, unless they took drastic action. In this world, it seems clear that failing to save would be seen as much worse than killing, given that saving is easy while refraining from killing is very difficult. Additionally, imagine that these people saw those they were saving and felt empathy for them. Thus, not saving someone would provoke internal emotional reactions in that world similar to those that killing provokes in ours.
So what do we learn from this? Well, to state it maximally bluntly and concisely, many of our non-utilitarian intuitions are the results of social norms that we design to have good consequences, which we then take to be significant independently of those consequences. These distinctions are never derivable from plausible first principles, never have clear delineations, and always result in ridiculous reductios. They are mere epiphenomena—an unnecessary byproduct of correct moral reasoning. We correctly see that society needs to enshrine rights as a legal concept, and then incorrectly feel an attachment to them as an intrinsic feature of morality.
When we’re taught moral norms as a child, we’re instructed with rigid norms like “don’t take other people’s things.” We try to reach reflective equilibrium with those intuitions, carefully reflecting until they form coherent networks of moral beliefs. Then, later in life, we take them as the moral truth, rather than derivative heuristics.
Consider the situation in greater depth and the utilitarian conclusion becomes more clear. The death of your grandmother is a tragedy, but the death of dozens of others is a far greater tragedy. You’re just choosing between those states of affairs.
To quote my earlier article again.
Similarly, as Yetter-Chappell points out, there’s lots of status quo bias. A major reason why … it seems wrong to push the guy off the bridge in the trolley problem is because that would deviate from the status quo. If we favor the status quo, then it’s no surprise that utilitarianism would go against our intuitions about favoring the status quo. Our aversion to loss also explains why we want to keep things similar to how they are currently.
If we accept that the lives of lots of people are more important than the life of one grandmother, then the aversion to killing one’s grandmother to save many lives must be grounded in one of the following:
A) Special obligations. However, utilitarianism provides a great account of this—it explains why we have special obligations to save our family members.
B) Some conception of rights—this was refuted in my first article.
C) Status quo bias—for it explicitly advocates against shifting from a worse world that happens to be the status quo to a better world.
To quote my article for a third time,
Similarly, people care much more about preventing harms from foes with faces than foes without faces. So in the organ harvesting case, for example, when the foes have a face (namely, you) it’s entirely clear why the one murder begins to seem worse than the five deaths.
Now let’s modify the scenario to imagine that it involves killing your grandmother to save five of your other grandparents. Well, this seems clearly worth it, as I argue in my debate opening statement. The death of a grandmother is very terrible, but the death of five grandparents is worse. This is especially true because all of your grandparents would rationally be in favor of it, assuming they didn’t know which of them it would be.
Thus, the scenario is as follows: five of your grandparents will die and one will be fine, unless you kill the one who would otherwise be fine. However, you don’t know which one will be fine and which will die. This case seems structurally similar to the case of killing a grandparent to save five others — it just deprives you, from the start, of knowledge of which grandparent would be spared. However, impartially considered, saving five other people is as important as saving five of your grandparents; therefore, by transitivity, if you should kill your grandparent in this scenario, you should do so in the other scenario as well.
Now, let’s modify the scenario again. You have three choices:
A) Kill two of the five people who would otherwise die, chosen at random, in order to save the remaining three.
B) Kill your grandmother to save all five.
C) Don’t save any of the five.
A would clearly be better than C — it is, after all, a Pareto improvement: every single person in the group would clearly hope that you’d choose A over C. However, B plausibly seems better than A — you kill fewer people, and the person you do kill is old and near death. Thus, by transitivity, B is better than C, which entails the utilitarian judgment about this case.
Finally, in this scenario, it would clearly be better to convince the grandmother to donate the money and save lives herself. Thus, for the counterexample to work, you also need a guarantee that you won’t be able to convince her to donate.
Arjun’s second counterexample is as follows.
Child: Your son earns a good living as a doctor but is careless with some of his finances. You sometimes help him out by organizing his receipts and invoices. One day you have the opportunity to divert $1,000 from his funds to a charity where the money will do more good; neither he nor anyone else will ever notice the difference, besides the beneficiaries. You decide to steal your child’s money and promote the overall good.
Now, you plausibly have good pragmatic reasons to avoid stealing from your child. Relationships with covert theft tend not to work out very well. However, the act considered in isolation would clearly be good.
In this case, seriously assess the salient features. On the one hand, you have the ability to save about one-fourth of a life. On the other hand, it would cost someone else $1,000 that they don’t need. Clearly, a person’s life is much more important than $1,000.
All the things I said about the previous situation apply here — it’s plausibly rooted in status quo bias, relies on an erroneous notion of rights, it would be hoped for by a perfectly moral third party observer, and so on.
Let’s modify the scenario so that it involves stealing $9,000 — after all, there are plausibly zero people who think it would be bad to steal $1,000 but good to steal $9,000. In this case, the question is whether you should steal money from your child — money whose loss won’t adversely affect them — to save the lives of two people. The answer is obviously yes! Stealing to save several lives is a deal worth taking.
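For concreteness, here is the arithmetic these two versions of the case rely on, using only the figures given above (the cost-per-life number is the implicit estimate in this discussion, not a precise charity evaluation):

```python
# Implied cost per life saved, from the figures used in this discussion.
lives_per_thousand = 0.25                          # about a quarter of a life per $1,000
cost_per_life_small = 1_000 / lives_per_thousand   # $4,000 per life
cost_per_life_large = 9_000 / 2                    # $4,500 per life (two lives per $9,000)

print(cost_per_life_small, cost_per_life_large)    # 4000.0 4500.0
```

On either figure, $1,000 buys a substantial fraction of a life, and $9,000 buys roughly two.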
The only reason this scenario provokes horrifying indifference to the fate of several lives is that the way in which the lives are saved is indirect, society would be dysfunctional if people saw donating money to effective charities as worth stealing for, and the person harmed is someone to whom we have good reason to have special obligations. Yet when we really reflect on the scenario — on what really matters about it — it compares trifles to human lives. It’s thus a very easy decision.
Moreover, for the reasons described above, it’s exactly the type of intuition that we’d expect to be unreliable. When we compare people that are close to us to far away, nameless faceless people, it’s totally unsurprising that we’d care more about the people close to us. This scenario is like Ted Cruz proposing the reductio to utilitarianism that it cares equally about Americans and Iraqis — the fact that one theory privileges the very near isn’t a good thing.
I’M NOT EMOTIONAL; YOU ARE
Next, Arjun says
I’m not sure what angle Matthew will take, so I’ll only anticipate two counterarguments in advance. First, there’s a response rejecting the intuitions in the specific cases on the grounds that they’re based on an emotional reaction that isn’t morally relevant. Utilitarianism also rests on intuitions and a case just as strong could be made that those intuitions have an emotional motivation.
Utilitarianism also rests on intuitions, but the intuitions it rests on are much more reliable. To quote an earlier article I wrote
One 2012 study finds that asking people to think more makes them more utilitarian; when they have less time to think, they conversely become less utilitarian. If reasoning led to utilitarianism, this is what we’d expect: more time to reason would make people proportionately more utilitarian.
A 2021 study, compiling the largest available dataset, concluded across 8 different studies that greater reasoning ability is correlated with being more utilitarian. The dorsolateral prefrontal cortex’s length correlates with general reasoning ability. Its length also correlates with being more utilitarian. Coincidence? I think not.
Yet another study finds that being under greater cognitive pressure makes people less utilitarian. This is exactly what we’d predict: much like being under cognitive strain makes people less likely to solve math problems correctly, it also makes them less likely to answer moral questions correctly—“correctly” here meaning in the utilitarian way.
Yet the data doesn’t stop there. A 2014 study found a few interesting things. It looked at patients with damaged VMPCs—a brain region responsible for lots of emotional judgments—and concluded that they were far more utilitarian than the general population. This is exactly what we’d predict if utilitarianism were caused by good reasoning and careful reflection, and alternative theories were caused by emotions. Inducing positive emotions in people conversely makes them more utilitarian—which is what we’d expect if negative emotions were driving people not to accept utilitarian results.
THIS APPROACH TO ETHICS IS BAD
As one might be able to deduce from this sub-header, I’m not a fan of this approach to ethics that involves rejecting plausible principles that explain a lot of moral data based on a few apparent counterexamples. I argue this point in this article
The fact that there are lots of cases where utilitarianism diverges from our intuitions is not surprising on the hypothesis that utilitarianism is correct. This is for two reasons.
There are enormous numbers of possible moral scenarios. Thus, even if the correct moral view corresponds to our intuitions in 99.99% of cases, it still wouldn’t be too hard to find a bunch of cases in which the correct view doesn’t correspond to our intuitions.
Our moral intuitions are often wrong. They’re frequently affected by unreliable emotional processes. Additionally, we know from history that most people have had moral views we currently regard as horrendous.
Because of these two factors, our moral intuitions are likely to diverge from the correct morality in lots of cases; the probability that the correct morality would always agree with our intuitions is vanishingly small. Thus, given that this is what we’d expect of the correct moral view, the fact that utilitarianism frequently diverges from our moral intuitions isn’t evidence against utilitarianism. To see whether these divergences provide any evidence against it, let’s consider some features we’d expect the correct moral view to have.
The correct view would likely be provable from lots of independent, plausible axioms. This is true of utilitarianism.
We’d expect the correct view to make moral predictions far ahead of its time—for example, discerning the permissibility of homosexuality in the 1700s.
While our intuitions would diverge from the correct view in a lot of cases, we’d expect careful reflection about those cases to reveal that the judgments given by the correct moral theory are hard to resist without serious theoretical cost. We’ve seen this over and over again with utilitarianism: the repugnant conclusion, torture vs dust specks, headaches vs human lives, the utility monster, judgments about the far future, organ harvesting cases and other cases involving rights, and cases involving egalitarianism. This is very good evidence for utilitarianism. We’d expect incorrect theories to diverge from our intuitions, but we wouldn’t expect careful reflection to lead to the discovery of compelling arguments for accepting the judgments of the incorrect theory. Thus, we’d expect the correct theory to be able to marshal a variety of considerations favoring its judgments, rather than just biting the bullet. That’s exactly what we see when it comes to utilitarianism.
We’d expect the correct theory to do better in terms of theoretical virtues, which is exactly what we find.
We’d expect the correct theory to be consistent across cases, while other theories have to make post hoc changes to the theory to escape problematic implications—which is exactly what we see.
There are also some things we’d expect to be true of the cases where the correct moral view diverges from our intuitions. Given that in those cases our intuitions would be making mistakes, we’d expect there to be some features of those cases which make our intuitions likely to be wrong. There are several of those in the case of utilitarianism’s divergence from our intuitions.
A) Our judgments are often deeply affected by emotional bias.
B) Our judgments about the morality of an act often overlap with other morally laden features of a situation. For example, in the case of the organ harvesting case, it’s very plausible that lots of our judgment relates to the intuition that the doctor is vicious—this undermines the reliability of our judgment of the act.
C) Anti-utilitarian judgments get lots of weird results and frequently run into paradoxes. This is more evidence that they’re just rationalizations of unreflective seemings, rather than robust reflective judgments.
D) Lots of the cases where our intuitions lead us astray are cases in which a moral heuristic has an exception. For example, in the organ harvesting case, the heuristic “Don’t kill people” has a rare exception. Intuitions formed by reflecting on the general rule against murder are thus likely to be unreliable.
(I elaborate more on this point in the conclusion of the linked article, but I’m excluding it for word count reasons).
TWO MORE CHALLENGES
While we’re here, I’ll present two explanatory challenges for non-utilitarian accounts
1
Suppose one is deciding between two actions. Action 1 would have a 50% chance of increasing someone’s suffering by 10 units, and action 2 would have a 100% chance of increasing their suffering by 4 units. It seems clear that one should take action 2 — after all, the person is better off in expectation (an expected 5 units of suffering versus a certain 4).
However, non-utilitarian theories have trouble accounting for this. Suppose there is a wrongness to violating rights that exists over and above the harm caused, and that this wrongness is equivalent to 8 units of suffering. Then action 1 comes out better: a ½ chance of 18 units of badness (expected value 9) is less bad than a certainty of 12.
The non-utilitarian may object that the badness of the act depends on how much harm is done—that the first action is a more serious rights violation. Suppose the formula they give is that the badness of a rights violation equals twice the amount of suffering caused by that violation.
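To lay the arithmetic out plainly, here is a minimal sketch of both accountings, using only the numbers stipulated above (the penalty values are just the ones stipulated in this discussion):

```python
def expected_badness(prob: float, harm: float, rights_penalty: float) -> float:
    """Expected badness when a penalty for violating rights is added on top of the harm."""
    return prob * (harm + rights_penalty)

# Action 1: 50% chance of 10 units of suffering; action 2: certain 4 units.
# Fixed penalty of 8 units per rights violation (the first proposal):
print(expected_badness(0.5, 10, 8))       # 9.0  -> action 1 now looks better
print(expected_badness(1.0, 4, 8))        # 12.0

# Penalty proportional to harm (badness of a violation = 2x the suffering caused):
print(expected_badness(0.5, 10, 2 * 10))  # 15.0 -> action 2 is better again
print(expected_badness(1.0, 4, 2 * 4))    # 12.0
```

On the fixed-penalty accounting, the riskier action comes out better in expectation, which is the counterintuitive result; tying the penalty to the harm restores the intuitive ranking, but only by inviting the problems discussed next.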
This proportional account leaves them open to a few issues. First, it assigns no badness to rights violations that don’t cause any suffering; thus, it can’t hold that harmless rights violations are bad. Second, it doesn’t seem to fit well with the idea of rights: rights violations are supposed to add badness to an act independently of the suffering caused.
Maybe the deontologist can work out a complex arithmetic to avoid this issue. However, this is an issue that is easy to solve for utilitarians, yet which requires complexity for deontologists and others who champion rights.
2
Consequentialism provides the only adequate account of how we should treat children. Several actions done to children are widely regarded as justifiable, yet would not be for adults:
Compelling them to do minimal forced labor (chores).
Compelling them to spend hours a day at school, even if they vehemently dissent and would like to not be at school.
Forcing them to learn things like multiplication, even if they don’t want to.
Forcing them to go to bed at the time their parents think will make things go best, rather than when they want to.
Not allowing them to leave their house, however much they protest.
Disciplining them in ways that cause them to cry, for example putting them on time-out.
Controlling the food they eat, who they spend time with, what they do, and where they are at all times.
However, lots of other actions are justifiable to do with adults, yet not with children.
Having sex with them if they verbally consent.
Not feeding them (i.e., one shouldn’t be arrested for failing to feed a homeless person nearby, but should be for failing to feed one’s own children. Not feeding one’s children is morally worse than beating them, while the same is not true of unrelated adults.)
Employing them in damaging manual labor.
Consequentialism provides the best account of these asymmetries: in each case, the practice we regard as justified is the one that makes things go best. Non-consequentialist accounts have trouble with these cases.
One might object that children can’t consent to many of these things, which makes up the difference. However, consent fails to provide an explanation. It would be strange to say, for example, that the reason you may prohibit a child from leaving the house is that they don’t consent to leaving it. Children are frequently forced to do things without consent, like learning multiplication, going to school, and even not putting their fingers in electrical sockets. Thus, any satisfactory account has to explain why their inability to consent renders only some of these things impermissible.
CONCLUSION
Well, that’s all for now, folks. That’s my response to Arjun: his criticisms of utilitarianism are the standard (unsuccessful) objections. See you in the next one!