Utilitarianism Wins Outright Part 19: Cases, Cases, Cases
Cases Where Utilitarianism Allegedly Gets the Wrong Answers (but Doesn't Actually)
I recently asked people to produce cases where utilitarianism gets the wrong results. My prediction was that all of them would fail; this article tries to show that. Six cases were given, which I address here.
1
“Rich person brutally maims a homeless person to displace anger. Upon reflection feels guilt (causally determined); to alleviate his guilt, donates a large sum of money to an EA charity, offsetting the disutility and resulting in positive net utility. Util = good to maim homeless.”
I can see how this would be a bit counterintuitive. However, let’s modify the case a little. Imagine that the rich person, when he feels guilty, donates the money to the person he wronged, allowing him to afford an expensive surgery that saves his life. The man would otherwise have died, meaning he was made better off overall. In this case, the conjunction of actions seems clearly good.
Well, presumably helping someone even worse off than the wronged man isn’t considerably less important than helping the wronged man himself. Thus, if we accept
A) Donating money to save the life of the person who was maimed in response to maiming the person would make the pair of actions good.
B) Donating money to save someone else’s life is just as good as donating money to the man wronged.
Then we’d have to accept
C) Donating money to save someone else’s life in response to maiming a person would make the pair of actions good.
This may still seem counterintuitive, for two reasons.
1 Many of our moral intuitions relate to viciousness. Even if the actions are right, we might think they’re wrong because they indicate character defects.
2 It’s hard to truly appreciate how morally counterintuitive our world is, one in which the cost to save a life is only a few thousand dollars. If we looked at it from the perspective of the victim, this would seem more intuitive.
2
“Utilitarianism seems to entail that I'm not permitted to sacrifice my own life to save a friend who will live a life minimally less happy than mine if they live. Whereas, common intuition tells us that it is supererogatory, not impermissible.”
This seems slightly confused. Utilitarianism tends not to deal in concepts of permissibility or impermissibility. Instead, it describes actions as being good or bad, or very good or very bad.
If we accept
A) Completeness: Morality must dictate what should be done in all cases.
We have to accept this; otherwise, we’re subject to being money pumped. Suppose morality doesn’t tell you whether you should save a person’s life in either of these cases:
1) The person will have marginally less happiness than you.
2) The person will have marginally more happiness than you.
Then it would logically follow that
3) A person’s future happiness is irrelevant to your obligation to save them.
However, 3) is implausible, requiring us to accept
4) Your obligation to save the life of a person who will live an extra week = your obligation to save the life of a person who will live an extra 40 years.
These in conjunction mean we should accept
B) A plausible moral theory should tell you when you should sacrifice your life for another.
If we accept B it seems we should accept
C) You should sacrifice your life to save another if your life will be worse than theirs.
You might think it’s strange to call failing to do this immoral. Well, “immoral” generally refers to not caring about others, rather than to not doing what is best. It would be weird to call sacrificing your life to prevent another person from breaking their leg immoral, but it’s obvious that it shouldn’t be done. This was addressed more in this article.
3
“The repugnant conclusion (although that’s a pretty obvious one). What about systematically cultivating people with Down syndrome in order to increase utility?”
The Repugnant Conclusion was already addressed here. The Down syndrome case is strange; it’s not clear that it would increase utility. However, if it did, it seems fine. Just as it would be good to genetically modify children so they don’t get Alzheimer’s or depression, the same seems to apply here. Utilitarianism seems to be the only account of why it would be good to press a button that would make your child happier, but bad to press one that would make them less happy.
4
“Many well-known anti-U cases are ‘wrong action’ cases, but there are also ‘wrong diagnosis’ cases, following Ross. Intuitively, what *makes* promise-breaking, lying, racism, tyranny, callousness wrong is not *only* that they cause suffering.”
I agree that causing suffering isn’t what intuitively seems to make these things wrong. However, this isn’t very relevant, for several reasons.
1 Humans are notoriously terrible at understanding why they do what they do. Thus, I don’t place much stock in people’s accounts of why they find things wrong. As this article says: “Takemoto refers to a 1977 analysis conducted by Richard Nisbett and Timothy DeCamp Wilson, which found that people were unable to identify what had prompted them to behave a certain way, even when it was seemingly obvious. For example, in one study, participants were given a placebo pill, and told that it would reduce physical symptoms associated with receiving an electric shock. After taking the pill, participants took four times as much amperage as people who hadn't taken the pill. But when asked why, only one-quarter of subjects attributed their behavior to the pill, instead saying things like they had built radios when they were younger and so they were used to electric shocks.”
2 Humans often think in terms of rough heuristics. Thus, if things are usually bad, we associate them with badness, even in particular cases where they cause no harm.
3 One good way of testing this is to identify the necessary and sufficient conditions for the named phenomena being wrong. They only seem to be wrong in the cases where they usually cause suffering. Lying to avoid hurting someone’s feelings doesn’t seem particularly wrong. If tyranny were the only way to prevent the end of the world, tyranny would stop seeming so bad. We think racism is bad, but there are lots of things that would be called racism if they generally brought about bad outcomes. Examples include:
Affirmative action
People only wanting to marry members of their own race (just because those are the only people they find attractive)
Reparations
Identitarianism (i.e., people are okay with supporting black businesses because they are owned by black people, in a way that would be objectionable for white businesses).
Nationalism (people are often fine with Israel existing as a Jewish religious state, even though people would generally oppose a Christian religious state).
Pride months (most people support a black pride month, but would object to a white pride month).
We’d similarly think that the generally prejudiced sentiments of our racist uncle are less objectionable than the same sentiments would be if held by the president of the United States, merely because of the difference in harm caused.
The point of these examples is not to stake out a firm position on whether they are objectionable, but rather to argue that whether we find them objectionable depends on their expected outcomes.
4 I think there’s a knockdown argument against all of these allegedly wrong-making properties. Suppose we had a choice between two worlds:
World 1: Each person tells 100^100^100^100 lies and lives a life in which each second they experience unimaginably good experience, more than all that has ever been experienced so far in the history of the world.
World 2: No one tells any lies and everyone is terribly miserable all the time.
If we’d prefer World 1, that means we think that a small amount of good experience is worth a lie. The reason utilitarianism seems initially counterintuitive is that the trade-off isn’t iterated; when it is iterated, it becomes obvious that these only appear to be wrong-making properties. The same basic process can be run for any of the other things that appear to make actions wrong.
5 Much of morality relates to our judgment of other people. However, judging people involves ascertaining their intentions, which explains why the things that go into our intuitive moral calculus are relevant to assessing their virtue and not just the outcomes of their actions.
5
“If you've ever been wronged in profound ways, it can be hard to see what that person did as wrong *because* they could have done something that produced greater overall happiness: to me, that seems far too impersonal.”
The responses to the previous argument all apply. Additionally, whether something benefits others doesn’t affect whether we’ve been wronged; it only affects whether the action was, all things considered, wrong.
6
“Watching a sad movie. Like one where you're just bawling throughout.”
Utilitarianism wouldn’t judge that as bad; utilitarianism just wants to avoid bad experiences. If one enjoys the experience of watching a sad movie, then it isn’t a bad experience. Utilitarianism looks at whether experiences are good, not at whether they would be represented by a smiley face or a frowning face.
Conclusion
Counterexamples to utilitarianism always seem to be cases where our intuitions are wrong and utilitarianism is right. This article sought to demonstrate that broad thesis by testing it against examples given at random by other people on Twitter. All of them seemed to substantiate the broad pattern.