Believers In Special Obligations Must Think That Friendship Is (Sometimes) Much Worse Than Hitler
Why I don't believe in special obligations
Introduction
There are lots of good reasons to care more about your friends and family than about random strangers. When people watch out for those close to them, society goes better. Families and friendships are perhaps the most efficient vehicles for making people better off in the history of the world—so it’s not hard to explain why we should value them. But believers in special obligations go further—they say that even if all else is really equal, you should value your friends and family more than others. Obligations are relative to a person, and your moral obligations to help your friends and family are much stronger than your obligations to help strangers, even ignoring all the ways in which your helping your family makes the world better.
I think special obligations are non-existent. You have no special obligations. Of course, there are many biological and sociocultural reasons why we think there are special obligations—those who cherished their family passed on their genes more. In addition, society obviously acts as though there are special obligations. So it’s not hard to explain why we’d believe in special obligations even if there weren’t any. Our moral beliefs in special obligations were given to us by culture and evolution, rather than through reason. And as we’ll see in the last section, there’s an even more powerful explanation of our beliefs in special obligations that undermines their truth.
So the arguments for special obligations aren’t good. But what of the arguments against special obligations? Fortunately, I think there are three basically knock-down arguments against special obligations.
The hope objection
Here’s a plausible principle: perfectly moral people should want people to do the right thing. Jesus and God would not be sitting in heaven, praying that you’ll do the wrong thing.
Here’s another plausible principle: if there are special obligations, then if you can either avert 30 years of life lost from your family members or 50 years of life lost from a stranger, you should avert 30 years of life lost from your family members.
From these, however, we can show that belief in special obligations has super implausible implications. Suppose that my friend is named John and some stranger’s friend is named Todd. I can either give Todd an extra 50 years of life or give John an extra 30 years of life. If there are special obligations, I should give the extra 30 years of life to John. But suppose the stranger can also give either 50 years of life to John or 30 years of life to Todd. If there are special obligations, he should give the 30 years of life to Todd. But this means that a third party, who prefers that people act rightly rather than wrongly, should prefer a state of affairs in which Todd and John are each given 30-year life extensions to one in which they’re each given 50-year life extensions. This is obviously false. But the premises are pretty trivial.
The basic idea isn’t revolutionary. Special obligations are collectively self-defeating—if everyone follows them, we might all end up worse off. But this combined with the premise that third parties should want people to do the right thing means that if special obligations are part of the fabric of morality, third parties should prefer that everyone’s worse off. In fact, even if we jettison the premise that third parties should want people to do the right thing, it’s odd to think that perfect beings acting rightly will sometimes be catastrophic for everyone, even when they’re aware of all the relevant information.
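To make the self-defeat concrete, here is a minimal sketch in Python. The names and numbers follow the John/Todd case above; the model itself is just an illustration, not anything the argument depends on:

```python
# Toy model of the John/Todd case: two agents, each of whom can give
# their own friend 30 extra years or the other agent's friend 50.
AGENTS = {
    "me":       {"friend": "John", "friend_gain": 30, "other": "Todd", "other_gain": 50},
    "stranger": {"friend": "Todd", "friend_gain": 30, "other": "John", "other_gain": 50},
}

def outcome(follow_special_obligations: bool) -> dict:
    """Total extra life-years each person receives under a policy."""
    totals = {"John": 0, "Todd": 0}
    for choice in AGENTS.values():
        if follow_special_obligations:
            # Favor your own friend, as special obligations direct.
            totals[choice["friend"]] += choice["friend_gain"]
        else:
            # Act impartially: confer the larger benefit.
            totals[choice["other"]] += choice["other_gain"]
    return totals

print(outcome(True))   # everyone follows special obligations: {'John': 30, 'Todd': 30}
print(outcome(False))  # everyone acts impartially: {'John': 50, 'Todd': 50}
```

When everyone discharges their special obligations, each person ends up with 30 extra years instead of 50—the Pareto-worse outcome a benevolent third party should dread.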
You might object that there’s an exception to special obligations: they bind only when deviating from them isn’t part of a Pareto-improving sequence of actions. So if some action is part of a sequence of actions that makes everyone better off, you should take the action that makes the world best rather than follow your special obligations. But this bumps up against our special-obligation intuitions:
1. Suppose that you can save either your loved one or a stranger from enduring grievous harm. However, you discover that someone else already saved your loved one instead of the stranger, when doing so produced slightly more benefit for your loved one. It still doesn’t seem like you should now save the stranger over your loved one—even if the stranger will benefit slightly more.

2. Suppose you save a stranger from great pain instead of a loved one from slightly less great pain. Ten years later, however, someone saves your loved one from great pain instead of the stranger. It seems odd that this later act would retroactively make your action not wrong, but that is what this view would hold.

3. You can either prevent your child or a stranger’s child from developing some horrible disease. You find out that the stranger prevented your child from developing some terrible disease, rather than their own child from developing a slightly less terrible disease. Still, it seems like you should save your child.

4. Some stranger takes some action that gives you the option to save their child or your own and also makes your child slightly better off. If you save their child, then everyone will be better off relative to a world where they hadn’t taken that action. Nonetheless, it seems wrong—if there are special obligations—to take that action.
Note that there is no other way out. The worry for special obligations arises in every case where some sequence of actions makes everyone better off but violates individual agents’ special obligations. To avoid the objection, therefore, one has to think that in all cases where violating a special obligation leads to a Pareto improvement, one should violate it. But that commits one to the counterexamples described above. So there is no way out for the believer in special obligations.
Is friendship literally worse than Hitler?
This will get a bit technical—feel free to skip it and move on to the next section if you get confused. It is, however, a pretty convincing argument.
Assume that there are special obligations. Some person—call him Tim—will have to decide, one year from now, whether to avert 100,000,000 units of suffering from Juan or 110,000,000 units of suffering from Bertha. Assume that one unit of suffering equals all the suffering humans have ever experienced in world history—so if Bertha isn’t helped, she will suffer 110,000,000 times as much as all humans have in the history of the world. Assume Tim always does what he ought to do.
Suppose that Tim is deciding whether to befriend Juan. Should he? As I’ll show, believers in special obligations are committed to the belief that it would be a catastrophe of unimaginable proportions for Tim to make a friend. If there are special obligations, then Tim should help Juan instead of Bertha—after all, you should benefit your friends even if you could instead benefit a stranger 1.1 times as much. You should, for example, give your friend a cancer treatment that would allow him to live an extra ten years rather than give a stranger eleven extra years.
But this means that if Tim befriends Juan, he should benefit Juan rather than Bertha; and because he always does what he ought, he will. But benefitting Juan instead of Bertha is a catastrophe of unimaginable proportions—the 10,000,000 extra units of suffering Bertha endures amount to more badness than all the atrocities in human history. So on this account, making friends is literally worse than Hitler.
You might object that benefitting friends is especially good, rather than that the benefactor merely has especially strong reasons to do it. On this account, the world is a better place if Tim benefits Juan than if he benefits Bertha, because the world is improved more when one benefits a friend than a stranger. This has a few problems:
1. It’s just not an account of special obligations. The idea of special obligations is that one shouldn’t always aim impartially at the good—not that one should aim impartially at a modified account of the good. Maybe this is not a problem, but it’s worth noting how radical a proposal this is.

2. It counterintuitively implies that if you could either prevent your friend’s heart attack or do nothing and allow someone else to prevent their friend’s heart attack, both options would be equally worth doing. On this account, the reason to benefit a friend over a stranger is the agent-neutral value of friends benefitting friends—so it implies that enabling someone else to benefit their friend is just as important as benefitting your own friend yourself.

3. It implies that if you could either prevent someone from experiencing 10 years of suffering or stand aside and let their friend prevent 9 years of it, you should stand aside, because the benefit would be made better by being provided by a friend rather than a stranger. But this is clearly crazy.

4. I feel like I’m beating a dead horse here, but it’s worth noting a fourth reason this view is totally wrong: it still implies that friendship is sometimes worse than Hitler. The believer in special obligations will think you have an especially strong obligation not to harm your friends. But suppose you must inflict tons of suffering on one of two people. If befriending someone makes harming them some percent worse, then as the suffering you must inflict grows, befriending them becomes arbitrarily bad—it worsens the badness of causing their suffering by that percent.
The only hope—that I can think of—for the believer in special obligations is to claim that your special obligations to help avert small amounts of harm are stronger than your special obligations to avert large amounts of harm. On this view, while you should prevent your wife from getting a heart attack rather than prevent two strangers from getting heart attacks, your reasons to prevent your wife, rather than a stranger, from experiencing 110,000,000,000,000 units of torture are only slightly stronger than your reasons to prevent your wife, rather than a stranger, from experiencing 100,000,000,000,000 units of torture. As the amount you’ve already benefitted someone approaches infinity, the effect that special obligations have on your reasons to benefit them further approaches zero. If that’s right, then however bad it is when special obligations lead a person to take the act that doesn’t make things go best, it might always be best for a person to create the thing that produces a special obligation—a friendship, for example. Thus, on this account, friendship wouldn’t be worse than Hitler—or even bad at all.
There are two versions of this view—one is lousy, and the other is worse. The first one, which we’ll call the Lifetime Based View says that over the course of a lifetime, the more you benefit someone to whom you have a special obligation, the less of an impact special obligations have on whether you should prioritize their interests over the interests of others, in the future. For example, suppose for the last 72,000 years, each year I’ve prevented my friend Lin from going to jail and being mistreated. Well then, my reasons to prevent this from happening to him in year 72,001 are just a bit stronger than my reasons to prevent a stranger from being jailed for a year. This view, unfortunately, is hard to square with the existence of special obligations.
Suppose that there are two incredibly long-lived beings who have been great friends for the last 100,000 years and have served each other’s interests. On this account, if one of them can save their friend or a stranger, it wouldn’t matter much whom they save, because their reasons to help their friend are only a bit stronger than their reasons to help a stranger. But this is wildly counterintuitive if we accept the existence of special obligations—if two people have been deeply devoted to each other for a long time, they would still have robust special obligations.
The other view one might have is called the Acts Based View. According to this view, if you have special obligations to someone, then as a single act confers greater and greater benefits on them rather than on a stranger, the extra weight those benefits receive from the special obligation approaches zero. So if a single act would prevent 1,000 units of suffering for either your friend or a stranger, it would be very important to benefit your friend rather than the stranger; but if it would prevent 2,000 units of suffering for either your friend or a stranger, benefitting your friend rather than the stranger would be only slightly more important than in the case where you can avert 1,000 units of suffering.
But the acts-based view has very counterintuitive results. As we’ve seen before, you still have strong reasons to benefit your friend rather than a stranger even if you’ve already benefitted your friend a lot. But this means the view oddly privileges some acts. If benefitting your friend rather than a stranger once a year is something you have strong moral reason to do, on this account that is only so if those benefits don’t comprise one act. So you’d have very strong reasons to press, every year, a button that benefits your friend rather than a stranger, but not to press a single button that would cause you to benefit your friend rather than a stranger every year. This is hard to believe—whether benefitting others is one act or many has nothing to do with its worthwhileness.
In addition, this view implies that friendship is worse than Hitler. It’s plausible that if two people are only conscious for a very short time each year and never interact, their friendship has very little intrinsic value, just as one’s friendship with a person is not very valuable after they’re dead. But this means that if, each year for 10^40 years, a perfect person became conscious for one second, was given the option to benefit their friend a lot or a stranger a greater amount, and then went back to their slumber, they ought each time to benefit their friend on account of their special obligation. So creating the friendship would produce enormous disvalue—10^40 years’ worth—which plausibly makes the friendship worse than Hitler.
The paradox of special obligations
The idea here is based on Richard’s paradox of deontology. The following two principles are plausible, and defended in Richard’s paper:
Third parties should hope that people do the right thing.
If a third party prefers a world where you take action C to one in which you take action A, they prefer that you take action C to action A.
Consider three states of affairs.
w1) you save your loved one, but this prevents two other people from saving theirs.
w2) you save your loved one but a random unanticipated side effect is that two other people can't save theirs.
w3) the other two each save their loved one.
Let > represent preferability from the standpoint of a third party.
w3 > w2 ≥ w1. Therefore, w1 < w3. But w3 is just w1 where you don't save your loved one and the other two save theirs instead. Therefore, a third party should want you to not save your loved one when doing so would prevent two strangers from saving theirs. Therefore, you should not save your loved one if doing so prevents two strangers from saving theirs. But this obviously conflicts with the existence of special obligations.
Debunking
Many beliefs that people have don’t track the truth; sometimes they’re influenced by various biases. In the case of special obligations, as I hope to have shown here, there’s a straightforward conflict between various intuitions that we have. And there are very straightforward reasons to reject our intuitions in favor of special obligations.
We have an abundance of evidence that our moral beliefs are often based on emotional reactions—on feelings of attraction and aversion. If we feel good about some action—if it provokes in us a feeling of rightness rather than wrongness or horror—we’re more likely to do it. But obviously, helping those close to us—those we like—produces in us a more positive feeling than helping strangers. So it’s obvious that we’d feel like it was better to help friends and loved ones rather than strangers, even if it wasn’t.
The reason that we want to help loved ones rather than strangers is that we care about them more. They produce a more salient emotional reaction, just like a drowning child nearby provokes a stronger emotional reaction than a far-away child dying of malaria. So we have a very plausible and straightforward debunking account. For this reason, we have an abundance of evidence against special obligations and powerful evidence against the reliability of the evidence for special obligations. We should thus give up our belief in the existence of special obligations.
Hi Bentham.
I am not sure that in number 4 the deontologist cannot avoid your counterexample. You write:
<i> "Some stranger takes some action that gives you the option to save their child or your own and also makes your child slightly better off. If you save their child, then everyone will be better off relative to a world where they hadn’t taken that action. Nonetheless, it seems wrong—if there are special obligations—to take that action."</i>
Off the top of my head, the deontologist mother could reply: "The special obligation that I must discharge is not that I physically be the one who saves my child. It is rather that I make it happen that my child is saved. This could happen even through an intentional omission of mine. So there is no problem in coordinating with the stranger to have him save my child—in a more efficient way, no less, which is another special obligation towards my child that I need to discharge."
Or am I missing something?
I'm going to defend special obligations, and you're going to say "Yeah, that view of it is fine, I'm just arguing against some sort of metaphysically reified version", and I'm going to link to https://slatestarcodex.com/2018/07/24/value-differences-as-differently-crystallized-metaphysical-heuristics/. Now that I've made my predictions, let's go:
1. Alice has a child. Then she thinks about it and realizes that she could donate the money she spends on feeding the child to instead feed ten starving children in Africa. So she donates it to Africa and lets her child starve to death. Good or bad?
2. Bob takes a loan out from his friend, earns some money, and is able to pay the loan back. Instead he donates the money to starving children in Africa and tells his friend to call the cops if he objects (which he won't do, because it's a small amount of money and not worth a court case). Good or bad?
3. Carol got sick, and David saved her life by caring for her 24-7 for months until she recovered. Now David gets sick. Carol could either care for him, or let him die and instead volunteer at the local food bank, which would produce 2x as much utility. Which should she do?
4. Your waiter, who relies on tips to live, went above and beyond to give you excellent service, even though you technically came in very slightly after the restaurant's usual closing time. You can either give him a good tip, or stiff him and spend the money on malaria nets. Which do you do?
I think the idea of "special obligations" is just formalizing the sorts of actions we take in situations 1-4 which are necessary for society to exist at all, and I think arguing against them on the grounds that they're not ground-level-real is like arguing against the existence of rocks (which are also not ground-level-real, just abstractions over atoms). If you were going to criticize people who believe rocks are fundamental and not made of atoms, you should make it really clear that that's what you're doing, so people don't get the impression that you don't believe in normal rocks.
I think the Juan/Bertha analogy is boring. Compare it to anything about making a promise. Suppose I promise to help you move tomorrow. But then it turns out that God says unless I go to such-and-such a place tomorrow, he will kill 10,000,000 people. That's worse than Hitler, so it seems like making promises is worse than Hitler, right? No. The solution is some combination of "accept that God doing crazy things will make moral calculations come out weird" and "have a widely-understood norm that promises are binding up to some extremely high point, but no further".
See also https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/