24 Comments

Hi Bentham.

I am not sure that in number 4 the deontologist cannot avoid your counterexample. You write:

<i> "Some stranger takes some action that gives you the option to save their child or your own and also makes your child slightly better off. If you save their child, then everyone will be better off relative to a world where they hadn’t taken that action. Nonetheless, it seems wrong—if there are special obligations—to take that action."</i>

Off the top of my head, the deontologist mother could reply: "The special obligation that I must discharge is not that I physically be the one who saves my child. It is rather that I make it happen that my child is saved. This could happen even through an intentional omission of mine. So there is no problem with coordinating with the stranger to have him save my child -- in a more efficient way, no less, which is another special obligation towards my child that I need to discharge."

Or am I missing something?

Neither of the two strangers would be able to--or would--save your child; they'd each save their own child instead. So if person A passes up saving her child, people B and C will save their own children.

One more point. Can the objection I was trying to bring up here be brought against your Todd-John example? You write:

<i> "Suppose that my friend is named John and some stranger’s friend is named Todd. I can either give Todd an extra 50 years of life or give John an extra 30 years of life. If there are special obligations, I should give the extra 30 years of life to John. But suppose the stranger can also give either 50 years of life to John or 30 years of life to Todd. If there are special obligations, he should give the 30 years of life to Todd. But this means that, because the third party prefers people act rightly to wrongly, the third party should prefer a state of affairs in which both Todd and John are given 30-year life extensions to one in which they’re given 50-year life extensions. This is obviously false. But the premises are pretty trivial.

The basic idea isn’t revolutionary. Special obligations are collectively self-defeating—if everyone follows them, we might all end up worse off.</i>

Why aren't the deontologists who wish to discharge their special obligations allowed to coordinate their behaviour in your example? Assuming they know the relevant facts and decide to coordinate, they can very well claim that they discharge their special obligation by giving the 50 years of life to the non-friend, secure in the knowledge that they have discharged their special obligation to make it happen that their own friend is better off, given that they know the other party will do the same.

We're assuming that they can't coordinate by stipulation of the hypothetical.

Yes, I understand that we can stipulate that they cannot coordinate. But the way you presented your case did not make it clear that they cannot coordinate, and it seemed to me that non-philosophers (like me) in the audience would get the wrong impression that deontological special obligations fetishistically preclude even coordination, which they don't. Besides, you actually said, in the immediate continuation of the paragraph that I cited above, that people acting under the moral guidance of special obligations, if these obligations are taken as part of “the fabric of morality”, can be “catastrophic for everyone, even when they’re aware of all the relevant information”.

Here is your full sentence:

“But this combined with the premise that third parties should want people to do the right thing means that if special obligations are part of the fabric of morality, third parties should prefer that everyone’s worse off. In fact, even if we jettison the premise that third parties should want people to do the right thing, it’s odd to think that perfect beings acting rightly will sometimes be catastrophic for everyone, even when they’re aware of all the relevant information.”

But if they (the two agents in the Todd-John example) are aware of “all the relevant information”, then the non-coordination that we stipulate must be a conscious decision not to coordinate, at least if it is to be worthy of your incredulous stare (“it is odd”).

But, anyway, here is my question: Deontology does not make a secret of espousing a morality that can make *everyone* worse off in cases where full information is lacking. The deontologist who does not turn the trolley will not turn it even if it is the case that (without the deontologist's knowing it) the one victim would have been saved too, through some weird physical occurrence, if only she had diverted the trolley to save the five. If your point against a specific aspect of Deontology, namely the importance assigned to deontological special obligations, is to have any extra import (over and above your other misgivings about Deontology), then this can only be if the special obligations are indeed somehow precluding deontologists from coordinating in the relevant circumstances (because, remember, you had for the sake of argument temporarily jettisoned your main argument re the person who hopes for the agents, in order to stress what seems to you an additional putative absurdity of special obligations, namely that following the special obligations as an agent can make everyone worse off, even in knowledge of all relevant information).

I am trying to say that your claim in the second paragraph that I cited would have been unintelligible unless you were indeed assuming that the two agents choose not to coordinate – and that would have been so even if you had not stated that you actually find the whole thing odd when the parties have full information.

Sorry, I cannot understand the setup you are referring to. I need it spelled out more (who is A, who is the deontologist, whose child is in danger, etc.). In your quote there is just one stranger who is "making your [the deontologist's] child better off" if the deontologist takes him up on his offer. How would that be possible if the stranger does not save the deontologist's child in case the deontologist accepts the offer? A dead child cannot be better off.

A is a person whose child is in danger. A can save A's own child or allow B and C to each save their own child. Each would make their child better off by saving that child's life.

I had understood a totally different setup re the child. Thank you for the clarification.

I'm going to defend special obligations, and you're going to say "Yeah, that view of it is fine, I'm just arguing against some sort of metaphysically reified version", and I'm going to link to https://slatestarcodex.com/2018/07/24/value-differences-as-differently-crystallized-metaphysical-heuristics/. Now that I've made my predictions, let's go:

1. Alice has a child. Then she thinks about it and realizes that she could donate the money she spends on feeding the child to instead feed ten starving children in Africa. So she donates it to Africa and lets her child starve to death. Good or bad?

2. Bob takes a loan out from his friend, earns some money, and is able to pay the loan back. Instead he donates the money to starving children in Africa and tells his friend to call the cops if he objects (which he won't do, because it's a small amount of money and not worth a court case). Good or bad?

3. Carol got sick, and David saved her life by caring for her 24-7 for months until she recovered. Now David gets sick. Carol could either care for him, or let him die and instead volunteer at the local food bank, which would produce 2x as much utility. Which should she do?

4. Your waiter, who relies on tips to live, went above and beyond to give you excellent service, even though you technically came in very slightly after the restaurant's usual closing time. You can either give him a good tip, or stiff him and spend the money on malaria nets. Which do you do?

I think the idea of "special obligations" is just formalizing the sorts of actions we take in situations 1-4 which are necessary for society to exist at all, and I think arguing against them on the grounds that they're not ground-level-real is like arguing against the existence of rocks (which are also not ground-level-real, just abstractions over atoms). If you were going to criticize people who believe rocks are fundamental and not made of atoms, you should make it really clear that that's what you're doing, so people don't get the impression that you don't believe in normal rocks.

I think the Juan/Bertha analogy is boring. Compare to anything about making a promise. Suppose I promise to help you move tomorrow. But then it turns out that God says unless I go to such-and-such a place tomorrow, he will kill 10,000,000 people. That's worse than Hitler, so it seems like making promises is worse than Hitler, right? No. The solution is some combination of "accept that God doing crazy things will make moral calculations come out weird" and "have a widely-understood norm that promises are binding up to some extremely high point, but no further".

See also https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/

A few things: First, I don't think the John and Todd situation actually poses any serious challenge to a sufficiently developed conception of special obligations. It's just a general example of how first-order and second-order moral commitments might diverge. Both agents ought to prefer that the other acts altruistically *precisely because joint altruism here would bring about the best possible benefits for their own friends.* Their second-order commitment to honoring a special obligation gives them reason, in this situation, to adopt a first-order commitment to altruism or impartial benevolence. It would be pretty easy to construct a similar situation wherein someone acting on the first-order principle of maximizing outcomes regardless of familial ties ultimately results in everyone being worse off - in that case, their impartial benevolence held as a second-order moral principle would countenance adopting a first-order moral principle involving special obligations to family members. Utilitarians have discussed this plenty in terms of their own philosophy, so I'm not sure why it's an issue here.

Secondly, I think your analysis of the Bertha and Juan case ends up in weird places because, as a utilitarian, your singular reliance on the concept of beneficence requires you to see a special obligation as a sort of "weighting" applied to beneficent judgments, rather than a reason in its own right to be considered alongside beneficence. In other words, it's not that "your special obligations to help avert small amounts of harm are greater than your special obligations to avert large amounts of harm." It's that your special obligation has a fixed strength that is at times capable of overriding the strength of your beneficent duty to reduce suffering and at other times is not. It seems obvious to me that Tim should definitely aid Bertha in this situation because, while his friendship may provide him with a legitimate reason to aid Juan, that reason isn't more powerful than his reason to prevent an extra ten trillion lifetimes worth of suffering. I'm not sure what's problematic about that view.

Finally, I'll just say that I think the second premise of the paradox of deontology (and, implicitly, in the same sort of argument you're making here) is ambiguous - it depends what you mean by "preferring a world." Imagine, for example, that I have the opportunity to murder a stranger in order to get one billion dollars. What does it mean to ask "which world I prefer?" If you conceptualize worlds apart from their causal histories, then you're just asking me whether I'd have one billion dollars and a dead stranger or no money and a living stranger. Obviously, I would rather have a billion dollars. So in that sense, I "prefer" the world where I commit the murder - but then the second premise is basically just saying "If you prefer the consequences of an act, then you prefer the act" which is undoubtedly false for anyone who isn't a strict consequentialist. But if you incorporate the sorts of actions involved in producing the world *into* your conception of that world, then the premise is true but the third party would not necessarily rank those worlds in that order *if the third party had a non-consequentialist ethical framework.* But then again, even someone who believes in special obligations wouldn't necessarily endorse the idea that saving a loved one in a way that *prevents* two others from saving their loved ones is appropriate anyway.

So yeah, I don't think any of this is going to challenge a sufficiently thoughtful deontologist or virtue ethicist, because they fundamentally don't share your view that ethical situations are best evaluated "from the standpoint of a third party" who lacks any of their own ethical commitments and only analyzes outcomes. I know everyone has different interests and it's not always worth it to spend time examining views you don't hold, but you should try to explore non-consequentialist ethical models in more detail if you want to construct arguments that really challenge them.

On your first point--I claim that third parties should want you to do the right thing, but this poses challenges for special obligations. There aren't cases under utilitarianism where a third party hopes you act wrongly--if we're talking about what you have most objective reason to do.

On the second: well, that's a fine view, but if you think that you should benefit your friend even when you could benefit a stranger 1.1 times more, then you can't hold the view. In that section, I draw out the absurd implications of the various versions of the view that you could defend.

For the third point--when we talk about preferring a world, I think that's basically a primitive notion--we're asking which world you'd rather be actual. If you're not sure which world is the real one, the one you would want to be actual is the one you prefer.

You're right that a utilitarian third party would never want you to act wrongly, but they might want you to act as a non-utilitarian. There may be some contingent set of social and psychological facts that make it so every child is better off if their parents assume special obligations to their children exist. In that case, the utilitarian third party would not want you to be a utilitarian in a first-order sense. They would want you to act in ways that utilitarianism straightforwardly condemns, but only because doing so ultimately best fulfills the utilitarian aim in a second-order sense. I think most people recognize that and don't find it particularly controversial. So my claim is just that your John and Todd example is the same sort of dilemma, but reversed. In that situation, a third party who accepted special obligations would hope you act as though you don't have them, just as a utilitarian third party might sometimes hope you act as though you *do* have special obligations. But neither are hoping you act wrongly all told.

To the second point, I would say you're still making a mistake by seeing special obligations as modifiers on a foundational beneficent obligation. That's not how most people see them as working. The principle isn't "You should benefit a friend even if you could benefit a stranger 1.1 times more." The principle is just "You should benefit a friend," period. And that reason for action competes with the general reason for action "You should prevent excess suffering." In some cases, the former reason will be stronger. In other cases, it's outweighed.

Finally, I'd just say that, if your account of a particular world includes its causal history, then I don't think any particular moral theory would fall prey to this paradox. If I believe it's wrong to kill one person in order to harvest their organs and save five, then I don't prefer the world where five people are alive over the world where one person is alive - I prefer the state of that world at the particular moment it's being evaluated, but I don't see that state as justifying the act that actualized it. I would prefer the world where one person was alive and an unjust act had not been committed.

They might want you to be a non-utilitarian, but they'd never want you to take a non-utilitarian act. So perhaps believing in special obligations is optimific, but they'd always want you to take the optimal action.

Everyone agrees that you should benefit a friend. The believers in special obligations say you have stronger reasons to do that than to benefit strangers.

The world does include its causal history. But that doesn't avoid any step of the paradox.

Sure, and the neutral observer who believes in special obligations would never want you to take an act that would fail, in some way, to make good on that obligation. But the decision to altruistically aid someone else over your own children inside a larger system wherein adherence to that principle made your own children better off would be the best way to make good on that obligation. So there's no contradiction.

I don't think, once again, that it's right to say a belief in special obligations is a belief that we have "more reason" to benefit friends over strangers. It's more accurately framed as a belief that we have a reason to benefit friends *in addition* to our general reasons to benefit any random person. Your utilitarian perspective here only allows you to have one measure that special obligations must "act on," but deontologists and virtue ethicists can admit multiple sorts of action-guiding reasons that work alongside each other or conflict. Do you agree that, in that conception, the sorts of conflicts you're bringing up are often easily resolvable?

Finally, if a world does include its causal history, then there's no paradox - any third party who is a virtue ethicist or a deontologist would want to actualize the world wherein agents acted rightly, even if doing so creates a world that's "worse" in terms of outcome. But I would also just dispute the usefulness of this framing when it comes to moral systems that are agent-centered rather than outcome-centered.

If we have an extra reason to benefit friends over strangers, then we have more reason to benefit friends than strangers. I'm not assuming utilitarianism--just that special obligations say that you should benefit your friends even when you could benefit strangers a bit more. I don't agree that the virtue ethical account resolves one iota of the problem, and it seems that you haven't articulated how it does.

Which step of the preference relation do you think the third party would reject? Do you agree they prefer the world where the two save their loved ones to the one where just one does?

I think the situation is underdescribed - I would imagine most virtue ethicists don't think you're always justified in *preventing* someone else from fulfilling an important duty merely because you want to fulfill your own. In that case, their ranking would be W3>W2>>>>>>>W1. They just wouldn't be arriving at that particular order based solely on consequences.

But if you look at a situation where that sort of conflict didn't exist - let's say it's just one person at a lake where there are two strangers drowning alongside their one child - then if W1 is the person rescuing both strangers and W2 is the person rescuing their own child, the virtue ethicist could say that they prefer she save her own child *and* that they prefer W2 (because it's a world wherein someone acted appropriately in response to their special obligation).

Definitely sympathetic to the argument that everyone could or would be worse off if we consistently act on special obligations. It's just that I disagree that morality exists outside of the people holding it. There are no universal ideals (with the possible exception of the categorical imperative); there's just what people want and what they're willing to do/risk to get it.

Why do I have reason to prefer that you satisfy your special obligation to your children? I'm not pursuing a world where special obligation satisfaction is maximized. If the world is full of people with distinct and often incompatible special obligations, then there is little reason to think that third parties will or should always root for you to satisfy your own.

I think the conclusion is being snuck into the initial premise. Whether you think your reasons should line up with a neutral observer's reasons for you hinges on whether you can have special obligations that a neutral party wouldn't share.

How should I, as a relative normie, change my beliefs in regard to this? How do you think it affects your actions?

Not much.

I like that Jesus and God got a shout out in this post. ;) And I like the thoughtfulness here, but here's my argument: Love is the essence of what life is really about. Charity toward strangers is good and important. But love at its best is personal, intimate, and emotional. Charity toward a stranger is beautiful, but it only flows in one direction (from the giver). The way most of us prefer to be loved is when there is a mutual exchange. I value you for who you are, and you value me for who I am. We know each other, we embrace each other, and we delight in each other. And we love each other so much we will give whatever we can for the happiness of the beloved. This kind of love is what we long for. And this is why some charitable endeavors (done from a distance), while they may relieve suffering, don't provide for the core need of humanity to experience love in the fullest sense. It is the intimate love that brings us fulfillment, so we should not diminish the glue that sticks us together. If everyone sacrificed for their family and friends, the world would be better off than if everyone sacrificed for strangers. Why? Because we would all be valued in an intimate way, rather than in a utilitarian way. Jesus said, "Greater love has no one than this: to lay down one's life for one's friends."

Which would you, the writer of this blog, prefer: that your blog is read by 200 people who were assigned to read it and were reading from sheer obligation to "do the right thing," or that it's read by 100 people who read it because they think you are amazing, intelligent, insightful, compassionate, and are eager to hear your thoughts?

You are describing what the belief in special obligations is. But I've given objections to it--drawing out its implausible implications. It seems to me you haven't addressed them.

I think love is obviously very valuable--it brings people great joy--which explains why I'd want people who like me to read my blog. But that's a different question from whether there are special obligations.

"In addition, this view implies that friendship is worse than Hitler. It’s plausible that if two people are only conscious for a very short time during a year and never interact, their friendship has very little intrinsic value, just as after one is dead, one’s friendship with them is not very valuable. But this means that if each year for 10^40 years a perfect person became conscious for one second, was given the option to benefit their friend a lot or a stranger a greater amount, and then went back to their slumber, they ought to benefit their friend on account of their special obligation. But this means that them creating the friendship would produce lots of disvalue—10^40 years worth of great disvalue—which makes their friendship plausibly worse than Hitler."

I'm confused as to why creating the friendship would produce disvalue. Wouldn't it just produce a very low amount of value? Or are you referring to the comparative disvalue of a world with the friendship + the 10^40 acts of special obligation vs. a world with 10^40 acts of greater benefit to a stranger (with the idea being that the more/less valuable the friendship, the more/less valuable the extra value of benefitting the friend)?

Because it would cause them to take the action that makes things worse for 10^40 consecutive years.
