I recently debated Ben Burgis on the topic “the organ harvesting case gives us good reason to doubt utilitarianism.” I took the negative position. Here is my opening statement, which I typed out beforehand.
There are lots of good reasons to think the organ harvesting case doesn’t count against utilitarianism.
Part 1: General objections to rights
Here I’ll present a series of philosophical problems with the notion of rights.
1 Everything that we think of as a right is reducible to utility considerations. For example, we think people have a right to life, and respecting that right obviously makes people’s lives better. We think people have the right to keep others from entering their house, but not the right to keep others from looking at their house. The only difference between shooting bullets at people and shooting soundwaves at them (i.e., making noise) is that one causes a lot of harm and the other does not. Additionally, if enshrining as rights things that we currently don’t think of as rights began to maximize hedonic value, we would think those things should be recognized as rights. For example, we don’t think it’s a violation of rights to look at people, but if every time we were looked at we experienced horrific suffering, we would think it was a rights violation to look at people.
2 If we accept that rights are ethically significant, then there’s some number of rights violations that could outweigh any amount of suffering. For example, suppose that there are 100 trillion aliens who will experience horrific torture, which gets slightly less unpleasant for every human leg they grab without the human’s knowledge or consent, such that if they grab the legs of 100 million humans the aliens will experience no torture at all. If rights are significant, then the aliens grabbing the legs of the humans, in ways that harm no one, would be morally bad. Indeed, with that many violations, it would not merely be bad but would be the worst thing in the world. However, it doesn’t seem plausible that the aliens should have to experience being burned alive when no humans even find out about what’s happening, much less are harmed. If rights matter, a world with enough rights violations, where everyone is happy all the time, could be worse than a world where everyone is horrifically tortured all of the time but where there are no rights violations.
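To make the arithmetic of the case explicit, here is a minimal sketch, assuming (an assumption of mine, not stated in the case itself) that each grabbed leg reduces the torture linearly:

$$T(n) = T_0\left(1 - \frac{n}{10^8}\right), \qquad 0 \le n \le 10^8,$$

where $T_0$ is the initial torture intensity suffered by each of the $10^{14}$ aliens and $n$ is the number of human legs grabbed. Grabbing all $10^8$ legs yields $T(10^8) = 0$: the cost is $10^8$ harmless rights violations, and the benefit is the elimination of torture for $10^{14}$ aliens.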
3 A reductionist account of rights is not especially counterintuitive and does not rob us of our understanding of, or appreciation for, rights. It can be analogized to the principle of innocence until proven guilty. That principle is not literally true: a person’s innocence until demonstration of guilt is a useful legal heuristic, yet a serial killer is guilty even if their guilt has not been demonstrated.
4 We generally think that it matters more not to violate rights oneself than to prevent other rights violations, so one shouldn’t kill one innocent person to prevent two murders. If that’s the case, then if a malicious doctor poisons someone’s food and then realizes the error of their ways, the doctor should try to prevent that person from eating the food and having their rights violated, even at the expense of other people being poisoned in ways uncaused by the doctor. If the doctor has the ability to prevent one person from eating the food poisoned by them, or to prevent five other people from consuming food poisoned by others, on this view they should prevent the one person from eating the food they themselves poisoned. This seems deeply implausible. Similarly, this view entails that it’s more important for a person to eliminate one landmine that they themselves set down, and that will kill a child, than to eliminate five landmines set down by other people, which is another unintuitive view.
5 We have lots of scientific evidence that judgments favoring rights are caused by emotion, while careful reasoning makes people more utilitarian. Paxton et al. (2014) show that more careful reflection leads to being more utilitarian.
People with damaged VMPCs (the ventromedial prefrontal cortex, a brain region responsible for generating emotions) were more utilitarian (Koenigs et al. 2007), suggesting that emotion is responsible for non-utilitarian judgments. The largest study on the topic, by Patil et al. (2021), finds that better and more careful reasoning results in more utilitarian judgments across a wide range of studies.
6 Rights run into a problem based on the aims of a benevolent third-party observer. Presumably a third-party observer should hope that you do what is right. However, a third-party observer, if given the choice between a world in which one person kills another to prevent 5 indiscriminate murders and a world with 5 indiscriminate murders, should obviously choose the world in which the one person does the murder to prevent 5.
An indiscriminate murder is at least as bad as a murder done to try to prevent 5 murders. 5 indiscriminate murders are worse than one indiscriminate murder; therefore, by transitivity, a world with one murder committed to prevent 5 should be judged better than a world with 5 indiscriminate murders. If world A should be preferred by a benevolent impartial observer to world B, then it is right to bring about world A. All of the moral objections to bringing about world A would already count against world A being better; if, despite those objections, world A is preferred, then it is better to bring about world A. Therefore, one should murder one person to prevent 5 murders. This seems to contradict the notion of rights.
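The transitivity step can be written out explicitly (the notation is mine): let $b(W)$ denote how bad a world is, $W_p$ the world with one murder committed to prevent five, $W_1$ a world with one indiscriminate murder, and $W_5$ the world with five indiscriminate murders. The two comparative claims above give

$$b(W_p) \le b(W_1) < b(W_5),$$

so $b(W_p) < b(W_5)$, and the benevolent observer should prefer $W_p$ to $W_5$.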
7 We can imagine a case with a very large series of concentric circles of moral people. The innermost circle has one person, the second innermost has five, the third innermost has twenty-five, etc., so that each person corresponds to 5 people in the next circle out. There are a total of 100 circles. Each person is given two options.
1 Kill one person
2 Give the five people corresponding to you in the next circle out the same two options you were just given.
The people in the hundredth circle will be given only the first option, if the buck doesn’t stop before reaching them.
The deontologist has two options. First, they could stipulate that a moral person would choose option 2. However, if this is the case, then a cluster of perfectly moral people would keep passing the buck until the 100th circle and thereby bring about 5^99 murders (a worked count follows below), when the alternative action could have resulted in only one murder. This seems like an extreme implication.
Second, they could stipulate that you should kill one person. However, the deontologist holds that you should not kill one person to prevent five people from being murdered. If this is true, then you certainly shouldn’t kill one person to give five perfectly moral people two options, one of which is killing one person. Giving perfectly moral beings more options that they don’t have to choose cannot make the situation morally worse. If you shouldn’t kill one person to prevent five murders, you certainly shouldn’t kill one person to prevent five things that are judged to be at most as bad as murders by a perfectly moral being who always chooses correctly.
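To make the count in the first horn explicit: circle $k$ contains $5^{k-1}$ people, so if everyone passes the buck, it reaches the 100th circle, all of whose

$$5^{100-1} = 5^{99} \approx 1.6 \times 10^{69}$$

members must each kill one person, versus the single murder that would have occurred had the person in the innermost circle chosen option 1.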
8 Let’s start by assuming one holds the following view:
Deontological Bridge Principle: You shouldn’t push one person off a bridge to stop a trolley from killing five people.
This is obviously not morally different from
Deontological Switch Principle: You shouldn’t push a person off a bridge to cause them to fall on a button which would lift the five people to safety, though their body would not itself stop the trolley.
In both cases you’re pushing a person off a bridge to save five. Whether their body stops the trolley or presses a button that saves the other people is not morally relevant.
Suppose additionally that one is in the Switch scenario. While they’re deciding what to do, a genie appears and gives them the following choice: he’ll push the person off the bridge onto the button, but then freeze the passage of time in the external world so that the decision maker has ten minutes to think about it. At the end of the ten minutes, they can either lift the person who was originally on the bridge back up, or they can let the five people be lifted to safety.
It seems reasonable to accept the genie’s offer. If, at the end of ten minutes, they decide that they shouldn’t have pushed the person, then they can just lift the person back up, such that nothing actually changes in the external world. However, if they decide not to lift the person back up, then they’ve just killed one to save five, an action functionally identical to pushing the person in Switch. Thus, accepting the genie’s offer is functionally identical to just having more time to deliberate.
It’s thus reasonable to suppose that they ought to accept the genie’s offer. However, at the end of the ten minutes they have two options: they can either lift up the person they pushed, preventing that person from being run over, or they can do nothing and save five people. Obviously they should do nothing and save five people. But this is identical to the Switch case, which is morally the same as Bridge.
We can consider a parallel case with the trolley problem. Suppose one is in the trolley problem and a genie offers them the option to flip the switch and then have ten minutes to deliberate on whether or not to flip it back. It seems obvious they should take the genie’s offer.
At the end of the ten minutes, they’re in a situation where they can flip the switch back, in which case the trolley will kill five people instead of one, given that it’s already primed to hit one person. It seems obvious in this case that they shouldn’t flip the switch back. Thus, deontology has to hold that taking an action and then reversing it, such that nothing in the external world is different from if the action had never been taken and then reversed, is seriously morally wrong.
If flipping the switch is wrong, then it seems that flipping the switch to delay the decision ten minutes, but then not reversing the decision, is wrong. However, flipping the switch to delay the decision ten minutes and then not reversing the decision is not wrong. Therefore, flipping the switch is not wrong.
Maybe you hold that there’s some normative significance to flipping the switch and then flipping it back, making it so that you should refuse the genie’s offer. This runs into issues of its own. If it’s seriously morally wrong to flip the switch and then flip it back, then flipping it back and forth an arbitrarily large number of times would be arbitrarily wrong. Thus, an indecisive person who froze time and then flipped the switch back and forth a googolplex times would have committed the single worst act in history, by quite a wide margin. This seems deeply implausible.
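The implausibility here rests on a simple aggregation claim (my formalization, not part of the original debate): if each flip-and-unflip pair carries some fixed wrongness $w > 0$ and wrongness adds up, then $n$ pairs carry total wrongness

$$n \cdot w,$$

which, for $n$ on the order of a googolplex ($10^{10^{100}}$), exceeds the wrongness of any act in history no matter how small $w$ is.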
Either way, deontology seems committed to the bizarre principle that taking an action and then undoing it can be very bad. This is quite unintuitive. If you undo an action, such that it had no effect on anything because it was cancelled out, the action can’t have been very morally wrong. Much like a stretch of bad writing isn’t bad if one hits the undo button and replaces it with good writing, it seems like actions that are annulled can’t be morally bad.
It also runs afoul of another super intuitive principle, according to which if an act is bad, it’s good to undo that act. On deontological accounts, it can be bad to flip the switch, but also bad to unflip the switch. This is extremely counterintuitive.
9 Huemer (2009) gives another paradox for deontology, which starts by laying out two principles (p. 2):
“Whether some behavior is morally permissible cannot depend upon whether that behavior constitutes a single action or more than one action.”
This is intuitive—how we classify the division between actions shouldn’t affect their moral significance.
Second (p. 3): “If it is wrong to do A, and it is wrong to do B given that one does A, then it is wrong to do both A and B.”
Now Huemer considers a case in which two people are being tortured, prisoner A and prisoner B. Mary can reduce A’s torture by some amount at the cost of increasing B’s torture by half that amount, and she can do the same thing in reverse for B. If she does both, this clearly would be good: everyone would be better off. However, on the deontologist’s account, both acts are wrong, since torturing one person to prevent greater torture for another is morally wrong.
If it’s wrong to cause one unit of harm to prevent 2 units of harm to another, then an action which does this for two people, making everyone better off, would be morally wrong. However, this clearly wouldn’t be morally wrong.
10 Suppose one was deciding whether to press a button. Pressing the button would have a 50% chance of saving someone, a 50% chance of killing someone, and would certainly give the presser five dollars. Most moral systems, deontology in particular, would hold that one should not press the button.
However, Mogensen and MacAskill (2021) argue that this situation is analogous to nearly everything that happens in one’s daily life. Every time a person gets in a car, they affect the distribution of future people by very slightly changing the times at which lots of other people have sex.
Given that any act which changes traffic by even a few milliseconds will affect which sperm from any given ejaculation fertilizes an egg, each time you drive a car you causally change which future people will exist. Your actions are thus causally responsible for every action that will be taken by the new people you cause to exist; no doubt some of them will violate rights in significant ways, and others will have their rights violated, in ways caused by you. Mogensen and MacAskill argue that consequentialism is the only way to account for why it’s not wrong to take most mundane, banal actions, which change the distribution of future people and thus cause (and prevent) vast numbers of rights violations over the course of your life.
11 The Pareto principle, which says that if something is good for some and bad for no one then it is good, is widely accepted. It’s hard to deny that something which makes people better off and harms literally no one is morally good. However, from the Pareto principle, we can derive that organ harvesting is morally the same as the trolley problem.
Suppose one is in a scenario that’s a mix of the trolley problem and the organ harvesting case. There’s a trolley that will hit five people. You can flip a switch to redirect the trolley to kill one person. Alternatively, you can kill that person and harvest their organs, which would enable the 5 people to move out of the way. These two actions seem morally equal if we accept the Pareto principle: both result in all six people being equally well off. And if the organ harvesting action created any extra utility for anyone, it would be a Pareto improvement over flipping the switch.
Premise 1: One should flip the switch in the trolley problem.
Premise 2: Organ harvesting, in the scenario described above, plus giving a random child a candy bar, is a Pareto improvement over flipping the switch in the trolley problem.
Premise 3: If action X is a Pareto improvement over an action that should be taken, then action X should be taken.
Therefore, organ harvesting plus giving a random child a candy bar is an action that should be taken.
Part 2: Specific objections to the organ harvesting case
First, there’s a way to explain away our organ harvesting judgments sociologically. As a society, we rightly have a strong aversion to killing. However, our aversion to death in general is far weaker; if it were as strong as our aversion to killing, we would be rendered impotent, because people die constantly of natural causes.
Second, we have good reason to answer no to the question of whether doctors should kill one to save five. A society in which doctors regularly violate the Hippocratic oath and kill one person to save five would be a far worse world; people would be terrified to go into doctors’ offices for fear of being murdered. While the thought experiment generally stipulates that the doctor will certainly not be caught and the killing will occur only once, our revulsion to very similar, more easily imagined cases explains our revulsion to killing one to save 5.
Third, we can imagine several modifications of the case that make the conclusion less counterintuitive.
First, imagine that the six people in the hospital were family members, all of whom you cared about equally. Surely we would intuitively want the doctor to bring about the death of one to save five. The only reason we have the opposite intuition when family is not involved is that our revulsion to killing can override other considerations when we feel no connection to the anonymous, faceless strangers whose deaths are caused by the doctor’s adherence to the principle that they oughtn’t murder people.
A second modification comes from Savulescu (2013), who designs a scenario to avoid unreliable intuitions. In this scenario there’s a pandemic that affects every single person and renders people unconscious. One in six people who become unconscious will wake up; the other five-sixths won’t. However, if the one-sixth have their blood extracted and distributed, thus killing them, then the other five-sixths will wake up and live normal lives. It seems in this case that it’s obviously worth extracting the blood to save 5/6ths of those affected, rather than only 1/6th.
Similarly, if we imagine that 90% of the world needed organs, and we could harvest one person's organs to save 9 others, it seems clear it would be better to wipe out 10% of people, rather than 90%.
A fourth objection is that, upon reflection, it becomes clear that the doctor’s action wouldn’t be wrong. After all, in this case, the organ harvesting saves four additional lives on net. It seems quite clear that the lives of four people are fundamentally more important than the doctor not sullying themself.
Fifth, we would expect the correct view to diverge from our intuitions in a wide range of cases. The persistence of moral disagreement, and the fact that throughout history we’ve gotten lots of things morally wrong, show that the correct view would sometimes diverge from our moral intuitions. Thus, finding some case where utilitarianism diverges from our intuitions is precisely zero evidence against it, because we’d expect the correct view to be counterintuitive sometimes. However, where it’s counterintuitive, we’d expect careful reflection to bring our intuitions more in line with the correct moral view, which is the case, as I’ve argued here.
Sixth, if we use the veil of ignorance and imagine ourselves not knowing which of the six people we were, we’d prefer saving five at the cost of one, because it would give us a 5/6, rather than a 1/6, chance of survival.
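The calculation behind the veil-of-ignorance point is just this: if each of the six identities is equally likely to be yours, then

$$P(\text{survive} \mid \text{harvest}) = \frac{5}{6} \approx 0.83 > \frac{1}{6} \approx 0.17 = P(\text{survive} \mid \text{no harvest}),$$

so, behind the veil, you would endorse saving the five.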
Just want to respond to a few points:
> 6 Rights run into a problem based on the aims of a benevolent third-party observer. Presumably a third-party observer should hope that you do what is right. However, a third-party observer, if given the choice between a world in which one person kills another to prevent 5 indiscriminate murders and a world with 5 indiscriminate murders, should obviously choose the world in which the one person does the murder to prevent 5.
What is meant by "benevolent" in the phrase "benevolent third-party observer"? There seem to be two plausible meanings: either a benevolent third-party observer is someone who (1) hopes the world is made better, or (2) hopes agents do what is right.
If you mean (1), then the statement "a third-party observer should hope that you do what is right" is question-begging against deontology. Deontology explicitly affirms that it is sometimes right to perform actions that make the world worse. Therefore, deontology explicitly affirms that agents should sometimes perform actions that go against the wishes of a third-party observer.
But if you mean (2), then the statement "a third-party observer...should obviously choose the world in which the one person does the murder to prevent 5" is question-begging against deontology. The statement can be translated to "an observer who wants agents to do what is right...should obviously choose the world in which the one person does the murder to prevent 5". But this premise obviously just negates deontology by itself.
There's a similar issue with the ring case:
> Giving perfectly moral beings more options that they don’t have to choose cannot make the situation morally worse.
This premise seems sufficient to negate deontology. Deontology explicitly states that we are sometimes morally required to perform actions that make the world worse. If that's right, then there are obviously going to be cases where giving a perfectly moral agent more options will make the world worse. E.g. giving more power to perfect deontologists instead of perfect consequentialists will obviously make the world worse, so long as the deontologist has to choose between violating rights and making the world better. That's just what the view affirms.