Debate with Truth-Teller Part 1: My Opening Statement Arguing Against Deontology
Why deontology is wrong: beginning a written debate
Introduction
Recently, Truth-Teller agreed to a written debate about deontology with yours truly. The format is as follows: over the course of the next four weeks, each week, we will each produce one article of up to 7,000 words. The first article will be our opening statement; the second will respond to the first, the third will respond to the second, and so on.
Let me begin by defining deontology. Deontology is (roughly) the view that there is at least some constraint on our pursuit of the good. Thus, even though a world where you harvest someone’s organs to save five is better than one in which you don’t, deontologists hold that you still shouldn’t do it—there are moral side constraints. So if you’re a deontologist, you’ll oppose killing one to save five, a doctor harvesting the organs of their patients, and pushing the man off the bridge to save five.
To avoid reinventing the wheel, much of this will be copied from other articles I’ve written in the past. Most of my best objections to deontology have already been written about.
Argument 1: Bombs in the park
Suppose a person plants a bomb in a park for malevolent reasons. Then, realizing that they’ve done something immoral, they decide to take the bomb out of the park. However, in the process, they discover two other bombs planted by other people. They can either defuse their own bomb or the other two bombs. Each bomb will kill one person.
It seems very obvious that they shouldn’t defuse their own bomb—they should instead defuse the two others. But this is troubling—on the deontologist’s account, this is hard to make sense of. When choosing between defusing their own bomb or the two others, they are directly choosing between a course of action on which they violate one person’s rights and one on which two people’s rights are violated.
To avoid this, the deontologist can try to make the following argument. They can claim that it’s wrong to take an action that violates rights, but if you take an action that causes you to violate rights, there are only consequentialist reasons counting against that action. But this runs into a problem. Suppose that you know that in one hour, you’ll be a consequentialist—though for non-rational reasons. You can either drive to a hospital now or not do so. You know that if you drive to the hospital now, when you’re a consequentialist, you’ll kill one person to harvest her organs and save five. In this case, driving to the hospital seems wrong. If it’s wrong to violate rights, then it’s wrong to take some action that will predictably result in your violating rights. But if that’s wrong, then it’s wrong to take out your bomb rather than the two others. But that’s clearly false.
Amos thought that the doctrine of double effect might provide a way to avoid this. But I don’t think that works. If you plant the bomb in a park and it kills someone, you have performed an act with the worst possible intention—your intention when you planted it was to kill someone. By contrast, the intent to save people in cases like the organ harvesting case would, if it had any effect at all, make organ harvesting better, not worse.
Argument 2: Deontology holds you should want people to do the wrong thing
This argument is a simpler and less comprehensive—also probably less decisive—version of Richard’s argument here. Definitely check out Richard’s excellent paper on the topic—he presses the argument in a more sophisticated way, giving it overwhelming force. This is probably the best argument against deontology.
Suppose you’re deciding whether or not to kill one person to prevent two killings. The deontologists hold that you shouldn’t. However, it can be shown that a third party should hope that you do. To illustrate this, suppose that a third party is deciding between you killing one person to prevent the two killings, or you simply joining the killing and killing one indiscriminately. Surely, they should prefer you kill one to prevent two killings to you killing one indiscriminately.
Thus, from the standpoint of a third party, killing one to prevent two killings would be no worse than killing one indiscriminately. But a third party should prefer you killing one indiscriminately to two other people each killing one indiscriminately. Therefore, by transitivity, they should prefer you killing one to prevent two killings over the two killings happening—thus they should prefer you kill one to prevent two. To see this, let’s call you killing one indiscriminately YKOI, you killing one to prevent two killings YKOTPTK, and the two killings happening TKH.
YKOTPTK ≻ YKOI ≻ TKH, where ≻ represents being preferable. Thus, the deontologist should want you to do the wrong thing sometimes—a perfectly moral third party should hope you do the wrong thing.
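The transitivity step can be sketched mechanically. The snippet below is a minimal illustration of my own (not part of the original argument): the two premised pairwise preferences are encoded as ordered pairs, and computing the transitive closure yields the derived preference for YKOTPTK over TKH.

```python
def transitive_closure(pairs):
    """Repeatedly add (a, c) whenever (a, b) and (b, c) are both present."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# "x is preferable to y" encoded as the pair (x, y)
premises = {
    ("YKOTPTK", "YKOI"),  # prefer killing one to prevent two over an indiscriminate killing
    ("YKOI", "TKH"),      # prefer one indiscriminate killing over the two killings happening
}

closure = transitive_closure(premises)
print(("YKOTPTK", "TKH") in closure)  # prints True: the derived preference
```

The point of spelling it out is only that nothing beyond transitivity is needed to reach the conclusion the deontologist must resist.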
Argument 3: The reversal paradox
Imagine one thinks that it’s wrong to flip the switch in the trolley problem. While we’ll first apply this scenario to the trolley problem, it generalizes to wider deontological commitments. The question is this: suppose one accidentally flips the switch. Should they flip it back?
It seems that the answer is obviously no. After all, there’s now a trolley barreling toward one person. If you flip the switch back, it will kill five people. In this case, you should obviously not flip it back.
However, there’s a very plausible principle that says that if an action is wrong to do, then if you do it, you should undo it. Deontologists thus have to reject this principle. They have to think that actions are wrong, but you shouldn’t undo them. More precisely, the principle says the following.
Reversal Principle: if an action is wrong to do, then if you do it, and you have the option to undo it before it has had any effect on anyone, you should undo it.
This problem can also apply to the footbridge case. Suppose you push the guy. Should you pull him back up? No—if you come across a person who is going to stop a train from killing five, you obviously shouldn’t preemptively save him by lifting him up, costing five lives.
This also applies to the organ harvesting case. Suppose you harvest the guy’s organs and put them in five other people. However, you can take the five organs out of those people, killing them, and put them back in the original person, saving him. Should you do that? Of course not!
Maybe you think you should. In that case, we can add two modifications to the scenario.
The first modification adds a time delay; the second adds memory loss. Thus, you are deciding whether to take five organs out of five people, killing them, and put them in one person. Ordinarily, you wouldn’t do that. However, someone informs you that, a year ago, you took the organs out of the original person. Thus, you’d just be reversing the earlier action.
It still seems obviously wrong to kill the five to save the one. But the deontologist must bite the bullet here or reject the reversal principle—after all, this is a case where they can reverse an action they did before it affected anyone.
Argument 4: Huemer’s paradox of deontology
Huemer, in his paradox of deontology, begins by laying out two principles.
“Individuation Independence: Whether some behavior is morally permissible cannot depend upon whether that behavior constitutes a single action or more than one action.”
This is intuitive—how we classify the division between actions shouldn’t affect their moral significance.
Second
“Two Wrongs: If it is wrong to do A, and it is wrong to do B given that one does A, then it is wrong to do both A and B.”
This is obvious. Huemer gives an example to justify it, but properly understood, the principle is trivial. Now Huemer considers a case in which two people, prisoner A and prisoner B, are being tortured. Mary can reduce A’s torture by some amount at the cost of increasing B’s torture by half as much, and she can do the same in reverse for B. If she does both, that would clearly be good—everyone would be better off.
However, on the deontologist’s account, both acts are wrong. Torturing one person to prevent greater torture for another is morally wrong. But if both acts are wrong, then Two Wrongs entails that doing both of them is wrong, and Individuation Independence entails that a single action that decreases both of their torture by some amount would be wrong. But that’s clearly false!
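A toy calculation makes the structure of Mary’s case concrete. The numbers below are purely illustrative assumptions of mine (torture levels of 10 units each, a reduction of 2 at a cost of 1, matching the ratio in Huemer’s case), but they show how two individually “wrong” acts combine into an outcome where both prisoners are strictly better off.

```python
# Illustrative numbers only: each prisoner starts at an assumed 10 units of torture.
torture = {"A": 10, "B": 10}

def relieve(target, other, torture, relief=2, cost=1):
    """Reduce `target`'s torture by `relief` at the price of adding `cost` to `other`."""
    torture[target] -= relief
    torture[other] += cost

relieve("A", "B", torture)  # wrong by the deontologist's lights: it tortures B more
relieve("B", "A", torture)  # likewise wrong
print(torture)  # prints {'A': 9, 'B': 9}: both strictly better off than at 10 each
```

The net effect of the pair of acts is identical to a single act that reduces each prisoner’s torture by one unit, which is exactly what Individuation Independence says should settle their shared moral status.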
There are various solutions to this, but they all fail. I won’t refute them all in detail—it’s better to wait for Truth-Teller’s response.
Argument 5: The paralysis argument
The argument that deontologists shouldn’t move should really move deontologists. The argument is roughly as follows—every time a person drives their car or moves, they affect whether a vast number of people exist. If you talk to someone, that delays when they next have sex by a bit, which changes the identity of the future person. Thus, each of us causes millions of future people’s identities to change.
This means that each of us causes lots of extra murderers to be born, and prevents many from being born. While the consequences balance out in expectation, every time we drive, we are both causing and preventing many murders. On deontology, an action that has a 50% chance of causing an extra death, a 50% chance of preventing an extra death, and gives one trivial benefit is wrong—but this is what happens every time one drives.
One way of pressing the argument is to imagine the following. Each time you flip a coin, there’s a 50% chance it will save someone’s life, a 50% chance it will kill someone, and it will certainly give you five dollars. This seems analogous to going to the store, for trivial benefits—you might cause a death, you might save someone, and you definitely get a trivial reward.
Amos thought that one way to get around this is to say that if some action will change the identity of future people and thereby cause some bad act to occur, one has only consequentialist reasons not to do it. But this runs into apparent counterexamples. Suppose you know that driving will cause one extra death by changing traffic patterns, but it will prevent a serial killer from killing five. Driving seems fine there. However, if you have a deontological reason not to cause the extra death from driving but only consequentialist reasons to save the five victims of the serial killer, driving would turn out to be wrong.
Similarly, suppose you’re choosing between two actions: one will set off a complex chain reaction that kills one person in 40 years, and the other will cause someone to be born who will kill two people in 40 years. The second seems worse. However, if you have non-consequentialist reasons not to do the first and only consequentialist reasons not to do the second, then the first would be worse.
Argument 6: The prevention paradox
Deontology, as standardly formulated, holds that there are some acts that you shouldn’t take even for the greater good and indeed even to prevent more of such acts. For example, you shouldn’t kill to prevent two killings. But strangely, the deontologist is committed to thinking that, for many of these acts, you shouldn’t try to prevent others from taking them. Or so I’ll argue.
The argument against deontology is as follows (I’ll use a paradigm case of killing one to prevent two killings here).
If deontology is true, then you shouldn’t kill one person to prevent two killings.
If you shouldn’t kill one person to prevent two killings, then, all else equal, you should prevent another person from killing one person to prevent two killings.
All else equal, you should not prevent another person from killing one person to prevent two killings.
Therefore, deontology is false. I’ll defend each of the premises.
1
If deontology is true, then you shouldn’t kill one person to prevent two killings.
This is true by definition.
2
If you shouldn’t kill one person to prevent two killings, then, all else equal, you should prevent another person from killing one person to prevent two killings.
The idea here is pretty simple. It seems really obvious that you should prevent people from doing wrong things if you can do so at no personal cost. In fact, as this paper—which started the entire idea of this worry for deontology in my mind—notes, this produces a strange result when it comes to deterrence. Presumably, if we think that killing one to save five is wrong, we’ll think it’s a good thing that laws against murder deter it. But if we think that third parties have no reason to prevent killing one to save five, then deterrence is not a reason to ban deontic rights violations with good outcomes.
One should prevent wrongdoing, all else held equal. If you have no reason to prevent organ harvesting, then it isn’t wrong.
3
All else equal, you should not prevent another person from killing one person to prevent two killings.
This premise has a supporting argument.
Deontology either is or is not true.
If deontology is not true, you should not prevent another person from killing one person to prevent two killings.
If deontology is true, you should not prevent another person from killing one person to prevent two killings.
Therefore, you should not prevent another person from killing one person to prevent two killings.
1 and 2 are trivial—I’m using deontology to mean merely that there are constraints, so if there aren’t, then you should kill one to prevent two killings. The only point in dispute is 3.
There are five reasons to accept premise 3. The first is defended in detail in the original paper—deontology just has no principled explanation of why you should prevent deontological wrongs. After all, third parties aren’t responsible for deontological wrongs they don’t prevent, though they do have some reason to make things better. Deontologists agree that you should promote the good, all else equal. Thus, they should recognize that you have some reason not to prevent a killing that prevents two killings—a consequentialist one—and no deontological reason to prevent it. So, all things considered, there’s no account of why one ought to prevent it.
Second, Richard’s paradox decisively shows that even deontologists should hope that you kill one to prevent multiple killings. But if you ought to want something to happen, you shouldn’t prevent it from happening.
Third, there’s a clear deontic reason not to prevent one killing to prevent multiple killings. After all, this is causing two murders to prevent one. It is clearly wrong—on both deontology and other theories—to cause two murders to prevent one.
Fourth, if it’s wrong to cause something, it seems you should prevent it if you can do so costlessly. But the deontologist has to deny this; after all, they think that you shouldn’t cause someone else to harvest organs, but that it would be wrong to prevent it.
Fifth, let’s imagine that someone else commits one killing to prevent you from killing two. It seems obviously wrong, on deontology, to take some action that will result in you directly killing two to prevent someone else from killing one. Thus, this deontic wrong would be something that you have no reason to prevent.
Argument 7: Huemer’s Paradox of Deontology’s More Threatening Twin
Huemer has a paradox of deontology. The basic idea is the following. If you reduce a person’s suffering by 2 units at the cost of inflicting one unit of suffering, deontology says that’s wrong. However, if you do that to two people, the combination of those acts reduces everyone’s suffering and is clearly good. But this is what you get from two wrong actions—though if you do two wrong things, and each is wrong even conditional on the other, it shouldn’t result in something right.
This argument is, I think, quite tricky to get out of. But the idea is not that the combination of actions is obviously or necessarily good; instead, it’s that the combination of actions is no different from one action that combines their effects—which is clearly good—because, when taking an action, it is irrelevant whether it’s counted as one action or two.
But I think there’s another argument in this vicinity that is even more threatening. This argument may be enough to refute deontology single-handedly. The basic argument shows that rights violations for trivial benefits are sometimes fine.
Take the following example of theft. Suppose that there are 1,000 aliens, each of which has a stone. They can all steal the stone of their neighbor to decrease their suffering very slightly any number of times. The stone produces very minimal benefits—the primary benefit comes from stealing the stone.
The aliens are in unimaginable agony—experiencing, each second, more suffering than has existed in human history. Each time they steal a stone, their suffering decreases only slightly, so they have to steal it 100^100 times in order to drop their suffering to zero. It’s very obvious, in this case, that all of the aliens should steal the stones 100^100 times. If they all do that, rather than being in unimaginable agony, they won’t be badly off at all.
The following seem true.
If deontology is true, it is wrong to steal for the sake of minimal personal benefits.
If it is wrong to steal for the sake of minimal personal benefits, it is wrong to steal repeatedly where each theft considered individually is for the sake of minimal personal benefits.
In the alien case, it is not wrong to steal repeatedly where each theft considered individually is for the sake of minimal personal benefits.
Therefore, deontology is false.
1 is obvious enough. 2 is also obvious—if one thing is wrong to do once, then if lots of people do it repeatedly, that would be especially wrong. 3 was described above—if the aliens don’t steal repeatedly, they will end up in a state where they experience more suffering per second than has existed in all of history.
This also generalizes to any rights violation that can be committed repeatedly (e.g., the aliens could grab each other’s legs without the others’ knowledge or consent).
Argument 8: Dual Process Theory
We have lots of scientific evidence that judgments favoring rights are driven by emotion, while careful reasoning makes people more utilitarian. Paxton et al. (2014) show that more careful reflection leads to more utilitarian judgments.
People with damage to the vmPFC (a brain region responsible for generating emotions) were more utilitarian (Koenigs et al. 2007), suggesting that emotion drives non-utilitarian judgments. The largest study on the topic, by Patil et al. (2021), finds that better and more careful reasoning results in more utilitarian judgments across a wide range of studies.
Argument 9: Richard’s New Paradox of Deontology
This is entirely taken from Richard’s excellent article on the subject. I originally started writing an explanation, but there’s no need to reinvent the wheel when the argument has already been put so well. The following is from Richard:
Scheffler’s classic “paradox of deontology” asks how it can be wrong to minimize morally objectionable actions: If killing is so bad, shouldn’t we endorse one killing to prevent five? But deontologists can naturally respond that they don’t see killing as a bad to be minimized, but as a wrong to be prohibited in each instance.
My latest draft paper, ‘Preference and Prevention: A New Paradox of Deontology’, argues that deontologists face a deeper problem: once an agent has gone ahead and killed one in an attempt to prevent five other killings, deontologists cannot accommodate how strongly we should hope that her attempt succeeds.
For clarity, here are the four possible outcomes to compare (against a common backdrop of Protagonist choosing whether to kill one to prevent five other killings):
Five Killings: Protagonist does nothing, so the five other murders proceed as expected.
One Killing to Prevent Five: Protagonist kills one as a means, thereby preventing the five other murders.
Failed Prevention: As above, Protagonist kills one as a means, but in this case fails to achieve her end of preventing the five other murders. So all six victims are killed.
Six Killings: Instead of attempting to save the five, Protagonist simply murders her victim for the sheer hell of it, just like the other five murderers. So all six victims are killed.
Here’s the argument in a nutshell (using ‘≻’ to indicate ideal preferability, and ‘≻≻’ to indicate vast preferability, where this is stipulated to mean preferability of a magnitude strictly greater than the extent to which we should prefer one less generic killing):
(1) Deontic constraint (for reductio): Protagonist acts wrongly in One Killing to Prevent Five, and ought instead to bring about the world of Five Killings.
(2) If an agent can bring about W1 or W2, and it would be wrong for them to bring about W1 (but not W2), then W2 ≻ W1. (key premise)
(3) Five Killings ≻ One Killing to Prevent Five. (from 1, 2)
(4) One Killing to Prevent Five ≻≻ Failed Prevention. (premise)
(5) Failed Prevention ⪰ Six Killings. (premise)
(6) Five Killings ≻ One Killing to Prevent Five ≻≻ Failed Prevention ⪰ Six Killings. (3 - 5, transitivity)
(7) It is not the case that Five Killings ≻≻ Six Killings. (definition of ‘≻≻’)
# Contradiction (6, 7, transitivity).
As I argue in the full paper, “we should regard (4) as an unassailable moral datum, the rejection of which would entail severe moral disrespect to the five extra murder victims.” After all, Failed Prevention contains everything that’s morally objectionable about One Killing to Prevent Five, plus five additional, completely gratuitous killings. There is no respect whatsoever in which Failed Prevention is morally preferable. So the only way to lack the preference demanded by (4) is to fail to much care about those five additional killings. I take that to be plainly morally unacceptable. So everyone must accept (4).
But once you prefer Five Killings ≻ One Killing to Prevent Five, there’s not enough “room” left between Five Killings and Six to fit in the stronger preference that (4) demands. The preference chain indicated in (6) is incoherent, entailing a contradiction. It turns out that we can only accommodate (4) if we instead accept the consequentialist’s preference ordering, on which One Killing to Prevent Five ≻ Five Killings.
I then argue that this is effectively to embrace consequentialism (in substance if not in name). In particular, I argue that:
(I) Denying (2)—e.g. by claiming that we should prefer wrong actions to be performed— would rob deontic verdicts of their normative authority.
(II) Insofar as one can distinguish narrowly “act-directed” from broader “state-directed” motivations, the latter have greater normative authority. (Compare ‘Constraints and Candy.’)
(III) Deontologists can’t escape my argument by withdrawing to a narrower/moralized conception of “preference”, or by refraining from making any claims about preferability at all, because (i) any true view must be coherently completable, and (ii) there are clearly moral truths about broad preferability, e.g. that a decent moral agent should prefer that a child not be struck by lightning rather than be struck (all else equal).
The only way out for deontologists, that I can see, would be to invoke rampant incommensurability: perhaps the deontic reasons can’t be meaningfully weighed against the reasons of beneficence, leaving us without any basis for forming all-things-considered preferences in cases where the two conflict. But, as I conclude:
Rather than affirming deontic constraints, this view transforms them into (indeterminate) moral dilemmas. One might say that we “deontologically ought” to respect the constraint, but it would be equally true to say we “consequentially ought” to violate it, and quietism rules out the claim that we definitively ought to care more about the constraint than about the consequent benefits of violating it. All we can say is that we ought to feel (irreparably) torn, which leaves us entirely lacking in practical normative guidance.
If we are to combine normative authority, normative guidance, and adequate respect and concern for the five rescuable victims (after the other one has already been killed as a means), then we need consequentialism.
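One way to see why the preference chain in (6) cannot be coherently completed is to try to represent it with real-valued utilities. The sketch below is my own illustration (the utility values and the search procedure are assumptions, not part of Richard’s paper): `d > 0` stands for the weight of one generic killing, and a random search looks for any assignment jointly satisfying (3), (4), (5), and (7). None exists, because (3)–(5) force Five Killings to exceed Six Killings by more than d, contradicting (7).

```python
import random

def satisfies(u_five, u_one, u_failed, u_six, d):
    """Check the four constraints under a real-valued utility model,
    where d > 0 is the weight of one generic killing."""
    return (u_five > u_one               # (3) Five Killings ≻ One Killing to Prevent Five
            and u_one - u_failed > d     # (4) One Killing to Prevent Five ≻≻ Failed Prevention
            and u_failed >= u_six        # (5) Failed Prevention ⪰ Six Killings
            and u_five - u_six <= d)     # (7) not (Five Killings ≻≻ Six Killings)

random.seed(0)
hits = 0
for _ in range(100_000):
    u_five, u_one, u_failed, u_six = (random.uniform(-10, 10) for _ in range(4))
    d = random.uniform(0.01, 10)
    if satisfies(u_five, u_one, u_failed, u_six, d):
        hits += 1
print(hits)  # prints 0: no assignment satisfies all four constraints
```

The search can never succeed, as a two-line derivation shows: (3)–(5) give u_five > u_one > u_failed + d ≥ u_six + d, so u_five − u_six > d, which is exactly what (7) denies.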
Argument 10: Agency as a Force for Good
It can be difficult to resist the general consequentialist principle that more desirable outcomes are more worth bringing about. Put simply: if you have a choice between bringing about a better outcome or a worse one, surely it would be morally better to choose the better outcome?
—Richard, writing for utilitarianism.net.
Deontology holds that there are constraints on what you may do. But this produces a very strange result—it forces the conclusion that putting perfect people in charge of decisions sometimes makes things worse. Suppose that a person, while unconscious, sleepwalks and is about to kill someone and harvest their organs to save five people. Then they wake up and must choose whether to actually harvest the organs.
Given that harvesting the organs makes things better—it causes one death instead of five—it would be bad that they woke up. But this is strange—it seems that putting people who always do the right thing in charge of a situation can’t make things worse.
Argument 11
The argument here against deontology is roughly as follows.
If deontology is true, it’s impermissible to kill one person to harvest their organs and save five.
If it’s impermissible to kill one person to harvest their organs and save five, it’s impermissible to flip the switch in the trolley problem.
It’s not impermissible to flip the switch in the trolley problem.
Therefore, deontology is false. Most people have the intuition that 3 is true, and 1 is true almost by definition. 2 is the only controversial premise.
Most people think that you should flip the switch in the illuminatingly titled switch version of the trolley problem. The basic scenario is this. A train is on a track that is going to hit five people. SAD! However, you can flip a switch to redirect it so that it will hit one other person on the track. Most people think that you should do so; we know this from poll results.
There’s a very natural explanation of why this is the case. The explanation is roughly the following: it’s good to bring about some number of deaths to prevent some greater number of deaths. One death is less bad than five deaths, so you should bring about a state of affairs that ends up with one death rather than five. It’s good to make things better, so you should flip the switch.
I like this account, and think it is true. However, this account does not fit all of our initial intuitions—consider the following two sets of intuitions.
Transplant: a doctor can harvest a healthy person’s organs to redistribute them and save five people.
Push: A big man is on a track. A train will hit five people unless you push the man off the track. He’ll be killed by the train, but he’ll stop the train from hitting the five.
Now, elsewhere I’ve argued that utilitarianism does, in fact, get the correct answer in Transplant and Push. But here, I’ll argue for something more modest—the wrongness of flipping the switch is roughly the same as the wrongness of pushing the man or harvesting the person’s organs.
Suppose that flipping the switch were significantly less wrong than pushing the guy off the bridge. In that case, we should expect that, if one is given the choice between the two actions, they ought to flip the switch. After all, it’s better to do the less wrong thing rather than the more wrong thing.
Thus, the argument is as follows.
If flipping the switch is significantly less wrong than pushing the man, then if given the choice between the two options, even if flipping the switch is made less choiceworthy, one ought to flip the switch.
If given the choice between the two options, even if flipping the switch is made less choiceworthy, one ought not flip the switch.
Therefore, flipping the switch is not significantly less wrong than pushing the man.
1 is very obvious. If it’s not seriously wrong to flip the switch but it’s seriously wrong to push the man, then there won’t be scenarios where you should push the man, rather than flip the switch, even if we equalize things and make flipping the switch less choiceworthy. If you think that pushing small children is very wrong but eating cake is not very wrong, then if given the choice between the two, even if pushing small children is made slightly more choiceworthy, you should still eat cake instead.
I’ll clarify what I mean more in the defense of premise 2. Imagine the following scenario. There’s a train that will hit five people unless you do something. There’s a man standing above, on a bridge. There’s a track that you can, by flipping a switch, redirect the train onto, which leads up to the man on the bridge above the track. However, the train moves very slowly, so if you do that, the man will be very slowly and painfully crushed.
However, you have another option. You can, by pushing the man, cause him to die painlessly as he hits the tracks, and he’ll stop the five people. Which should you do?
Now, the deontologist is in a bit of a pickle. On the one hand, they think that you should flip the switch in general to bring about one death while saving five, but you shouldn’t push a man to save five. But in this case, it seems obvious that, given that the man would be far, far better off, it’s much better to push him than it is to flip the switch.
One might say that in this case, the man would consent. However, this doesn’t have to be so. We could imagine the man refusing to consent to anything, or simply not having the opportunity to consent—for some reason you can’t ask him; perhaps he’s asleep.
They could just bite the bullet and say you should flip the switch. But this is obviously wrong. It makes the victim much worse off than he’d otherwise be. Whatever morality is, it shouldn’t demand gratuitously harming the victims of your actions.
Consider another case that should bear out the intuition. Suppose that it’s like the previous scenario, except you can either push the man or flip the switch. However, if you push the man, it will save five people, while it will only save three if you flip the switch. If you push the man, it will save all three of the people who would be saved by flipping the switch, as well as two others. In this case, it seems obvious that you should push the man. But the deontologist has to deny this. They have to deny it precisely because they think that the morally salient difference between flipping the switch and pushing the man is greater than the lives of two people—after all, they think that you should flip the switch to save two but you shouldn’t push the man to save five.
If this convinces you that there’s no particularly salient difference between flipping the switch and pushing the man, then it should also convince you that there’s no particularly salient difference between flipping the switch and harvesting the organs. To see this, suppose that the following is the case.
There are five people on a track that a train will hit. These people are immobilized because they’re temporarily missing organs. However, later in the day, they will get organs and be able to move. You can flip the switch to redirect the train to hit the one person—however, this will kill him painfully. Alternatively, you can very quickly harvest his organs and put them in the five people. He will die, but the five people will be able to move to evade the train. It seems very obvious that you should harvest the organs—that’s what the “victim” would rationally prefer. Yet deontology struggles with such a trivial claim.
We can also make it analogous to the second scenario that I talked about by making the organ harvest able to save not merely the five people on the tracks but also, with organs to spare, two others. Either way, it’s a big problem for the deontologist.
Argument 12: The importance objection to deontology
Deontologists hold that you shouldn't violate rights even to prevent more rights violations. This is not straightforwardly paradoxical—after all, deontologists think it matters more that you yourself not violate rights than that you prevent rights violations by others.
But it seems that if we think about what's fundamentally important, we realize that this judgment is flawed. It doesn't really matter whether it's you or someone else who violates someone's rights. To quote Sidgwick, from the "point of view of the universe," it seems unimportant whether you are the one violating rights.
Indeed, it seems very clear that in the organ-harvesting case, it's more important that the five not die than that you not get your hands dirty. This seems like an unassailable moral datum. But morality should pick up on the things that really matter. As Richard notes:
I'm much more confident of the deeper intuitions about what matters. I'm not particularly attached to my intuition that it's "wrong" to kill one to save five in the trolley bridge case, for example. I think there are obvious psychological confounders here (e.g. involving disparities of salience between the one and the five) that could be expected to distort my immediate intuitive judgment. And I'm not even confident that the "don't push" intuition speaks to the strengths of my reasons for action at all; it seems at least as plausible to me that I'm instead reacting negatively to the decision procedure or dispositions of character that would lead someone to be cavalier about killing innocent people. As R.M. Hare pointed out long ago, utilitarians can fully endorse the rejection of decision procedures that would lead one to engage in instrumental harm, for those seem unlikely to be the best decision procedures around. As a result, it's hard to see how our intuitive rejection of those decision procedures is supposed to count against the view.
Argument 13: A new paradox of moderate deontology
Moderate deontology holds that rights are valuable, but not infinitely so. While a radical deontologist would be against killing one person to save the world, a moderate deontologist would favor it. Extreme deontology is extremely crazy. An extreme deontologist friend of mine has even held that one shouldn't steal a penny from Jeff Bezos to prevent infinite Auschwitzes. This is deeply counterintuitive.
However, moderate deontology collapses into full-fledged, foaming-at-the-mouth, radical deontology. It is, as the title suggests, problematically explosive.
We can imagine a case with a very large series of concentric circles of people. The innermost circle has one person, the second innermost has five, the third innermost has twenty-five, and so on—each circle has five times as many people as the one inside it. There are a total of 100 circles. Each person is given two options.
1. Kill one person.
2. Give the five people in the circle just outside yours the same two options you were just given.
The 100th circle is composed of psycho murderers who will take option 1 if the buck-passing doesn't stop before reaching them.
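For the curious, the murder count cited in the next paragraph is just 5^99—the population of the 100th circle, each member of which commits one murder. A quick sanity check in Python:

```python
# Circle n (1-indexed) holds 5**(n - 1) people, so the outermost of the
# 100 circles holds 5**99 "psycho murderers" -- one murder apiece.
murders = 5 ** 99
print(f"{murders:.7e}")  # -> 1.5777218e+69
```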
The deontologist has two options. First, they could stipulate that a moral person would choose option 2. But if this is the case, then a cluster of perfectly moral people would bring about 1.5777218 × 10^69 murders, when an alternative action could have resulted in only one murder. This is an extreme implication. If they accept this option, their view is problematically explosive and lapses into extreme deontology: it holds that one shouldn't kill one person even in a way that would prevent oodles of deaths.
Second, they could stipulate that you should kill one person. However, the deontologist holds that you shouldn't kill one to save five. Thus, in order to hold that you should kill one to prevent five perfectly moral people from having two options—one of which is killing one person—they'd have to reject the extra-choice principle. This principle states that the fact that an action would give an all-knowing, perfectly moral being more options can't make that action less choiceworthy. This is deeply intuitive: if the extra option is worse than the existing options, they won't take it; if it is better, then it's good for them to take it. Thus, the principle seems very hard to reject.
Well, the deontologist has to reject it. They hold that you shouldn't kill one to prevent five perfectly moral people from having only option 1—the option of killing one person. However, they'd also have to hold that you should kill one person to prevent them from having both options 1 and 2. Thus, giving them an extra option is bad and makes buck-passing less choiceworthy. This is deeply counterintuitive.
So deontology has to either reject an almost self-evident principle or be problematically explosive.
Argument 14: An unacceptable result
Suppose there are two possible states of affairs. In the first, one can push a person off a bridge to save a billion. Call this billion bridge. In the other, they can push a person off a bridge to save five. Call this five bridge.
Suppose that the person deciding whether to push is perfectly moral—they always do the right thing. Deontology will almost certainly grant that, as an evaluation of states of affairs, the world goes better when the person is pushed: it's clearly better for one to die than for five to die, let alone a billion.
But this produces the strange result that billion bridge is better than five bridge. After all, in billion bridge, on moderate deontology, the person who is perfectly moral will push the person. In five bridge, they won’t, because it would be wrong. Thus, things are better when an extra 999,999,995 people are endangered. But this is absurd! The mere fact that hundreds of millions of extra people are in danger can’t make things go better.
Argument 15: Bias
There are various explanations of our deontological intuitions as the products of bias. There are lots of biases that we should expect to make us more likely to believe something like deontology even if it is false.
Status quo bias. This describes people's tendency to prefer the way things currently are. For example, whether people think a hypothetical person should invest in A or B depends on whether you tell them that the person is already invested in A or in B. But this explains pretty much all of our deontological intuitions: all of them are intuitions about non-interference—about keeping things as they are. They are thus prime candidates to be chalked up to status quo bias.
Loss aversion. This describes people's tendency to regard a loss as more significant than a merely forgone gain: losing five dollars is seen as worse than failing to gain an extra five dollars one otherwise would have. People's aversion to causing losses—combined with the fact that the losses in, for example, the bridge case are incorporated into their deliberation—makes them likely to oppose pushing the person.
Existence bias. People treat the mere fact that something exists as evidence that it is good. This intersects with status quo bias and explains why we don't want to change things.
None of these are conclusive proofs. They do, however, give us some good reason to reject our deontological intuitions.
Conclusion
Here, I've laid out 15 worries about deontology. I think most of them are decisive on their own—in combination, they are utterly devastating. Or so I think. I'm interested in seeing the replies!