Introduction
Amos Wollen—one of my smartest friends—has a crucial flaw: he seriously entertains some scandalously wrong theories, including, notably, natural law theory. He’s even a theist—albeit for the best reasons to be a theist, namely, psychophysical harmony. Recently, he and I were discussing deontology, so I decided to write a post laying out the main problems with it. This list will not be comprehensive—otherwise it would be thousands of pages.
1 Bombs in the park
Suppose a person puts a bomb in a park for malevolent reasons. Then they realize that they’ve done something immoral, and they decide to take the bomb out of the park. However, while they’re in the process, they realize that there are two other bombs planted by other people. They can either defuse their own bomb or the other two bombs. Each bomb will kill one person.
It seems very obvious that they shouldn’t defuse their own bomb—they should instead defuse the other two. But this is troubling—on the deontologist’s account, it is hard to make sense of. When choosing between defusing their own bomb or the other two, they are directly choosing between bringing it about that two people’s rights are violated by the other bombers and bringing it about that they themselves violate one person’s rights.
To avoid this, the deontologist can try to make the following argument. They can claim that it’s wrong to take an action that violates rights, but if you take an action that merely causes you to violate rights later, there are only consequentialist reasons counting against that action. But this runs into a problem. Suppose that you know that in one hour, you’ll be a consequentialist—though for non-rational reasons. You can either drive to a hospital now or not do so. You know that if you drive to the hospital now, then when you’re a consequentialist, you’ll kill one person to harvest her organs and save five. In this case, driving to the hospital seems wrong. If it’s wrong to violate rights, then it’s wrong to take some action that will predictably result in your violating rights. But if that’s wrong, then it’s wrong to take out the two other bombs rather than your own. But that’s clearly false.
Amos thought that the doctrine of double effect might offer a way to avoid this. But I don’t think that works. If you plant the bomb in a park and it kills someone, you have performed an act with the worst possible intention—your intention when you planted it was to kill someone. Thus, the intent to save people in cases like the organ-harvesting case would, if it had any effect, make organ harvesting better, not worse.
This argument hasn’t been carefully vetted—I haven’t checked it over with lots of people so it might be wrong. I’d appreciate objections if anyone has them.
2 Deontology holds you should want people to do the wrong thing
This argument is a simpler and less comprehensive—also probably less decisive—version of Richard’s argument here. Definitely check out Richard’s excellent paper on the topic—he presses the argument in a more sophisticated way, giving it overwhelming force. This is probably the best argument against deontology.
Suppose you’re deciding whether or not to kill one person to prevent two killings. Deontologists hold that you shouldn’t. However, it can be shown that a third party should hope that you do. To illustrate this, suppose that a third party is deciding between you killing one person to prevent the two killings and you simply joining the killing and killing one person indiscriminately. Surely they should prefer you killing one to prevent two killings over you killing one indiscriminately.
Thus, from the standpoint of a third party, you killing one to prevent two killings is no worse than you killing one indiscriminately. But a third party should also prefer you killing one indiscriminately over two other people each killing one indiscriminately. Therefore, by transitivity, they should prefer you killing one to prevent two killings over the two killings happening—that is, they should prefer that you kill one to prevent two. To see this, let’s call you killing one indiscriminately YKOI, you killing one to prevent two killings YKOTPTK, and the two killings happening TKH.
YKOTPTK ≻ YKOI ≻ TKH, where ≻ represents “is preferable to.” Thus, on the deontologist’s view, a perfectly moral third party should sometimes hope that you do the wrong thing.
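To make the inference fully explicit, it is just an appeal to the transitivity of the third party’s preferences, restated in the notation above (with ≻ read as “is preferred by a perfectly moral third party to”):

\[
\text{YKOTPTK} \succ \text{YKOI} \;\text{ and }\; \text{YKOI} \succ \text{TKH} \;\Longrightarrow\; \text{YKOTPTK} \succ \text{TKH}.
\]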
3 Two original paradoxes
I won’t lay this out in much detail again—I’ve already presented it here. There’s a different one that I’ve developed here.
4 Huemer’s paradox of deontology
This one is devastating—here’s a link to an article I’ve written about it. The paper is well worth reading.
5 The paralysis argument
The argument that deontologists shouldn’t move should really move deontologists. The argument is roughly as follows: every time a person drives their car or otherwise moves about, they affect whether a vast number of people will exist. If you talk to someone, that slightly delays when they next have sex, which changes the identity of their future child. Thus, each of us causes millions of future people’s identities to change.
This means that each of us causes lots of extra murderers to be born and prevents many from being born. While the consequences balance out in expectation, every time we drive, we are both causing and preventing many murders. On deontology, an action that has a 50% chance of causing an extra death, a 50% chance of preventing an extra death, and gives one a trivial benefit is wrong—but this is what happens every time one drives.
One way of pressing the argument is to imagine the following. Each time you flip a certain coin, there’s a 50% chance it will save someone’s life, a 50% chance it will kill someone, and it will certainly give you five dollars. Flipping that coin seems analogous to going to the store for trivial benefits—you might cause a death, you might save someone, and you definitely get a trivial reward.
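To put “the consequences balance out in expectation” concretely: each flip is neutral in expected deaths, yet it still carries a 50% chance of causing one, which is exactly what the deontological constraint objects to.

\[
\mathbb{E}[\text{net deaths per flip}] = 0.5 \cdot (+1) + 0.5 \cdot (-1) = 0.
\]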
Amos thought that one way to get around this is to say that if some action will change the identity of future people and thereby cause some act to occur, one has only consequentialist reasons not to do it. But this runs into apparent counterexamples. Suppose you know that driving will cause one extra death by changing traffic patterns, but it will prevent a serial killer from killing five. Driving seems fine there. However, if you have a deontological reason not to cause the extra death from driving but only consequentialist reasons to save the serial killer’s five victims, driving would turn out to be wrong.
Similarly, suppose you're choosing between two actions: one will set off a complex chain reaction that kills one person in 40 years, and the other will cause someone to be born who will kill two people in 40 years. The second seems worse. However, if you have a non-consequentialist reason not to do the first and only consequentialist reasons not to do the second, then the first would come out worse.
This is far from an exhaustive list, but it should be enough to give us serious doubts about deontology.
> Suppose that you know that in one hour, you’ll be a consequentialist
I would drive myself to the hospital and kill myself to donate my organs. This avoids both the 5 deaths and the infinitely worse consequence of me being a utilitarian.
More broadly, I flatly reject any hypothetical that requires assuming such radical shifts in personal mental states. It’s either incompatible with free will, which moots morality, or it requires mind-control shenanigans, which are plainly subject to different rules (and plainly impossible).
> moral person wants you to do the wrong thing
You’re sneaking in an equivocation of terms. They are not making a moral judgment; they are evaluating world-states.
Would a perfectly moral person want you to make the world worse? That is the price of principle.
> Paralysis
No matter how much utilitarians fantasize over infinite future lives, you cannot violate the rights of someone who does not exist…
Most of these examples assume forms of deontology where there is more than one absolute non-overridable principle. Hierarchical or lexical forms of deontology don't necessarily suffer from the same problems.