Two More Pieces of Serious Scholarship Bolster the Case Against Deontology
Deontology gives one moral reasons to bury one's head in the sand and gratuitously harm people
Introduction
An unfortunate consequence of the breathtaking speed with which academia turns out papers is that one can never totally keep up with the existing literature. Thus, tons of papers that pose serious challenges to prevailing doctrines, like objective list theory and deontology, are simply lost in the endless pile of scholarship. Here, I'll highlight two more such challenges to deontology.
Ostrich
Ostriches bury their heads in the sand. However, this is a byproduct of instinct, not careful reasoning.
Morality should not require us to be like the ostrich and bury our heads in the sand, inflicting suffering on others to keep ourselves ignorant. There are, of course, some cases in which one shouldn't gain information (for example, if it would be severely traumatizing). But information for its own sake seems like a good thing, something we should want more of rather than less.
In this brilliant paper, Ryan Doody argues that deontology requires us, like the ostrich, to pay severe costs (including inflicting them on others) in order to remain ignorant. The paper has, unfortunately, never been published and has only two citations. Despite this, it poses really significant problems for deontology.
Doody starts with the following example:
Two Treatments. Twenty-six villagers in a remote area (Arden, Baldwin, Clay, …, and Zephyr) have contracted a terrible, life-threatening disease. If nothing is done, they will all die. There are two possible treatments that could be administered. The first treatment (Egalitrex) ensures that all twenty-six will survive, but there's an unfortunate side-effect: paralysis of the left arm. The second treatment (Utilicycline), when it is effective, ensures a full recovery. You know that it will be effective for everyone but Arden, for whom (perhaps, let's say, owing to a genetic anomaly) it will be ineffective. The only way to administer the treatment is by mixing it into their water supply and, unfortunately, the two treatments have deleterious effects if taken together. Your only options, then, are the following: administer Egalitrex (all twenty-six survive, each with a paralyzed left arm), administer Utilicycline (all but Arden fully recover, and Arden dies), or administer neither (all twenty-six die).
Many Non-Utilitarian moral views would regard it as wrong to administer Utilicycline in this case. For while administering Egalitrex will make twenty-five out of the twenty-six worse off than they would be if you administered Utilicycline instead, it does so at the cost of Arden's life. And it is not right to force a grave sacrifice on the few in order to secure comparatively smaller benefits for the many. This thought, or at least something in the neighborhood, is fairly common among Non-Utilitarians. Administering Utilicycline might generate a greater total amount of wellbeing than administering Egalitrex would, but, for many Non-Utilitarian moral views, this is not a decisive consideration.
But Doody argues that this, combined with the judgment that one should take an action if it is an ex ante improvement, gives people an incentive to be an ostrich and bury their heads in the sand. Suppose that any particular person can either be given a drug that gives them a 1/26 chance of death and a 25/26 chance of full recovery, or a drug that certainly paralyzes their left arm but guarantees survival. It is plausible that each person would be better off in expectation being given the first drug.
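To make the ex ante comparison concrete, here's a minimal worked version with illustrative utilities of my own (Doody doesn't commit to any particular numbers): set a full recovery at 1, life with a paralyzed left arm at 0.9, and death at 0. Then, for each villager,

$$
EU(\text{Utilicycline}) = \tfrac{25}{26}\cdot 1 + \tfrac{1}{26}\cdot 0 \approx 0.962 \;>\; 0.9 = EU(\text{Egalitrex}).
$$

Any assignment of utilities in this neighborhood gives the same verdict, and that's all it takes for administering Utilicycline to be an ex ante Pareto improvement: it raises everyone's expected wellbeing.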
If you should administer Utilicycline rather than Egalitrex to each person individually, then it seems clear that, when deciding which one to put in the water supply, you should put in Utilicycline. Thus, the following is true if one accepts that one should take actions that are ex ante Pareto improvements, that is, actions that make everyone better off in expectation:
Secret Administration: if one can add either Utilicycline or Egalitrex to the water supply, knowing that Utilicycline will kill one person but not knowing whom, one should add Utilicycline.
(Doody calls a case very much like this Opaque Two Treatments).
But this is just the original scenario given by Doody, except that here one doesn't know who will be killed. On this combination of views, then, it is much better to be ignorant of who will die. Thus, someone deciding which drug to administer, who could be informed of who will die, should bury their head in the sand, even at grave cost, because otherwise the action that improves everyone's prospects in expectation becomes impermissible. This is very implausible.
For example, suppose someone is perfectly moral. If they're ignorant, they'll give Utilicycline, but if they know who will be killed, they'll give Egalitrex. Suppose one of the following two things must happen:
1. They find out who will die from Utilicycline.
2. They make Utilicycline also have the effect of paralyzing everyone's finger.
However, suppose that, despite this change, Utilicycline is still an ex ante improvement. Then the view implies that one should choose the second option, paralyzing the fingers of the 25 survivors, just to prevent oneself from gaining information. This is very implausible.
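Again with illustrative numbers of my own: suppose a paralyzed finger only knocks a fully recovered life from 1 down to 0.99. Then the modified drug still beats Egalitrex in expectation for each person:

$$
EU(\text{modified Utilicycline}) = \tfrac{25}{26}\cdot 0.99 + \tfrac{1}{26}\cdot 0 \approx 0.952 \;>\; 0.9 = EU(\text{Egalitrex}).
$$

So the ex ante view still recommends administering it, and the perfectly moral agent does better, by these lights, to paralyze the survivors' fingers than to learn who will die.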
One could instead reject the claim that one should take actions that are ex ante improvements. But this runs into problems of its own, for Doody shows that the claim follows from the following principle:
Deference to Principal's Perspective: When making a decision that affects only one person (call them 'Person X'), if you know that your decision will only affect Person X and that Person X would choose to ϕ were they in your position, you morally ought to ϕ.
This means that, for any individual person, you should give them Utilicycline rather than Egalitrex. But it's totally obvious that, if you should give each individual person Utilicycline rather than Egalitrex, then you should administer Utilicycline rather than Egalitrex to everyone.
One could just deny Deference to Principal's Perspective, but this isn't plausible. If you know that a person, were they to know what you know, would want you to take some action that affects only them and makes them better off in expectation, you obviously should take it.
Thus, deontology has to deny one of the following:
1. You shouldn't partially paralyze 25 people to deprive yourself of information.
2. If giving people medicine, you should give them the medicine that makes them better off in expectation rather than worse off, all else equal.
3. If you should give each individual person medicine A rather than medicine B, then, if you can administer either medicine A or medicine B to everyone in some group, you should administer medicine A.
But these are all obvious. Thus, deontology is implausible.
People in suitcases
I've discussed this briefly before in my article on a different suitcase-based argument against deontology. Kowalczyk wrote a brilliant paper called "People in Suitcases." No one has ever cited it, and it's slipped almost entirely under the radar, which is quite a shame given the force of the argument. I'll explain the basics, though the full thing is well worth reading.
This argument also concerns ex ante deontology. In the original paper, it's phrased as targeting two different construals of deontology, but it can really be rephrased as a straightforward deductive argument against deontology (or at least against all existing versions that say one should not push a person off a bridge to stop a train from running over five):
1. If deontology is true, one should not push a person off a bridge to stop a train from killing five people.
2. If one should not push a person off a bridge to stop a train from killing five people, then one should not push a person off a bridge to stop a train from killing five people when all six people are in suitcases and do not know whether they're atop the bridge or at the bottom of the bridge.
3. One should push a person off a bridge to stop a train from killing five people when all six people are in suitcases and do not know whether they're atop the bridge or at the bottom of the bridge.
4. Therefore, deontology is false.
Premise 1 is true by definition.
Premise 3
I'll defend premise 3 before premise 2. Premise 3 says that one should push a person off a bridge to stop a train from killing five people when all six people are in suitcases and don't know whether they're atop the bridge or at the bottom of the bridge. So the idea is that if A, B, C, D, E, and F are in suitcases, and none of them knows whether they're the one on top of the bridge, you should push the person atop the bridge off. This is quite intuitive: everyone would rationally vote for it if they could, since they're all made better off in expectation.
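To spell out the expectation (this is just the arithmetic implicit in the case): behind the suitcase veil, each of the six people assigns probability 1/6 to being the one atop the bridge. So, for each person,

$$
P(\text{death}\mid\text{no push}) = \tfrac{5}{6}, \qquad P(\text{death}\mid\text{push}) = \tfrac{1}{6},
$$

since without the push the train kills the five below, and with it only the person atop dies. Pushing cuts everyone's death risk from 5/6 to 1/6.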
Additionally, it can be supported by the following argument given by Kowalczyk. This one is a bit complicated, so strap in. Suppose there are two tracks. The first is like the one described before: six people in suitcases, five on the track and one atop a bridge, and none of them knows where they are. The second is similar, but rather than people, it has suitcases full of sand: one atop the bridge, the other five on the track below. Label the sand suitcases A*, B*, C*, and so on through F*, and call the suitcase containing person A suitcase A, the one containing person B suitcase B, and so on.
Suppose there are six levers. The first switches suitcase A with suitcase A*, the second switches suitcase B with B*, and so on through suitcase F and F*. You don't know which suitcase is atop which bridge. Suppose, further, that if you pull any lever, say, switching suitcase A with suitcase A*, then on the track that used to contain A* but now contains A, the suitcase atop the bridge falls onto the track, stopping the train there.
It seems you ought to pull the first lever: after all, this reduces A's risk of death from 5/6 to 1/6 and has no other effect on anyone. For the same reason, you should pull lever 2, which reduces person B's risk of death from 5/6 to 1/6, and so on through the sixth lever. Thus, you should take a sequence of actions which together result in the person atop the bridge being pushed off, just as in the original suitcase case, except that everyone has also been moved to a different track. But the following principles are plausible:
1. If you should take a sequence of actions that does some act, you should do that act.
2. If, in the suitcase case, you should push the person atop the bridge onto the track while moving all six people to a different track, then you should push the person atop the bridge onto the track even if doing so doesn't move everyone to a different track.
Together with the earlier judgment, these entail that one should push the person off the bridge in the suitcase case. And each of these judgments is overwhelmingly plausible. The earlier judgment, that you should pull each lever, follows from the idea that if some action reduces a person's risk of death from 5/6 to 1/6 at no cost to anyone else, you should take it, all else equal.
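To check the lever arithmetic, here's a minimal sketch of the case as I've modeled it (the function names and modeling choices are mine, not Kowalczyk's): from your epistemic standpoint the six people occupy the six suitcase positions uniformly at random; an unswapped person dies iff they're below the bridge on the people track, and a swapped person dies iff they're atop the sand track's bridge, since that suitcase falls and stops the train there.

```python
from itertools import permutations
from fractions import Fraction

PEOPLE = "ABCDEF"
ATOP = 5  # position index of the spot atop the bridge; 0-4 sit on the track below


def death_prob(person: str, swapped: set) -> Fraction:
    """Exact probability that `person` dies, given the set of people whose
    suitcases have been swapped onto the sand track.

    Unswapped people sit on the people track, where the train kills the five
    below the bridge. A swapped person dies only if their position is atop
    the sand track's bridge, since that suitcase falls and stops the train.
    """
    idx = PEOPLE.index(person)
    deaths = total = 0
    for perm in permutations(range(6)):  # perm[i] = position of person i
        total += 1
        if person in swapped:
            deaths += perm[idx] == ATOP   # dies only if atop on the sand track
        else:
            deaths += perm[idx] != ATOP   # dies if below on the people track
    return Fraction(deaths, total)


swapped = set()
for p in PEOPLE:  # pull the six levers one at a time
    before = {q: death_prob(q, swapped) for q in PEOPLE}
    swapped.add(p)
    after = {q: death_prob(q, swapped) for q in PEOPLE}
    assert before[p] == Fraction(5, 6) and after[p] == Fraction(1, 6)
    assert all(after[q] == before[q] for q in PEOPLE if q != p)

print("each lever pull cuts one person's risk from 5/6 to 1/6, all else equal")
```

Each pull cuts one person's death risk from 5/6 to 1/6 and leaves everyone else's exactly as it was, so each pull looks unambiguously good, even though the six pulls together just are a way of pushing the person atop the bridge.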
Principle 1 is pretty trivial, and so is principle 2. Thus, premise 3 of the original argument has been vindicated. Premise 3 was, remember, this:
One should push a person off a bridge to stop a train from killing five people when all six people are in suitcases and do not know whether they're atop the bridge or at the bottom of the bridge.
Premise 2
But what about premise 2? It says:
If one should not push a person off a bridge to stop a train from killing five people, then one should not push a person off a bridge to stop a train from killing five people when all six people are in suitcases and do not know whether they're atop the bridge or at the bottom of the bridge.
This can be supported in the following way. Suppose that at 12:00 all the people are in suitcases, and by 1:00 they will all be out of them. One can push the person either at 12:00 or at 1:00. However, if pushed at 1:00, the person who dies will die less painfully, and everyone else will be less traumatized by their screaming. Thus, pushing at 1:00 is better for literally everyone than pushing at 12:00.
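With illustrative numbers of my own (nothing in the argument turns on them): say a painful death is worth 0, a less painful death 0.1, surviving while traumatized 0.8, and surviving less traumatized 0.9. Then for every person $i$,

$$
u_i(\text{push at 1:00}) > u_i(\text{push at 12:00}),
$$

so the 1:00 push strictly Pareto-dominates the 12:00 push: the victim gets 0.1 rather than 0, and each survivor gets 0.9 rather than 0.8.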
But remember: on the ex ante view under discussion, it would be impermissible to push at 1:00, because at 1:00 it's no longer an ex ante improvement; by then, you know who will die. This means that even though pushing at 1:00 is a Pareto improvement over pushing at 12:00, the view says it's right to push at 12:00 but wrong to push at 1:00. It also gets the utterly crazy result that, at 12:00, you should bind yourself to perform a future wrong act, namely, pushing at 1:00: if you could force your future self to push at 1:00 (which is guaranteed to be wrong, mind you), you should do so. Thus, this view requires denying that you should take actions that are guaranteed to be better for everyone, and it requires holding that you should sometimes force yourself to do future immoral things. Such results are absurd.
There are various clever maneuvers to try to avoid this result. None of them work, as is argued in great detail by Kowalczyk.
Conclusion
So we have two more arguments that form part of the utterly overwhelming cumulative case against deontology. These arguments deserve much more attention; after all, they show that perhaps the dominant view in moral philosophy is wrong. And yet, unfortunately, they're mostly ignored.