5 Comments
Mar 6, 2023·edited Mar 7, 2023

Deontology is a framework for evaluating actions. Utilitarianism evaluates actions by collapsing the distinction between actions and states of affairs: the goodness of an action is determined by comparing the states of affairs that would arise from taking or not taking it. Deontology cannot be used to evaluate states of affairs, because states of affairs are not actions and do not take actions. So deontology doesn't render verdicts on states of affairs, and therefore doesn't render the wrong verdict on states of affairs.

If the argument is that taking deontologically sanctioned actions brings about bad states of affairs, then you don't need to say anything more than: "Deontology positively evaluates actions that bring about states of affairs which, using a separate moral framework, I assess as bad," which you do in the first premise.

But that grants that you're evaluating states of affairs, and if you're evaluating states of affairs, then you're using something other than deontology (at least in part) to do so; in your case, that something is utilitarianism.

author

It is true that deontology is a framework for evaluating acts. However, any remotely plausible view will have judgments about states of affairs (e.g., that heaven is better than hell). Given that the bridge case is supposed to be one where consequentialism diverges from deontology, it has to be a case where pushing the person makes the world better but is impermissible.


I think the idea here can be easily reframed in terms of evaluating an action.

The trolley is going to kill 1,000 people. You're Moderate Deontologist A (MDA), standing next to a switch that will divert the trolley onto a track where it will kill 1,000,000 people. Further down the line, you see your colleague, Moderate Deontologist B (MDB), standing next to a pair of switches that can divert the trolley from either the 1,000-person track or the 1,000,000-person track onto a track where it will kill one person.

As a good moderate deontologist, you have some threshold at which sufficiently large consequences override your normal side constraint against acting to kill an innocent person. Your threshold is a 700:1 saved-to-killed ratio. However, you know that MDB is stricter than you; her ratio is 3,000:1. So if you allow the trolley to stay on the 1,000-person track, she won't pull her lever (1,000:1 falls below her threshold) and a thousand people will die. But if you send the trolley to the 1,000,000-person track, MDB will pull her lever and only one person will die.
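
For concreteness, here's a minimal sketch of the threshold logic being exploited, assuming a simple saved-to-killed comparison; only the 700:1 and 3,000:1 thresholds and the track sizes come from the scenario above, and the names and exact decision rule are illustrative.

```python
# Minimal sketch of the moderate-deontologist threshold rule described above.
# Only the 700:1 / 3,000:1 thresholds and the track sizes come from the scenario;
# the function name and decision rule are illustrative assumptions.

def will_divert(threshold: float, killed_if_not_diverted: int, killed_if_diverted: int) -> bool:
    """Divert only if the saved-to-killed ratio meets the agent's threshold."""
    return killed_if_not_diverted / killed_if_diverted >= threshold

MDA_THRESHOLD = 700    # you (Moderate Deontologist A)
MDB_THRESHOLD = 3000   # your stricter colleague (Moderate Deontologist B)

# Trolley left on the 1,000-person track: MDB sees a 1,000:1 ratio, below 3,000:1.
print(will_divert(MDB_THRESHOLD, 1_000, 1))      # False -> she won't pull, 1,000 die

# Trolley sent to the 1,000,000-person track: MDB sees 1,000,000:1, above 3,000:1.
print(will_divert(MDB_THRESHOLD, 1_000_000, 1))  # True  -> she pulls, 1 dies

# Your own 700:1 threshold would already permit killing 1 to save 1,000.
print(will_divert(MDA_THRESHOLD, 1_000, 1))      # True
```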

Based upon your personal threshold of 700:1, you are allowed to kill one innocent person to save a thousand. Is it morally acceptable for you to endanger a million people in order to overcome MDB's threshold and convince her to kill only one?


Also, moderate deontology does not "think" anything, because it is a system for evaluating the moral status of actions, not a sentient agent.


Great point. Matthew, he’s got you there, m’boy.
