Introduction
Here, I continue my debate with Truth teller (TT); in this piece, I’ll reply to this article. Don’t forget to check out the first two articles in my opening as well as the first article in his. I did not find any of the replies in his most recent response to the arguments I presented even remotely convincing. Here, I’ll explain why.
Interlude 1: Is a case which rests entirely on hypotheticals strong?
TT first worries that my case is weak because it rests too much on hypotheticals and intuitions. After all, our intuitions are often wrong. But the intuitions I appeal to are generally some combination of very obvious and the types of intuitions that are most reliable—see Huemer’s article for an elaboration on which intuitions are most reliable. Given that what the theoretical virtues are is contested, and I’ve given a parsimonious view of morality—utilitarianism—it’s unclear what else would be part of an opening statement. Any argument with premises appeals to intuitions—on what basis other than seemings should we accept a premise?
Argument 1: Bombs in the park
I gave a case where you planted a bomb in a park before becoming moral. Now you can either defuse your own bomb or defuse two other people’s bombs; if you defuse your own bomb, two will die, and if you defuse the two others, one will die. However, on deontology, it’s more important that you not violate rights than that you prevent other people’s rights violations, so the deontologist must hold that you should defuse your own bomb rather than offset it by defusing the two others.
TT does not, by his own admission, grasp the force of the argument.
You have a duty to defuse any of the bombs you are able to defuse, and I don't even think that's for consequentialist reasons. Suppose there is a bomb in your vicinity, say one of the ones that was planted; then that fact confers an obligation on you to defuse it. To ignore it would be immoral, as it violates your greatest obligation: respect for humanity as such.
But on deontology, you should regard your own rights violations as more significant, in your decision-making, than others’. After all, you shouldn’t violate one person’s rights to prevent ten identical rights violations. Thus, the verdict about this case stands.
If you are not able to defuse all the bombs, and yours goes off, then that's just to say you aren't responsible, since you did your moral duty by defusing the other bombs. Obviously, you are responsible insofar as you planted the bomb, but that's just to say the separate act of planting it was immoral; in respect of the act of defusing the other bombs, you are not responsible for your failure to defuse yours, given that you couldn't defuse all three.
If you defuse your own bomb, you will, all in all, be responsible for only an omission, and it’s a mere omission because the action you took instead was praiseworthy. If you defuse the two other bombs, you will be responsible for planting a bomb and killing someone. The second is clearly worse on this account. But this is absurd—the first person is clearly more blameworthy.
Argument 2: Preference
In my second argument, I showed that the deontologist is committed to thinking you should hope that people do the wrong thing. This is because a benevolent third party should prefer your killing one to prevent five killings over your killing one indiscriminately, and should prefer your killing one indiscriminately over five people each killing one indiscriminately; so, by transitivity, they should prefer your killing one to prevent five killings over your refraining and the five killings happening.
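To make the structure of this inference explicit, here is a minimal formal sketch (the labels and notation are mine): let $A$ = you kill one to prevent five killings, $B$ = you kill one indiscriminately, $C$ = five people each kill one indiscriminately, and let $\succ$ stand for the benevolent third party’s all-things-considered preference. The argument is then:

$$A \succ B, \qquad B \succ C, \qquad \therefore \; A \succ C \quad \text{(by transitivity of } \succ \text{)}.$$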
TT first says “Well, as a deontologist, I'm not committed to a position about what people "should hope" or "should prefer".” Perhaps qua deontologist he is not committed to a position about what we should prefer, but if there are reasons to think that one should prefer that people take non-deontological actions—which there are, as I argued—then the deontologist has to think that perfectly moral beings should think “oh darn, this person’s doing the right thing—I really hope they stop doing the right thing.” This is a very strange consequence of a theory.
Next, TT says
Another way of understanding it is that we calculate the total value of each state of affairs from the third person. The state of affairs involving one person being killed to prevent two deaths is all-things-considered better than the state of affairs wherein one killing happens. So, for this reason, you ought to prefer the killing to save 5 to happen. Yet, the deontologist thinks you shouldn't kill one to prevent two killings. If this is all that's being said, then what's being said doesn't seem to be much more than a restatement of deontological commitments.
Aside from the rather strange locution, this mistakes the argument. There’s no reason why a third party has to prefer a better state of affairs—they could, in theory, want people to be deontologists. If deontology requires saying that you shouldn’t harvest organs, but you should really hope that other people harvest organs, all the worse for deontology.
Additionally, as Richard has argued, this robs morality of its genuine importance.
Could a deontological Protagonist prefer One Killing to Prevent Five over Five Killings, whilst still maintaining that it would be wrong for her to kill? Such a combination of attitudes seems of questionable coherence. For consider the other emotional states and attitudes that go along with all-things-considered preferences. In regarding One Killing to Prevent Five as preferable, it seems that Protagonist would also need to hope that she chooses to realize this state of affairs, and subsequently feel regret and disappointment if she does not. This seems incompatible with regarding that choice as truly wrong (at least in any sense that matters, implying authoritative and decisive normative reasons to avoid so acting).
Our concept of mattering seems intimately connected with preferability or what’s worth caring about. So even if deontic constraints could be coherently combined with utilitarian preferences, the upshot would seem to be that deontic constraints don’t really matter. Sure, the deontologist may maintain that there is an “obligation” not to kill. But this would seem a merely verbal victory if it turns out that we shouldn’t really want agents to fulfill such obligations, and that what’s truly preferable is to kill one to save five. Put another way: if we’re all agreed that maximizing happiness is what we should most want and care about, then any residual disagreements about “obligation” would seem no more threatening to the utilitarian than residual disagreements about what’s “honourable” (when we all agree that we’ve no reason to care about “honour” as such).
So, if deontic constraints are to truly matter, we cannot generally prefer that they be violated.
Argument 3: The Reversal Principle
In my third argument, I provide a series of cases where you do something that deontology says is wrong, like harvesting organs to save five, but you can undo it. I argue that, if there’s a time delay, you shouldn’t undo it, which shows it isn’t really wrong, because you should undo wrong things if you can do so before they affect anyone. This is the reversal principle: the principle that, all else equal, you should undo wrong things if you can do so before they’ve affected anyone.
I feel myself under no initial pressure to accept the reversal principle. There seems to be no reason to think that such a principle is a necessary truth that governs all morally salient actions.
Well, perhaps you feel under no rational pressure to accept it, but this has nothing to do with whether there are good reasons to accept it. This is a sweeping principle with deep intuitive appeal—of course you should undo wrong things. They’re wrong, so you should undo them. It’s also supported by a wide range of cases—if you set a bomb in a park and you can remove it at no cost, you should, because setting bombs in parks is wrong. If the principle were implausible, there should be at least one counterexample, and TT didn’t produce one.
Think about how strange a moral view this is. TT thinks that it’s wrong to push people off bridges, but that if you do so, you shouldn’t undo it. Wrong actions are horrible, but not so horrible that you should stop them from having happened.
At best, what looks plausible to me is the following principle:
Weak Reversal Principle: if an action is wrong to do, then if you do it, and you have the option to undo it before it has had any effect on anyone, that fact is some defeasible reason to think you should undo it.
The weak reversal principle is phrased in a misleading way. Of course, if you do a wrong thing, it doesn’t follow that you must undo it whatever the cost. If, for example, the only way to get rid of the bombs you planted earlier is to massacre 700,000 Egyptian women, you shouldn’t do it. But if you can undo the wrong at no cost, you obviously should.
Suppose it were the case that undoing some wrong action would involve violating someone's autonomy, murder, or cultivating really terrible character traits and dispositions; it seems like considerations like these can clearly outweigh the fact that the act you are undoing is a wrong act when deliberating whether the reversal act is right to do.
But then all else wouldn’t be equal, which is what’s stipulated by the reversal principle.
There either seems to be a tu quoque objection here, or we should be skeptical of our intuition in these cases. Does reversal mean the action is already done, so you go back and undo the effects of your action? Then it looks like a consequentialist should be committed to thinking there are cases where you shouldn't reverse wrong actions too. Suppose you harvested 1 person's organs to save 5; then you realize one of the 5 you saved is a depraved serial killer, and will kill, rape, and torture many more victims thanks to your action. So, all things considered, the action was wrong on consequentialism. Suppose you can undo it, but the act of doing so has massive causal ramifications, inscrutable to you, which would eventually lead to the birth of Super Hitler, who will initiate a global nuclear holocaust. Strange scenario, but there you have it: a case where undoing a wrong action would be wrong on consequentialism.
But this plainly rests on a conflation of subjective and objective wrongness. The principle only applies in cases of objective wrongness, where the act is something you really shouldn’t have done, not just something you thought at the time you shouldn’t do. Thus, of course you sometimes shouldn’t undo subjectively wrong things, if you later discover that they weren’t actually wrong.
Argument 4: Huemer’s Paradox
I already summarized Huemer’s paradox; it’s slightly complicated, so I won’t burn word count doing it again. Huemer shows that if we accept
Individuation Independence: Whether some behavior is morally permissible cannot depend upon whether that behavior constitutes a single action or more than one action.
And
Two Wrongs: If it is wrong to do A, and it is wrong to do B given that one does A, then it is wrong to do both A and B.
then we get the conclusion that, on deontology, it’s wrong to decrease the amount that two people are being tortured. This is clearly a crazy result, so either deontology or one of the two principles has to be false. TT says they only seem plausible at first blush, but I don’t think this is true. These really strike me, when I reflect, as basic features of rationality, far more reliable than the emotional reaction I feel to a case like organ harvesting. They’re also more trustworthy because they apply to more cases—if they were false, they’d have clear counterexamples. It turns out they don’t, so it’s exceptionally unlikely that they are false.
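For clarity, Two Wrongs can be put in symbols (the notation is mine, writing $W(A)$ for “it is wrong to do $A$” and $W(B \mid A)$ for “it is wrong to do $B$ given that one does $A$”):

$$W(A) \land W(B \mid A) \rightarrow W(A \land B).$$

Denying this means holding that two acts can each be wrong, even conditional on the other being performed, while the conjunction of the two is nevertheless permissible.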
With that noted, I'm skeptical of the 'Individuation Independence' principle. There seems to be no strong reason to think that if two or more actions are individually wrong, it thereby follows that a conjunction which contains both actions will also be wrong.
I assume he meant Two Wrongs, because his comments only make sense when applied to that principle. Obviously things can have emergent properties, but it doesn’t seem that, if two actions are each wrong, explicitly conditional on the performance of the other, doing them both can fail to be wrong. It seems bizarre to think that what you really should do is multiple wrong things, each of which is wrong even given that you’ll do the others.
TT objects that the scenario, which involves two dials that each reduce one person’s torture by some amount while causing someone else half as much torture, is strange. But it’s no stranger than the trolley problem—in the real world, fat men don’t stop trains. However, despite this, it’s painfully obvious (pun intended) that you should reduce everyone’s torture.
2) Focusing on the kinds of beliefs that lead us to judge Mary's action as right and putting pressure on those. For example, the actions Mary commits are ones that involve violating consent, but the result is good for both parties. But plausibly, we shouldn't force someone to stop smoking, even if it's good for them. We shouldn't promote paternalism for the good of everyone. So we shouldn't, generally, violate consent for a rationally informed agent to bring about that which is good for them.
But in this case, we can stipulate that neither of them is given the opportunity to consent. Given that, it’s extremely obvious that you should decrease both of their tortures.
Fortunately, TT admits that “This argument is strong but not insuperable.” Now, I think that it is “insuperable,” given that the alternative view requires saying that you should either do sequences of wrong things or not reduce people’s torture at no cost.
Argument 5: Paralysis
The idea behind the paralysis argument is that, on deontology, it’s wrong to take actions that predictably violate rights. But all of our actions have tons of downstream consequences, changing the identities of enormous numbers of future people and thus leading to enormous numbers of rights violations. Thus, on deontology, every action we take is analogous to flipping a coin that saves a life if you get heads, kills someone if you get tails, and gives you five dollars whenever you flip it. Only utilitarianism can explain why flipping it is permissible. TT tries to avoid this by saying
What I completely reject is that it is wrong to cause rights violations and people to be harmed as a long-term, unintended, and unforeseen outcome.
But this principle doesn’t solve the problem, because the outcome is not long-term, unintended, and unforeseen. Once this argument has been presented, it’s no longer unforeseen—now we all know that we are constantly causing tons of rights violations. Now, perhaps one thinks that it’s fine to cause long-term unintended consequences, but deontology must deny this. In, for example, the coinflip scenario, if your intent is to get money, and the deaths and lives saved are still in the future, deontology still seems to imply that flipping the coin is impermissible.
TT next tries to refute this by saying that it’s not foreseen in that “The agent performing the act did not foresee any particular bad outcome that was to occur as a result of the act.” But this is totally irrelevant—in the coinflip case, even if you don’t foresee any particular bad outcome, if you know your flip has a 50% chance of killing one person and a 50% chance of saving someone, it still seems prohibited by deontology. So we’re back to square one. He gives two other reasons why people aren’t blameworthy for taking ordinary actions that cause and avert harms:
1) It wasn't intended
2) No one was directly caused to be harmed or used as mere means in the process of the act.
But these also apply to the coinflip scenario. No one is being used as a mere means—it’s just a distant side effect, and the intent is to get money. Thus, deontology is still in the unenviable position of having no defense of the moral permissibility of driving to the store—a rather more significant problem than farcical situations involving trolleys.
Argument 6: Prevention
The argument I gave is as follows.
1. If deontology is true, then you shouldn’t kill one person to prevent two killings.
2. If you shouldn’t kill one person to prevent two killings, then, all else equal, you should prevent another person from killing one person to prevent two killings.
3. All else equal, you should not prevent another person from killing one person to prevent two killings.
Therefore, deontology is false.
TT thinks there are good arguments for 3, so he ends up rejecting 2. But this is totally absurd. Think about how strange this conception of morality is—you shouldn’t kill people to save multiple others, but you also shouldn’t stop other people from doing it, and you should want other people to do it too. This strips morality of its importance—if it really matters that people do the right thing, then, if you can costlessly stop them from doing the wrong thing, you should. I gave two reasons to accept 2, which TT replies to.
1. "It's obvious", but this is ineffective, I don't think it's obvious. You have, at most, a defeasible reason, but I think we have good reasons to believe it is overturned in this case.
What’s obvious is in the eye of the beholder. But this seems really obvious. I’d imagine that if you asked most freshman undergraduates whether you should take away the scalpel that a doctor would use to kill one to save five, if you could do so at no cost, they’d say yes. If wrongness really matters, if it isn’t just some trivial norm like norms of honor, then wrong action is worth preventing.
2. It creates problems for deterrence. But this is also ineffective. I reject that deterrence is the reason we ban killing 1 to save 5, and I reject that legality is morality applied. Laws in the minimal sense should exist to guarantee people's freedoms and rights. So, it's obvious that we should ban murder, even in cases where 5 are saved. Also, consequentialism holds that the act of killing 1 to save 5 is actually good, but presumably we shouldn't deter good actions by making them illegal. If that's a bad argument against consequentialism, which I think it is, then it's also a bad argument against deontology.
I don’t think deterrence is the only reason we ban killing 1 to save 5, but it certainly is a reason. The deontologist here has to think something rather bizarre. Suppose that harvesting organs actually does have good consequences, because the side effects are stipulated away, but, as per deontology, it’s still wrong. Then the deontologist has to think “oh darn it, this law makes people do fewer of these acts; if only it didn’t.” In short, because they think it is bad to prevent wrong acts with good outcomes, they must find it unfortunate that laws lead fewer people to do such objectively wrong things.
Let’s just give one more case to hammer home how weird this is. You see that someone is about to push the fat man off the bridge. Deontology, on this account, holds that you shouldn’t stop them from pushing him off, even if you can do so at no cost, despite the fact that it’s seriously wrong to push people off bridges.
If it looks like a duck and quacks like a duck, it’s probably a duck. If you should want it to happen, and you shouldn’t prevent it, and you shouldn’t undo it, then it’s probably a good act.
Argument 7: Huemer’s Paradox of deontology’s twin
In this case, I gave a scenario in which lots of people can repeatedly harm others slightly to benefit themselves slightly more. On this account, each act is wrong, but the conjunction of them makes otherwise horrifically miserable creatures all extremely well off, so it’s clearly good. TT first repeats his skepticism of one of Huemer’s principles—I already addressed that. Next, he says “This hypothetical scenario is the most abstract and bizarre yet, so our intuitions here probably don't count for much.” I think it counts for a lot—specifically, it shows that deontology leaves an arbitrarily large number of people each experiencing more suffering than was experienced during the entire Holocaust, to the benefit of no one. This is absurd.
Finally, he says the following
3. There may be a tu quoque concern here. Suppose you randomly select someone from the global population and give them a medical needle jab; the action is wrong, you caused them minor pain and didn't benefit them in any way. But you keep doing this a billion times, and in doing so, basically statistically guarantee that at least 1 person will benefit from the jab, and have horrifically painful diseases many orders of magnitude worse than the pain caused by all jabs combined prevented. The individual act is wrong, and you are not in a position to know any of the acts, on an individual basis, wouldn't be wrong (indeed statistically they probably are wrong), yet the conjunctive act is right on consequentialism. So, if Matthew accepts the principle that if an action is wrong, the same action in repetition must also be wrong, which he needs for 2 to be motivated, it looks like his own position is under some heat.
I’m not sure I quite understand what’s being said, but I’d imagine that most of the jabs are objectively wrong, though many are perhaps subjectively right—it depends on the details. But if each of them were wrong, then the conjunction would obviously be wrong.
Argument 8: Dual Process Theory
Here, I presented lots of evidence that utilitarian judgments were more likely to be the result of careful rational reflection rather than unreliable emotional reactions. TT replies
It is case by case, I'd imagine. A general hurdle for a lot of these studies is that there are relevant differences between the cases presented, and a lot of it might be accounted for by people reacting emotionally to the relevant differences. It could be that the same people use different parts of the brain when making judgements about different scenarios, utilitarians and deontologists alike, rather than its just being the case that utilitarians are cold rational thinkers and deontologists emotional thinkers.
This doesn’t explain all the data. For example, one study found that giving people medication that dulls their emotions makes them more utilitarian. Additionally, it seems to count in favor of utilitarianism if, as the largest study on the topic suggests, more careful reflection makes people more utilitarian.
I also have my doubts that the fact that people with damage to the pre-frontal cortex have a positive correlation with utilitarianism is something that says much in utilitarianism's favor. It's not implausible that emotional ability allows agents to better empathize, and is an important part of moral reasoning.
This was only one piece of the evidence that I presented, but I think it does serve as evidence that emotions—which distort rational analysis—make us more deontological. TT ends up saying that if I’m right, this is maybe some evidence but not totally dispositive, which I’m happy to grant. This is just a small part of the cumulative case.
Argument 9: Richard’s Paradox
I won’t spend too many words here—Richard shows that deontologists are committed to thinking people should hope others act wrongly. TT doesn’t see a problem with this—I explained what the problem is earlier in this article; see the discussion of Argument 2.
Argument 10: Putting perfect beings in charge of things makes things worse
My original argument is summarizable as follows.
Deontology holds that there are constraints on what you should do. But this produces a very strange result—it ends up forcing the conclusion that sometimes putting perfect people in charge of decisions makes things worse. Suppose that a person, while unconscious, sleepwalks and is about to kill someone and harvest their organs to save five people. Then they wake up and can choose whether to actually harvest the person’s organs or not.
Given that harvesting organs makes things better—it causes one death instead of five—it would be bad that they woke up. But this is strange—it seems like putting people who always do the right things in charge of a situation can’t make things worse.
TT says he doesn’t really feel the force of this—which strikes me as bizarre. It really is strange to think that it can be a terrible thing that perfectly good people are put in charge of various things. If they always do the right thing, it’s very strange that putting them in control of something is unfortunate. We should prefer that perfect people make decisions rather than leaving things to the blind vicissitudes of fate.
TT replies by saying that consequentialism is self-effacing. This strikes me as a bizarre non-sequitur. There’s a lot to say about it, and I don’t think that it is true in general, but it doesn’t seem to count against a theory if it is. All moral theories will be self-effacing for some people—if Jon would kill himself if convinced of any moral theory because it would make him sad, one shouldn’t promulgate that moral theory.
Argument 11: Combined Bridge and Trolley
I argue that there’s not a big gulf between bridge and trolley, where you can either flip the switch to kill one and save five or push the man to kill one and save five. I do this by sketching out a scenario where a man is standing on top of a bridge, and you can either flip a switch to redirect the train up to him, killing him more painfully, or push him, killing him less painfully. It seems you should push him—this would be better for him and worse for no one. But this shows, as I argue, that there isn’t a big difference between the two acts and that, if, as most agree, flipping the switch is fine, so too is pushing the man.
TT ends up biting the bullet and saying that you should flip the switch rather than push him. But this is absurd—it makes him die more painfully and benefits no one. Any moral system with a formulaic set of rules that makes people worse off for no reason will be false. TT doesn’t think this is very unintuitive—I think the principle that you shouldn’t harm one person to benefit no one is very plausible, so I disagree. But he doesn’t agree that it’s even a bit weird, or at least some reason to reject deontology.
Argument 12: Morality should pick up on what matters
TT quotes me saying
It doesn’t really matter if you or someone else violates someone’s rights. From, to quote Sidgwick, the “point of view of the universe,” it seems unimportant whether it’s you violating rights.
Indeed, it seems very clear that in the organ harvesting case, it’s more important that five don’t die than it is that you don’t get your hands dirty. This seems like an unassailable moral datum. But morality should pick up on the things that really matter.
He replies
This intuition is completely at home with my view. I grant that whether you, or someone else acts wrongly and violates someone's rights is not better or worse for the world. Where I disagree is that morality "should pick up on the things that really matter" in the sense Matthew means. Morality is not axiology. It tells you what actions we should and shouldn't do, not what is truly valuable from the point of view of the universe, if there even is such a thing, which I doubt. But even if there is, why should we expect moral judgements about the rightness or wrongness of human actions to track that? It seems we should only expect this if, antecedently, we are consequentialists, which I'm not.
This is, once again, a difference of intuitions. Still, though, I think it’s worth saying something about the intuitions that motivate consequentialists. The consequentialist picture seems attractive because it picks up on what really matters, and not just in an axiological sense—it doesn’t seem genuinely important whether it’s you or someone else who violates the rights of another. This seems unattractive about deontology—an analogy that Richard has given, which I’ll co-opt, is that of norms of honor.
In an honor society, there may be intuitions about acts being dishonorable. But this seems problematic—it doesn’t seem like those things really matter. Morality is left impotent, inert, and unimportant if it’s not about what’s really important.
Arguments 13-14: The moderate’s dilemma
These were both arguments against moderate deontology; however, TT has confirmed that he’s an absolute deontologist. Thus, he thinks you shouldn’t kill one person even to prevent an infinite number of people from being tortured in increasingly horrible ways. But this verdict is, on its face, absurd. So his view is false. If committing homicide were the only way to prevent everyone from being tortured in the worst ways imaginable for all of eternity, it’s very obvious that you should do it.
Additionally, absolute deontology has issues with risk. Huemer explains this well. If you say that one reason—namely, your reasons not to violate rights—is infinitely more important than your reasons to promote the good, then you should never do anything, because anything risks violating rights, and that’s infinitely more important than anything else. One can avoid this by positing that there’s a threshold risk level at which actions become impermissible, but this runs into an issue described by Huemer.
We might posit a threshold level of probability, t (call it the ‘risk threshold’), such that when p < t, A may be rendered permissible if the consequences are sufficiently favorable, but when p > t, A must not be performed regardless of the consequences. This theory leads to paradoxical cases in which it is permissible to perform act A (whether or not one performs B), and it is permissible to perform B (whether or not one performs A), and yet it is impermissible to perform both A and B. For suppose there is a probability slightly below t that A is of kind K, and a probability slightly below t that B is of kind K. Suppose further that A and B each would (regardless of whether the other is performed) produce extremely good consequences, so that each is permissible according to the risk threshold theory. If, however, one performs both A and B, there is a probability greater than t that one will perform at least one action of kind K. Depending on the details of the deontological theory, this will typically lead to the conclusion that performing A and B is absolutely prohibited.
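To see the arithmetic behind Huemer’s case, here is a minimal worked example (the specific numbers are mine, and I assume the two risks are independent). Let the risk threshold be $t = 0.5$, and suppose each of $A$ and $B$ has probability $0.49$ of being of kind $K$. Each act individually falls below the threshold, but for the pair:

$$P(\text{at least one of } A, B \text{ is of kind } K) = 1 - (1 - 0.49)^2 = 1 - 0.2601 = 0.7399 > t.$$

So each act is permissible on its own, yet performing both crosses the threshold and is absolutely prohibited.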
So we have yet another way that deontology prohibits you from doing anything.
Argument 15: Bias galore
In my opening, I provided three biases that explain our deontological intuitions: status quo bias (our tendency to keep things as they are), existence bias, and loss aversion. TT admits he is “willing to concede this is some evidence for utilitarianism over deontology.”
For one, I have my doubts that these biases (at least completely) explain all, or even most, deontological intuitions. Intuitions about the overriding wrongness of violating autonomy/self-determination, torture, etc., even in cases where the outcomes are overall worse, as well as the intuition that the wrongness of, say, promise-breaking is over and above the bad outcomes, don't seem to be explained by any of these biases.
I’d agree that these don’t explain all of the intuitions, but they explain a good many of them. The rest can be explained by a combination of reliance on heuristics and sloppy thinking.
For two, this seems to cut both ways to some extent. Utilitarians seem to ignore many seemingly salient factors in ethical decision-making; they only judge actions based on the state of affairs brought about by them. In general, all humans are prone to bias, and it is doubtful that utilitarianism truly solves it, rather than merely being a framework for cloaking many of our pre-existing moral biases under the guise of mathematical calculation and certainty.
This doesn’t seem to cut both ways. TT just points out that all humans have biases. Of course, this is true. But there is a set of specific biases that explains precisely the intuitions sympathetic to deontology, so that theory is much more undermined by biasing factors.
For three, most of the people who we would expect to be most reflective about the subject, and thus least prone to bias, as well as most knowledgeable (e.g., professional normative ethicists), are non-consequentialists, with more being deontologists than adherents of any other particular moral theory (although I think the divide between consequentialism and non-consequentialism is much more significant than the divide between deontology and virtue ethics, which I take to collapse in many ways), and with consequentialism being the least popular of the main three, only a subset of whose adherents will be hedonic act utilitarians (Matthew's view). Of course they will be prone to bias too, as all humans are, but I think this consideration is enough to greatly offset the force of Matthew's point.
I’m happy to grant this is some evidence. But I don’t think that most ethicists are very good judges of this. Very few of them have read the serious arguments—from Huemer, Chappell, and so on—against deontology. Very few of them realize the utterly bizarre and appalling implications of deontology. And Parfit was on our side, and he was sufficiently brilliant to offset the rest of the ethicists put together.
Conclusion
TT claims in his conclusion that “Most of Matthew's arguments seemingly have little to no force.” Au contraire. I think my arguments have shown that, if deontology is true, you should defuse your own bomb in a park instead of multiple other people’s bombs, want people to do the wrong thing, not reverse wrong actions, not prevent gratuitous torture at no cost, never move or do anything at all, not prevent others from doing wrong things, not avert arbitrarily large amounts of gratuitous torture, and not improve the fate of victims of trolley mishaps even when you can do so at no cost and to their sole benefit. Additionally, deontology requires holding that putting perfect beings in charge of things is often a terrible thing, and there’s no motivation for the theory, because specific biases explain away most of our deontological intuitions. TT admits many of these things. I think it’s thus clear that deontology is an utter non-starter.