My good friend Amos Wollen, a clever Oxford gentleman, has an article up arguing for utter lunacy. In this case, the lunacy is the standard nonsense that people often say about the impermissibility of harvesting people’s organs, contradicting the utilitarian account. If this initially strikes one as absurd, they need read no further—it’s always possible to rig up arguments for crazy skeptical conclusions like the nonexistence of the external world, the falsity of modal rationalism (actually, as Amos has recently discovered, it isn’t possible to find arguments for that), the non-existence of other minds, and the impermissibility of killing people to harvest their organs and save multiple others. But some of us have to do the difficult work of replying to this utter lunacy—just as some, like Huemer, spend time arguing that we are not, in fact, brains in vats—and I suppose I’m as good a person as any.
Amos has written a nice little article replying to my decisive proof that we have decisive—more than decisive, overwhelming—reason to bite the bullet on the Transplant scenario. There’s a lot to like about the article, particularly the part where Amos describes me as a “good friend, gentleman-scholar, and author of a widely-read Substack.” Many people have said such things. However, this is interspersed with totally wrong arguments—which we’ll get to—as well as various slanderous allegations, including, notably, that my Substack “defends bad things.” This from the person who opposes homicide and organ harvesting. Amos wisely does not—for he cannot—reply to my general arguments against deontology. Still, he describes his aim as returning “Bentham’s lapdog” “to his kennel.” Unfortunately for those who have a perverse and seemingly pathological desire to retain their organs, he does no such thing.
Amos starts by replying to the first account I give which explains many of our intuitions.
[T]here’s a way to explain our organ harvesting judgments away sociologically. Rightly as a society we have a strong aversion to killing. However, our aversion to death generally is far weaker. If it were as strong we would be rendered impotent, because people die constantly of natural causes.
Amos replies.
Matthew’s first just-so story has a veneer of plausibility, but it’s a just-so story nonetheless. Absent some strong reason to doubt our intuition against killing in this particular case, we shouldn’t doubt it just because, as a society, we have a strong aversion to killing generally.
Consider, by analogy, an ethical egoist who uses a similar story to argue that our intuitions against killing people when it maximizes personal gain are unreliable, claiming that justifiably, as individuals, we’ve formed a strong (though sometimes irrational) prejudice against killing people, because killing people isn’t usually in our long-term self-interest. This is possible, but it isn’t plausible: without some good reason to accept ethical egoism, we have no reason to believe the egoist’s just-so story. The same goes for utilitarians who use the “society-has-an-indiscriminate-prejudice-against-killing” just-so story when trying to explain our intuitions about Organ Harvesting.
Of course, this argument by itself is not enough to knock down the intuition single-handedly. Rather, its aim is merely to undercut the intuition—particularly given that it is paired with around two dozen other arguments. If each of the two dozen arguments has a decent amount of force, then the organ harvesting intuition is in rather poor shape indeed—and I find this account to be one of the weaker debunking accounts.
But the analogy that Amos gives of the ethical egoist is utterly unbelievable. I provided an account of why one would expect us to arrive at a distinction between the two naturally—if we regarded innocent deaths as just as bad as murder, society would break down. Things go much better when we respect deontic norms, even if they’re not intrinsically what matters. This is one of a relatively small set of norms we’d expect to emerge on the hypothesis that consequentialism is correct—we would, for the reason described, expect people to intuitively draw a significant distinction between killing and letting die.
In contrast, in the egoism case, it’s utterly implausible that this would be the norm that arises. To illustrate this, let’s look at what both utilitarianism and egoism say about the norms. Utilitarianism says that the norms are things that one should basically never really violate—it’s not at all clear that there have ever been real cases when people have violated lots of rights in ways that are optimific. In contrast, in the egoist case, there have been enormous numbers of cases where being kind to others is not in one’s own best interest. For example, if you can steal money and get away with it, lie for personal benefit, and so on, it’s permissible on egoism. So, the utilitarian account of deontic heuristics makes sense, because there are roughly zero real-world cases where one should actually violate them. In contrast, there are literally billions of real-world cases where, on egoism, people should violate egoistic norms.
In my original article, I gave a second explanation of our intuitions.
[W]e have good reason to say no to the question of whether doctors should kill one to save five. A society in which doctors violate the Hippocratic oath and kill one person to save five regularly would be a far worse world. People would be terrified to go into doctor’s offices for fear of being murdered. While this thought experiment generally proposes that the doctor will certainly not be caught and the killing will occur only once, the revulsion to very similar and more easily imagined cases explains our revulsion to killing one to save 5.
Amos replies.
Not only could we say the same about the second story; I think we can also see that it’s false. If it were true, we’d expect our intuitions to change if we modified the case so that the person facing the dilemma is not a doctor, and made it very, very clear that this is an isolated case that will never be heard of by wider society. (Imagine, if you like, that the story is set on a distant planet, far removed from any other humans.) I take it that your intuition probably doesn’t change when we make these modifications. Thus, the second story probably isn’t true.
I’d agree that if one’s intuitions don’t change, then maybe this scenario wouldn’t be a good explanation. But I think our intuitions do change. Richard gives an example of just such a scenario.
Martians, it turns out, are rational, intelligent gelatinous blobs whose sensory organs are incapable of detecting humans. They generally live happy and peaceful lives, except for practicing ritual sacrifice. Upon occasion, a roving mob will impound six of their fellow citizens and treat them as follows. Five have a portion of their “vital gel” removed and ritually burned. As a result, they die within a day, unless the sixth chooses to sacrifice himself—redistributing the entirety of his vital gel to the other five. Whoever is alive at the end of the day is released back into Martian society, and lives happily ever after.
You find yourself within the impounding facility (with a note on your smartwatch indicating that you will be automatically teleported back to Earth within an hour). Five devitalized Martians are wallowing in medical pods, while the sixth is studiously ignoring the ‘sacrifice’ button. You could press the button, and no-one would ever be the wiser. If you do, the Martian medical machinery will (quickly and painlessly) kill the sixth, and use his vital gel to save the other five.
Should you press the button?
In this case, it’s very obvious that our intuitions are different. Thus, when we make the scenario such that we really internalize that the person is not a doctor and it won’t be observed, it certainly weakens the intuition. Then Amos replies to two scenarios I give. The first is one in which a doctor can harvest the organs of one family member so that five don’t die. It seems obvious that you should want them to—one death rather than five, when we care about everyone equally. Amos just says he doesn’t share the intuition, strangely.
Amos similarly responds to a case where 10% of people can have their organs harvested to save 90% and just says he doesn’t have the intuition. I do, and I think a lot of people do, but perhaps if he doesn’t, this shouldn’t move him.
Finally, Amos replies to a scenario I copied from Savulescu.
Epidemic. Imagine an uncontrollable epidemic afflicts humanity. It is highly contagious and eventually every single human will be affected. It causes people to fall unconscious. Five out of six people never recover and die within days. One in six people mounts an effective immune response. They recover over several days and lead normal lives. Doctors can test people on the second day, while still unconscious, and determine whether they have mounted an effective antibody response or whether they are destined to die. There is no treatment. Except one. Doctors can extract all the blood from the one in six people who do mount an effective antibody response on day 2, while they are still unconscious, and extract the antibodies. There will be enough antibodies to save 5 of those who don’t mount responses, though the extraction procedure will kill the donor. The 5 will go on to lead a normal life and the antibody protection will cover them for life.
Amos replies.
This is a better case. My feelings on it are mixed. According to Savulescu, Epidemic is more philosophically probative than Organ Harvesting, because our intuitions about Organ Harvesting are muddied by our intuitions about other things (the general badness of killing, a fear that killing the one man will trigger a slippery slope leading to other killings, etc.), whereas our intuitions about Epidemic are not so muddied.
While Savulescu is right that our intuitions about Epidemic might not be muddied by the same specific factors – the general badness of killing, the prospect of a slippery slope, etc. – that he thinks are playing a role in our intuitions about Organ Harvesting, our intuitions about Epidemic are, plausibly, muddied by other factors. One factor I’m thinking of especially is the difficulty of imagining a very, very, very large number of unconscious people in one’s head, and simultaneously keeping the distinction between the medical conditions of the minority (the ones who’ll wake up if left alone) and the majority (the ones who are on a trajectory towards death) separated clearly in the forefront of one’s mind.
But this shouldn’t affect our intuitions very much. Suppose that a person said that killing large numbers of people isn’t worse than killing small numbers. I reply by giving a case where they can either kill one billion people or seven. They can reply that we are bad at imagining lots of unconscious people. It’s true that it’s impossible to form a mental image of a billion unconscious people simultaneously, but fortunately, we don’t need to do so to reason properly about morality. To reflect on the case, one doesn’t need to form a vivid mental image of it.
Additionally, despite what I said about Matthew’s case where you can kill 10% of the population to save the other 90%, it could be that, in the back of our minds, we suspect that the consequences of losing five-sixths of the world’s population would be so, so disastrous that it would justify killing a whole bunch of people to prevent those consequences, even on a non-consequentialist theory.
But then if this were true, one would have the intuition in the other 90% case that it’s permissible—an intuition that Amos says he doesn’t have. I think people imagine the utter horror—and compare it to the still horrific but comparatively smaller state of affairs in which only 1/6 die—but there’s no reason to think that they’re really, unbeknownst to them, doing a complex consequentialist calculation.
Amos gives a new case.
Blood Harvesting: Five people are dying of Covid-23. Dr. Dre – who is not, in fact, a doctor, and who never swore the Hippocratic Oath – can save them by painlessly extracting the antibody-rich blood of a sleeping person and giving it to the five. This kills the sleeping person but saves the five from death. All of this is happening in perfect isolation, so, if the Good Doctor kills the sleeping person, this will have no adverse effects on anyone in the outside world.
In this case, it seems far less clear – to me at least – that killing the sleeping person is completely kosher. In fact, it seems wholly haram. And, given that this case lacks the distorting factors present in Epidemic, even if you had the intuition there that the 1 in 6 should be killed, we should put greater stock in our intuitions about Blood Harvesting than in our intuitions about Epidemic. According to a peer-reviewed survey that I conducted privately in my head and can show you if you send me a £250 Greggs voucher in the mail, most people will have the same intuitions about Blood Harvesting as me.
But this still is sufficiently similar to the organ harvesting case to evoke our intuitions about it. As Savulescu notes about Transplant:
But this is a dirty example. For example, that doctors should not kill their patients, that those with organ failure are old while the healthy donor is young, that those with organ failure are somehow responsible for their illness, that this will lead to a slippery slope of more widespread killings, that this will induce widespread terror at the prospect of being chosen, etc, etc
If killing people to save multiple others would be disastrous if done widely by society, then our intuitions are very plausibly explainable on rule-consequentialist grounds. Thus, to avoid this, we should imagine a scenario in which we have good reason to think that following the deontological rule won’t be optimific. This is why Transplant is a worse test case than Epidemic.
Ploughing on, Matthew writes:
A fourth objection is that, upon reflection, it becomes clear that the action of the doctor wouldn’t be wrong. After all, in this case, there are [five] more lives saved by the organ harvesting. It seems quite clear that the lives of [five] people are fundamentally more important than the doctor not sullying themself.
Upon reflection, it does not seem to me that the doctor’s action wouldn’t be wrong. Axiologically, it would be better to lose one person than five. But, morally, it would be wrong to violate rights in the process.
I think that if we think about this in terms of what’s important about the case, it becomes clear that the doctor should harvest the organs. On the one hand, we have the opportunity for four extra lives to be saved—four living, breathing humans, who will not go home to their families if you don’t do it. On the other hand, we have some nebulous deontic rule. It’s obvious which of those really matters more—which is more important. And I’m not using “important” in the axiological sense—I’m using it to discuss what really matters. And morality should be about what’s important.
Finally, Amos quotes me saying
[I]f we use the veil of ignorance, and imagine ourself not knowing which of the six people we were, we’d prefer saving five at the cost of one, because it would give us a 5/6ths, rather than a 1/6ths chance of survival.
He replies.
This is the problem with veil of ignorance reasoning generally. If you design a social arrangement from behind a veil of ignorance, with the single-minded aim that you yourself would have the best chance of being well-off were you randomly dropped in it, then you will sometimes design arrangements that involve violating the rights of the few for the good of the many. But this doesn’t show we really should violate the rights of the few for the good of the many. It just shows that a single-minded focus on your own self-interest will sometimes lead to immoral outcomes, something we knew already.
Well, if we imagine impartial people all being in favor of doing X, this gives us some reason to think one should do X. Most people seem to think that the veil of ignorance tracks something important—but if it does, then we’d be utilitarians, assuming we accept the Pareto principle: that we should do things that are better for everyone. See here for more on this.
Thus, I conclude that, as is to be expected, truth and justice have prevailed, and Amos’ arguments FAIL!
The Martian example is excellent. Another useful feature is that it presents the scenario as normalized, something expected to happen every once in a while. This neuters the "Dr. Frankenstein's ego is so big he thinks he can single-mindedly change the way things have always been!" aspect of anti-Transplant intuitions.
A stellar piece. However, in the words of one M. Hijab, "Contradiction! CONTRADICTION!" Expect my crushing response in a finite number of business days.