Michael Huemer has an unfortunate reluctance to kill people and harvest their organs. This reluctance is so strong that he thinks utilitarianism’s toleration of killing people and harvesting their organs counts against the theory. Huemer gives the following case:
“a. Organ harvesting Say you’re a surgeon. You have 5 patients who need organ transplants, plus 1 healthy patient who is compatible with the other 5. Should you murder the healthy patient so you can distribute his organs, thus saving 5 lives?”
Of course. It’s notoriously difficult to disentangle the morality of particular acts from the morality of the people who would perform them. Those who would actually kill patients and harvest their organs would almost certainly be evil; in the real world we can never be certain that the killing will save five lives. This thought experiment is counterintuitive because it has a few characteristics.
1 It pits our sacred value of not killing people against preventing some amorphous extra deaths. Excess deaths are far too common for people to get bent out of shape about them.
2 It presumes that extraordinary recklessness turns out to have good results.
3 It involves the word “murder,” which people tend to be averse to.
Maybe you’re still not convinced. Perhaps you believe in rights, and you think that not violating rights is morally more important than preventing other rights violations. However, beliefs in rights are wrong, so let’s right the wrong, as is our right.
1 It seems a world without any rights would still matter morally. For example, imagine a world of sentient plants, which can’t move, where all harm is the byproduct of nature. It seems that the plants being harmed is bad, despite no rights being violated. We can consider a parallel case of a robot that does not experience happiness or suffering. Even if this robot acted exactly like us, it would not matter morally, absent the ability to feel happiness or suffering. These two intuitions combine to support the view that beings matter if and only if they can experience happiness or suffering. This serves as strong evidence for utilitarianism.
One could object that rights are an emergent property of happiness, such that one gains them only when one can experience happiness. However, this requires deeply implausible strong emergence. As Chalmers explains, weakly emergent properties are reducible to interactions of lower-level properties. For example, chairs are reducible to atoms: we need nothing more to explain the properties of a chair than knowing the ways its atoms function. Chairs are purely the result of atoms functioning. Strongly emergent properties, by contrast, are not reducible to lower-level properties. Chalmers argues that there is only one thing in the universe that is strongly emergent: consciousness. Whether or not this is true, it illustrates the broader principle that strong emergence is prima facie unlikely. Rights are clearly not reducible to happiness; no amount of happiness magically turns into a right. This renders the objection deeply implausible.
2 Everything that we think of as a right is reducible to happiness. For example, we think people have the right to life, and the right to life increases happiness. We think people have the right not to let other people enter their house, but we don’t think they have the right not to let other people look at their house. The only difference between shooting bullets at people and shooting soundwaves at them (i.e., making noise) is that one causes a lot of harm and the other does not. Additionally, we generally think it would be a violation of rights to create huge amounts of pollution, such that a million people die, but not a violation of rights to light a candle that kills no one. The difference is just in the harm caused. Moreover, if enshrining as rights things we currently don’t think of as rights began to maximize happiness, we would think that they should be recognized as rights. For example, we don’t think it’s a violation of rights to look at people, but if every time we were looked at we experienced horrific suffering, we would think it was a rights violation to look at people.
3 If we accept that rights are ethically significant, then there’s some number of rights violations that could outweigh any amount of suffering. For example, suppose that there are aliens who will experience horrific torture that gets slightly less unpleasant for every human leg they grab, without the human’s knowledge or consent, such that if they grab the legs of 100 million humans the aliens will experience no torture. If rights are ethically significant, then the aliens grabbing the legs of the humans, in ways that harm no one, would be morally bad: the sheer number of rights violations would outweigh the torture. However, this doesn’t seem plausible. It seems implausible that aliens should have to endure horrific torture so that we can preserve our magic rights-based forcefields from infringements that produce no harm to us. If rights matter, a world with enough rights violations, where everyone is happy all the time, could be worse than a world where everyone is horrifically miserable all of the time but where there are no rights violations.
4 A reductionist account is not especially counterintuitive and does not rob us of our understanding or appreciation of rights. It can be analogized to the principle of innocence until proven guilty. That principle is not literally true: a person’s innocence until demonstration of guilt is a useful legal heuristic, yet a serial killer is guilty even if their guilt has not been demonstrated.
5 An additional objection can be given to rights. We generally think that it matters more not to violate rights oneself than to prevent other rights violations. We intuitively think that we shouldn’t kill one innocent person to prevent two murders. Yet preventing a murder is no more morally relevant than preventing any other death: a doctor should not try any harder to save a person’s life because they were shot than because they have a disease not caused by malevolent actors. I shall give a counterexample to this. Suppose we have people in a circle, each with two guns that will each shoot a person next to them. Each person has the ability either to prevent two other people from being shot by another person’s guns, or to prevent their own gun from shooting one person. If we take the view that one’s foremost duty is to avoid one’s own rights violations, then each person would be obligated to prevent their own gun from shooting the one person. However, if everyone prevents one of their own guns from firing, rather than two of other people’s guns, then everyone in the circle ends up being shot. If, instead, it’s more important to save as many lives as possible, then each person would prevent two guns from firing, and no one would be shot. The world where everyone is shot seems clearly worse.
Similarly, if what matters most is not violating rights oneself, then one should try to prevent one’s own violations of rights at all costs. If that’s the case, then if a malicious doctor poisons someone’s food and then realizes the error of their ways, the doctor should try to prevent that person from eating the food, and having their rights violated, even at the expense of other people being poisoned in ways not caused by the doctor. If the doctor can either prevent one person from eating the food they poisoned, or prevent five other people from eating food poisoned by others, they should prevent the one person from eating the food poisoned by them. This is obviously false.
6 We’ve already seen that all of the things we think of as rights are conducive to happiness generally. However, this is not the extent of the parity. The things we don’t think of as rights would start being treated as rights if doing so were conducive to utility. Imagine a world where every time you talked to someone it burst their eardrums and caused immense suffering. In that world, talking would and should be considered a rights violation. Thus, being conducive to utility is both a necessary and a sufficient condition for something to be a right.
7 We have decisive scientific reasons to distrust the existence of rights, which is an argument for utilitarianism generally. As Greene et al argue “A substantial body of evidence indicates that utilitarian judgments (favoring the greater good) made in response to difficult moral dilemmas are preferentially supported by controlled, reflective processes, whereas deontological judgments (favoring rights/duties) in such cases are preferentially supported by automatic, intuitive processes.”
People with damaged VMPCs (the ventromedial prefrontal cortex, a brain region responsible for generating emotions) were more utilitarian (Koenigs et al 2007), suggesting that emotion is responsible for non-utilitarian judgements. While there is some dispute about this thesis, the largest data set, from (Fornasier et al 2021), finds that better and more careful reasoning results in more utilitarian judgements across a wide range of studies. They write: “The influential DPM of moral judgment makes a basic prediction about individual differences: those who reason more should tend to make more utilitarian moral judgments. Nearly 20 years after the theory was proposed, this empirical connection remains disputed. Here, we assemble the largest and most comprehensive empirical survey to date of this putative relationship, and we find strong evidence in its favor.”
8 Rights run into a problem based on the aims of a benevolent third-party observer. Presumably such an observer should hope that you do what is right. However, given the choice between one person killing another to prevent 5 indiscriminate murders, and the 5 indiscriminate murders occurring, the observer should obviously prefer the former. An indiscriminate murder is at least as bad as a murder done to try to prevent 5 murders, and 5 indiscriminate murders are worse than one indiscriminate murder; therefore, by transitivity, a world with one murder committed to prevent 5 should be judged better than a world with 5 indiscriminate murders. If world A should be preferred by a benevolent impartial observer to world B, then it is right to bring about world A. All of the moral objections to the act would count against world A being better; if despite those objections world A is still preferred, then it is better to bring about world A. Therefore, one should murder one person to prevent 5 murders. This contradicts the notion of rights.
9 We can imagine a case with a very large series of concentric circles of perfectly moral people. The innermost circle has one person, the second innermost has five, the third innermost has twenty-five, and so on, each circle holding five times as many people as the one inside it. There are a total of 100 circles. Each person is given two options.
1 Kill one person
2 Give the five people in the circle outside of you the same options you were just given.
The people in the hundredth circle will be given only the first option, if the buck doesn’t stop before reaching them.
The deontologist would have two options. First, they could stipulate that a moral person would choose option 2. However, if this is the case, then a cluster of perfectly moral people would bring about 5^99, or about 1.5777218 x 10^69, murders, when the alternative action could have resulted in only one murder. This seems like an extreme implication.
Second, they could stipulate that you should kill one person. However, the deontologist holds that you should not kill one person to prevent five people from being murdered. If this is true, then you certainly shouldn’t kill one person merely to give five perfectly moral people two options, one of which is killing one person. Giving perfectly moral beings more options that they don’t have to choose cannot make the situation morally worse. If you shouldn’t kill one person to prevent five murders, you certainly shouldn’t kill one person to prevent five things that a perfectly moral being judges to be at most as bad as murders.
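The 1.5777218 x 10^69 figure is just 5^99: each circle is five times larger than the last, so the hundredth circle contains 5^99 people, each of whom must kill one person if the buck never stops. A minimal sanity check in Python (an illustration, not part of the argument):

```python
# People in circle n (1-indexed): 5 ** (n - 1), so circle 100 has 5 ** 99 people.
# If every circle defers (option 2), each of those 5 ** 99 people must kill one
# person, so the total number of murders is 5 ** 99.
murders_if_everyone_defers = 5 ** 99

print(len(str(murders_if_everyone_defers)))   # 70 digits
print(f"{murders_if_everyone_defers:.7e}")    # 1.5777218e+69
```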
Rights have been officially debunked. They are no more. Yet there are some more specific responses to the organ-harvesting objection.
First, there’s a way to explain the intuition away sociologically. As a society we rightly have a strong aversion to killing. However, our aversion to death generally is far weaker; if it were as strong, we would be rendered impotent, because people die constantly of natural causes. Thus, there’s a strong sociological reason for us to regard killing as worse than letting people die. However, this attitude developed as a result of societal norms, rather than as a result of accurate moral-truth-tracking processes. The intuition about the badness of killing only exists in areas where killing to save people is usually not conducive to happiness. Many of us would agree that the government could kill an innocent person in a drone strike in order to kill a terrorist who would otherwise kill ten people. The reason for the divergence in intuitions is that medical killings are very often a bad thing, while government killings via drone strikes are often perceived to be justified.
Second, we have good reason to say no to the question of whether doctors should kill one to save five. A society in which doctors regularly violated the Hippocratic oath and killed one person to save five would be a far worse world. People would be terrified to go into doctors’ offices for fear of being murdered. Cases of one person being murdered to save five would be publicized by the media, resulting in mass terror. While the thought experiment stipulates that the doctor will certainly not be caught and the killing will occur only once, our revulsion at very similar and more easily imagined cases explains our revulsion at killing one to save 5. It can also be reasonably argued that things would go worse if doctors had the disposition to kill one to save five. Given that a utilitarian’s goal is to take the acts, and follow the principles, that make things go best in the long term, a more valuable principle that entails not taking this act can be justified on utilitarian grounds.
Third, we can imagine several modifications of the case that make the conclusion less counterintuitive.
A) Imagine that the six people in the hospital were family members whom you cared about equally. Surely we would intuitively want the doctor to bring about the death of one to save five. The only reason we have the opposite intuition when family is not involved is that our revulsion at killing can override other considerations when we feel no connection to the anonymous, faceless strangers whose deaths are caused by the doctor’s adherence to the principle that they oughtn’t murder people.
It could be objected that even with family members the intuition is the same. Yet this doesn’t seem plausible, particularly if no one had any knowledge of the doctor’s action. If no one knew that the doctor had killed the one to save the other five, surely it would be better for this to happen. An entire family dying would clearly be worse than one family member dying.
It could be objected that adding in family makes the decision-making worse, by adding in personal biases. Yet this is not so. Making it more personal forces us to think about it in a more personal way. It is very easy to neglect the interests of the affected parties when we don’t care much about them. Making it entirely about close family members matters, because we care about family. If we care about what is good for our family, then making the situation entirely about our family is a good way to figure out what is good, all things considered. Yet this is not the only case that undercuts the intuition.
B) Suppose that a doctor was on their way to a hospital with organs for transplant that would save 5 people who would otherwise die. On the side of the road they see a murder that they could prevent, yet preventing it would require a long delay that would cause the death of the five people in the hospital. It seems clear that the doctor should continue to the hospital. Thus, when we simply weigh the badness of allowing 5 to die against one murder, the five deaths outweigh.
C) Imagine that 90% of the world needed organs, and for each person in the remaining 10% we could harvest their organs to save 9 others, who would live a perfect life. It seems clear that it would be better to kill the ten percent rather than to let the other 90% die.
Finally, let’s investigate the principle behind not harvesting organs.
We could adopt the view NK, which says that one ought not kill innocent people. Yet NK is clearly subject to many counterexamples. If the only way to stop a terrorist from killing a million people were to kill one innocent person, we should surely kill the innocent person. And most people would agree that if you could kill a person and harvest every cell in their body to save a million people, that action would be permissible.
We could adopt the view NKU, which says one ought not kill unless there is an overriding concern involving vast numbers of people. Yet this view also runs into a problem.
It seems the intuition differs depending on the context in which a person is killed. A terrorist using a human shield who is about to kill five people could permissibly be killed, yet it seems less intuitive to kill a person to harvest their organs. Thus, the badness of killing is context specific. This adds credence to the utilitarian view, in that the context seems generally to track whether killing in most similar cases would make things go best.
We could take the view DSK, which says doctors shouldn’t kill. However, this view is once again easily explainable sociologically: it is very good for society that doctors don’t generally kill people. But at a deeper ethical level it makes less sense.
We can consider a similar case that doesn’t seem to go against the obligations of a doctor. Suppose that a doctor is injecting patients with a drug that cures disease in low doses but kills in high doses. Midway through an injection, they realize that they’ve already given lethal doses to five other patients in another room. The only way to save the five is to leave immediately, which means their current patient will receive too high a dose and die. It seems intuitive that the doctor should save the five rather than the one.
One might object that the crucial difference is that the doctor is killing rather than merely failing to save. However, we can consider another case, where the doctor realizes they’ve given a placebo to five people rather than life-saving medicine. The only way to give the life-saving medicine is to abandon the room the doctor is in, much like in the previous example. It seems very much like the doctor should go to the other room, even though it will result in a death caused by their injection. It seems clear that the cause of the lethal condition shouldn’t matter in terms of what they should do. As Shelly Kagan has argued, there is no plausible doing-versus-allowing distinction that survives rigorous scrutiny. Given the repeated failure to generate a plausible counter-theory, we have reason to accept the utilitarian conclusion.
Additionally, imagine a situation where people were very frequently afflicted by flying explosives, which would blow up and kill five surrounding people unless the afflicted person was killed. In a world where that frequently happened, it starts to seem less intuitive to think we shouldn’t kill one to save five.
Humans follow patterns. Seeing “murder is bad” enough times leads them to conclude that murder is generally bad. This corrupts their judgement about particular cases where the features that make murder bad are absent.
AlphaZero was a chess-playing AI that learned to play chess by playing against itself many times, never influenced by human strategy. As a result, it didn’t follow the common strategies and heuristics of humans, and it played much better than other AIs. Humans were baffled by AlphaZero’s strategies and described it as playing chess like an alien. Utilitarian ethics is somewhat similar. It’s the optimal ethical system, but it often ignores the rules that humans use for ethics. Our bafflement at its seemingly strange mandates is no more surprising than bafflement at the chess of AlphaZero. Sometimes, heuristics hold us back. Yet as I hope to have shown, despite utilitarianism playing chess like an alien, it’s playing better chess.
So let's start, in order, to refute your points.
1. A. [World without any rights] This point is clearly true - assuming that you don't simply declare happiness and the absence of suffering to be rights in themselves, there are clearly non-rights things that matter.
1. B. [Robot] A robot that does not experience happiness or suffering could still have rights. I'm not sure where you get the idea that it would not. If you mean that the robot is not *conscious* (whatever that means), then sure, it's not a being and so has no rights.
1. C. [Emergence] Your refutation of this cannot be described as a real argument. Your only actual analysis here is that "no amount of happiness magically turns into a right." That's asserting away the objection. Regardless, the question isn't a threshold of happiness for a right; it would be that *any* happiness suffices to create rights, which seems plausible because happiness is a product of consciousness, which is strongly emergent. Regardless, even if you're right here, it doesn't matter, because happiness is not essential to rights.
2. A. [Right to Life] We don't think life should be preserved only as a means to produce happiness. If you were told that a person had a 60% chance of living a life of -1 happiness and a 40% chance of living a life of 1.4 happiness, you would not be authorized to kill them.
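To make the numbers here explicit (a minimal sketch using the -1 and 1.4 happiness values from the example above): the expected happiness is slightly negative, so a pure expected-utility calculation would license the killing, which is exactly what the intuition rejects.

```python
# Expected happiness of letting the person live, using the example's numbers:
# 60% chance of a life at -1 happiness, 40% chance of a life at +1.4 happiness.
expected_happiness = 0.6 * (-1) + 0.4 * 1.4

# Slightly negative (-0.04), so a naive expected-utility calculation would
# endorse the killing; the rights-based intuition disagrees.
print(round(expected_happiness, 2))
```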
2. B. [Shooting vs. Soundwaves] This argument makes no sense. Shooting people violates their rights to bodily autonomy and to life. Soundwaves do not impact either of those rights, and we usually have implicit permission to speak to people by virtue of living in a society. Speaking to someone who does not want to be spoken to may be a rights violation as well.
2. C. [Climate Change] Rights apply to hurting people. Killing 1 million people violates their right to life. Lighting a candle that hurts no one does not.
2. D. [Looking at People] The premise is incorrect. It's not a rights violation to look at people because we either have implicit permission to look or they're in an area where they don't have a privacy right. Looking at someone in their home, even if they don't know, could be a rights violation. Similarly, looking at child pornography, even if the person doesn't know, is probably also a rights violation.
3. A. [Leg-Grabbing Aliens] This scenario is incoherent: a person's leg cannot be grabbed without their knowledge or consent. Assuming that minor problem away (somehow), if the torture were severe enough, then grabbing legs may be justified even if it is a rights violation, because leg-grabbing is not an absolute rule to avoid. If the scenario assumes that a truly, absurdly, incomprehensibly vast number of legs need to be grabbed to alleviate the torture, then I would bite the bullet and say that the grabbing is wrong.
3. B. [World with lots of rights violations] I agree. I would consider a world where everyone was forcibly subjected to wireheading a worse world.
4. [Innocent until Proven Guilty] This is comparing apples to oranges. No one says the rule describes reality; rather, when we are deciding whether someone is innocent or guilty, we should start with the assumption of innocence and work our way to guilt. I doubt that this is even a "right" in the sense that you use elsewhere.
5. A. [Satanic Circle of Death] In this case preventing two guns from firing would be the right decision, because it's the *only* way to save those people's lives. In contrast to the usual trolley problem, the "one" person whom you would normally have to kill is guaranteed to die unless you pick the other choice. This is the same reason it's a good thing to shove someone out of the way of an oncoming train.
5. B. [Medical Malpractice] You're trying to tie together two separate actions here. Once you have taken the active step of poisoning, you should view all six people as equally to be saved. Regardless, I don't think that an objection that presupposes a radical change in your ethical views is an effective response to a system of ethics.
6. [What if Talking Killed] In this case we don't have to define a "new" right. The right to your own person and to not getting your eardrums blown out, combined with the nonexistence in such a world of the implicit license to talk, would render this the same rights violation that it is now. Your assertion that being conducive to utility is necessary for a right has no backing whatsoever. A person has a right to not be thrown into a wireheading machine.
7. [Human Bias] This is all irrelevant to the actual question, though I will note that the author of this blog post carries at least as much personal baggage as the deontologists these generic studies describe.
8. [3rd Party Observer] This assumes that a third-party observer would want a "better world" and that this is how they make judgements. Under a human-rights framework, however, such an observer would not evaluate worlds in toto, but instead judge actions. In that case, they could very well say that all 6 killings are bad acts.
9. [SUPER DUPER SATANIC CIRCLE(S)] This scenario is incoherent: if option two were given to the 98th circle, they would be unable to choose it, because then the 99th circle would be allowed to give the 100th circle both options, which is contradicted by the setup itself. If you try to immunize the argument by amending the second option to "2 options unless they are on the outside," then I would say that it's still incoherent, because the people in the 3rd circle wouldn't have the same actual option, even if the text of the option were the same (for them, the people who wouldn't get the choice are only 97 steps away instead of 98). Furthermore, it is clear to the person in the middle that, since everyone in the circle is moral, they will choose the same kind of option. Therefore if they pick option two, they know that 1.5777218 x 10^69 people will die. Comparatively, they know that if they kill 1 person, only 1 person will die. Therefore, since rights are not necessarily infinitely valuable, they should choose to kill 1.
9. B. [Family Members] I don't understand your point here. The only way that family could be relevant is the intuition that someone may want to sacrifice themselves to save the others. To the extent that's correct, and they waive their rights, you can ask them. If they say no, however, then killing them is just as wrong as usual. If *all* of them would die unless you killed one, then I think the situation is different: since someone is going to die immediately either way, the rights violation in your doing the killing would be relatively minor, and probably outweighed by saving 5 lives.
9. C. [Roadside Murder] You aren't the one doing the murdering, thus you are not violating anyone's rights by saving the 5. Additionally, those five people probably have a right to medical care that they are relying on you for.
9. D. [90% of People Need Organs] This is simply you restating the original thesis of utilitarianism in a different way. Maybe killing 1 to save 9 is justified under a rights model, but if it isn't, merely restating the question doesn't get you anywhere.
10. [AlphaZero] A smart person has told me that AI will destroy the world and lead to unimaginable suffering. It seems that utilitarianism will do so as well.