Introduction
After my debate with Ben Burgis about organ harvesting, I published my opening statement. Ben wrote a response to my opening statement, which I’ll respond to in this article. The quotes from my original article are in italics.
A few preliminary notes. First, there may be some spelling errors—I wrote this from roughly 12 to 4 in the morning, and am not going to check the grammar of my more-than-20,000-word article.
Second, big thanks to Ben for engaging me on this topic. I think the utilitarians have the much better side of the argument, so it’s nice to consider the arguments in depth.
Matthew starts with some general arguments against the concept of moral rights.
As I’ve been known to do!
Objection 1
Here’s the first one:
1 Everything that we think of as a right is reducible to utility considerations. For example, we think people have the right to life, which obviously makes people’s lives better. We think people have the right not to let other people enter their house, but we don’t think they have the right not to let other people look at their house. The only difference between shooting bullets at people and shooting soundwaves (i.e., making noise) is that one causes a lot of harm and the other does not. Additionally, if it began to maximize hedonic value for things that we currently don’t think of as rights to be enshrined as rights, we would think that they should be recognized as rights. For example, we don’t think it’s a violation of rights to look at people, but if every time we were looked at we experienced horrific suffering, we would think it was a rights violation to look at people.
It’s generally true about moral theories that partisans of one moral theory will try to show how their theory can capture what’s plausible about another. How successful any given attempt is, of course, has to be evaluated case-by-case. So for instance I claim that much of what’s plausible about utilitarianism can be (better) explained by Rawls’s theory of justice. (Consequences are given their due, but we’re still treating individuals as the unit of moral evaluation in a way we’re just not when we throw all harms and all benefits onto the scales as if they were all being experienced by one great big hive mind.) Matthew, meanwhile, claims that the appearance of (non-reducible) moral rights can be better explained by the moral importance of good and bad consequences.
As I’ve previously shown, consistent Rawlsians would be utilitarians—disproportionately caring about the badly off makes no sense. John Harsanyi > John Rawls.
On a more pressing note, Ben’s response misses the point. My claim wasn’t merely that consequentialism captures what we want about rights; it was that consequentialism is the only way to explain what rights we have in the first place. Consequentialism says that X should be enshrined as a right iff doing so has good consequences. Thus, this explains both what rights are and which things are rights. Other accounts of rights, I argue, are wildly implausible, lacking firm ontological foundation.
A straightforward moral rights intuition is that we have a right not to be shot that’s much more serious than our right not to have certain soundwaves impact our ears. It should be noted that everyone, on reflection, actually does believe that we have at least some rights against people producing soundwaves that will enter our ears without our consent — hence noise laws. Even if we limit ourselves to talking, if the talking in question is an ex-boyfriend who won’t stop following around his ex-girlfriend telling her about how much he loves her and how he won’t be able to live without her, how he might commit suicide if she doesn’t come back, etc., that’s very much something we all actually do think on reflection the ex-girlfriend has a right against. But Matthew’s general point stands.
The point stands far more as a result of the examples. Making sound is a rights violation iff it causes enough harm to be worth calling one. The same is true of the ex-boyfriend case. The badness of making loud sound isn’t about the nature of the sound, it’s about the nature of the harm. If the sounds were half as loud, but they caused more harm because humans had more sensitive ears, emitting them would be a graver rights violation.
We all consider our rights against being shot much weightier than our rights against hearing noises we don’t want to hear. For one thing, we tend to think — not always but in a very broad range of normal situations — that the onus is on the person who objects to some noise to say “hey could you turn that down?” or “please leave me alone, I’m not interested in talking” while we don’t think the onus is usually on the person who’s dodging gunshots to say “please stop shooting at me.” And in the noise case there’s a broad range of cases where we think the person who doesn’t want to be exposed to some noise is being unreasonable and they’ll have to suck it up and the range of cases where we’d say something parallel about bullets is, at the very least, much narrower.
So — what’s the difference?
Matthew thinks the only difference is that the consequences of being shot are worse than the consequences of someone talking to you. He further thinks that if it’s the only difference, we have no reason to believe in (non-reducible) moral rights. Both of these inferences are, I think, far too quick, and my contention is that neither really holds up to scrutiny.
I don’t claim that that is the only difference. I merely claim that this is the best explanation of a wide range of rights that exist. Take another example—you have the right not to let other people enter your house, but you don’t have the right not to let people look at your house. Consequentialism naturally explains this. It additionally explains, quite naturally, why in the noise case the threshold at which banning noise seems reasonable is the point at which it starts to cause significant harm. It likewise explains why harm is the threshold for banworthy pollution, even though it generally makes sense to tax pollution rather than ban it; why having a kid is bad iff the kid is expected to live a terrible life; why it’s fine to own pets but not to torture them; the permissibility of wars; and political authority.
Now, depending on what kind of noise we’re talking about, the context in which you’re hearing it, etc., noises can cause all sorts of harms — irritation, certainly, but also lost sleep, headaches, or even hearing loss. But the effects of bullets entering your body are typically way worse! Fair enough. But is this the only difference between firing soundwaves and bullets without prior consent?
It’s really not. For example, one difference that’s relevant from a rights perspective is that a great many normal cases of Person A talking to Person B when Person B would rather they didn’t talk to them are cases where Person A holds a reasonable belief that there’s at least a non-negligible chance that Person B will welcome being talked to. (In fact, I suspect that the great majority of cases are like this.) Cases where Person A shoots bullets into Person B while holding the reasonable but mistaken belief that Person B would welcome being shot are…very rare.
Several points are worth making. First, utilitarianism also makes sense of this distinction. After all, given that rights are societal heuristics, if they turned out to be wildly unpredictable, such that one had no idea whether they were violating rights, that would be a disaster from a consequentialist perspective. Thus, consequentialism explains, rather than assumes, why this is a salient distinction. Second, A can only predict that B will enjoy being talked to if most people like B enjoy being talked to. However, if most people like being talked to, that means that the justifiability of talking to people relates to its fulfilling people’s desires—making them better off—which is a consequentialist notion. Finally, it’s permissible to talk even in cases where one doesn’t think the other person will appreciate what they say. For example, if I tell my child not to have any more dessert, I may predict that they won’t enjoy it, but that action is permissible.
Another relevant difference is that it’s often difficult or even impossible to secure permission to talk to someone without, well, talking to them. Shooting people isn’t like that. You don’t have to shoot a couple of bullets at someone first to see if they like it. You can just ask, “Would you by any chance be amenable to me shooting you?” and then you’re talking, not shooting.
This difference is practically relevant; however, it doesn’t reflect what is fundamentally salient about the distinction. Imagine, for a moment, that we lived in a society without a word for gun or bullet. Thus, the only way to ask whether someone wanted to be shot was to shoot them and ask them if they want to repeat what just happened. In this world, shooting random people would still be impermissible.
A third relevant difference, especially if we’re talking about soundwaves in general and not just talking, is that we often feel that there are (at best) competing rights claims at play in soundwave situations in a way that typically isn’t true when people shoot each other. If John stays out until dawn drinking whiskey on Friday night and a few hours after he goes to sleep he’s woken up by the noise of his neighbor Jerry mowing his (Jerry’s) lawn, we tend to think that however little John might like it, he’ll just have to get over it if there’s nothing he can do on his end to block out the noise — because we think Jerry has a right to mow his own lawn. And notice that this seems correct even though the bad consequences for Jerry from his lawn not being mowed that day might well be far less than the bad consequences for John from being woken up so soon! For example, John might experience a pounding headache for hours and Jerry might simply be vaguely displeased about his grass not being completely even.
Two points are worth making. First, my case was specifically about talking, which one doesn’t have an unrestricted right to do—particularly not in ways that violate rights. If talking were, much like shooting people, a rights violation, then doing it would be impermissible, whatever its utility. Second, frequently in society we don’t have total access to information, and rights are imperfect instruments. Thus, there will be cases when rights don’t rule out egregious harm (e.g. causing suicide), and others when they rule out harmless action (e.g. intellectual property rights, for property that I haven’t already purchased).
Thus, rights don’t always maximize well-being—rather, they are best explained as well-being-maximizing heuristics. If, however, we want to capture the morally salient features of the situation, it would seem that Jerry waking John is pretty immoral. If Jerry knew about the harm to John, and acted anyway, he’d be a total asshole.
Additionally, if it began to maximize hedonic value for things that we currently don’t think of as rights to be enshrined as rights, we would think that they should be recognized as rights. For example, we don’t think it’s a violation of rights to look at people, but if every time we were looked at we experienced horrific suffering, we would think it was a rights violation to look at people.
The first sentence is just wrong. There are plenty of things that might maximize hedonic value that no one would normally think should be enshrined as rights. (Enshrining the right of people otherwise in danger of dying of kidney failure to a spare kidney would plausibly maximize hedonic value.) The second, though, is under-described but plausibly correct — because we think having “horrific” suffering inflicted on us is exactly the sort of thing against which we have a right.
We’ll return to that point in a moment, but first note that we have rights against some kinds of harms but not against others. You have a right against being killed, for example, but you don’t have a right against having your heart broken. And degree of harm is often very relevant to which things you can plausibly be said to have rights against.
My first sentence is too broad—one could imagine it being optimific to give the utility monster the right to free food from the flesh of every human; however, that would nonetheless not be considered a right by most people. This is one of the few things that Ben got right, though not for the reason that he gave. The right to bodily autonomy entails that the government can’t take one’s organs, and that is a more important right, one that generally has better outcomes.
The fact that we have rights against being killed but not against heartbreak is best explained by utilitarianism. While heartbreak can be very painful—certainly worse than assault in some cases—a world in which breaking someone’s heart was outlawed would be very, very bad from the standpoint of utility. The freedom to break another’s heart is important for relationships—relationships would be dysfunctional if one could never leave. Thus, like nearly every other point discussed so far, the best account is a utilitarian one.
Even when we’re specifically talking about bodily autonomy, that comes in degrees. It doesn’t seem crazy, for instance, to say that laws against abortion are a far more profound violation of bodily autonomy (and hence far less likely to be justifiable by weighing competing values) than vaccine mandates. It might be reasonable to (a) refuse to let a mental patient leave the hospital because you judge them to be a threat to themselves and/or others but (b) have moral reservations (even if we’re assuming a time and a place where it’s entirely up to the doctor) about giving that same patient electroshock therapy against their will even if it’s absolutely true that they would benefit from it.
But the amount of bodily autonomy doesn’t seem relevant to the strength of the right to bodily autonomy. This can be seen with the following example. Suppose that we had control over a massive limb that stretched into the fourth dimension, just sort of flapping around, which we could direct with our minds if we wanted to. If this limb were causing harm, even if it were 99.9999% of our body weight, it wouldn’t be a violation of rights to destroy it. This is because the right to bodily autonomy only matters insofar as it makes people well off. The only reason to care about a right—apart from irrational rule worship—is that it makes people well off.
I don’t really have the intuition that there’s an asymmetry between the justification for beneficial electroshock therapy and for restraining someone. One cause of the asymmetric intuition may just be a negative reaction to electroshock therapy, given its horrific history of being used in gay conversion therapy. Additionally, even if treatment is beneficial, there’s a difference between something being beneficial to do and beneficial to compel. A patient may individually benefit from treatment, but giving the government the right to treat people without their permission would make people terrified to come in and genuinely seek help. There is also significant value in upholding the liberal value of allowing people to choose how their life goes, even if they choose poorly.
The difference between a pure utilitarian framework where you assume all we’re doing is weighing harms against benefits and a rights framework in which we (non-reducibly) have a right against being harmed in certain ways is nicely demonstrated by thinking about the debate started by Peter Singer about whether it’s wrong not to donate excess income to famine relief charities. Whatever position you take on that issue, we’re all going to agree — I suppose some particularly ferocious bullet-biter might disagree? — that it would definitely be wrong to fly to a famine-stricken country and shoot some people yourself. Even if we could be sure that the people you shot would have starved to death — even if, as is plausible if you’re a good enough shot, they would have suffered more if they’d starved instead of being shot — you still can’t do that. In fact, I suspect that most people who Singer has convinced that it’s wrong not to donate excess income to famine relief charities would still think that an expert marksman — good enough at delivering headshots that will kill people quickly — flying to a famine-stricken country to shoot some famine victims would in fact be much, much, much worse than just not donating.
In response to the first part, I’ll just quote the section of my book in progress on this very topic.
Some, like Carson (1983), have argued that utilitarianism has the unacceptable consequence that we should kill people who have a negative quality of life. If their quality of life is negative, they detract overall from the hedonic sum, such that things would be better if they were killed. However, this conclusion strikes many as counterintuitive.
There are two crucial questions worth distinguishing. The first is whether one should kill unhappy people, and the second is whether the world is better when unhappy people die. Utilitarianism would, in nearly all cases, answer no to the first. One can never be particularly confident in one’s judgments about the hedonic value of another person, and killing damages the soul and undermines desirable societal norms. Thus, the cases in which utilitarianism prescribes that unhappy people should be killed are not the cases likely to arise in the real world—the ones that trigger our intuitions. They are cases in which there are no spillover effects, no one would ever find out about the killing, the killing won’t undermine one’s character, and you can know with absolute certainty or near absolute certainty that the person has a bad life which will remain bad for the entirety of its existence.
Thus the cases in which utilitarianism prescribes that sad people should be killed are ones in which the relevant question is far more like the second one posed above, namely, whether the world is better because of the deaths of some people, based purely on facts about the life of the person. This caveat is to distinguish the case from one in which a person is killed to prevent other bad things from happening, such as would occur from killing Hitler. If we imagine a scenario in which there are aliens who know with absolute certainty that a person will be miserable for the next year and then die, who can bring about their death in a way that will be believed to be an accident, producing zero negative effects on the aliens, and maximizing the sum total of well-being in the world, the utilitarian account begins to seem more intuitive.
Thus, the question primarily boils down to the question of whether a person with more sadness than happiness is better off dead. If people are better off dead, then killing them makes them better off. The intuition against killing seems dependent on the notion that it makes the victim worse off. Common sense would seem to hold that people can be better off dead, at least in some cases. If a person is about to undergo unfathomable torture, it seems plausible that they’d be better off dead. So utilitarianism and common sense morality are in 100% agreement that some people are better off dead. The disagreement is merely about whether this applies to all sad people.
However, it seems all theories of well-being would have similar conclusions. Objective list theory would hold that a person is better off dead if the badness of their life is greater than their pursuit of goods on the objective list. Desire theory holds a person is better off dead if they have more things in their life that they don’t desire than things they do desire.
Thus, the argument can be formulated as follows.
Premise 1 If killing people makes them better off, then you should kill people, assuming that there will be no additional negative side effects. This principle has independent plausibility; a morality that isn’t concerned with making victims better off seems to be in error. Additionally, this premise follows from the intuitive Pareto principle, which states that something is worth causing if it’s better for some and worse for none.
Premise 2 Killing people who have a negative quality of life makes them better off. This seems almost true by definition and follows from all theories of well-being.
Therefore, you should kill people who have a negative quality of life, assuming that there will be no additional side effects.
Premise 3 If you should kill people who have a negative quality of life, assuming that there will be no additional side effects, then the hedonistic utilitarian judgments about killing people who have more sadness than happiness are plausible. The objective list and desire theories also hold that you should kill people under similar circumstances, relating to desire and objective-list fulfillment; therefore, this plagues all theories and is not a reason to reject hedonistic utilitarianism.
Therefore, the hedonistic utilitarian judgments about killing people who have more sadness than happiness are plausible.
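Schematically, the argument’s core inference can be rendered in first-order terms (a minimal sketch; the predicate names are mine, not drawn from the book):

```latex
% A minimal first-order rendering of the argument; predicate names are illustrative.
\[
\begin{aligned}
\textbf{P1: } & \forall x\,\bigl(\mathrm{BetterOff}(\mathrm{kill},x) \land \mathrm{NoSideEffects}(x) \rightarrow \mathrm{ShouldKill}(x)\bigr)\\
\textbf{P2: } & \forall x\,\bigl(\mathrm{NegativeQoL}(x) \rightarrow \mathrm{BetterOff}(\mathrm{kill},x)\bigr)\\
\textbf{C: }  & \forall x\,\bigl(\mathrm{NegativeQoL}(x) \land \mathrm{NoSideEffects}(x) \rightarrow \mathrm{ShouldKill}(x)\bigr)
\end{aligned}
\]
```

The conclusion follows from the premises by simple universal instantiation and modus ponens, so all the work, for the utilitarian, lies in defending P1 and P2.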
One might object that a notion of rights would prevent this action. I’ve already provided a litany of arguments against rights. However, even if one believes in rights, it’s hard to make sense of a notion of rights on which they’re violated by acts that make people better off. For example, it is not a violation of rights to give unconscious people surgery to save their lives, even if they haven’t consented, by virtue of their being unconscious. Thus, a notion of rights is insufficient to salvage this objection. A right that will never make anyone better off doesn’t seem very plausible.
Additionally, I’ve previously argued that there is no deep distinction between creating a new person with a good life and increasing the happiness of existing people. If this is true and it’s bad to create miserable people, then it would seem to be permissible to kill miserable people, given the extended ceteris paribus clause that has been stipulated.
Several more arguments can be provided for this conclusion. Imagine that a person is going to have a bad dream. It seems reasonable to make them not have that dream, if one had the ability to make their sleep dreamless, rather than containing a miserable dream. Similarly, if a person would be miserable for a day, it seems reasonable to make them not experience that day, as long as that would produce no undesirable consequence. However, there’s no reason this principle should only apply to limited time durations. If it’s moral to cause a person not to experience a day because it would contain more misery than joy, then it would also seem reasonable to make them not experience their entire life.
Additionally, it seems reasonable to kill people if they consent to being killed and live a terrible life. However, if they don’t consent because of error on their part, then it would be reasonable to fix the error, much like it would be reasonable to force a deluded child to get vaccinated. Killing them seems plausibly analogous in this case.
Finally, there are many features of the scenario that undermine the reliability of our intuitions. First, there’s status quo bias, given that one upsets the status quo by killing someone. It seems much more intuitive that a person is better off dead if that occurs naturally than being killed by a particular person, which shows the influence of status quo bias. Second, we rightly have a strong aversion to killing. Third, it’s very easy to imagine acts becoming societal practices when evaluating their morality. However, killing unhappy people would clearly be a bad societal practice. Fourth, there’s an intuitive connection between killing innocent people and viciousness, showing that character judgments may be behind the intuition. Fifth, the scenario is deeply unrealistic, involving total certainty about claims that we can’t really know in the real world, meaning our intuitions about the world are unlikely to be reliable. It also requires stipulating that a person will never be able to be helped for their misery. Sixth, this prescription is the type that has the potential to backfire, given that it would be bad if people acted on it in any realistic situation. Seventh, this principle seems somewhat related to the ethics of suicide, which people naturally have a strong aversion to.
Burgis’ view seems to rely on a distinction between doing and allowing. You can allow rights to be violated, as long as you are not the one doing the rights violating. However, such an account is untenable, as has been argued at length by Bennett, Kagan, and most notably me. To see my response to this, look under “Objection 1: The No Good Account Objection.”
The case of the marksman would obviously not be justified by utilitarianism unless we make wildly implausible stipulations. We’d have to assume that it’s guaranteed no one will find out about what they’re doing, that they know the people won’t survive and will live net-negative lives, that they have the ability to be a perfect marksman, and so on.
The reason we may find this unintuitive is likely because we imagine that it is, in some way, bad for the victims. However, on both objective list theory and hedonism—which together represent the overwhelming majority view—this action would be good for the victims. The only other theory of the things that make people well off is desire theory, which is wildly implausible, as I argue here and here.
If one doesn’t hold that the badness of killing relates to its being bad for the victim, then there’s a rather puzzling verdict. It begins to seem self-centered for one to hold that it would be good for a person if they were killed, but that, nonetheless, they shouldn’t be killed. The badness of killing does seem to be grounded in facts about the victim.
Additionally, if we hold that it’s good for the victims, and thus overall, if they die painlessly, then it seems as though we should hope for the victims to die. However, if we should hope for the deaths of the victims, then it seems that it would be good to kill the victims. After all, it would be very strange for the correct morality to hold that one should hope that something happens, but that one isn’t supposed to bring it about.
There is a reason that it makes sense to terminate the life of the terminally ill who are in immense pain. That reason is grounded in it being good for them. If a person is about to starve to death, they are functionally terminally ill—and in immense pain. It thus makes sense to terminate their life.
To consider this more impartially, imagine that we knew that lightyears away some aliens were about to die. They were in immense pain. We could bring about their inevitable death, leading to less pain, and no one would ever find out about it. It seems immensely clear in this case that doing so would be the right thing. This is especially true if it would make the aliens better off. However, it’s quite trivial that painlessly ending the lives of those whose remaining segments of life are bad makes them better off.
Another of Matthew’s examples is looking at a house vs. entering the house. We have a right against people entering our homes without our permission, but not against them looking at our homes. True! But why? Matthew thinks the difference is about harm but that doesn’t really seem to capture our intuitions about these cases — we don’t typically think people have a right against having their homes entered only when they’ll experience some sort of harm as a result. If Jim enjoys watching Jane sleep, for example, and he knows she’s a very heavy sleeper who won’t hear him slip in her window and pull up a chair by her bed to watch — and he leaves long before she wakes up — this is surely the kind of thing Jane has a very strong right against. Part of the difference between that and looking at her house is about property rights (the kind even socialists believe in, the right to personal as opposed to productive property!) but there’s part of it that’s not about that, and we can draw out that distinction nicely by imagining that he’s watching her sleep through high-powered binoculars from just off her property. Jane may have a legal right against this, and she certainly has a moral right against it, because it’s an invasion of privacy — even if she experiences no harm whatsoever as a result.
Before moving on to (2) in Matthew’s list of general arguments against rights, one quick note about methodology that the Jane/Jim case brings out nicely. If two moral views both have the same result in some instance — for example, there are many cases in which we normally think people have some right where that can be explained either in utilitarian terms or in terms of non-reducible rights — a useful way of deciding between them is to consider cases (which might have to be hypothetical ones) where the frameworks would diverge. In act-utilitarian terms, it’s a little tricky to explain what’s wrong with Jim’s actions. There are moves the act-utilitarian could make here, but it’s a little tricky. In terms of bog-standard rights assumptions, though, the wrongness is straightforward.
In my book in progress, I also argue that utilitarianism gives the best account of privacy rights. What follows thus largely quotes the book.
One possible objection to utilitarianism is that it doesn’t adequately account for the value of privacy. After all, utilitarianism would hold that violations of privacy are bad if and only if they cause harm that negatively impacts the mental states of people. If there were people spying on you all the time, gaining immense joy from it, and you never found out, utilitarianism holds that that state of affairs would be good overall. Many people find this a good reason to reject utilitarianism. I’m not convinced!!
Objection 1: Aliens
Suppose that there were a trillion aliens who experienced, per second, the sum total of all the suffering experienced during the Holocaust by all of its victims. They could violate every single person’s privacy up to one quadrillion times per second, resulting in the possibility of about 7.6 septillion privacy rights violations every second (wow!). Each time they violate privacy, their suffering diminishes slightly, such that violating the privacy rights of people a trillion times reduces their hedonic state to a neutral amount. Thus, if they violate people’s privacy 7.6 septillion times, they’ll be in a state of unfathomable bliss, experiencing more per-second satisfaction than all humans ever. The humans never find out about these privacy violations.
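For concreteness, here is the arithmetic behind the per-second figure (a minimal sketch; the roughly 7.6 billion human population is my assumption, since the text doesn’t state it):

```latex
% Arithmetic behind the 7.6 septillion figure; the ~7.6 billion human
% population is an assumption, not stated in the text above.
\[
\underbrace{7.6 \times 10^{9}}_{\text{humans}}
\times
\underbrace{10^{15}}_{\text{violations per human per second}}
= 7.6 \times 10^{24},
\]
```

which is 7.6 septillion, matching the stated figure if read per alien per second.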
If privacy rights really are intrinsically valuable independent of hedonic considerations, then, given that the aliens violate privacy 7.6 septillion times per second, they would be committing the single worst act in history. The badness of the Holocaust, slavery, and a global nuclear war killing everyone would pale in comparison to the badness of their action. However, not only does their action not seem like the worst action ever, it seems positively good. If their actions produce no negative mental states for anyone, it seems rather cruel to condemn them to undergo the total suffering of the Holocaust every single second.
One might have the intuition that the aliens are acting in a seriously morally wrong way. If so, perhaps there is a fundamental divergence of intuitions here. However, several additional things are worth reflecting on.
First, imagine changing the scenario so that we humans were in the aliens’ position. In order to avoid enduring the total suffering of the Holocaust every second, we had to constantly spy on lots and lots of aliens. I wouldn’t want to endure the Holocaust every second, and it would seem quite morally appropriate to spy on lots of aliens without their knowledge to avoid such a grisly fate.
Second, even if one has the intuition that it would be morally wrong, it’s hard to imagine having the intuition that it would be the single worst act in history by an unfathomable number of orders of magnitude. The Holocaust, Jim Crow, slavery, and many other things seem clearly worse.
Third, imagine that the aliens, without the humans knowing that there was a connection between the two actions, offered the humans half of the utility that they gained from spying on them. If this happened, merely by distributing the gains, the humans would now be experiencing more joy than has been experienced so far in history. It’s hard to imagine that they’d be worse off. If both the humans and aliens are better off, it’s hard to imagine how the action would be wrong—especially if they were unfathomably better off.
One might object that the amount of utility gained is worth a privacy violation, even though privacy violations are bad. This, however, misunderstands the scenario. Each privacy violation has virtually no impact on utility. The only reason the privacy violations have a significant effect is because they violate rights septillions of times per second. Each privacy violation produces virtually no utility.
One might object that privacy violations have declining marginal disvalue. This, however, runs into a few issues.
1 This intuition is what we’d expect on utilitarianism. There is declining marginal harm in terms of utility caused by one more rights violation.
2 This doesn’t seem to track our intuitions very well if there are distinct types of privacy violations. For example, if one were spied on in the restroom by a creep who put a camera in, that doesn’t seem to undermine the harm of NSA surveillance. We can thus stipulate, for the purpose of the thought experiment, that each time rights are violated, it’s done in a new way, to avoid repetition (these aliens are very creative, finding billions of different ways to violate privacy every second!).
3 To avoid concerns about decreasing marginal value, we could suppose that the privacy violations did not repeat. Instead of the aliens spying on us, they spied on a much larger alien civilization, with a googolplex of aliens, each of the spying aliens spying on each individual alien only once, but spying on 7.6 septillion aliens per second. The larger aliens never find out about it, and in fact it would be metaphysically impossible for them to ever find out about it.
We could imagine a related scenario. Imagine if the air was conscious and was spying on us every second. It nevertheless seems like the air would not be acting particularly immorally, and certainly would not be committing the worst atrocity in human history.
Objection 2: Viciousness
Normative judgments are often intricately linked with character judgments. Thus, when thinking of violations of privacy rights, our judgments may be influenced by thinking about whether the person who violates privacy rights is a bad person, which they usually are in the real world. This viciousness account explains many of our real world moral judgments about when privacy matters. Even if we’re not harmed, a person who is reckless and violates our privacy in a way that would bother us if we found out about it seems like a bad person and has certainly acted poorly. It is largely analogous to drunk driving. Drunk driving isn’t always harmful, however, one shouldn’t drive while drunk because it’s reckless, even if no accidents actually happen.
If we consider real world scenarios, the viciousness account combined with a consequentialist account seems to explain our judgments. We don’t mind that parents change their children’s diapers or that people look at other people, often gaining important information about them, because in the real world such things usually have good consequences and don’t indicate viciousness, or anything else defective about one's character.
Additionally, the badness of privacy violations seems desire dependent. If one waives their privacy rights, we don’t generally think they’re worse off. It’s only when one doesn’t consent and expects to be harmed that privacy violations start to seem bad.
Objection 3: Heuristics
Our moral judgments are also largely explainable in terms of heuristics. In the real world privacy violations are often harmful, for example those done by government agencies or private people. Thus, it’s not surprising that we’d find privacy to be intrinsically valuable. If every time a person violates privacy in the real world it’s bad, we’d develop the judgment that it’s always bad, even in counterfactual scenarios in which it’s not harmful.
If every time a person pressed a button bad things happened, we might find it morally bad to press the button, even in scenarios in which pressing the button wouldn’t actually be harmful. The drunk driving case above is a prime example of this.
Objection 4: What if everyone did it?
Imagine we were deciding between the following worlds.
World 1: Everybody constantly (dozens of times per second) violates everybody else’s privacy, and has very high positive utility.
World 2: No one violates privacy, but everyone is miserable.
World 1 seems clearly better. However, if privacy violations were intrinsically very bad, then a world in which people’s privacy rights were violated constantly could be worse than one in which everyone was miserable. This shows that our appreciation of privacy rights is merely instrumental—privacy rights don’t seem to matter in and of themselves.
All of the violations of privacy rights that are found objectionable involve scenarios in which violations of rights cause lots of harm. However, if we stipulate that lots of privacy violations produce a Pareto improvement from the standpoint of well-being, it begins to seem much more intuitive that people should violate each other’s privacy rights.
Thus, it seems like reflection on what fundamentally matters reveals that privacy does not matter intrinsically—it only matters as a means to an end.
There Is No Adequate Principled Defense of Privacy
It’s unclear what makes privacy intrinsically valuable or how to maintain the intrinsic value of privacy in the absence of utilitarian considerations. Merriam-Webster defines privacy as “a: the quality or state of being apart from company or observation.” However, this seems clearly not to be intrinsically valuable. There’s nothing intrinsically immoral about observing people in public, for example. It also seems odd to say that inviting friends over undermines your privacy.
The next definition they give is “b : freedom from unauthorized intrusion.” However, this doesn’t seem intrinsically valuable either, depending on how “unauthorized” is defined. When a person observes another in public, their observation is not authorized. However, looking at people in public places is clearly not an objectionable violation of privacy.
DeCew (2018) characterizes violations of privacy as
“1 Intrusion upon a person’s seclusion or solitude, or into his private affairs.
2 Public disclosure of embarrassing private facts about an individual.
3 Publicity placing one in a false light in the public eye.
4 Appropriation of one’s likeness for the advantage of another (Prosser 1960, 389).”
However, this is clearly not an adequate basis for what fundamentally matters about privacy. The first item isn’t clear—a definition of privacy can’t very well appeal to private affairs or seclusion or solitude. Those concepts seem to be rough synonyms of privacy. What determines whether something is a “private affair”?
The second feature is consistent with utilitarianism. Obviously public disclosure of embarrassing facts causes hedonic harm. The third and fourth also can be explained by utilitarian considerations.
Parent (1983) attempts to provide a definition of privacy (p. 269):
“Privacy is the condition of not having undocumented personal knowledge about one possessed by others. A person's privacy is diminished exactly to the degree that others possess this kind of knowledge about him.”
Parent clarifies (pp. 269–271):
“A full explication of the personal knowledge definition requires that we clarify the concept of personal information. My suggestion is that it be understood to consist of facts about a person' which most individuals in a given society at a given time do not want widely known about themselves. They may not be concerned that a few close friends, relatives, or professional associates know these facts, but they would be very much concerned if the information passed beyond this limited circle. In contemporary America facts about a person's sexual preferences, drinking or drug habits, income, the state of his or her marriage and health belong to the class of personal information. Ten years from now some of these facts may be a part of everyday conversation; if so their disclosure would not diminish individual privacy. This account of personal information, which makes it a function of existing cultural norms and social practices, needs to be broadened a bit to accommodate a particular and unusual class of cases of the following sort. Most of us don't care if our height, say, is widely known. But there are a few persons who are extremely sensitive about their height (or weight or voice pitch).2 They might take extreme measures to ensure that other people not find it out. For such individuals height is a very personal matter. Were someone to find it out by ingenious snooping we should not hesitate to talk about an invasion of privacy. Let us, then, say that personal information consists of facts which most persons in a given society choose not to reveal about themselves (except to close friends, family, . . .) or of facts about which a particular individual is acutely sensitive and which he therefore does not choose to reveal about himself, even though most people don't care if these same facts are widely known about themselves. Here we can question the status of information belonging to the public record, that is, information to be found in newspapers, court proceedings, and other official documents open to public inspection. (We might discover, for example, that Jones and Smith were arrested many years ago for engaging in homosexual activities.) Should such information be excluded from the category of personal information? The answer is that it should not. There is, after all, nothing extraordinary about public documents containing some very personal information. I will hereafter refer to personal facts belonging to the public record as documented. My definition of privacy excludes knowledge of documented personal information. I do this for a simple reason. Suppose that A is browsing through some old newspapers and happens to see B's name in a story about child prodigies who unaccountably failed to succeed as adults. B had become an obsessive gambler and an alcoholic. Should we accuse A of invading B's privacy? No. An affirmative answer blurs the distinction between the public and the private. What belongs to the public domain cannot without glaring paradox be called private; consequently it should not be incorporated within our concept of privacy. But, someone might object, A might decide to turn the information about B's gambling and drinking problems over to a reporter who then publishes it in a popular news magazine. Isn't B's privacy diminished by this occurrence?3 No. I would certainly say that his reputation might well suffer from it. And I would also say that the publication is a form of gratuitous exploitation. 
But to challenge it as an invasion of privacy is not at all reasonable since the information revealed was publicly available and could have been found out by anyone, without resort to snooping or prying. In this crucial respect, the story about B no more diminished his privacy than would have disclosures about his property interests, say, or about any other facts concerning him that belonged to the public domain.”
This account isn’t very different from the utilitarian account. By having privacy relate generally to information that most people wouldn’t want revealed, it captures privacy as a heuristic that generally produces good outcomes. However, by broadening the definition to include unwanted private encroachment (e.g., about the height of one who is very sensitive about their height), the definition maintains that discovering such harmful information is immoral. However, where this account does differ from the utilitarian one, it is flawed.
Imagine the following case. A person walking in public says something rude to someone else. The other person records them and posts the video online. Billions of people subsequently find out about it and turn against that individual. It very much seems like their privacy is violated. This is because they were severely harmed by the disclosure. Or consider a case in which a magazine publishes private information about someone. By this definition, reposting that information to a billion people, even if it’s very embarrassing, wouldn’t be a violation of privacy, because that information is already on the public record.
Additionally, consider a case in which a person is convicted in a court of law of a heinous crime. That is on the public record. However, it would still be a privacy violation to broadcast to a billion people that the person was convicted of a particular crime.
Additionally, Parent defends the desirability of privacy primarily by appealing to consequentialist considerations, writing (pp. 276–277):
“Lest you now begin to wonder whether privacy has any value at all, let me quickly point to several very good reasons why people in societies like ours desire privacy as I have defined it. First of all, if others manage to obtain sensitive personal knowledge about us they will by that very fact acquire power over us. Their power could then be used to our disadvantage. The possibilities for exploitation become very real. The definite connection between harm and the invasion of privacy explains why we place a value on not having undocumented personal information about ourselves widely known.
“Second, as long as we live in a society where individuals are generally intolerant of life styles, habits, and ways of thinking that differ significantly from their own, and where human foibles tend to become the object of scorn and ridicule, our desire for privacy will continue unabated. No one wants to be laughed at and made to feel ashamed of himself. And we all have things about us which, if known, might very well trigger these kinds of unfeeling and wholly unwarranted responses.
“Third, we desire privacy out of a sincere conviction that there are certain facts about us which other people, particularly strangers and casual acquaintances, are not entitled to know. This conviction is constitutive of "the liberal ethic," a conviction centering on the basic thesis that individuals are not to be treated as mere property of the state but instead are to be respected as autonomous, independent beings with unique aims to fulfill. These aims, in turn, will perforce lead people down life's separate paths. Those of us educated under this liberal ideology feel that our lives are our own business (hence the importance of personal liberty) and that personal facts about our lives are for the most part ours alone to know. The suggestion that all personal facts should be made available for public inspection is contrary to this view. Thus, our desire for privacy is to a large extent a matter of principle.”
The first two arguments are clearly consequentialist. The third argument is less clearly consequentialist, but it can still be made by a consequentialist. Society is better off when strangers are barred from knowing deep, intimate information about people. The liberal ethic is plausibly optimific.
If the third point is non-consequentialist, then it is hard to make sense of it in combination with the definition of privacy. Why would the information that strangers are not entitled to know depend on whether its being known would cause harm, on whether most people wouldn’t want it known, or even on whether it’s currently on some obscure public record?
Objection 2
2 If we accept that rights are ethically significant, then there’s some number of rights violations that could outweigh any amount of suffering. For example, suppose that there are 100 trillion aliens who will experience horrific torture that gets slightly less unpleasant for every leg of a human that they grab, without the humans’ knowledge or consent, such that if they grab the legs of 100 million humans the aliens will experience no torture. If rights are significant, then the aliens grabbing the legs of the humans, in ways that harm no one, would be morally bad. The sheer number of rights violations would not only be bad, they would be the worst thing in the world. However, it doesn’t seem plausible that the aliens should have to experience being burned alive when no humans even find out about what’s happening, much less are harmed. If rights matter, a world with enough rights violations, where everyone is happy all the time, could be worse than a world where everyone is horrifically tortured all of the time but where there are no rights violations.
This one can be dispensed with much more easily. The conclusion just straightforwardly doesn’t follow from the premise. To see that, let’s strip the example down to a simpler and easier to follow version that (I think, see below) preserves the key point.
As Matthew himself reasonably pointed out to me in a different discussion, our intuitive grasp on situations gets hazier when we get up to truly absurdly large numbers, so let’s at least reduce both sides of the equation. 100 trillion is a million times more than 100 million. One human leg being non-consensually but harmlessly grabbed by an alien will mean a million aliens won’t experience the sensation of being burned alive. Matthew thinks the alien should grab away. I agree! In fact, it’s not clear that the human’s rights would be violated at all, considering that any remotely psychologically normal (or really even psychologically imaginable) human would retroactively consent to having their leg grabbed for unfathomably more trivial harm-prevention reasons. But even if we do assume that the one human is having his rights violated, that assumption just gets you to “any rights we might have against certain extremely trivial violations of personal space are non-absolute,” not “there are no non-reducible moral rights.”
In this case, Ben has totally missed the point of the case—hence his confused claim that this can “be dispensed with much more easily.” It is not that the 100 trillion aliens need to grab the legs of 100 million humans in total to avoid experiencing being burned alive—it’s that they need to grab the legs of 100 million humans each, each time without the humans finding out about it. Thus, each marginal leg grab produces only minuscule benefit, but the collective leg grabs produce enough benefit to avert horrific torture.
Ben also ignores the second case I gave in this example. “If rights matter, a world with enough rights violations, where everyone is happy all the time could be worse than a world where everyone is horrifically tortured all of the time but where there are no rights violations.”
To see why not, think about a more familiar and less exciting example — pushing a large man off a footbridge to save five people from a trolley. Here, the harm is of the same type (death) and five times as much of it will happen if you don’t push him, but most people’s intuition about this case is that it would be wrong to push anyway. That strongly suggests that most of us think there are indeed moral rights that can’t be explained away as heuristics for utility calculations.
Contrast that to a trolley case structurally more like Matthew’s aliens-in-agony scenario, although at a vastly smaller scale. As always, five people are on a trolley track. As in a familiar variant, there’s a lever that can be pulled to divert the train onto a secondary track. But in this version the second track is empty, so you aren’t killing anyone by pulling the lever. There happens, though, to be someone standing in front of you with his hand idly resting on the lever. His eyes are closed and he’s listening to loud music on his AirPods and he has no idea what’s going on. By the time you got his attention, the five people would be dead. So you grab his hand and yank it.
If we were just considering this last example, you could end up drawing utilitarian conclusions…but the example just before nicely demonstrates why that would be a mistake.
What I have said is an argument against the intuitive judgment about the organ harvesting case. Thus, in turn, it is an argument against the judgment that you should push the fat man in the trolley case. Pointing to the judgment that you should push the fat man in the trolley case, in response to an argument against this very conclusion, would be begging the question.
A final thought about Matthew’s point 2 before moving on to 3 — rereading some of his formulations quoted above (particularly the one about the number of rights violations involved in his original version of the example allegedly being something someone who believes in non-reducible rights would have to regard as the worst thing in the world), maybe my simplification of the example — from 100 trillion aliens not experiencing the sensation of burning alive vs. 100 million humans not having their legs grabbed, down to a million aliens and one human — missed something important in Matthew’s example. Maybe his idea goes something like this:
“Sure, grabbing one leg to save a million aliens from unspeakable torment might make sense given standard non-utilitarian assumptions about rights. But remember, in each individual instance of leg-grabbing in the original example, the effect of that individual act will be to reduce the aliens’ collective suffering by one one-hundred-millionth — the aliens would barely notice — so when we consider each one individually, it would be too trivial to justify the rights violation.”
This is not what I meant. Once again, what I meant is that each alien needs to grab 100 million legs to avert their own torture.
If so, I’d say two things. First, just as Matthew is correct to point out that intuitions can be confused when we’re talking about very large numbers, it’s similarly hard to gauge things with very small fractions. In this case, I’m not sure there even is such a thing as a one one-hundred-millionth reduction of the sensation of being burned alive. I suspect that sensations don’t work like that. Perhaps in some way that’s totally inaccessible to human minds, it does work like that for aliens. At any rate, I don’t really know what “they’ll experience one one-hundred-millionth less suffering than the sensation of being burned alive” means, and frankly neither do you, so asking me to have a moral intuition one way or the other about whether it’s significant enough to justify grabbing someone’s leg without their knowledge or consent is deeply unlikely to shed much light on my overall network of moral intuitions.
This seems like a dramatic failure of imagination. We’re all perfectly comfortable with the notion of hurting less. In fact, things may hurt less in a way so slight that we don’t notice the difference. A reduction in torture of 1 in 100 million would be an imperceptible reduction in suffering, such that the suffering would be brought to zero if the reduction occurred 100 million times.
However, even if one finds the aforementioned notion baffling, it can be replaced, for the purposes of the hypothetical, with a mere 1 in 100 million chance of eliminating the torture each time.
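To see why this substitution preserves the utilitarian arithmetic, consider a minimal sketch, writing T for the total disvalue of the torture (a symbol introduced here for illustration):

```latex
% Why the chancy version preserves the expected-utility arithmetic.
% T is the total disvalue of the torture (an illustrative symbol).
\[
\underbrace{\tfrac{1}{10^{8}} \cdot T}_{\text{expected benefit of one grab, chancy version}}
\;=\;
\underbrace{\tfrac{T}{10^{8}}}_{\text{benefit of one grab, imperceptible-reduction version}}
\]
```

Either way, each of the 100 million grabs carries the same marginal expected benefit, which is all the expected-utility calculus cares about.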
Second, even if it couldn’t be morally justified in weighing rights against consequences when each individual leg-grabbing was considered in isolation, it just wouldn’t follow that all hundred million leg-grabbings were unjustified when considered in tandem. “This will be the impact of doing this a hundred million times, including this one” is morally relevant information.
Given that Ben has misunderstood my argument, this misses the point, and there is nothing in it that I need to respond to.
Objection 3
3 A reductionist account is not especially counterintuitive and does not rob our understanding of or appreciation for rights. It can be analogized to the principle of innocent until proven guilty, which is not literally true. A person’s innocence until demonstration of guilt is a useful legal heuristic, yet a serial killer is guilty even if their guilt has not been demonstrated.
It’s not counterintuitive until we start to think about the many examples (like the first of the two trolley cases above) where it has wildly counterintuitive consequences! The Innocent Until Proven Guilty analogy, I think, starts to look less helpful the more we poke into it. One thing IUPG is absolutely not is a heuristic or anything like one. It’s not a useful rule of thumb — it’s an unbending legal rule: someone (who may or may not be actually innocent) who hasn’t been proven guilty has the legal status of an innocent person.
This is false; innocent until proven guilty, much like the concept of rights, is a useful heuristic in two senses. First, treating people as innocent until proven guilty generally produces good consequences. Second, in a purely descriptive sense, most people who have not been proven guilty are, in fact, innocent. Thus, it is a useful rule of thumb that is also enshrined as a legally inviolable right. This is for good reason—making it a right that’s firmly locked in produces good outcomes.
While we’re talking about IUPG, by the way, it’s worth pausing to ask whether pure utilitarianism can make sense of why it should be the legal standard. Think about the classic justification for it — Blackstone’s Ratio (it’s better for ten guilty persons to go free than one innocent person to be imprisoned). That makes perfect sense if we think there’s something like a categorical moral prohibition on the state punishing the innocent that’s so important it can outweigh the benefits of saving the victims of those ten guilty people. But it’s at the very least not obvious that the utility calculus will work out that way.
Utilitarianism has a very good explanation of the morality of the criminal justice system. The horrific mistreatment in the criminal justice system, resulting in rape on a mass scale, for example, gives us very good reason not to risk incarcerating innocent people. At the same time, utilitarianism, unlike deontology, explains why it’s worth risking the incarceration of some innocent people in order to incarcerate the guilty.
Additionally, being in prison makes people more likely to reoffend. Prison causes vast amounts of suffering, which makes criminal justice reform urgently needed. To decide whether the innocent-until-proven-guilty standard should be eliminated, we’d need to compare the minimal deterrence benefit against the vast harm of incarcerating innocent people. It is quite clear—which is the reason EA organizations have been sponsoring criminal justice reform—that at least some criminal justice reforms end up being beneficial.
The claim that utilitarianism justifies being too tough on criminals is just a baffling one. It’s the retributivists, who think that people deserve to suffer for doing bad things whatever the consequences, who advocate aggressive punishment. Utilitarians in the real world, whatever Ben’s assessment of what they should advocate, do in fact tend to be the most progressive segment of the population on criminal justice reform.
Everyone—utilitarian and non-utilitarian alike—agrees that a crucial role of the criminal justice system is to deter crime, keep dangerous people locked up, and make people better members of society. Non-utilitarian theories of criminal justice are more punitive because they add extra roles for the criminal justice system that utilitarianism has no need for.
Objection 4
4 We generally think that it matters more to not violate rights than it does to prevent other rights violations, so one shouldn’t kill one innocent person to prevent two murders. If that’s the case, then if a malicious doctor poisons someone’s food, and then realizes the error of their ways, the doctor should try to prevent the person from eating the food, and having their rights violated, even if it’s at the expense of other people being poisoned in ways uncaused by the doctor. If the doctor has the ability to prevent one person from eating the food poisoned by them, or to prevent five other people from consuming food poisoned by others, they should prevent the one person from eating the food poisoned by them, on this view. This seems deeply implausible. Similarly, this view entails that it’s more important for a person to eliminate one landmine that they set down themselves, which will kill a child, than to eliminate five landmines set down by other people — another unintuitive view.
No. None of this actually follows from belief in rights per se, or even from the view that it’s more important not to violate rights than to prevent rights violations (which is itself a substantive extra commitment on top of belief in rights). Here’s the trick: The attempt at drawing out a counterintuitive consequence relies on the rights-believer seeing “poisoning food and then not stopping it from being eaten” (or “setting a landmine and not eliminating it”) as a single action, but the intuition itself relies on thinking of them as two separate actions, so that the poisoning/landmine-setting is in the background of the decision, and now we’re thinking about a new decision about which poison/landmine to save whom from, and it seems arbitrary to save your own past self’s victims as opposed to someone else’s victims.
This response is totally confused. The relevant question is not whether it counts as one action or two actions—instead, the relevant question is whether or not it’s a rights violation. Laying down a landmine will be a rights violation iff you don’t destroy it before it harms anyone—same with poisoning someone’s food. Thus, if you don’t prevent the one person from eating your food, and instead prevent the five others from eating theirs, you will have violated one person’s rights but prevented five rights violations. However, if you prevent the one person from eating your food, then you’ll have violated no one’s rights, but allowed five rights to be violated. If choosing between a world in which you violate one person’s rights and one in which five others each violate one person’s rights, the deontologist holds that the second world is the one you should prefer.
But here’s the thing: Whichever you think is the right way to cut up what counts as the same action or a new one, you really do have to pick. If you consistently think of these as two separate actions, the rights-believer has no reason to believe the counterintuitive thing Matthew attributes to them. On this view, they’re not choosing between killing and letting die. They’ve committed attempted murder in the past but now they’re choosing who to let die and none of the options would constitute killing.
It doesn’t have to be attempted murder. Suppose I put a landmine down for some good reason, and I know with total certainty that I’ll be able to eliminate it in the future. This wouldn’t be attempted murder, because I predict I’ll eliminate the landmine.
On the other hand, if we somehow manage to truly feel this in our bones as one action (which I don’t know how to do, btw — it seems like two to me), I’m not so sure we’d have the intuition Matthew wants us to have. To see why not, think about a nearby question. Who would you judge more positively — someone who goes to a war zone, intentionally kills one child with a landmine (while simultaneously deciding to save four others from other people’s landmines) or someone who never travels to the war zone in the first place, spending the war engaged in normal peacetime activities, and thus neither commits nor foils a single war crime? “OK, but I saved more children than I killed” would not, I think, get you much moral approval from any ordinary human being.
Whether it’s divided into one or two actions seems morally irrelevant, as Huemer points out. In terms of character evaluations, I’d regard the first as better. However, in terms of actions, the second one would be better, for the reasons I described in the previous article. Given that I am arguing that there is no salient distinction between violating rights and permitting other violations of rights, merely giving the original case to which this was a counterexample does not advance the dialectic. The whole point of this argument is to disprove the notion that the second action should be judged to be better.
Objection 5
People with damaged VMPCs (a brain region responsible for generating emotions) were more utilitarian (Koenigs et al., 2007), proving that emotion is responsible for non-utilitarian judgements. The largest study on the topic (Patil et al., 2021) finds that better and more careful reasoning results in more utilitarian judgements across a wide range of studies.
The studies, I’m sure, are accurately reported here, but the inference from them is as wrong as wrong could be. I won’t go into this in too much depth here because this was a major theme of my first book (Give Them An Argument) but basically:
All moral judgments without exception are rooted in moral feelings. Moral reasoning, like any other kind of reasoning, is always reasoning from some premises, which can be supplied by factual information, moral intuition (i.e. emotional feelings of approval or disapproval), or some combination of the two, but moral intuition is always in the mix any time you’re validly deriving moral conclusions. There’s just no other place for your most basic premises to come from, and there couldn’t be. I don’t doubt that people whose initial emotional reactions (thinking about good and bad consequences) lead them to endorse moral principles, and who henceforth reason in very emotionless ways, end up sticking to utilitarianism more than people who open themselves to ordinary human moral intuitions about things like organ harvesting examples. For precisely similar reasons, I’d be pretty shocked if people with damaged VMPCs weren’t far more likely to be deontic libertarians than people more likely to have regular emotional reactions. (No clue if anyone’s done a study on that, but if you’re a researcher in relevant areas you can have the idea for free!)
Ben claims that the studies are accurately reported, and then he goes on to contradict the results of the studies! No, not all of our moral judgments are caused by emotions. Some are caused by careful, prolonged reflection on what matters. We have significant evidence that when people carefully reflect on what matters, they become far more utilitarian. There’s a reason that people whose VMPCs are damaged, such that they are less emotional, become more utilitarian. What’s Ben’s account of that, if it’s all just emotions? For more on this, see this article.
Ben’s guess that people with damaged VMPCs would be more likely to be deontic libertarians is just wrong—people become six times more likely to push the guy off the bridge after their VMPCs are damaged. The dual process theory, which says that being more careful and reflective makes people more utilitarian, has a mountain of evidence behind it, not just the VMPC case—much of which I discuss in the article linked above.
Objection 6
6 Rights run into a problem based on the aims of a benevolent third party observer. Presumably a third party observer should hope that you do what is right. However, a third party observer, if given the choice between one person killing one other to prevent 5 indiscriminate murders and 5 indiscriminate murders, should obviously choose the world in which the one person does the murder to prevent 5.
There’s absolutely nothing obvious about that! Is a Benevolent Third-Party Observer benevolent because they want everyone to do the right thing, or benevolent because they want the best outcome? Unless you question-beggingly (in the context of an argument against rights) assume that the right thing is whatever leads to the best outcomes, those goals will be in tension, so if the BT-PO holds both we need to find out what principle they’re using to weigh the two goals or resolve conflicts before we can even begin to have the slightest idea what a BT-PO might say about a case where rights violations lead to good consequences.
Ben cut this off before I provided the argument for why they’d have this preference.
Matthew continues the point:
An indiscriminate murder is at least as bad as a murder done to try to prevent 5 murders. 5 indiscriminate murders are worse than one indiscriminate murder; therefore, by the transitive property, a world with one murder to prevent 5 should be judged to be better than 5 indiscriminate murders. If world A should be preferred by a benevolent impartial observer to world B, then it is right to bring about that state of the world. All of the moral objections to bringing about world A would count against world A being better. If despite those objections world A is preferred, then it is better to bring about world A. Therefore, one should murder one person to prevent 5 murders. This seems to contradict the notion of rights.

To put what I said earlier slightly differently:
Unless you beg the question against the rights-believer by assuming these can’t come apart, you have to pick whether the BT-PO wants whatever makes the world better or whatever’s morally preferable (or perhaps goes back and forth between preferring these depending on some further consideration?). If the BT-PO’s consistent principle is to prefer whatever makes the world better, then bringing them up has zero possible argumentative weight against belief in a non-consequentialist notion of rights — that there can be such conflicts and that rights should at least sometimes win is what anyone who says there are non-consequentialist rights is saying. If the BT-PO’s consistent principle is to prefer that everyone does the right thing, on the other hand, then it’s not clear what the source of counter-intuitiveness for the rights-believer is supposed to be here. And that’s still true if the BT-PO’s principle is to apply some further consideration to navigate conflicts between rights and good consequences.
This misstates my argument. No premise in my argument assumes they’d always prefer the better thing. My claim is merely that they would prefer 1 killing to prevent 5 over 5 indiscriminate killings. This is because 1 indiscriminate killing is at least as bad as 1 killing to prevent 5 killings, and 5 indiscriminate killings are worse than 1 indiscriminate killing; thus, by transitivity, 5 indiscriminate killings would be judged by them to be worse than 1 killing to prevent 5.
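To lay the chain out explicitly (a bare restatement, reading $\succ$ as “worse than” and $\succeq$ as “at least as bad as” from the observer’s standpoint):

$$5 \text{ indiscriminate killings} \;\succ\; 1 \text{ indiscriminate killing} \;\succeq\; 1 \text{ killing to prevent } 5.$$

By transitivity, $5 \text{ indiscriminate killings} \succ 1 \text{ killing to prevent } 5$, so the observer must prefer the world with the single preventive killing.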
There’s a primitive notion of a moral third party—imagine God as the paradigm case. The third party observer should hope you do the right thing, but they should also hope you kill one to prevent 5 killings; thus, killing one to prevent 5 killings is the right thing.
Objection 7
7 We can imagine a case with a very large series of rings of moral people. The innermost circle has one person, the second innermost has five, third innermost has twenty-five, etc. Each person corresponds to 5 people in the outer circle. There are a total of 100 circles. Each person is given two options.
1 Kill one person
2 Give the five people in the circle outside of you corresponding to you the same options you were just given.
The people in the hundredth circle will be only given the first option if the buck doesn’t stop before reaching them.
The deontologist would have two options. First, they could stipulate that a moral person would choose option 2. However, if this is the case then a cluster of perfectly moral people would bring about 5⁹⁹ murders, when the alternative actions could have resulted in only one murder, because they’d keep passing the buck until the 100th circle. This seems like an extreme implication.
Secondly, they could stipulate that you should kill one person. However, the deontologist holds that you should not kill one person to prevent five people from being murdered. If this is true, then you certainly shouldn’t kill one person to give five perfectly moral people two options, one of which is killing one person. Giving perfectly moral beings more options that they don’t have to choose cannot make the situation morally worse. If you shouldn’t kill one person to prevent five murders you certainly shouldn’t kill one person to prevent five things that are judged to be at most as bad as murders by a perfectly moral being, who always chooses correctly.
While my instinct is to agree that whatever you can say about killing vs. letting die you can say about killing vs. not preventing killing, the first thing to note about this is that the “5⁹⁹ murders” once you get to the outermost circle aren’t actually murders at all, since by stipulation they’re involuntary. So (at least if we’re considering the morality of everyone in the first 99 circles refusing to murder) this reduces to a classic but extreme killing vs. letting die dilemma — it’s no different from stipulating that, say, the entire human race other than you and the large man on the bridge has been shrunken down to microscopic size by a supervillain who then put a container containing all however-many-billion people on the trolley track. Anti-utilitarian intuitions generally crumble in the face of sufficiently awe-inspiring numbers and that’s what Matthew is relying on here. There’s an interesting question here about whether to take that as an instance of the general problem of humans having a hard time fitting their heads around scenarios involving sufficiently large numbers or whether to take this as a straightforward intuition in favor of a sort of “moral state of exception” whereby an imperative to prevent genocide-level amounts of death overrides the moral principles that would apply in other cases. (Which of these two is correct? Here’s the good news: You don’t really need to decide because nothing remotely like this will ever come up and both answers are compatible with anti-utilitarian intuitions about smaller-scale cases.)
Ben has totally missed the case. As I say in my article, which I linked in my original blog post (and whose linking Ben quotes):
If you’re currently thinking that “moderate deontology says you shouldn’t kill one to save five but should kill one to save 1.5777218 x 10^69,” read the argument more carefully. The argument shows that moderate deontology is internally inconsistent. If you think the argument is just question-begging, or that the deontologist should obviously accept option 1, as some deontologists who heard the argument did before I explained it to them more carefully, read the argument again.
I explicitly preempt Ben’s confusion, and yet he falls into it anyway!
In case it wasn’t clear, I’ll include the clearer section of my book where I describe the case.
We can imagine a case with a very large series of rings of perfectly moral decision makers, who always make the correct decision. The innermost circle has one person, the second innermost has five, third innermost has twenty-five, etc. Each person corresponds to 5 people in the outer circle. There are a total of 100 circles. Each person is given two options.
1 Murder: Kill one person
2 Buck Pass: Give the five people in the circle outside of you corresponding to you the same options you were just given, if there are any. If not, this option is not available, and you must take option one. Thus, the people in the hundredth circle will only be given the first option if the buck doesn't stop before reaching them.
The deontologist would have two options. First, they could stipulate that a moral person would pass the buck. However, if this is the case, then a cluster of perfectly moral people would bring about 5^99 murders, when the alternative actions could have resulted in only one murder, because they’d keep passing the buck until the 100th circle. This seems like an extreme implication.
Secondly, they could stipulate that you should kill one person. However, the deontologist holds that you should not kill one person to prevent five people from being murdered. If this is true, then you certainly shouldn’t kill one person to give five perfectly moral people two options, one of which is killing one person. Giving perfectly moral beings more options that they don’t have to choose cannot make the situation morally worse. If you shouldn’t kill one person to prevent five murders you certainly shouldn’t kill one person to prevent five things that are judged to be at most as bad as murders by a perfectly moral being.
This follows from the following principle.
Optionality: Giving perfectly moral people extra options can’t make things worse.
If optionality is true, then killing one to prevent five killings will be at least as bad as killing one to prevent five perfectly moral people from having two options, one of which is killing one person.
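To make the stakes of the first horn concrete, here is the arithmetic (a trivial check in Python, nothing more):

```python
# If everyone passes the buck through all 100 circles, each of the
# 5**99 people in the outermost circle is forced to take option 1.
deaths_if_everyone_passes = 5 ** 99
deaths_if_first_person_kills = 1

print(f"{deaths_if_everyone_passes:.7e}")  # 1.5777218e+69
```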
Next, Ben says
But as with the leg-grabbing aliens above, the apparent difference between this and a simple dilemma between killing one innocent and letting 5⁹⁹ innocents die is that, considered in isolation, standard anti-utilitarian moral intuitions would seem to recommend individual decisions that, in aggregate, would amount to permitting the deaths of 5⁹⁹ people.
It’s not merely that the individual choices would in aggregate create bad outcomes. It’s that the person in the first circle knows with total certainty that passing the buck will lead to 5^99 killings. Thus, the moderate deontologist would seem to be unable to take the horn of the dilemma which says that you should take the buck-passing option. However, they also can’t say you should kill, because that violates optionality, a nearly self-evident principle.
But (as Larry Temkin emphasizes in his response to “money pump” arguments for transitivity) it’s irrational not to reason about a series of decisions with aggregate effects…in aggregate.
I’ve argued for transitivity here.
A reasonable principle is that the first person in the first ring should do whatever all of the saints in all the rings would agree on if they had a chance to talk it all through together to decide on a collective course of action. If we assume that the “moral state of exception” view is correct, they would presumably all want the person in the first ring to kill the five people in the second one.
But this violates optionality, as I explained and as Ben ignores.
(Just for fun, by the way, since that “everyone” would include the five victims, in this scenario it would be more like assisted suicide than murder.)
No—it involves murdering other people.
If it’s not correct, then I suppose they would all abstain and it would be the fault of whatever demon set this all up rather than any of his victims.
But then this violates threshold deontology, because it commits one to holding that you shouldn’t kill one person to prevent an evil demon from bringing about 5^99 killings.
As I mentioned in my first conversation with Matthew, I’m also very open to the possibility that this could just be a moral tragedy with no right answer — as an extremely convoluted scenario designed precisely to make moral principles that seem obviously correct in simpler cases difficult to apply, if anything’s an unanswerable moral tragedy, this strikes me as a good candidate on its face!
The fact that it’s a tragedy in no way implies that there’s no fact of the matter about what you should do. However, if there’s no fact of the matter in the case of the 100 rings, and there’s also no fact of the matter with 1000 rings, then, by transitivity, one would be indifferent between 5^99 deaths and 5^999 deaths. To illustrate:

Passing the buck when there are 100 circles = killing one

Killing one = passing the buck when there are 1000 circles

Thus, by transitivity, passing the buck when there are 100 circles = passing the buck when there are 1000 circles. In one case, there are 5^99 deaths; in the other, there are 5^999 deaths.
Also, the notion that there’s no fact of the matter about whether one should kill one to save 5^99 people is ridiculous. You should obviously kill one to save the world.
But no matter which of these three answers you go with (do kill the five in the name of a moral state of exception, refuse to play the demon’s game, or just roll with “both answers are indefensibly wrong, it’s an unanswerable moral tragedy”) I have a hard time seeing how any of those three roads are supposed to lead me to abandoning normal rights claims. At absolute worst, normally applicable rights are overridden in this scenario. Even if that’s the case, that gives me no particular reason to think they’re overridden in, say, ordinary trolley or organ harvesting cases.
This may be because Ben didn’t adequately appreciate the argument.
Oh, and it’s worth saying a brief word about this:
Giving perfectly moral beings more options that they don’t have to choose cannot make the situation morally worse.

At this point I’ve had multiple conversations with Matthew where this has come up, and I still have no idea why he thinks this is true, never mind obviously true. It vaguely sounds like the sort of thing that could turn out to be true, but the same can be said of plenty of (mutually inconsistent) moral principles. When you’re thinking about a principle this abstract, it’s easy to nod along, but the right methodology is to test it out by applying it to cases — like this one!
Here’s the argument:

1 Perfectly moral beings would only choose extra options if they are good options.

-This is true by definition.

2 If perfectly moral beings don’t choose extra options, the extra options don’t make things worse.

Therefore, if extra options are not good, they don’t make things worse.

3 If extra options are good, then choosing them makes things better.

-This is just what a good choice is. Note: this doesn’t assume consequentialism, because I’m using “making things worse” in a broad sense that would potentially sanction deontology’s claim that killing one to save five makes things worse.

4 Making things better doesn’t make things worse.

Therefore, giving perfectly moral beings extra options doesn’t make things worse.
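The same argument as a two-case check, in symbols (merely a restatement: $S$ is the original option set, $o$ the extra option, $\mathrm{out}(\cdot)$ how things go overall in the broad, not merely consequentialist, sense, and $X \succ Y$ meaning $X$ is better than $Y$):

$$\text{If } o \text{ is not good: } \mathrm{out}(S \cup \{o\}) = \mathrm{out}(S) \quad \text{(by 1 and 2).}$$

$$\text{If } o \text{ is good: } \mathrm{out}(S \cup \{o\}) \succ \mathrm{out}(S) \quad \text{(by 3).}$$

$$\text{Either way, } \mathrm{out}(S \cup \{o\}) \succeq \mathrm{out}(S) \quad \text{(by 4), which is just Optionality.}$$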
Objection 8
8 Let’s start by assuming one holds the following view:
Deontological Bridge Principle: This view states that you shouldn’t push one person off a bridge to stop a trolley from killing five people.
This is obviously not morally different from
Deontological Switch Principle: You shouldn’t push a person off a bridge to cause them to fall on a button which would lift the five people to safety; their body would not itself be able to stop the trolley.
In both cases you’re pushing a person off a bridge to save five. Whether their body stops the train or pushes a button to save other people is not morally relevant.
Suppose additionally that one is in the Switch scenario. They’re deciding what to do when a genie appears and gives them the following choice. He’ll push the person off the bridge onto the button, but then freeze the passage of time in the external world so that the decision maker has ten minutes to think it over. At the end of the ten minutes, they can either lift the one person who was originally on the bridge back up, or they can let the five people be lifted to safety.

It seems reasonable to accept the genie’s offer. If, at the end of the ten minutes, they decide that they shouldn’t push the person, then they can just lift the person back up, such that nothing actually changes in the external world. However, if they decide not to, then they’ve just killed one to save five. This action is functionally identical to pushing the person in Switch. Thus, accepting the genie’s offer is functionally identical to just being given more time to deliberate.

It’s thus reasonable to suppose that they ought to accept the genie’s offer. However, at the end of the ten minutes they have two options. They can either lift up the one person whom they pushed before, to prevent that person from being run over, or they can do nothing and save five people. Obviously they should do nothing and save five people. But this is identical to the Switch case, which is morally the same as Bridge.
It looks to me like, once again, Matthew is trying to have it both ways here. Either the genie’s offer just delays the decision (which is what we need to assume for that breezy “it’s reasonable to accept the genie’s offer” to make sense) or it is a morally significant decision in itself. This in turn reduces to the same issue noted above in the poison and landmine cases — if you do something and then deliberate about whether to reverse the effect, does “doing it and then deciding not to reverse it” count as one big action, or does it separate into two actions? The “reasonable to accept the genie’s offer” claim makes sense if (and only if) you accept the “one big action” analysis, but the “obviously they should do nothing” claim only makes sense given the “two distinct actions” view. If it’s two distinct actions, accepting the genie’s offer was wrong (in the way that setting a landmine would be wrong even if you might later change your mind and save the child from it). If it’s one big action, then Matthew’s “obviously” claim doesn’t get off the ground.
Whether it’s two distinct actions or one isn’t relevant, and Ben doesn’t clearly reject any part of the argument. The genie’s offer changes the status quo as a temporary placeholder, but it can be reversed! However, after the ten minutes, the intuition is very strong that the status quo should be maintained.
We can consider a parallel case with the trolley problem. Suppose one is in the trolley problem and a genie offers them the option for them to flip the switch and then have ten minutes to deliberate on whether or not to flip it back. It seems obvious they should take the genie’s offer.
Again: Only obvious given the “one big action” view.
Well, at the end of the ten minutes, they’re in a situation where they can flip the switch back, in which case the train will kill five people instead of one, given that it’s already primed to hit the one. It seems obvious in this case that they shouldn’t flip the switch back.
Thus, deontology has to hold that taking an action and then reversing that action, such that nothing in the external world is different from how it would have been had they never acted, is seriously morally wrong.

Again: This claim about what it’s “obvious” that the rights-believer has to endorse is only obvious given the “two distinct actions” view.
This is an astounding claim—particularly the second one, though I have no idea how the one big action view makes the first intuitive. On the second one, after ten minutes, a person has set up a situation in which a train is going to kill five people. However, they can flip the switch to kill one person. Obviously they shouldn’t flip the switch. Really visualize the scenario—the answer becomes obvious.
If flipping the switch is wrong, then it seems that flipping the switch to delay the decision ten minutes, but then not reversing the decision, is wrong. However, flipping the switch to delay the decision ten minutes and then not reversing the decision is not wrong. Therefore, flipping the switch is not wrong.
Maybe you hold that there’s some normative significance to flipping the switch and then flipping it back, making it so that you should refuse the genie’s offer. This runs into issues of its own. If it’s seriously morally wrong to flip the switch and then to flip it back, then flipping it an arbitrarily large number of times would be arbitrarily wrong. Thus, an indecisive person who froze time and then flipped the switch back and forth googolplex times, would have committed the single worst act in history by quite a wide margin. This seems deeply implausible.
This part relies on an assumption about how wrongness aggregates between actions that, at least in my experience, most non-utilitarian moral philosophers will emphatically reject. In fact, my impression at least is that the intuition that wrongness doesn’t aggregate in this way plays a key role in why so many of the people who’ve thought most about utilitarianism reject it.
Now, it could be that the non-utilitarian moral philosophers are wrong to reject aggregation. But even if so, once utilitarianism has been rejected and rights have been affirmed, it’s just a further question whether the wrongness of (initially attempted then reversed) rights violations can accumulate in this way.
Wrongness must aggregate—if it’s wrong to do something once, then doing it more times is even worse. I argue that this conclusion is undeniable here.
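Spelled out: if a single flip-and-reverse pair carries some fixed wrongness $w > 0$, and wrongness sums across repetitions, then after $N$ pairs the total is

$$W(N) = N \cdot w,$$

which grows without bound as $N$ does. With $N$ equal to a googolplex, $W(N)$ dwarfs any wrong in history for any positive $w$; denying the conclusion requires denying that the wrongness of the flips sums at all.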
Either way, deontology seems committed to the bizarre principle that taking an action and then undoing it can be very bad. This is quite unintuitive. If you undo an action, such that the action had no effect on anything because it was cancelled out, that can’t be very morally wrong. Much like writing can’t be bad if one hits the undo button and replaces it with good writing, it seems like actions that are annulled can’t be morally bad.
It’s worth just briefly registering that this is a pretty eccentric judgment. To adapt an old Judith Jarvis Thomson example, if I put poison in my wife’s coffee then felt an attack of remorse and dumped it out and replaced it with unpoisoned coffee before she drank it, my guess is that very few humans would disagree that my initial action was very wrong. Deep and abiding guilt would be morally appropriate despite my change of heart.
In this case, perhaps the initial action would be wrong, in that it would be reckless, dangerous, and indicative of vicious character. This is often what we mean by wrong, as I argue here. However, if we ask whether it would be wrong in the deeper sense, such that it would have been better had it never been done, the answer is obviously no. The action, while reckless, did not end up harming anyone. The action may have been wrong, but it certainly wasn’t bad, if we stipulate that it didn’t cause the person any guilt or damage their character at all.
To put a little bow on this part, it’s worth pointing out that we could adapt this case very slightly and make it a straightforward moral luck example. Call the version of the case where I dump out the coffee and stealthily replace it with unpoisoned coffee “Coffee (Reversed)”. Now consider a second version — “Coffee (Unreversed)” — where I don’t have the attack of remorse in time because I’m distracted by the UPS delivery guy ringing the doorbell, and my wife is thus killed.
Intuitively, these cases are messy, but part of what makes the moral luck problem such a problem is that at least one of the significant intuitions at play in moral luck cases is that the difference between Coffee (Reversed) and Coffee (Unreversed) isn’t the kind of difference that should make any difference in your moral evaluation of me.
Obviously Coffee (Unreversed) is bad. In Coffee (Reversed), while you clearly acted wrongly, your action wasn’t bad overall—it didn’t harm anyone. Remember, my principle concerns badness rather than wrongness.
It also runs afoul of another super intuitive principle, according to which if an act is bad, it’s good to undo that act. On deontological accounts, it can be bad to flip the switch, but also bad to unflip the switch. This is extremely counterintuitive.
If we read the “super intuitive principle” as “if an act is bad, all else being equal it’s good to undo such an act,” then I can understand why Matthew finds it so intuitive. If we read it as “if an act is bad, then it’s always good to undo it” or, to put a finer point on it, “if an act is bad, then the morally best decision on balance is always to undo it,” I’m a whole lot less sure. In fact, on that last reading, Matthew himself doesn’t agree with it, given that he thinks that in the landmine and poison cases the morally best decision is to save the more numerous victims of other malefactors rather than to undo your own bad act.
Let me make the statement more precise, and it will be super obvious: if an act is bad, then, all else equal, it’s always good to undo such an act.
Objection 9
9 Huemer (2009) gives another paradox for deontology, which starts by laying out two principles (p. 2):
“Whether some behavior is morally permissible cannot depend upon whether that behavior constitutes a single action or more than one action.”
This is intuitive — how we classify the division between actions shouldn’t affect their moral significance.
We’ve already seen several times above that this principle is wrong, and we could just leave Huemer there, but there’s another point of interest coming up, so let’s take a look at the remainder of 9:
Ben’s earlier claims had no basis, so this isn’t a response to the argument. Reflect carefully on the principle—can the morality of an act really depend on whether we think of it as one action or two?
Second (p.3) “If it is wrong to do A, and it is wrong to do B given that one does A, then it is wrong to do both A and B.”
Now Huemer considers a case in which two people are being tortured, prisoner A and prisoner B. Mary can reduce A’s torture by increasing B’s torture half as much (say, sparing A two units of torture at the cost of one extra unit for B). She can do the same thing for B. If she does both, this clearly would be good — everyone would be better off. However, on the deontologist’s account, both acts are wrong. Torturing one to prevent greater torture for another is morally wrong.
If it’s wrong to cause one unit of harm to prevent 2 units of harm to another, then an action which does this for two people, making everyone better off, would be morally wrong. However, this clearly wouldn’t be morally wrong.
The rights-believer — I keep translating Matthew’s references to “the deontologist” this way because these are all supposed to be general arguments against rights, and because you don’t have to be a pure deontologist to believe that considerations about rights are morally important — is only committed to all of this given the assumption we’ve already considered and rejected in the discussions of the leg-grabbing aliens and the circles of saints above, which is that there can’t be cases where individual actions can’t be justified by some set of moral principles when considered separately but can be when considered together. “The overall effect will be to reduce everyone’s harm” is morally relevant information and Temkin’s point about aggregate reasoning is a good one.
Good translation! The claim that the overall effect will make everyone better off seems to rely on the Pareto principle, which Ben will go on to reject. However, this just explains why it’s a tough pill to swallow—it requires rejecting the claim that making everyone better off is good. The deontologist has to reject at least one of the following:
A) That you shouldn’t violate rights to produce greater benefit
B) Benefiting all the people in the torture scenario is good.
C) “Whether some behavior is morally permissible cannot depend upon whether that behavior constitutes a single action or more than one action.”
D) If it’s wrong to cause one unit of harm to prevent 2 units of harm to another, then an action which does this for two people, making everyone better off, would be morally wrong. However, this clearly wouldn’t be morally wrong.
A would seem to require rejecting rights—the deontologist has to find another of the super plausible principles to reject.
Objection 10
10 Suppose one is making a decision about whether to press a button. Pressing the button would have a 50% chance of saving someone, a 50% chance of killing someone, and would certainly give them five dollars. Most moral systems, including deontology in particular, would hold that one should not press the button.
However, Mogensen and MacAskill (2021) argue that this situation is analogous to nearly everything that happens in one’s daily life. Every time a person gets in a car, they affect the distribution of future people by changing very slightly the time at which lots of other people have sex.
Given that any act which changes the traffic by even a few milliseconds will affect which of the sperm from any ejaculation will fertilize an egg, each time you drive a car you causally change the future people that will exist. Your actions are thus causally responsible for every action that will be taken by the new people you cause to exist, and no doubt some will violate rights in significant ways and others will have their rights violated in ways caused by you. Mogensen and MacAskill argue that consequentialism is the only way to account for why it’s not wrong to take most mundane, banal actions, which change the distribution of future people, thus causing (and preventing) vast numbers of rights violations over the course of your life.
This is a fun one, but this seems like precisely the opposite of the right conclusion. This case, if we think about it a little harder, actually cuts pretty hard against utilitarianism (and consequentialism in general).
To see why, start by noticing that from a rights-based perspective — especially straight-up deontology! — pressing a button that will itself either save or kill someone (and give you $5 either way) is absolutely nothing like engaging in an ordinary action that might indirectly and unintentionally lead (along with many other factors) to someone coming into existence who will either kill someone or save somebody from being killed.
Ben says “might.” Well, it’s overwhelmingly likely. Being indirect isn’t morally salient—an arms dealer who sells arms, knowing they’ll be used to kill kids, would still be violating rights. Additionally, we can stipulate that the five dollar button works in mysterious, indirect ways. That wouldn’t seem to affect our moral judgment of the situation. Ben says that it’s unintentional. Well, it may not be intended, but it’s foreseen as a side effect now that this act has been pointed out. Pressing the button would be impermissible even if the person mostly intended to get the money and didn’t care about the side effects of their actions.
The whole point of deontology is to put the moral focus on the character of actions rather than their consequences. If deontologists are right, there’s a moral galaxy of difference between acting to violate someone’s rights and acting in a way that leads to someone else violating someone’s rights (or, if we’re going to be precisely accurate about the case here, since what we’re talking about is bringing someone into existence who will then decide to violate someone else’s rights, “leads to someone else violating someone’s rights” should really be “is a necessary but not a sufficient condition for someone else violating someone’s rights”).
We can just slightly modify the button case, and the moral situation is no different. If pressing the button had a 50% chance of causing a murder, a 50% chance of preventing a murder, and would certainly give you five dollars, it seems structurally analogous. If you know that your action will cause a murder, such that had you not taken the action there wouldn’t have been a murder (or some other non-murder, yet still very bad, crime), that’s clearly a rights violation on deontology. For more on this, you can read the Mogensen and MacAskill paper that was linked.
If, on the other hand, consequences are all that matter, it’s much harder to see a morally significant difference between unintentionally setting in motion a chain of events that ends with someone else making a decision to do X and just doing X! The button, in other words, is a good analogy for getting into your car if utilitarianism is right, but not if deontology is right.
This was refuted above.
Also note that if there’s a 50% chance that any given button-pushing will save someone and a 50% chance that it will kill someone, over the “sufficiently long run” appealed to by statisticians, it’ll all balance out and thus be utilitarian-ishly neutral — but there’s absolutely no guarantee that killings and savings in any given lifetime of metaphorical button-pushing will balance out! You might well happen to “kill” far more people than you “save.” If we assume that the character of your action is beside the point because consequences are all that matter, your spending a lifetime taking car trips and contributing to traffic patterns, etc., might well add up to some serious heavy-duty moral wrongness.
True, but it might also turn out to be very good. Utilitarianism correctly identifies the morally salient effects of your actions based on your state of mind—in expectation, those effects wash out in the long run. Ben’s objection seems to be that utilitarianism problematically holds that driving your car might be really bad. Well, here’s the thing: it might be. It won’t be really wrong, because wrongness relates to the available information. But if your driving caused baby Hitler to be born, then your driving was really bad, such that it would have been better if you’d never driven. This is the judgment of utilitarianism, and it’s not unintuitive.
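Incidentally, the statistical point in Ben’s passage can be made precise: treat each identity-affecting act as a fair coin flip that either “kills” or “saves” one person. The expected net effect is zero, but the typical imbalance after n acts grows like the square root of n. A quick illustrative simulation (a sketch only; all numbers are hypothetical):

```python
import random

def lifetime_net(n_acts: int) -> int:
    """Net lives saved (+1) minus lives lost (-1) over n_acts coin-flip acts."""
    return sum(random.choice((1, -1)) for _ in range(n_acts))

# Five hypothetical "lifetimes" of 10,000 identity-affecting acts each.
samples = [lifetime_net(10_000) for _ in range(5)]
print(samples)  # typically on the order of +/-100, i.e. about sqrt(10_000)
```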
Objection 11
11 The Pareto principle, which says that if something is good for some and bad for no one, then it is good, is widely accepted.
It’s widely accepted by economists for deciding what counts as better utility. It’s not widely accepted among non-utilitarian moral philosophers as a standard for what constitutes a morally good action, for obvious reasons — it assumes (once read as a principle about how to morally evaluate actions) that only the consequences of actions are morally relevant!
This is disastrously and almost scandalously false! One can both hold that making everyone better off is a good thing and that people have rights that shouldn’t be violated. It’s very intuitive that doing things that only make people better off is good. I have no polling data on the Pareto principle, but in my experience, most people agree with it. Even if one ultimately rejects it, that’s a cost of the theory, given the principle’s prima facie plausibility.
It’s hard to deny that something which makes people better off and harms literally no one is morally good. However, from the Pareto principle, we can derive that organ harvesting is morally the same as the trolley problem.
This is a pet peeve and a little bit off-topic, but it drives me crazy. The trolley “problem” as originally formulated by Thomson (who coined the phrase “the trolley problem”) was precisely that, if we’re just looking at outcomes, pushing the large man (or, and this was actually Thomson’s preferred example for dramatizing the problem, harvesting a healthy patient’s organs to save five people who need transplants) is indistinguishable from pulling the lever…and yet the vast majority of people who share the intuition that lever-pulling is morally legitimate don’t have a parallel intuition about those other cases. The “problem” was supposed to be how to reconcile those two seemingly incompatible intuitive reactions. Anyway, let’s keep going.
Sorry!
Suppose one is in a scenario that’s a mix of the trolley problem and the organ harvesting case. There’s a train that will hit five people. You can flip the switch to redirect the train to kill one person. However, you can also kill the person and harvest their organs, which would cause the 5 people to be able to move out of the way. Those two actions seem equal, if we accept the Pareto principle. Both of them result in all six of the people being equally well off. If the organ harvesting action created any extra utility for anyone, it would be a Pareto improvement over the trolley situation.
This nicely demonstrates exactly why “the situation created if you do X is a Pareto improvement over the situation created if you do Y” doesn’t entail “doing X is no worse morally than doing Y” without hardcore consequentialist assumptions about how morality works. While it should be noted that many non-utilitarian philosophers bite the bullet on the first version of the trolley case and conclude (on the basis of their far stronger intuitive reaction to Thomson’s other cases) that pulling the lever in the first case is wrong, there are ways of consistently avoiding giving up either of the initial intuitions. (Whether any of these ways are fully convincing is, of course, super-duper controversial.) For example, one of the solutions to the Trolley Problem that Thomson herself briefly floats in one of her several papers about it is a Kantian one — that sending a train to a track where it unfortunately will kill the person there doesn’t involve reducing them to the status of a mere means to your end in the way that actually using their body weight to block the trolley (or flip the switch in Switch) or harvesting their organs does. To see the distinction drawn in this solution (which is roughly Doctrine of Double Effect-ish), notice that if you turned out to be wrong in your assumption that the workman on the second track wouldn’t be able to get out of the way and he did in fact manage to scamper off the track before the trolley would have squashed him, that wouldn’t mess up your plan for saving the five — whereas if the large man survived the fall and rolled off the track, that would mess up your plan, because your plan involved using him as a mere means rather than setting something in motion which would have the foreseen but not intended side effect of killing him.
Look, obviously pointing out that my conclusions are consequentialist is not any sort of response to my argument. My point is that, if we try to reason by looking at which principles are the most plausible, then to hold that the switch version of the trolley problem is morally different from the organ harvesting case, one must deny the super intuitive Pareto principle.
Now, maybe you find that convincing and maybe you don’t. But it doesn’t seem obviously wrong to me — and if it’s at all plausible, the fact that murdering the workman on the second track and harvesting his organs would be a pareto improvement from diverting the train to the second track (thus causing his death) wouldn’t be sufficient to settle the question of whether the organ harvesting was wrong in a way that diverting the train wasn’t.
Here’s the conclusion of 11:
Premise 1: One should flip the switch in the trolley problem.

Premise 2: Organ harvesting, in the scenario described above, plus giving a random child a candy bar, is a Pareto improvement over flipping the switch in the trolley problem.

Premise 3: If action X is a Pareto improvement over an action that should be taken, then action X should be taken.

Therefore, organ harvesting plus giving a random child a candy bar is an action that should be taken.
This is a very noisy version of what could be in one sentence:
“If consequences are all that matter, saving the five through organ harvesting is no worse than saving them through pulling the lever, and doing the latter plus doing things that cause other good consequences is better.”
No—the argument doesn’t assume that consequences are all that matter; it assumes only that the Pareto principle, something widely accepted and deeply plausible upon prolonged reflection, is true!
But here’s the thing — that has no argumentative force whatsoever against deontologists and other non-utilitarians, since critics of utilitarianism are generally split between (a) people who think even pulling the lever is wrong, and (b) people who think pulling the lever might be defensible but Thomson’s other examples that are equivalent to pulling the lever in terms of consequences are still definitely wrong. It’s hard to see how a partisan of either position would or should be moved by this argument (which remember was in a list of arguments against any sort of belief in rights understood as real rights and not heuristics for utility calculations).
People who think that the Pareto principle is plausible and that one should flip the switch should be swayed. This is plausibly most people, as most think one should flip the switch.

Now we turn to the objections to the organ harvesting case, even assuming we accept rights.
Finally, Matthew’s opening statement ends with a few more specific responses to the organ harvesting counterexample to utilitarianism.
Objection 1
First, there’s a way to explain our organ harvesting judgments away sociologically. Rightly as a society we have a strong aversion to killing. However, our aversion to death generally is far weaker. If it were as strong we would be rendered impotent, because people die constantly of natural causes.
This is the sort of point that might bother a hardcore moral realist who believed that (some of) our moral intuitions are somehow caused by an externally existing moral reality, and some are caused by other things and should thus be disregarded. But I just find that view of meta-ethics deeply implausible — I won’t run through all this here, but I’ll just say that above and beyond the usual ontological simplicity concerns about the idea of a separate moral realm external to our moral intuitions, I have epistemic and semantic concerns about this picture. How exactly are our intuitions making contact with this realm? What plausible semantic story could we tell about how our moral terms came to refer to elements of this underlying moral reality?
This shouldn’t just trouble moral realists. If the reason we’re opposed to organ harvesting is that our brains overgeneralize based on other patterns, such that we’d revise the judgment if we really, carefully reflected, then that’s everyone’s problem, including the anti-realist’s.
On the point about moral realism, anti-realism is wildly implausible, as I argue here. For more on this, see “On What Matters” and “Ethical Intuitionism.” The semantic account and the epistemological account would both be similar to the accounts of how we came to know about, and talk about, other abstract realms, such as math and sets. We can reason about such things, and they can explain particular moral judgments that we make, which feature in our moral language.
The sort of view I’m attracted to instead says, basically, that the project of moral reasoning is precisely to hammer our moral intuitions (or as many of them as possible) into a coherent picture so we can act on them. Where our moral intuitions come from is an interesting question but not really a morally relevant one. What we’re trying to figure out is which goals we care about, not the empirical backstory of how we came to care about them.
This view is, as expressed, crazy. Suppose that the only reason you held the view that taxes should increase is that you were hypnotized by a criminal. That would give you good reason to revise the view. If our moral judgments don’t reflect what we’d value upon reflection, that gives us a really good reason to revise them.
Objection 2
Second, we have good reason to say no to the question of whether doctors should kill one to save five. A society in which doctors violate the Hippocratic oath and regularly kill one person to save five would be a far worse world. People would be terrified to go into doctors’ offices for fear of being murdered. While this thought experiment generally stipulates that the doctor will certainly not be caught and the killing will occur only once, our revulsion to very similar and more easily imagined cases explains our revulsion to killing one to save 5.
Not sure what the Hippocratic oath is supposed to have to do with anything — presumably, in a world with routine organ-harvesting, doctors would just take a different oath in the first place! But the point about going to the doctor’s office for checkups is a good one. To test whether that’s the source of our revulsion, we should consider other kinds of organ-harvesting scenarios. For example, we could just make everyone register for random selection for organ harvesting the way we make boys reaching adulthood register for the Selective Service. There would be an orderly process to randomly pick winners, and the only doctors who had anything to do with it would be doctors employed by the state for this purpose — they would have nothing to do with the GPs you saw when you went in for a checkup, so we wouldn’t have the bad consequences of preventable diseases not being prevented. We’d still have a fair amount of fear, of course, but (especially if this system actually made organ failure vanishingly rare) I don’t know that it’s obvious a priori that the level of fear generated would outweigh the good consequences of wiping out death from organ failure in the utility calculus.
I’m pretty sure that it would cause overall harm, particularly because in the real world a large percentage of organs are rejected, and people don’t have enough useful organs to save multiple lives. This was argued in much greater detail here.
A further point on this:
The claim that our reaction to extremely distant hypothetical scenarios where organ harvesting was routinely and widely known about somehow explains our reaction to far more grounded hypothetical scenarios where it was a one-off done in secret is…odd. What’s the epistemic story here? What’s the reason for believing that when people think they’re having an immediate intuitive reaction to the latter they’re…subconsciously running through the far more fanciful hypothetical that they’ve somehow mixed up with it and thus forming a confused judgment about the former? I guess I just don’t buy this move at all.
I’ll just quote the explanation that I wrote here.
Let’s begin with an example—the organ harvesting case. A doctor can kill a patient and harvest their organs to save five people. Should they? Our intuitions generally say no.
What’s going on in our brains—what’s the reason we oppose this? Well, we know that social factors and evolution dramatically shape our moral intuitions. So, if there’s some social factor that would result in strong pressure to hold the view that the doctor shouldn’t kill the person, it’s very obvious that this would affect our intuitions. Is there such a factor?
Well, of course. A society in which people went around killing other people for the greater good would be a much worse society. We have good reason to place strong prohibitions on murder, even murder for the allegedly greater good.
Additionally, it is a practical necessity that we accept, as a society, some doing/allowing distinction. Doing the maximally good thing all the time would be far too demanding, so we treat there as being some fundamental distinction between doing and allowing. Society would collapse if we treated murder as only a little bit bad; thus, it's extremely important that we treat murder as very bad. But given that we can't treat failing to do something unfathomably demanding as horrendous (equivalent to murder), we have to treat there as being some distinction between doing and allowing.
After this distinction is in place, our intuitions about organ harvesting are very obviously explainable. If killing is treated as unfathomably evil while failing to save isn't, then killing to save will be seen as horrendous.
To see this, imagine things were the other way around. Imagine we were living in a world in which every person will kill one person per day, in an alternative multiverse segment, unless they fast during that day. Additionally, imagine that, in this world, each person saves dozens of people per day in an alternative multiverse segment unless they take drastic action to stop it. In this world, it seems clear that failing to save would be seen as much worse than killing, given that saving is easy, but failing to kill is very difficult. Additionally, imagine that these people saw those they were saving and felt empathy for them. Thus, failing to save someone would provoke internal emotional reactions in that world similar to those that killing provokes in ours.
So what do we learn from this? Well, to state it maximally bluntly and concisely: many of our non-utilitarian intuitions are the results of social norms that we design to have good consequences, which we then take to be significant independently of their good consequences. These distinctions are never derivable from plausible first principles, never have clear delineations, and always result in ridiculous reductios. They are mere epiphenomena: unnecessary byproducts of correct moral reasoning. We correctly see that society needs to enshrine rights as a legal concept, and then incorrectly feel an attachment to them as an intrinsic feature of morality.
When we're taught moral norms as children, we're instructed with rigid norms like "don't take other people's things." We try to reach reflective equilibrium with those intuitions, carefully reflecting until they form coherent networks of moral beliefs. Then, later in life, we take them as the moral truth rather than as derivative heuristics.
Objection 3
Third, we can imagine several modifications of the case that make the conclusion less counterintuitive.
First, imagine that the six people in the hospital were family members, all of whom you cared about equally. Surely we would intuitively want the doctor to bring about the death of one to save five. The only reason we have the opposite intuition in the case where family is not involved is that our revulsion to killing can override other considerations when we feel no connection to the anonymous, faceless strangers whose deaths are caused by the doctor's adherence to the principle that they oughtn't murder people.
This one really floored me in the debate. I guess I could be wrong but my assumption would be that no more than one or two out of any one hundred million human beings — not one or two million out of a hundred million, but literally one or two — would be more friendly to murdering a member of their own family to carve them up for their organs than doing the same to a complete stranger.
I'd be curious to hear from people in the chat: just intuitively, what would you hope for in this case? For me, I'd definitely prefer for more of my family members to be saved rather than fewer. The question didn't ask about murdering a member of one's family; it asked whether they'd hope that a doctor would murder one of their family members so that the other five don't die.
A second objection to this counterexample comes from Savulescu (2013), who designs a scenario to avoid unreliable intuitions. In this scenario, there's a pandemic that affects every single person and makes people fall unconscious. One in six people who become unconscious will wake up; the other five-sixths won't. However, if those in the one-sixth have their blood extracted and distributed, thus killing them, then the other five-sixths will wake up and live normal lives. It seems in this case that it's obviously worth extracting the blood to save five-sixths of those affected, rather than only one-sixth.
Similarly, if we imagine that 90% of the world needed organs, and we could harvest one person’s organs to save 9 others, it seems clear it would be better to wipe out 10% of people, rather than 90%.
This is just “moral state of emergency” stuff. All the comments about those intuitions made above apply here.
It's not a moral state of emergency: in each case, the ratio of rights violated to rights protected is only one to five. To see this case more clearly, I'll quote Savulescu (partly because I didn't have time to in the original debate).
In Transplant, a doctor contemplates killing one innocent person and harvesting his/her organs to save 5 people with organ failure. This is John Harris’ survival lottery.
But this is a dirty example. Transplant imports many intuitions. For example, that doctors should not kill their patients, that those with organ failure are old while the healthy donor is young, that those with organ failure are somehow responsible for their illness, that this will lead to a slippery slope of more widespread killings, that this will induce widespread terror at the prospect of being chosen, etc, etc
A better version of Transplant is Epidemic.
Epidemic. Imagine an uncontrollable epidemic afflicts humanity. It is highly contagious and eventually every single human will be affected. It causes people to fall unconscious. Five out of six people never recover and die within days. One in six people mounts an effective immune response. They recover over several days and lead normal lives. Doctors can test people on the second day, while still unconscious, and determine whether they have mounted an effective antibody response or whether they are destined to die. There is no treatment. Except one. Doctors can extract all the blood from the one in six people who do mount an effective antibody response on day 2, while they are still unconscious, and extract the antibodies. There will be enough antibodies to save 5 of those who don't mount responses, though the extraction procedure will kill the donor. The 5 will go on to lead a normal life and the antibody protection will cover them for life.
If you were a person in Epidemic, which policy would you vote for? The first policy, Inaction, is one in which nothing is done. One in six of the world's population survives. The second policy is Extraction, which kills one but saves five others. There is no way to predict who will be an antibody producer. You don't know if you will be the one in six who can mount an immune reaction or one of the five in six who don't manage to mount an immune response and would die without the antibody serum.
Put simply, you don’t know whether you will be one who could survive or one who would die without treatment. All you know for certain is that you will catch the disease and fall unconscious. You may recover or you may die while unconscious. Inaction gives you a 1 in 6 chance of being a survivor. Extraction gives you a five in 6 chance.
It is easy for consequentialists. Extraction saves 5 times as many lives and should be adopted. But which would you choose, behind the Rawlsian Veil of Ignorance, not knowing whether you would be immunocompetent or immunodeficient?
I would choose Extraction. I would definitely become unconscious, like others, and then there would be a 5 in 6 chance of waking up to a normal life. This policy could also be endorsed on Kantian contractualist grounds. Not only would rational self-interest behind a Veil of Ignorance endorse it, but it could be willed as a universal law.
Consequentialism and contractualism converge. I believe other moral theories would endorse Extraction.
Since Extraction in Epidemic is the hardest moral case of killing one to save 5, if it is permissible (indeed morally obligatory), then all cases of killing one innocent to save five others are permissible, at least on consequentialist and contractualist grounds.
There is no moral distinction between killing and letting die, despite many people having intuitions to the contrary.
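To make Savulescu's arithmetic concrete, here is a minimal sketch (my own illustration; the one-in-six responder rate and the one-donor-saves-five ratio are taken straight from the quoted scenario) of the survival odds under each policy:

```python
# Survival odds in Savulescu's Epidemic case, under each policy.
# Assumes, per the quoted scenario, that exactly 1 in 6 people mount an
# immune response and that each donor's antibodies save exactly 5 others.

population = 6_000_000          # any multiple of 6 gives the same ratios

responders = population // 6    # would wake up on their own
non_responders = population - responders

# Policy 1: Inaction. Only the responders survive.
p_inaction = responders / population        # = 1/6, about 0.167

# Policy 2: Extraction. Every responder is killed for their antibodies,
# and each donor's serum saves 5 non-responders, so all of them survive.
saved = responders * 5                      # equals non_responders exactly
p_extraction = saved / population           # = 5/6, about 0.833

print(f"Chance of surviving under Inaction:   {p_inaction:.3f}")
print(f"Chance of surviving under Extraction: {p_extraction:.3f}")
```

Behind the veil, then, the choice is just between a 1 in 6 and a 5 in 6 chance of waking up, which is why the consequentialist and contractualist verdicts converge here.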
Objection 4
A fourth objection is that, upon reflection, it becomes clear that the action of the doctor wouldn’t be wrong. After all, in this case, there are four more lives saved by the organ harvesting. It seems quite clear that the lives of four people are fundamentally more important than the doctor not sullying themself.
That’s not a further objection. That’s just banging the table and insisting that the only moral principles that are relevant are consequentialist ones — which is, of course, precisely the issue in dispute. Also worth pausing here to note the relevant higher-order evidence. As far as I know, utilitarianism is a distinct minority position among professional philosophers who have ethics as their primary academic specialization (i.e. the people who are most likely to have done extensive reflection on this!).
No; I was explaining that, as a consequentialist, when I consider the morally salient features of the situation (the things that are actually important, that actually matter to people), consequentialism seems to capture what's most important. The notion that we have most reason to kill the person and harvest their organs is not an implausible one.
Objection 5
Fifth, we would expect the correct view to diverge from our intuitions in a wide range of cases. The persistence of moral disagreement, and the fact that throughout history we've gotten lots of things morally wrong, show that the correct view would sometimes diverge from our moral intuitions. Thus, finding some case where utilitarianism diverges from our intuitions is precisely zero evidence against it, because we'd expect the correct view to be counterintuitive sometimes. However, when it's counterintuitive, we'd expect careful reflection to bring our intuitions more in line with the correct moral view, which is the case, as I've argued here.
The comments on meta-ethics above are relevant here. I’ll just add three things here. First, moral judgments and moral intuitions aren’t the same thing. An intuition is an immediate non-inferential judgment. Other kinds of judgments are indirectly and partially based on moral intuitions as well as morally relevant factual information and so on. One big problem with appealing to people having moral judgments in the past that seem obviously crazy to us now as evidence that moral intuitions can steer us wrong is that we have way more access to what moral judgments people made in the past than how much those judgments were informed by immediate intuitions that differed from ours (like, they would have had different feelings of immediate approval and disapproval about particular cases) and how much they were informed by, for example, wacky factual assumptions (e.g. “God approves of slavery and He knows what’s right more than I do” or “women are intellectually inferior to men and allowing them to determine their own destiny would likely lead to disaster”).
There are lots of cases of factual errors in the past. But also, as Singer notes, throughout much of human history, people have harbored fairly egregious views about the moral insignificance of other people. Additionally, even if we were to think that we were special in not having any substantial moral errors, unlike previous societies, the presence of disagreement would mean that many of us are wrong. As I pointed out in the actual debate, even an anti-realist is likely to recognize that they are not infallible, and that if they reflected more they could change their moral views in better ways. Perhaps not objectively better ways, but if knowing more would make you care more about animals, for example, then it seems you should care about animals.
Second, the persistence of moral disagreement could just be evidence of not everyone having identical deep moral intuitions or it could be evidence that some people are better than others at bringing their moral intuitions into reflective equilibrium or (most likely!) some of each without being evidence that some (but not other) intuitions are failing to make contact with the underlying moral reality.
Ideally, we want the types of moral intuitions that other people won’t look back on in 100 years the same way we look back on slavery. However, as I’ve argued at great length, upon reflection, we do converge—specifically on utilitarianism.
Third, even if there is an underlying moral reality, moral intuitions are (however this works!) presumably our only means of investigating it. If you believe that, I don't see how you can possibly say that the counterintuitive consequences of utilitarianism are "zero" evidence against utilitarianism. They're some evidence. They could perhaps ("on reflection") be outweighed by other intuitions. Whether that's the case is… well… what the last ten thousand words have been about!
I explain this in greater detail here. But the basic idea is that, while intuitions are the way we gather evidence for our moral theory, one counterintuitive result isn't any evidence, because we'd expect the correct moral theory to sometimes be unintuitive, given the fallibility of our moral intuitions. I also pointed this out in the actual debate with Ben.
Objection 6
Sixth, if we use the veil of ignorance, and imagine ourselves not knowing which of the six people we were, we'd prefer saving five at the cost of one, because it would give us a 5/6, rather than a 1/6, chance of survival.
If this is correct, it shows that to achieve the sort of neutrality that the veil of ignorance is supposed to give us, agents in the original position had better be ignorant of how likely they are to be the victim or beneficiary of any contemplated harm.
This is absurd! The reason the veil of ignorance is good is that it allows us to be rational and impartial: we don't know who we are. Morality is just what we'd do if we were totally rational and impartial, and that would be to be utilitarian. There's no justification for depriving us of the extra information.
Notice that without that layer of ignorance, the standard descriptions of the thought experiment aren't actually true. "You don't know whether you'll be male or female, black or white, born into a rich family or a poor family; if there's slavery you won't know whether you're a slave or a master," etc. Some of these may be true, but some of them won't be. Say you're considering enslaving 1% of the population to serve the needs of the other 99%. If you're behind the veil of ignorance but you know that ratio, and you form the belief that you won't be a slave (and you're right), does that not count as knowledge? You had accurate information from which you formed an inference that you could be 99% sure was correct! On at least a bunch of boringly normal analyses of what makes true belief knowledge, the person who (correctly) concludes from behind the veil of ignorance that they won't be a slave, and thus endorses slavery out of self-interest, does know they won't be a slave. That very much defeats the point.
If there was only one rich person, it wouldn’t make sense to structure society as if one was just as likely to be rich as to be poor.
A final thought before leaving the veil of ignorance:
What if we came up with some impossibly contrived scenario whereby harvesting one little kid’s organs (instead of giving him a candy bar) would somehow save the lives of one hundred billion trillion people? As I’ve already indicated, I’m not entirely sure what I make of “moral state of exception” intuitions, but if you do take that idea seriously, here’s a way of cashing it out:
Rawlsianism is a theory of justice — although one that wisely separates justice from interpersonal morality, confining itself to the question of what just basic institutions would look like rather than going into the very different moral sphere of how a person should live their individual life. Plausibly, though, a virtuous person confronted with the choice between upholding and undermining the rules of a just social order should almost always uphold them. Perhaps, though, in really unfathomably extreme scenarios a virtuous person would prioritize utility over justice. Again: I'm not entirely sure if that's right, and the good news is that no one will ever have any particular reason to have to figure it out, since (unlike more grounded cases of conflicts between justice and utility) it's just never ever going to come up.
But the entire attractiveness of the veil of ignorance is that it allows us to reason accurately about what we should do. If it's just a theory of justice, and you then posit some separate interpersonal morality, that wildly complicates the moral theory, sacrificing parsimony. On top of that, it becomes impossible to divvy people up into social groups in a principled way. If you think you're equally likely to be rich or poor, black or white, is the same true of being born in China vs. Iceland? There's no non-arbitrary way of drawing these lines.
…and that’s a wrap! I’m obviously deeply unpersuaded that any of these arguments actually give anyone much reason to reconsider the deep moral horror nearly everyone has when thinking about this consequence of utilitarianism, but there’s certainly enough here to keep it interesting.
Thanks Ben! You kept it interesting too!
One More Objection to Rights Just as a Treat
This is an excerpt from my book.
Chappell (July 31, 2021) has given a decisive paradox for deontology. The argument starts with two obvious assumptions: “(1) Wrong acts are morally dispreferable to their permissible alternatives. If an agent can bring about either W1 or W2, and it would be wrong for them to bring about W1 (but not W2), then they should prefer W2 over W1.
(2) Bystanders should similarly prefer, of a generic moral violation, that it not be performed. As Setiya (2018, p. 97) put it, "In general, when you should not cause harm to one in a way that will benefit others, you should not want others to do so either.”
He then adds:
“(3) Five Killings > One Killing to Prevent Five.” Here, “>” just means preferable from the standpoint of a benevolent third-party observer. So this just means that, according to the deontologist, a third party should prefer five killings to one killing that prevents five. He gives the following definitions:
"Five Killings: Protagonist does nothing, so the five other murders proceed as expected.
One Killing to Prevent Five: Protagonist kills one as a means, thereby preventing the five other murders.”
This is very unintuitive. However, he has an additional argument for why it’s wrong.
He introduces claim 4:
“(4) One Killing to Prevent Five >> Six Killings (Failed Prevention).^[Here I use the '>>' symbol to mean is vastly preferable to. Consider how strongly you should prefer one less (generic) murder to occur in the world. I will use 'vast' to indicate preferences that are even stronger than that.]”
Six Killings is defined as follows: “Six Killings: Instead of attempting to save the five, Protagonist decides to murder his victim for the sheer hell of it, just like the other five murderers.” So claim 4 is very intuitive: six murders are clearly more than one murder's worth worse than a single murder that prevents five murders.
He then introduces claim five as follows: “(5) Six Killings (Failed Prevention) >= Six Killings.”
Six killings failed prevention is defined as “Six Killings (Failed Prevention): As above, Protagonist kills one as a means, but in this case fails to achieve his end of preventing the five other murders. So all six victims are killed.”
This is obvious enough. Killing one person indiscriminately after five other people commit murder isn't any better than killing one to try to save five but ultimately being unsuccessful.
Claim six is “(6) It is not the case that Five Killings >> Six Killings.” For one state of affairs to be >> another, the difference between them has to be more than one extra murder. However, in this case, the difference is precisely one extra murder. This claim is thus trivially true.
He then concludes:
“Recall, from (3)–(5) and transitivity, we have already established that deontologists are committed to:
(7) Five Killings > One Killing to Prevent Five >> Six Killings (Failed Prevention) >= Six Killings.
Clearly, (6) and (7) are inconsistent. By transitivity, the magnitude of preferability between any two adjacent links of the chain must be strictly weaker than the preferability of the first item over the last. But the first and last items of the chain are Five Killings and Six, which differ but moderately in their moral undesirability. The basic problem for deontologists is that there just isn't enough moral room between Five Killings and Six to accommodate the moral gulf that ought to lie between One Killing to Prevent Five and Six Killings (Failed Prevention). As a result, they are unable to accommodate our moral datum (4), that Six Killings (Failed Prevention) is vastly dispreferable to One Killing to Prevent Five.”
This argument presents a straightforward paradox for the deontologist, one that is very hard to refute.
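To see exactly where the inconsistency bites, here is the chain written out in symbols. This is my own formalization, not Chappell's notation: u(X) is how preferable outcome X is, and M > 0 is one murder's worth of difference, so "X >> Y" means u(X) − u(Y) > M.

```latex
% My own formalization of the argument (not from Chappell's post).
% u(X) = preferability of outcome X; M > 0 = one murder's worth of difference.
\begin{align*}
\text{(3)}\quad & u(\text{Five}) > u(\text{OneToPreventFive}) \\
\text{(4)}\quad & u(\text{OneToPreventFive}) - u(\text{SixFailed}) > M \\
\text{(5)}\quad & u(\text{SixFailed}) \ge u(\text{Six}) \\
\text{Chaining (3)--(5):}\quad & u(\text{Five}) - u(\text{Six}) > M
\end{align*}
```

The last line just is "Five Killings >> Six Killings," which is exactly what (6) denies; so the deontologist cannot hold (3), (4), (5), and (6) all at once.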
Concluding remarks
Well, that was a lot of fun! Thanks for the response Ben, and thanks to everyone for reading through this. If you got to the end, you have my respect.
"Iff" means "if and only if."
I originally typed that as asshold by accident.