Why a Bunch of Solutions to Huemer's Paradox of Deontology's More Threatening Twin Fail
Having chatted with several friends about this, I've heard various proposed solutions, and none of them work
A Preliminary Note
In a recent article, I presented Huemer’s paradox of deontology’s more threatening twin (HPODMTT). The basic idea is that moderate deontology should say that most actions that violate rights in some way are wrong, especially if done for the sake of trivial benefits. But this leads to strange conclusions when there are beings that can each violate each other’s rights repeatedly in ways that make them all better off. Note that this doesn’t apply only to extreme deontology: each of the rights violations produces only trivial benefits, but none of them produces net hedonic harm, at least conditional on the various other rights violations.
If this is confusing, or if you’re getting ready to type “BUT THAT ONLY APPLIES TO ABSOLUTE DEONTOLOGY, NOT THE THRESHOLD VERSION,” read the original article; it establishes the argument much more clearly and in much more depth. Let me just note one more thing before I launch into a defense of the actions. While this was originally applied to the alien theft case, it applies to a lot of cases. For example, most people think that it’s a rights violation to grab someone’s leg without their consent, say, while they’re sleeping. But imagine aliens doing it 100^100 times, where each grab slightly reduces their suffering, which would otherwise exceed all the suffering in human history every second. Each time an alien grabs a human’s leg (without the human ever knowing, of course), the benefit it gets is marginal, but it is violating a right. If deontology is true, then it’s wrong to violate rights for the sake of trivial benefits, but this clearly isn’t wrong.
Since publishing the article, I’ve discussed the idea with several people, and lots of objections have been offered. I don’t think any of them work, but it’s worth explaining why. In this article, I shall do so.
They can be used to solve collective action problems
The thought here seems to be that if a rights violation is part of a solution to a collective action problem in a way that benefits everyone then it’s fine on deontology. This runs into a lot of problems.
For one, it makes it so that actions affect each other in weird ways. Suppose I inflict 1 unit of suffering on Jim to give 100 units of pleasure to Fred. Ten years later, I inflict 2 units of suffering on Fred to give 2 units of pleasure to Jim. The proposal seems to imply, quite puzzlingly, that the action done ten years later to Fred retroactively makes the earlier action done to Jim no longer wrong, because together they form a Pareto-improving sequence of actions.
It also implies a very puzzling form of hypersensitivity. It implies that there’s a very significant difference between the scenario with Fred and Jim as specified and what it would be if there were slight changes in well-being. For example, if instead of 2 units of suffering inflicted on Fred to give 2 units of pleasure to Jim, .9 units of suffering were inflicted on Fred to give Jim .9 units of pleasure, then rather than two right actions having been taken, two wrong actions would have been taken. That’s rather puzzling. The difference between receiving .999 units of pleasure in exchange for causing someone else .999 units of suffering and receiving 1.00001 units of pleasure in exchange for causing someone else 1.00001 units of suffering shouldn’t make this huge difference.
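To see where the knife-edge lies, we can do the bookkeeping explicitly. Suppose the second action inflicts x units of suffering on Fred to give Jim x units of pleasure (a generalization I’m introducing for illustration), while the first action stays as stipulated:

```latex
% First action: -1 to Jim, +100 to Fred. Second action: -x to Fred, +x to Jim.
\Delta W_{\text{Jim}}  = -1 + x = x - 1, \qquad
\Delta W_{\text{Fred}} = +100 - x = 100 - x
```

The sequence is Pareto improving only when x > 1, so x = 1.00001 renders both actions right while x = .999 renders both wrong, despite the negligible difference between the two values.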
This also sometimes implies that you should do obviously wrong things for the sake of making actions no longer wrong. Suppose you have three options.
Option 1: Take action A which gives Fred 1000 units of pleasure and John 100 units of suffering.
Option 2: Take action A which gives Fred 1000 units of pleasure and John 100 units of suffering. Then, take action B which gives Fred 900 units of suffering and John 101 units of pleasure.
Option 3: Take neither action.
On this account, if you choose option 1, you’ll have done something wrong, while if you choose option 2, you’ll have done two right things. If deciding whether to do two right things or one wrong thing, you should do the two right things, so option 2 is preferable to option 1. But this is clearly absurd. Option 1 is the best option.
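Tallying the stipulated payoffs makes the comparison explicit:

```latex
\begin{array}{lrr}
 & \text{Fred} & \text{John} \\
\text{Option 1 (A only)}   & +1000 & -100 \\
\text{Option 2 (A then B)} & +1000 - 900 = +100 & -100 + 101 = +1 \\
\text{Option 3 (neither)}  & 0 & 0
\end{array}
```

Only option 2 leaves both parties better off than option 3, so on this account it alone counts as a sequence of two right actions, even though option 1 produces far more total welfare (900 versus 101 units).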
Huemer also notes in his article that the deontologist shouldn’t say that it’s fine to cause suffering to avert more suffering just because the opposite was done ten years ago. What happened ten years ago shouldn’t be relevant, from a deontologist’s perspective.
Can one not steal that which was stolen?
The case that I gave involved a series of people stealing stones from each other and then stealing the stolen stones. But one reasonable reply is the following: it’s fine to steal things that someone else stole. This explains why it isn’t wrong.
To avoid this, we can imagine that each of the entities has 100^100 stones. Each time one steals, one steals a different (non-stolen) stone. Each stone reduces the suffering of whoever possesses it by some significant amount for as long as they possess it; in addition, the act of stealing a stone slightly reduces the thief’s suffering, and that reduction doesn’t wear off even if the stone is later stolen from them. Thus, one’s suffering at each moment is a function of how many stones one has stolen and how many one currently holds.
Thus, if you steal a stone, that reduces your suffering, and your suffering is also reduced for as long as you hold a stone.
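In symbols, one’s suffering might look something like this (the constants and function names are my own illustrative assumptions, not part of the original case):

```latex
% s_0: baseline suffering; T(t): number of stones one has stolen by time t;
% H(t): number of stones one currently holds at time t; a, b > 0 small constants.
s(t) = s_0 - a\,T(t) - b\,H(t)
```

Theft permanently decrements suffering through T(t), while possession reduces it only while the stone is counted in H(t), so a stolen-away stone removes the possession benefit but not the theft benefit.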
This also doesn’t get around the leg case. In the leg case, the aliens grab human legs 100^100 times per second, each time reducing their suffering. This isn’t the worst thing in the world; it’s an event that prevents quadrillions of aliens from experiencing agony more extreme than all the suffering that’s ever existed in human history. Yet if each of the aliens’ leg grabbings is bad, collectively they would be wrong enough to be the worst thing in the world. And the standard deontological theory obviously holds that it would be wrong to, for example, grab someone’s leg while they’re sleeping, without their knowledge or consent.
Finally, we can imagine another case that gets around this reply. Suppose that there are two aliens. Each of them can inflict one unit of suffering on the other in order to reduce their own suffering by 1.5 units. If they each do this 100^100 times, both of their sufferings will be reduced to zero, whereas currently each is suffering more than all humans in history ever have. It seems like they should do that, yet each act is clearly a rights violation.
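The arithmetic here checks out: in each reciprocal round, an alien loses 1 unit (inflicted by the other) but gains 1.5 units of relief, for a net reduction of 0.5. A quick sketch with an assumed starting value (scaled far down from 100^100 repetitions so it actually runs):

```python
# Two-alien case: each act reduces the actor's suffering by 1.5 units
# while inflicting 1 unit on the other. The starting value of 10.0 is
# an illustrative assumption, not from the original case.
a = b = 10.0  # current suffering of aliens A and B

for _ in range(20):   # one round = each alien acts once
    a -= 1.5          # A acts: A's suffering falls by 1.5...
    b += 1.0          # ...while B's rises by 1
    b -= 1.5          # B acts: B's suffering falls by 1.5...
    a += 1.0          # ...while A's rises by 1

print(a, b)  # net change is -0.5 each per round, so both reach 0.0
```

Every round is a Pareto improvement, which is exactly what makes the case awkward for the deontologist: each individual act is a rights violation for a trivial benefit.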
What if each step in the sequence is wrong but the end result is right
The idea here is roughly the following. Take the previous torture reduction case, where each alien reduces its own torture by a bit, at the cost of increasing the other alien’s torture by slightly less. One thing one might say is that each step of the way is wrong, but the sequence of steps is right. Two wrongs, in this case, make a right.
This has a lot of problems. For one, it runs afoul of a very plausible principle.
Two wrongs DON’T make a right: if it is wrong to do A and wrong to do B, conditional on doing A, it is wrong to do A and B.
Huemer defends this principle, and it’s really obvious. Doing two wrong things is wrong, actually. How could a series of actions that are each wrong, conditional on the others, somehow end up being right? This is like supposing that three actions that each lower the world’s temperature, conditional on the others, somehow jointly raise the world’s temperature—it’s just absurd!
Additionally, the following principle is plausible.
Hope: Third parties should hope that one doesn’t act wrongly.
But third parties should obviously hope that you take the series of actions that makes everyone better off. Thus, on this account, third parties should hope that you do wrong things. So both Two wrongs and Hope have to be false for this response to go through. But they’re obviously both true.
Here’s another plausible principle: if you can either do a right thing or a wrong thing, you should do the right one. That just seems to be part of what we mean by calling something wrong. But in this case, you’re literally instructed to do a series of wrong things. That can’t be right.
So, this response does not work.
But what if the aliens consent to a deal where they all agree to do this so they’re better off
They don’t by stipulation!
What if the aliens use this as a weird form of communication because they can’t talk so they use this to coordinate
They don’t by stipulation!
Conclusion
Well, that’s all for now, folks. I think this argument poses a significant challenge to deontology, joining a cadre of others.