1 Explaining the problem
Deontology faces puzzles in which you can violate someone's rights to prevent yourself from violating more rights. Michael Huemer, in his debate with Richard Chappell, gave the following example, which I think came from Judith Jarvis Thomson. Suppose a person can kill one person and harvest their organs to save five. However, the five are people whom they've previously poisoned, so if the five die, they'll have violated five rights.
However, most deontologists still have the intuition that, in this case, you shouldn't kill the one person to harvest their organs. The explanation is that, from a deontological perspective, what matters is not making sure you don't violate rights simpliciter; it's making sure you don't violate rights now. That's why it would be wrong to kill one person to prevent your future self from violating several rights.
This also allows the deontologist to make sense of cases like the following—one I’ve given before.
Suppose a person plants a bomb in a park for malevolent reasons. Then they realize they've done something immoral, and they decide to remove the bomb from the park. However, while they're doing so, they discover two other bombs planted by other people. They can either defuse their own bomb or the other two. Each bomb will kill one person.
It seems very obvious that they shouldn't defuse their own bomb—they should defuse the two others instead. But this is troubling—on the deontologist's account, it's hard to make sense of. In choosing between defusing their own bomb and the two others, they are directly choosing between a world in which they end up violating one person's rights and a world in which two other people's rights are violated by third parties—and a view on which what matters is minimizing your own rights violations seems to say: defuse your own bomb.
If the deontologist thinks that you don't have duties not to violate rights simpliciter, but merely duties not to violate rights now, then they can make sense of this. If you don't defuse your own bomb, your past act will turn out to have violated rights, but you don't violate any rights now.
But I don’t think that making deontology time relative is successful. Here, I’ll lay out a few reasons for that.
2 Oh come on, that can’t be what really matters
If we think about what really matters, it just seems so obvious—a near-unassailable moral datum—that this can't be it. Like, is it really significant whether you violate rights now or later, such that you have special duties not to violate rights now? No layperson would ever come up with this theory; it's just a gerrymandered account given to rescue one's moral intuitions. I mean, come on!
Maybe you think this is equivocation. When I’m thinking about what really matters, that sounds like an axiological notion—that sounds like I’m saying something about the value of the world. But the deontologist can agree that axiologically it doesn’t matter, while claiming that deontically it does matter.
But I'm not doing that. It really seems like deontically it doesn't matter either. Whether you violate rights now or later just isn't important, and when an act happens shouldn't bear on your current decision-making.
If someone had a moral theory according to which whether a person was named George determined the wrongness of killing them, the correct response would be: "it doesn't really matter whether one's name is George. This theory is picking up on something that doesn't really matter. And that isn't just true axiologically—as a deontic matter, whether someone's name is George shouldn't bear on what you should do." The same is true here.
Deontological reasons should latch on to things that matter—that are important—as Richard has noted. But it doesn't seem as though this is important. The time at which an act occurs doesn't really matter!
3 What should you hope for at an earlier point?
Suppose I predict that a year from now, I'll be given the choice of violating one person's rights to prevent a still-more-future me from violating ten people's rights. What should I hope happens? A natural answer is that I should hope I violate the one person's rights. After all, at this point, I'm deciding between a world in which future me violates one right and one in which a more distant future me violates ten. But this means I should hope that I do the wrong thing rather than the right thing.
This also implies a paradox. Compare three worlds: in the first, a year from now you violate one person's rights in order to prevent yourself from gratuitously violating ten people's rights two years from now; in the second, a year from now you gratuitously violate one person's rights; in the third, two years from now you gratuitously violate ten people's rights. The first is better than the second, and the second is better than the third. Thus, by transitivity, you should prefer the first world—the one where you commit the rights violation that, on this account, is wrong.
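The ordering can be sketched schematically; the labels A, B, C and the preference symbol are my notation, not part of the original case:

```latex
% A: at t+1yr you violate one right, in order to prevent C
% B: at t+1yr you commit one gratuitous rights violation
% C: at t+2yr you commit ten gratuitous rights violations
A \succ B, \qquad B \succ C
\quad\Longrightarrow\quad A \succ C \quad \text{(by transitivity of } \succ \text{)}
```

The paradox is that A is the world in which you do the act this account calls wrong, yet it tops the ordering.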
But this is nuts! Present you shouldn't think "hmm, my highest aspiration is that I do something immoral a year from now." You should want yourself to do the right thing, but on this account, you shouldn't. All the worse for the account.
Btw, you can also get a version of Richard's paradox by making the things you're comparing your own future rights violations rather than other people's.
4 This implies that earlier you have a reason to enable yourself at a future point to do a wrong thing
Remember the deontologist's account of why you should defuse the other bombs rather than your own, and why you shouldn't violate rights once to prevent multiple future rights violations: you should regard your own future rights violations just as you would regard rights violations by a third party.
But this gets really weird results. Suppose you know that in an hour, two things will be the case:
1. You'll be convinced of utilitarianism.
2. You'll have the ability to kill someone and harvest their organs to save five if and only if you take some action right now—in this case, putting a scalpel in your pocket.
Should you put the scalpel in your pocket? Well, there's obviously some reason to do so—after all, you have some prima facie duty of beneficence. But on this account you have no reason not to—when deciding what to do now, future rights violations that you'll cause don't factor into your deliberation as non-consequentialist reasons, because your reasons are time-relative.
But this is crazy. If it’s wrong to harvest the person’s organs, it’s obviously wrong to take actions right now specifically to facilitate your future harvesting of a person’s organs. You should not take actions with the sole purpose of allowing you to do wrong things in the future, obviously!
5 There isn't a precise fact of the matter about when some rights violation takes place
On this account, you have a special duty to make sure you don’t violate rights now. But I don’t think that there’s a fact about whether something is a rights violation now or in the future. For example, consider the following case.
Each minute, you press a button. Once you've pressed ten buttons, someone dies. When did you violate their rights? Was it when you pressed the tenth button? Well, suppose that after the ten presses you have another ten minutes, and if you press a further button during those ten minutes, the person doesn't die. In that case, when are their rights violated?
There are other cases that can be given, but I think the intuition is pretty clear. If some sequence of acts causes a rights violation, it's not clear that there's any specific action one can pinpoint as the one that caused the rights violation—it was the sequence as a whole that caused it.
You might say that the rights violation happens at the moment when the rights-violating event actually occurs (e.g., the time when the person is squished by the trolley). But then this account totally fails—if rights violations are time-relative, then you have no reason not to push the person in front of the trolley, because you won't be violating his rights now—his rights will be violated a few seconds later, when he's squished by the trolley.
6 Okay, so maybe there are time-relative reasons as well as agent-relative reasons; the time-relative reasons are just somewhat stronger, but both are stronger than your reasons to prevent other, extraneous deaths
As the title of this section suggests, you might think that there are both time-relative reasons and agent-relative reasons, with the time-relative reasons being stronger—thus, one still oughtn't kill one person now to prevent several of one's own future killings.
But this solution runs into perhaps even more problems. For one, it runs into literally all of the objections described in the previous sections. It also implies that if you can defuse 10 of your own bombs or 11 of other people's bombs, then—assuming your reason to prevent your own rights violations is more than 11/10ths as strong as your reason to prevent other people's—you should defuse your own 10 bombs. But this is clearly wrong!
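To spell out the arithmetic: writing \(w_{\text{own}}\) and \(w_{\text{other}}\) (my labels, not the view's) for the strength of your reason to prevent one of your own versus someone else's rights violations, the view delivers the bad verdict exactly when

```latex
% Defusing your own 10 bombs outweighs defusing others' 11 precisely when:
10\, w_{\text{own}} > 11\, w_{\text{other}}
\;\Longleftrightarrow\;
\frac{w_{\text{own}}}{w_{\text{other}}} > \frac{11}{10}
```

So any view on which agent-relative reasons are even slightly more than 11/10ths as strong gets the wrong answer in this case.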
7 Okay, so the deontologist can't have time-relative duties. What's the big deal?
Well, for one, this gets us deontological prisoners’ dilemmas.
Second, it means they still face challenges in the bombs-in-the-park case. Though, all in all, despite committing them to several wildly unintuitive results, time-relative deontology still seems like the deontologist's best option!