6 Comments

The Extra Choice principle is straightforwardly incompatible with agent-relative duties or prerogatives, right? You don't need the 100 rings to show this. Just consider any case where the "extra choice" bestowed has two features: (i) it better satisfies the other agent's agent-relative duties, but (ii) it is impartially (or agent-neutrally) worse.

E.g. imagine giving a parent an option to save their child from a fire, at the cost of flooding a basement and accidentally killing two other children. Suppose it's permissible (or even ideal, due to special obligations) for the parent to save their child. Still, others probably shouldn't give them this option!

Seems closely related to the 'Hope Objection': https://www.utilitarianism.net/arguments-for-utilitarianism#the-hope-objection

That said, I don't know that deontologists would be that bothered by rejecting the Extra Choice principle. You write, "If the extra option is worse than the existing options then they won’t take it." But this just assumes agent-neutral consequentialism. It could result in a worse outcome, yet be better *as an option for that agent*. (As in the parenting case.)

Returning to 100 rings: I'd expect the deontologist to respond as follows. First, work out the threshold for moderate deontology, beyond which one should just save more lives. Let's say >20 lives warrants killing one. Then, by backwards induction, work out which ring corresponds to passing this threshold. E.g. the last ring of deontologists should choose (2), letting five be killed by others rather than killing 1 themselves. But the prior ring (and all others before them) should be disposed to choose (1), each killing 1 rather than letting 25 (or more) be killed by others.
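
To make that concrete, here is a minimal sketch of the induction in Python (assuming the hypothetical threshold of 20 and the scenario's 5x fan-out per ring, and counting for option (2) the deaths that follow if the buck travels all the way out to the murderers):

    THRESHOLD = 20   # hypothetical: killing 1 is warranted only to prevent >20 deaths
    RINGS = 100      # ring 100 = psycho murderers; rings 1-99 = deontologists

    # Each pass hands the choice to 5 people in the next ring, so a buck passed
    # from ring k all the way out ends in 5**(RINGS - k) murders.
    for k in range(97, RINGS):
        deaths_if_passed = 5 ** (RINGS - k)
        choice = 1 if deaths_if_passed > THRESHOLD else 2
        print(f"ring {k}: passing leads to {deaths_if_passed} deaths -> option ({choice})")

    # ring 97: passing leads to 125 deaths -> option (1)
    # ring 98: passing leads to 25 deaths -> option (1)
    # ring 99: passing leads to 5 deaths -> option (2)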

I agree that it is related to the hope objection, but I think it goes slightly further. Not only does it show that you should hope people act wrongly, it shows that you should actively deprive perfectly moral people of choices. But this is a very strange result: thinking "oh no, they're about to do the right thing, someone stop them!" It's also strange to think that you could have decisive reason to take an action, but because that action would give more options to people who never do wrong, you shouldn't take it. If we imagine a god who always does right, it just seems bizarre that one would sometimes be morally required to take away his options.

The scenario is logically incoherent (just like utilitarianism).

Mainly because the argument relies on this line:

"The 100th circle is comprised of psycho murderers who will take option one if the buck doesn’t stop reaching them."

but what you are attempting to prove is this line:

"a cluster of perfectly moral people would bring about 1.5777218 x 10^69 murders"

when you have JUST stated (in a poor attempt to get out of the objection I articulated on your third blog post) that the majority of this circle is, in fact, not composed of perfectly moral people...
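
(For reference, the quoted figure is just 5^99: assuming the scenario's 5x fan-out per ring, the 100th circle holds 5^99 people, each of whom commits one murder if the buck reaches them. A quick check in Python:)

    murders = 5 ** 99        # population of the 100th circle, one murder each
    print(f"{murders:.7e}")  # 1.5777218e+69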

It's only the first 99 circles that are comprised of perfectly moral people. The 100th circle is comprised of psycho murderers.

Well, yes. I fully agree that having 1.5777218 x 10^69 psycho murderers will generally produce negative results. What I don’t get is why that negates deontology.

If the argument is that a deontologist in the middle would somehow “let” the murderers do their work, I would say two things.

A. The deontologist would know what the first 99 circles would do, and therefore that option two kills lots of people, and so would not take it. I’ll bite the bullet that giving enough deontologists additional choices can make the world a worse place, because I don’t care much about measuring world-states.

B. The scenario is still incoherent because the options being given are not the same. As the circles progress outward, the number of rings between a chooser and the ring of evil people changes, meaning that you really aren’t giving them the same choice.

I'd recommend reading the argument more carefully, because these responses seem to rest on confusions. It negates deontology because of the problematic explosiveness of the principle that you shouldn't do one instance of a bad thing to prevent two instances of that same bad thing. If you're willing to accept that giving perfectly moral people extra options can make the world worse, that's fine, but that's quite a bullet to bite, and the principle you'd be rejecting is supported by other intuitive arguments given in the article. Your second point is wrong: the options are the same; the 100th circle just has a different predisposition. Also, even if they weren't the same, we could just stipulate that the options were the same up until the 100th circle, at which point they were different. The options being the same was just intended to explain the scenario, but it is not necessary for the force of the argument.
