"Suppose if you kill one person now you’ll kill A, while if you wait, you’ll kill A,B,C,D, and E. In this case, A will be killed regardless, so it’s just a question of whether B,C,D, and E all get killed. There is literally no benefit to anyone from not killing A. Thus, this option is untenable."
Just to reiterate Dominik's point, your argument here is basically, "Deontologists don't do the thing that maximizes good consequences, therefore deontology is false."
This is... unlikely to rationally persuade a deontologist.
Thanks. Though I agree with Matthew that the first horn of the "dilemma" seems pretty untenable.
But my point is that most deontologists should just happily embrace the second horn of the "dilemma", while denying that it therefore follows that you should prevent someone else from killing 5 people by murdering them (and this is in fact what most deontologists who have written about this recommend)
I would strongly disagree with that notion, but, even if true, you can believe in free will and predict future things you'll do. For example, I predict I'll be at least slightly mean to at least five future people in my life, and so I'd maybe be mean to one now to prevent being mean to five future people.
I think it was Shelley Kagan who said something like: This is not an OBJECTION to my theory, this just IS my theory
This is exactly what this feels like. Obviously any deontologist would respond that premise 2b ("Therefore, you should kill one to prevent six killings done by other people") is false because reasons are agent-relative, i.e. what matters is that YOU don't kill.
This whole argument just feels like putting "I don't like deontology" into different words - I'm pretty convinced that no one who wasn't already sceptical of deontology has any reason whatsoever to be convinced by it.
Whether this argument will have force will depend on the moral intuitions of the deontologist. To me it seems pretty unintuitive, and given that you and the water line disagreed about which horn to take, there doesn't seem to be universal agreement about which one is more unintuitive.
I guess one of the reasons why people might tend towards the first horn is because it's hard to conceive of a situation where you know with 100% certainty that you will literally be unable to refrain from murdering 5 people. If you accept that premise, then it seems obvious to me that the second horn is preferable for any non-absolutist deontologist
"Suppose if you kill one person now you’ll kill A, while if you wait, you’ll kill A,B,C,D, and E. In this case, A will be killed regardless, so it’s just a question of whether B,C,D, and E all get killed. There is literally no benefit to anyone from not killing A. Thus, this option is untenable."
Just to reiterate Dominik's point, your argument here is basically, "Deontologists don't do the thing that maximizes good consequences, therefore deontology is false."
This is... unlikely to rationally persuade a deontologist.
Thanks. Though I agree with Matthew that the first horn of the "dilemma" seems pretty untenable.
But my point is that most deontologists should just happily embrace the second horn of the "dilemma", while denying that it therefore follows that you should prevent someone else from killing 5 people by murdering them (and this is in fact what most deontologists who have written about this recommend).
> you don’t kill one person now, later in the year you’ll kill five people
In my view, if free will is false then ethics is moot, and we should all kick back and relax.
I would strongly disagree with that notion, but, even if true, you can believe in free will and predict future things you'll do. For example, I predict I'll be at least slightly mean to at least five future people in my life, and so I'd maybe be mean to one now to prevent being mean to five future people.
No way you can know that being mean to someone now will stop you from being mean to 5 in the future.
Also, why would ethics matter if free will wasn’t real? Seems pretty pointless to me.
Imagine a world of sentient robots that run on code and can suffer. Their suffering would be bad, despite their lack of free will.
It was a hypothetical...
I think it was Shelly Kagan who said something like: "This is not an OBJECTION to my theory, this just IS my theory."
This is exactly what this feels like. Obviously any deontologist would respond that premise 2b ("Therefore, you should kill one to prevent six killings done by other people") is false because reasons are agent-relative, i.e. what matters is that YOU don't kill.
This whole argument just feels like putting "I don't like deontology" into different words. I'm pretty convinced that no one who wasn't already sceptical of deontology has any reason whatsoever to be convinced by it.
Whether this argument will have force will depend on the moral intuitions of the deontologist. To me it seems pretty unintuitive, and given that you and the water line disagreed about which horn to take, there doesn't seem to be universal agreement about which one is more unintuitive.
I guess one reason people might tend towards the first horn is that it's hard to conceive of a situation where you know with 100% certainty that you will literally be unable to refrain from murdering 5 people. If you accept that premise, then it seems obvious to me that the second horn is preferable for any non-absolutist deontologist.