Under utilitarianism, it is good for a gang of 1,000 extremely sadistic people to kidnap and torture an innocent person, so long as their combined pleasure outweighs the victim's suffering. I'd like to see your defense of this.
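
To put numbers on it (all invented; a minimal sketch of the total-utility arithmetic this objection turns on):

```python
# Toy utility arithmetic, all numbers invented: a crude total-utility
# view endorses the act exactly when the sadists' summed pleasure
# exceeds the victim's suffering.
sadists = 1000
pleasure_each = 10       # hypothetical utility gain per sadist
victim_suffering = 5000  # hypothetical utility loss for the victim

net = sadists * pleasure_each - victim_suffering
print(net > 0)  # True: on these numbers, the total view calls the act good
```

The verdict flips or holds depending entirely on the made-up magnitudes, which is exactly what the objection presses on.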

This might be a misunderstanding on my part, but there seems to be an inconsistency in argument 3: your application of Deflection assumes that less harm is done if John suffers the prick instead of Jane, while your application of Combination seems to assume that the effect is the same no matter who gets pricked. I may be misreading Combination, though.

On the torture vs. dust speck point, I think it also strengthens your case to consider situations involving risk. If torture is worse than any number of irritating dust specks, is it okay to bring about a 0.0000001 probability of torture to prevent 100^100 dust specks? If not, at what point does the probability of torture become low enough that acting against the dust specks is okay? Any cutoff seems arbitrary and, as Michael Huemer points out in “Lexical Priority and the Problem of Risk,” seems to lead to paradoxes.
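
To illustrate the kind of paradox Huemer presses (all numbers made up): fix any permissibility cutoff on the probability of torture, and many individually permissible small risks can combine into a risk far above that cutoff.

```python
# Made-up numbers illustrating the risk-aggregation problem for a
# lexical view that permits an action only if its probability of
# causing torture falls below some cutoff p_star.
p_star = 1e-6   # hypothetical permissibility cutoff
p_each = 9e-7   # each action's torture risk, just under the cutoff
n = 10_000      # independent, individually permissible actions

# Probability that at least one of the n actions causes torture:
p_any = 1 - (1 - p_each) ** n
print(f"{p_any:.4f}")  # ~0.0090, thousands of times above p_star
```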

Should read "if the threat were deflected from Jane to John by Deflection."

If we really would agree to utilitarianism behind the veil of ignorance, we should expect people to make organ-harvesting contracts.

Since we don’t know whether we will need an organ when our health deteriorates, we should agree to have our organs harvested in exchange for a higher probability of receiving an organ ourselves, assuming that we actually are utilitarians.

The fact that we don’t see these contracts (or even anyone wanting to make them) shows that we aren’t utilitarians behind the veil.
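
To spell out the back-of-the-envelope trade (all numbers hypothetical): behind the veil, signing such a contract would look like a straightforward risk swap.

```python
# All numbers hypothetical: an organ-harvesting contract trades a small
# chance of being harvested for a much better chance of receiving an
# organ when you need one.
p_need    = 0.05  # chance of someday needing a transplant
p_get_out = 0.30  # chance of getting an organ outside the contract pool
p_get_in  = 0.90  # chance of getting an organ inside the contract pool
p_harvest = 0.01  # chance of being harvested for others under the contract

risk_without = p_need * (1 - p_get_out)             # 0.035
risk_with    = p_need * (1 - p_get_in) + p_harvest  # 0.015
print(risk_with < risk_without)  # True: signing wins on these numbers
```

With anything like these numbers, expected-value reasoners should sign, which makes the absence of such contracts telling.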
