14 Comments

Under utilitarianism, it is good for a gang of 1000 extremely sadistic people to kidnap and torture an innocent person. I'd like to see your defense of this.

author

That's just false. Torture ruins decades of life and inflicts well over 1,000 times as much suffering as the torturers would gain in pleasure. In addition, even if watching torture is pleasurable, it makes the torturers more likely to commit future crimes and to torture again.
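
To make the magnitudes at issue explicit, here is a rough back-of-the-envelope sketch; every number in it is an invented illustration, not an estimate of anything real.

```python
# Back-of-the-envelope utilitarian arithmetic for the torture-gang case.
# Every magnitude below is an invented illustration, not an estimate.

def net_utility(n_sadists, pleasure_per_sadist, victim_suffering):
    """Total pleasure to the gang minus the victim's suffering (arbitrary units)."""
    return n_sadists * pleasure_per_sadist - victim_suffering

# If torture inflicts well over 1,000 times the suffering that each spectator
# gains in pleasure, a 1,000-person gang still produces a large net harm:
print(net_utility(n_sadists=1_000, pleasure_per_sadist=1, victim_suffering=1_000_000))    # -999000

# Scaling the gang up only flips the sign if you also assume each member's
# pleasure is enormous relative to the victim's suffering:
print(net_utility(n_sadists=10_000, pleasure_per_sadist=200, victim_suffering=1_000_000)) # 1000000
```

The disagreement in the replies below is entirely about which of these magnitudes is realistic.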


I think you are overestimating the harm of torture and underestimating the pleasure that sadistic people can derive from torturing others. But if 1,000 is not enough, make it 10,000 or more.

And the objection that it would make them more likely to torture again in the future does not work, since we are stipulating a torture gang large enough for their sadistic pleasure to outweigh the victim's suffering. So any future torture sessions they enjoy will also be net-beneficial.

author

But there are second-order effects that must be taken into account: torturing now makes them more likely to torture in the future. As for the next objection, you are assuming that if they tortured people in the future, it would benefit the potentially hundreds of millions of people it would need to benefit to make up for the harm caused, which is false.


It might be a misunderstanding on my part, but it seems like there’s an inconsistency in argument 3. Your application of Deflection assumes that less harm is done if John suffers the prick instead of Jane. But your application of Combination seems to assume that the effect is the same no matter who gets pricked. I might be misreading your application of Combination, though.

On the torture vs. dust speck point, I think it also helps your point to consider cases involving risk. If torture is worse than any number of irritating dust specks, is it okay to bring about a 0.0000001 probability of torture to prevent 100^100 dust specks? If not, at what point is the probability of torture low enough that it becomes okay to take action against the dust specks? Any cutoff seems arbitrary and, as Michael Huemer points out in “Lexical Priority and the Problem of Risk,” seems to lead to paradoxes.
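
To put rough numbers on the risk version (all of the disutility figures below are invented purely for illustration), the aggregating and lexical views come apart like this:

```python
# Toy expected-disutility comparison for the torture-vs-dust-specks risk case.
# All disutility magnitudes here are made-up illustrations.

TORTURE_DISUTILITY = 10**12    # assumed badness of one torture (arbitrary units)
SPECK_DISUTILITY = 10**-6      # assumed badness of one dust speck (same units)
N_SPECKS = 100**100            # dust specks prevented by acting

certain_harm_of_specks = N_SPECKS * SPECK_DISUTILITY

for p_torture in (1e-1, 1e-4, 1e-7):
    expected_harm_of_acting = p_torture * TORTURE_DISUTILITY
    # On a simple aggregating view, acting is better whenever the expected harm
    # of the risked torture is smaller than the certain harm of the specks.
    print(p_torture, expected_harm_of_acting < certain_harm_of_specks)

# A lexical view can't make this comparison at all: it has to name some
# probability below which the torture risk stops mattering, and any such
# cutoff looks arbitrary.
```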

author

Re: Deflection: any action that is a utilitarian improvement could be made into a Pareto improvement and then have the benefit deflected. This doesn't entail that actions that are neutral with regard to utility are morally neutral, but it does entail that actions that are positive with regard to utility are good.

Agree with your point about lexical priority and risk.


Should read "if the threat were deflected from Jane to John by Deflection."


If we would agree to utilitarianism behind the veil of ignorance, we would expect people to make organ-harvesting contracts.

We don’t know whether we will need an organ in the future when our health deteriorates, so, assuming that we are actually utilitarians, we should agree to have our organs harvested in exchange for a higher probability of receiving an organ ourselves.

The fact that we don’t see these contracts (or even anyone wanting to make these contracts) shows that we aren’t utilitarians behind the veil.
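
The expected-value comparison behind this point would look roughly like the following; the probabilities and magnitudes are invented for illustration, not real transplant statistics.

```python
# Toy expected-utility check for signing an organ-harvesting pact behind the veil.
# Every number here is an invented illustration, not real transplant data.

p_need_organ = 0.05       # assumed chance you will someday need a transplant
p_harvested = 0.001       # assumed chance the pact ever selects you as the donor
gain_if_received = 30     # assumed years of good life gained if you get an organ
loss_if_harvested = 40    # assumed years of good life lost if you are harvested

expected_value_of_signing = (p_need_organ * gain_if_received
                             - p_harvested * loss_if_harvested)
print(expected_value_of_signing)  # ~1.46: positive under these assumptions, so a
                                  # pure expected-utility maximizer would sign
```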

author

Well for one, they're illegal. For another, most people would be worried about doing that for non-rational reasons.


Which is why I hedged with "wanting to," reflecting the fact that they're unwanted as well (although we still see plenty of illegal agreements, and we should expect to see these too if they were desirable enough).

But we can control for legality. People are free to make purely utilitarian insurance contracts with unlimited personal liability, where they subject their own wealth to the utilitarian calculus of the contracting group. So if a member met their burden of proof by showing that taking another member's wealth would increase collective happiness significantly enough, they would have a right to that member's wealth. Why don't we see any of those contracts, if we would agree to them behind a veil?

And are you saying that people aren't rational behind the veil of ignorance? If so, couldn't the rational utilitarians arbitrage the sucker non-utilitarians' irrationality for their own benefit? Or do you define rationality as pure impartiality?

author

People would be rational from behind the veil by stipulation. In the real world, they are not.


People buy insurance out of uncertainty about the future, meaning insurance contracts are made behind a kind of veil. So why aren't insurance contracts utilitarian?


Perhaps because people aren’t rational and suspect others won’t be either (for example, calculating utility might be too difficult, so it is easier and more reliable in many cases to have instrumental, fictionalist side constraints).
