Discussion about this post

Jay M

> By utilitarianism, I mean that, one should take all actions with positive utility.

This is a strange definition of utilitarianism, and it is not a prescription that the vast majority of people - including Utilitarians - would accept. Imagine we live in a world where everyone is experiencing +10 utility. You have two options, A and B, before you. If you do A, then everyone's utility will increase to +20. If you do B, then everyone's utility will increase to +30. There are no other actions, and these are the only effects of the actions (there are no important non-utility effects). Action A will have positive utility, since it results in a world with more utility than the status quo, but you obviously should not do it; you should do B.

> Deflection: If one can redirect a threat that they’ve created, such that it will cause less harm, they ought to.

There are counterintuitive implications of Deflection. Imagine someone created a harm directed at themselves, either intentionally or through reckless negligence. If they do nothing, they will cause themselves -50 utility. But if they deflect the harm to an innocent third party, they will cause that third party -40 utility. Most would say the agent should not deflect; instead, they should suffer the consequences of their own actions.

More generally, if the harm was originally directed at someone who is responsible for the harm in some way (e.g. they created the harm, they consented to the harm, etc.), then it does not seem intuitive that one should deflect.

> If something increases one’s welfare, they’d rationally consent to it.

This idea is essential to your defense of Pareto. Is this an analytic truth or a synthetic truth? If it's an analytic truth, then it's trivial. If it's a synthetic truth, then you need to articulate what you mean by these concepts. I take it that to say that an action promotes one's welfare is to say that the action is "good", and to say that something is "good" is just to say that it is something that is rational to pursue. But on this view, to say that one would rationally pursue their welfare would be a trivial, analytic truth.

So you must be using different meanings of "welfare" and "rationality", such that the concepts expressed by these terms are not reducible to each other. What, in fact, do you mean by these terms?

Similar questions can be asked about the term "should". You make claims about the relation between what we "should" do, what promotes "welfare", and what it is "rational" to do. Presumably, you think these are synthetic truths. But then it's not clear what you even mean by the terms. Are they supposed to refer to distinct normative primitives that we all have clear and shared intuitions about upon reflection?

Bolyai-Lobachevsky

As a non-utilitarian, I find Deflection objectionable. Someone might deserve to bear the harm, even if there's more of it overall.

Suppose that I release a fatally poisonous gas in a room containing me and five other people, and there are five gas masks available. I am closer to the masks than anyone else and can easily secure one for myself if I so desire. However, due to my physiology, I am 0.00001% more likely to die from the gas than the next most-likely person. Should I take a gas mask or not? By Deflection, taking a mask for myself redirects my risk of death onto someone slightly less likely to die from the gas, which slightly reduces the expected harm, so I should do so.

I have the intuitive reaction that for sufficiently small additional risk, the perpetrator ought to bear the (magnified) risk rather than his intended victims.

