
> By utilitarianism, I mean that, one should take all actions with positive utility.

This is a strange definition of utilitarianism, and it is not a prescription that the vast majority of people - including utilitarians - would accept. Imagine we live in a world where everyone is experiencing +10 utility. You have two options, A and B, before you. If you do A, then everyone's utility will increase to +20. If you do B, then everyone's utility will increase to +30. These are the only available actions, and these are their only effects (there are no important non-utility effects). Action A will have positive utility, since it results in a world with more utility, but you obviously should not do it - you should do B instead.
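A minimal sketch of the gap (using the numbers from the example above; the "do nothing" baseline and the two-action menu are just my framing of the setup): the quoted definition endorses every action whose utility delta is positive, while standard maximizing utilitarianism singles out the best one.

```python
# Contrast "take every positive-utility action" with "take the best action".
# Utilities are the per-person totals from the example: status quo +10,
# action A yields +20, action B yields +30.
status_quo = 10
outcomes = {"A": 20, "B": 30}

# The quoted definition: any action with a positive utility delta is endorsed.
positive_utility_actions = [a for a, u in outcomes.items() if u - status_quo > 0]
print(positive_utility_actions)  # ['A', 'B'] - A is endorsed despite B being better

# Standard maximizing utilitarianism: only the best available action.
best_action = max(outcomes, key=outcomes.get)
print(best_action)  # 'B'
```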

> Deflection: If one can redirect a threat that they’ve created, such that it will cause less harm, they ought to.

There are counterintuitive implications of Deflection. Imagine someone created a harm that was directed at themselves, either intentionally or through reckless negligence. If they do nothing, they will cause themselves -50 utility. But if they deflect the harm to an innocent third party, they will cause that third party only -40 utility. Most would say the agent should not deflect; instead, they should suffer the consequences of their own actions.

More generally, if the harm was originally directed at someone who is responsible for it in some way (e.g. they created the harm, or they consented to it), then it does not seem intuitive that one should deflect.

> If something increases one’s welfare, they’d rationally consent to it.

This idea is essential to your defense of Pareto. Is it an analytic truth or a synthetic truth? If it's an analytic truth, then it's trivial. If it's a synthetic truth, then you need to articulate what you mean by these concepts. I take it that to say that an action promotes one's welfare is to say that the action is "good", and to say that something is "good" is just to say that it is something that is rational to pursue. But on this view, to say that one would rationally pursue their welfare would be a trivial, analytic truth.

So you must be using different meanings of "welfare" and "rationality", such that the concepts expressed by these terms are not reducible to each other. What, in fact, do you mean by these terms?

Similar questions can be asked about the term "should". You make claims about the relations between what we "should" do, what promotes "welfare", and what it is "rational" to do. Presumably, you think these are synthetic truths. But then it's not clear what you even mean by the terms. Are they supposed to refer to distinct normative primitives that we all have clear and shared intuitions about upon reflection?


As a non-utilitarian, I find Deflection objectionable. Someone might deserve to bear the harm, even if keeping it on them means more harm overall.

Suppose that I release a fatally poisonous gas in a room containing me and five other people, and there are five gas masks available. I am closer to the masks than anyone else and can easily secure one for myself if I so desire. However, due to my physiology, I am 0.00001% more likely to die from the gas than the next most likely person to die. Should I take a gas mask or not? By taking a mask, I redirect my risk of death onto someone else who is (ever so slightly) less likely to die from the gas, so the redirected threat causes less expected harm - and Deflection therefore says I should take one.
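To make the expected-harm arithmetic explicit (a sketch; the 0.9 baseline death probability is a made-up number - only the 0.00001% gap comes from the example):

```python
# Expected fatalities under each choice: one of the six people goes unmasked.
# p_next is an assumed baseline; the example only fixes the 0.00001% gap.
p_next = 0.9               # chance the next most vulnerable person dies unmasked (assumed)
p_me = p_next + 0.0000001  # I am 0.00001 percentage points more likely to die

expected_deaths_if_i_take_mask = p_next  # the threat lands on someone else
expected_deaths_if_i_abstain = p_me      # I bear the threat I created

# Read as a requirement, Deflection favors the tiny reduction in expected harm:
print(expected_deaths_if_i_take_mask < expected_deaths_if_i_abstain)  # True
```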

My intuitive reaction is that, for a sufficiently small additional risk, the perpetrator ought to bear the (slightly magnified) risk rather than his intended victims.


I'm curious what your take is on the following thought experiment. Imagine you are placed in a room where you can stay alive indefinitely, but while inside the room your utility is always 0 - it's a completely neutral experience. Inside the room there is a number pad on which you can press digits, like on a calculator. The only way to exit the room is to input some number on the pad and then press the enter button. The number you enter represents the utility you will experience in your life after exiting the room. So, if you entered 1, you would be taken out of the room and experience a life of +1 utility.

Presumably, you would want to enter some extremely large number by repeatedly pressing the 9 digit, i.e. 9999999999999..... But the hard question is when to stop. On the one hand, it seems irrational to ever stop - after all, at any point you get to, pressing 9 just ONE more time will increase the utility you experience by an entire *order of magnitude*. So you may as well keep pressing 9 over and over.

But this leads to a paradox: if you never hit the enter button, you will never leave the room, and your potential utility will remain unrealized - you'll stay stuck in the room at utility 0. So, what's a utilitarian to do in this situation?
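One way to see the structure of the paradox (a sketch; treating the payoff as a function of the number of presses is my gloss on the setup): the value of stopping is strictly increasing in the number of presses, so no stopping point is optimal, yet the limiting policy of never stopping pays 0.

```python
# Utility from pressing enter after typing n nines: the n-digit number 99...9,
# i.e. 10**n - 1. It is strictly increasing, so every stopping point is beaten
# by waiting one more press - yet the policy of never stopping yields 0 forever.
def value_of_stopping(n: int) -> int:
    return 10**n - 1

for n in range(1, 6):
    print(n, value_of_stopping(n))  # 9, 99, 999, 9999, 99999

# One more press always dominates stopping now:
print(all(value_of_stopping(n + 1) > value_of_stopping(n) for n in range(100)))  # True
```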


I wonder if we can use a weaker premise. It's more plausible to say that Deflection is always permitted (rather than required). Then we get: all actions with positive utility are permitted.

That's enough for utilitarians, since all rival theories are thus ruled out lol


Still don't get why Deflection is uncontroversial. If Alice and Bob each have +10 utility and I can take an action that simultaneously adds +50 utility and -49 utility to Alice while leaving Bob untouched, I should take it. But I plausibly shouldn't take an action that gives Alice +50 utility and Bob -48 utility, even though its total is higher.
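Spelling out the totals (the numbers come from the comment; summing utility changes across persons is exactly the aggregation step being questioned):

```python
# Per-person utility changes in the two cases: (delta_alice, delta_bob).
case_one = (+50 - 49, 0)  # both effects fall on Alice; Bob is untouched
case_two = (+50, -48)     # the benefit goes to Alice, the harm to Bob

print(sum(case_one))  # 1: net positive, and intuitively fine to take
print(sum(case_two))  # 2: a larger total, yet plausibly impermissible,
                      # since the -48 falls on someone who doesn't get the +50
```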
