11 Comments

> By utilitarianism, I mean that, one should take all actions with positive utility.

This is a strange definition of utilitarianism, and it is not a prescription that the vast majority of people, including utilitarians, would accept. Imagine we live in a world where everyone is experiencing +10 utility. You have two options, A and B. If you do A, everyone's utility will increase to +20. If you do B, everyone's utility will increase to +30. There are no other actions, and these are the only effects of the two actions (there are no important non-utility effects). Action A has positive utility, since it results in a world with more utility than the status quo, but you obviously should not do it; you should do B.
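To make the objection concrete, here is a minimal sketch in Python. The numbers are just the ones stipulated above, and reading "positive utility" as "leaves the world better than the status quo" is my assumption, not anything from the post:

```python
# Hypothetical numbers from the example above.
baseline = 10                   # everyone currently sits at +10 utility
outcomes = {"A": 20, "B": 30}   # everyone's utility after each action

def has_positive_utility(action: str) -> bool:
    """The quoted criterion: does the action leave everyone better off
    than the status quo?"""
    return outcomes[action] > baseline

print({a: has_positive_utility(a) for a in outcomes})
# -> {'A': True, 'B': True}
# Both actions pass the test, so "take all actions with positive utility"
# endorses A no less than B, even though taking A forecloses the strictly
# better option B.
```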

> Deflection: If one can redirect a threat that they’ve created, such that it will cause less harm, they ought to.

There are counterintuitive implications of Deflection. Imagine someone created a harm that was directed at themselves, either intentionally or through reckless negligence. If they do nothing, they will cause themselves -50 utility. But if they deflect the harm to an innocent third party, they will cause the third party -40 utility. Most would say the agent should not deflect; instead, they should suffer the consequences of their own actions.

More generally, if the harm was originally directed at someone who is responsible for the harm in some way (e.g. they created the harm, they consented to the harm, etc.), then it does not seem intuitive that one should deflect.

> If something increases one’s welfare, they’d rationally consent to it.

This idea is essential to your defense of Pareto. Is it an analytic truth or a synthetic truth? If it's analytic, then it's trivial. If it's synthetic, then you need to articulate what you mean by these concepts. I take it that to say that an action promotes one's welfare is to say that the action is "good", and to say that something is "good" is just to say that it is something that is rational to pursue. But on this view, to say that one would rationally pursue their welfare would be a trivial, analytic truth.

So you must be using different meanings of "welfare" and "rationality", such that the concepts expressed by the terms are not reducible to each other. What in fact do you mean by these terms?

Similar questions can be asked about the term "should". You make claims about the relations between what we "should" do, what promotes "welfare", and what it is "rational" to do. Presumably, you think these are synthetic truths. But then it's not clear what you even mean by the terms. Are these terms supposed to refer to distinct normative primitives that we all have clear and shared intuitions about upon reflection?

---
Author:

Yeah, I should have said that, if one has no other options, one should take an action with positive utility. But this view can account for why one should take the higher-utility act, because switching from the lower-utility act to the higher-utility act is itself an action with positive utility. Also, utilitarianism agrees with the verdict that, if one has no other options, one should take an action that generates positive utility, while no other views do.

There may be exceptions to deflection, but none of them seem to apply in the mainstream cases.

The defense of Pareto is synthetic. It is true that it is analytic that one would rationally prefer better things for themselves. It is synthetic--though I think trivial--that one would in hindsight maintain that preference.

---

> There may be exceptions to deflection, but none of them seem to apply in the mainstream cases.

I don't know how many exceptions there are, so I can't say whether they apply in mainstream cases. But Deflection certainly doesn't seem like a "basic axiom".

> The defense of Pareto is synthetic. It is true that it is analytic that one would rationally prefer better things for themselves. It is synthetic--though I think trivial--that one would in hindsight maintain that preference.

It is interesting that you think it is analytically true that one would rationally prefer things that are better for themselves. But is that what you mean by the terms "rational" and "better for"? I.e. do you believe the following: to say that it is rational for A to prefer X over Y is just to say that X is better than Y for A?

But I'm not sure you actually hold that view, because I believe you have previously asserted that it is rational for A to prefer not just things that are better for A, but also things that are better for others. E.g. I believe you accept the following: for any agent A, if X is better than Y for B, and the choice between X and Y does not impact anyone else, then it is rational for A to prefer X over Y.

If you hold that view, then I guess your view is the following: to say that it is rational for A to prefer X over Y is just to say that X is better than Y?

---
Author:

Yes, ceteris paribus.

---

As a non-utilitarian, I find Deflection objectionable. Someone might deserve to bear the harm, even if there's overall more of it.

Suppose that I release a fatal poison gas in a room containing me and five other people, and there are five gas masks available. I am closer to the masks than anyone else and can easily secure one for myself if I so desire. However, due to my physiology, I am 0.00001% more likely to die from the gas than the next most likely person. Should I take a gas mask or not? By Deflection, I can redirect my risk of death onto someone else by taking a gas mask for myself, so I should do so.

I have the intuitive reaction that for sufficiently small additional risk, the perpetrator ought to bear the (magnified) risk rather than his intended victims.

---

I'm curious what your take is on the following thought experiment. Imagine you are placed in a room where you can stay alive indefinitely, but while inside the room your utility is always 0; it's a completely neutral experience. Inside the room there is a number pad whose digits you can press, as on a calculator. The only way to exit the room is to input some number on the pad and then press the enter button. The number you enter represents the utility you will experience in your life after exiting the room. So, if you entered 1, you would be taken out of the room and experience a life of +1 utility.

Presumably, you would want to enter some extremely large number by repeatedly pressing the 9 digit, i.e. 9999999999999..... But the hard question is when to stop. On the one hand, it seems irrational to ever stop: at any point you reach, pressing 9 just ONE more time increases the utility you will experience by an entire *order of magnitude*. So you may as well keep pressing 9 over and over. But this leads to a paradox, because if you never hit the enter button, you will never leave the room and your potential utility will remain unrealized; you'll stay stuck in the room at utility 0. So, what's a utilitarian to do in this situation?
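Here is a minimal sketch of the structure of the problem in Python; the `exit_utility` function is just one way of rendering the stipulations above, not anything from the post:

```python
# Stipulations from the thought experiment: staying in the room is worth 0,
# and pressing '9' k times before hitting enter yields a life worth
# 10**k - 1 (a string of k nines).

def exit_utility(k: int) -> int:
    """Utility realized by pressing '9' k times, then pressing enter."""
    return 10**k - 1  # k=1 -> 9, k=2 -> 99, k=3 -> 999, ...

# Appending one more 9 always strictly improves the payoff:
# 10 * (10**k - 1) + 9 == 10**(k+1) - 1.
for k in range(1, 6):
    assert exit_utility(k + 1) == 10 * exit_utility(k) + 9

# So no finite k is optimal: every stopping point is beaten by stopping one
# press later. Yet the limit policy of never pressing enter realizes utility
# 0, which is worse than any finite k >= 1. A sequence of ever-better
# policies converges to the worst one; that gap is the paradox.
```

Nothing in the sketch resolves the question, of course; it just isolates where the trouble lies.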

---

I wonder if we can use a weaker premise. It's more plausible to say that Deflection is always permitted (rather than required). Then we get: all actions with positive utility are permitted.

That's enough for utilitarians, since all rival theories are thus ruled out lol

---
Apr 6, 2023 (edited)

Still don't get why Deflection is uncontroversial. If Alice and Bob each have +10 utility and I can take an action that simultaneously adds +50 and -49 utility to Alice and doesn't harm Bob, I should take it. But I plausibly shouldn't take an action that gives Alice +50 utility and Bob -48 utility.
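A quick sketch of the arithmetic, using the numbers stipulated in this comment (the variable names are mine):

```python
# Both acts have positive total utility, and the second has the *higher*
# total, yet intuitively only the first is clearly permissible.
act_all_on_alice = {"alice": +50 - 49, "bob": 0}   # every effect falls on Alice
act_cost_on_bob  = {"alice": +50, "bob": -48}      # the cost falls on Bob

print(sum(act_all_on_alice.values()))  # -> 1
print(sum(act_cost_on_bob.values()))   # -> 2
# Ranking by totals favors the second act, which is exactly why treating
# Deflection as uncontroversial seems too quick.
```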

---
Author:

It's taken mostly for granted in trolleyology. Otherwise, it is hard to account for lots of specific cases like Switch and Driver.

---

The deflection axiom in trolleyology is weaker: if one can redirect a threat one has created from A to B, such that it will cause less harm, one ought to, unless the threat was tied to a greater benefit for A. This works for the trolley cases but doesn't work for your proof. The difference between our two deflection axioms is precisely the controversial part of total utilitarianism.

---
Apr 7, 2023 (edited)

Maybe that's because self-prerogatives or inequality aren't involved in trolleyology?
