In a recent post, I made an argument for the conclusion that one should create a person with positive utility even if that involves inflicting some lesser amount of suffering on existing people. But I realized that one can design a very similar argument for utilitarianism full stop. The argument appeals to a few premises.
Weak Pareto: If some act is better for someone and worse for no one, it is good overall.
Deflection: If one can redirect a threat that they’ve created, such that it will cause less harm, they ought to.
Combination: If one should take each act in some sequence, then if some act N produces the same effect as taking each act in the sequence, they should take act N.
Normative Decision-Tree Separability: The moral status of the options at a choice node does not depend on any parts of the decision tree other than those that can be reached from that node.
Expansion Improvability: The fact that a choice enables future choices that are worth taking does not count against it.
By utilitarianism, I mean that one should take all actions with positive utility. I guess this isn’t quite utilitarianism, because one could in theory think that one should simply take all actions, or take the actions prescribed by utilitarianism along with some others, but this is close enough to utilitarianism to be worth calling it that: there is literally no theory I’ve heard anyone propose that meets this requirement and is not utilitarianism.
To illustrate the proof: these premises entail that one should do anything with positive utility. Suppose there is some act that would cause Jane negative utility while giving John slightly greater positive utility. Specifically, suppose Jane would be pricked by a sharp object in order to give John some greater amount of utility, and suppose the prick would harm Jane slightly less than it would harm John. By Pareto, it would be good if John were pricked by the sharp object and given the greater amount of utility, since that act is better for John and worse for no one. By Deflection, it would then be good to redirect the prick from John to Jane, since it would cause less harm. By Combination, the single act that pricks Jane to give more pleasure to John is good, since it produces the same effect as that sequence. One might think that past or future actions make the combined act undesirable, but this is ruled out by Normative Decision-Tree Separability and Expansion Improvability.
Anything with positive utility can be handled the same way: by Pareto, the act can be recast as a combination of a threat and a larger utility boost to the same person, and then the threat can be redirected. Thus, this gets us all the way to utilitarianism.
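To make the arithmetic concrete, here is the same construction with illustrative numbers (the particular figures are just an assumption for the example): suppose the prick would cost John 2 units of utility, would cost Jane 1 unit, and the accompanying benefit gives John 3 units.

\[
\begin{aligned}
&\text{Act } A \text{ (prick John and boost John):} && \Delta u_{\text{John}} = -2 + 3 = +1,\quad \Delta u_{\text{Jane}} = 0 &&\Rightarrow\ \text{good by Weak Pareto}\\
&\text{Deflect the prick from John to Jane:} && \text{harm falls from } 2 \text{ to } 1 &&\Rightarrow\ \text{required by Deflection}\\
&\text{Combined act (prick Jane, boost John):} && \Delta u_{\text{John}} = +3,\quad \Delta u_{\text{Jane}} = -1,\quad \text{net } +2 &&\Rightarrow\ \text{good by Combination}
\end{aligned}
\]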
I think each of these premises is super plausible—you can see more discussion of them here. Deflection seems to be taken for granted in trolleyology, weak Pareto is supremely plausible, and the other three are, I think, pretty trivial. If I were a non-utilitarian, I’d probably want to reject Pareto.
Rejecting Pareto can be supported by reflection on some thought experiments. For example, it seems that even if forcing someone to take up cooking would increase their utility, it would still be wrong to do so, because it would be objectionably paternalistic.
But I think that Pareto is more plausible than these counterexamples. All of them seem to involve imagining cases where we stipulate that a typically disastrous act stops being disastrous, and our intuitions fail to update. There is a very strong utilitarian reason to maintain a firm anti-paternalism heuristic, even if you’re pretty sure that something will benefit someone.
Additionally, Pareto can be supported through the following argument (broadly inspired by Parfit in Reasons and Persons):
If all parties affected by some act would rationally consent to it, that act is worth taking.
If some action is a Pareto improvement, all affected parties would rationally consent to it.
Therefore, if some action is a Pareto improvement, it is worth taking. If something increases one’s welfare, they’d rationally consent to it; thus, it is worth doing. If someone fails to consent to something due to error on their part, it seems we can sometimes override their preferences, provided we know this with certainty. For example, if a young child refuses to go to a hospital despite it benefitting them, it seems they ought to be taken there anyway.
Another argument can be given for the Pareto principle.
1. Any act that is a Pareto improvement can be decomposed into a sequence of acts, each of which improves the life of one person and makes no one else worse off.
2. If an act can be decomposed into a sequence of other acts, and each is worth taking no matter which combination of the others one has taken, then the original act is worth taking.
3. Therefore, if an act that improves the welfare of one person and harms no one is worth taking, regardless of which other such acts have been taken, then any act that is a Pareto improvement is worth taking.
4. If only one person is affected by some act, and they will later, when they are more rational, prefer that the act that would have made them better off had taken place, then one should take that act.
5. If, at the end of their life, a person would be omniscient and perfectly rational for one second, they would prefer that any act which improves their welfare and affects no one else had taken place.
6. Therefore, if some act is a Pareto improvement, it would be worth taking if all the affected parties will be omniscient and perfectly rational for one second at the end of their lives.
7. If acts that are Pareto improvements would be worth taking if all the affected parties would be omniscient and perfectly rational for one second at the end of their lives, then they are worth taking.
Therefore, if an act is a Pareto improvement, it is worth taking.
1 is obvious: if an act benefits A, B, and C, then it can be divided up into three acts, one of which benefits A, another B, and another C.
2 is plausible: if an act is the same as a sequence of other acts, and each is worth taking conditional on the others, then the single act is worth taking. If you should save Jane whether or not you have already saved John, and you should save John whether or not you have already saved Jane, then you should save both Jane and John.
3 follows from 1-2.
4 is plausible. If one does not currently want an act to happen, but will want it to have happened when they are older and wiser, then that act should happen. If one does not want a surgery but will later wish they had had it, it seems fine to give them the surgery, provided we really control for all other ripple effects and know that in the future they will wish it had happened.
5 is also plausible: when one is fully rational, they would prefer that good things rather than bad things happen to them. 6 follows from 3-5.
7 is very obvious. Surely whether it is worth taking some act that makes a person better off cannot depend entirely on whether, for one second at the end of their life, they will prefer that it had happened. That would involve hypersensitivity: the rightness of an act supervening on whether the affected party will be smart and omniscient 80 years later. And the Pareto principle follows.
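To make the shape of the final inference explicit, here is a minimal formal sketch, where $P(a)$ abbreviates “$a$ is a Pareto improvement,” $O(a)$ abbreviates “everyone affected by $a$ will be omniscient and perfectly rational for one second at the end of their life,” and $W(a)$ abbreviates “$a$ is worth taking” (the notation is just illustrative shorthand, not part of the original argument):

\[
\begin{aligned}
&(6)\quad \forall a\,\big[P(a) \wedge O(a) \rightarrow W(a)\big]\\
&(7)\quad \forall a\,\big[P(a) \wedge O(a) \rightarrow W(a)\big] \;\rightarrow\; \forall a\,\big[P(a) \rightarrow W(a)\big]\\
&\therefore\quad \forall a\,\big[P(a) \rightarrow W(a)\big]
\end{aligned}
\]

The conclusion follows from (6) and (7) by a single application of modus ponens.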
Objections? Questions? Reasons why the actions that harm me and benefit no one else are good all else equal? Leave a comment!
> By utilitarianism, I mean that one should take all actions with positive utility.
This is a strange definition of utilitarianism, and it is not a prescription that the vast majority of people, including utilitarians, would accept. Imagine we live in a world where everyone is experiencing +10 utility. You have two options, A and B, before you. If you do A, then everyone's utility will increase to +20. If you do B, then everyone's utility will increase to +30. There are no other actions, and these are the only effects of the actions (there are no important non-utility effects). Action A will have positive utility, since it results in a world with more utility, but you obviously should not do it; you should do B instead.
> Deflection: If one can redirect a threat that they’ve created, such that it will cause less harm, they ought to.
There are counterintuitive implications of Deflection. Imagine someone created a harm that was directed at themselves, either by intention or by reckless neglect. If they do nothing, they will cause themselves -50 utility. But if they deflect the harm to an innocent third party, they will cause the third party -40 utility. Most would say the agent should not deflect. Instead, he should suffer the consequences of his actions.
More generally, if the harm was originally directed at someone who is responsible for the harm in some way (e.g. they created the harm, they consented to the harm, etc.), then it does not seem intuitive that one should deflect.
> If something increases one’s welfare, they’d rationally consent to it.
This idea is essential to your defense of Pareto. Is this an analytic truth or a synthetic truth? If it's an analytic truth, then it's trivial. If it's a synthetic truth, then you need to articulate what you mean by these concepts. I take it that to say that an action promotes one's welfare is to say that the action is "good" and to say that something is "good" is just to say that it is something that is rational to pursue. But on this view, to say that one would rationally pursue their welfare would be a trivial, analytic truth.
So you must be using a different meaning of "welfare" and "rationality", such that the concepts expressed by the terms are not reducible to each other. What in fact do you mean by these terms?
Similar questions can be asked about the term "should". You make claims about the relations between what we "should" do, what promotes "welfare", and what it is "rational" to do. Presumably, you think these are synthetic truths. But then it's not clear what you even mean by the terms. Are these terms supposed to refer to distinct normative primitives that we all have clear and shared intuitions about upon reflection?
As a non-utilitarian, I find Deflection objectionable. Someone might deserve to get the harm, even if there's overall more of it.
Suppose that I release a fatal poison gas in a room containing me and five other people, and there are five gas masks available. I am closer than anyone else to the masks and can easily secure one for myself if I so desire. However, due to my physiology, I am 0.00001% more likely to die from the gas than the next most-likely person to die. Should I take a gas mask or not? By Deflection, I can redirect my risk of death to someone else by taking a gas mask for myself, so I should do so.
I have the intuitive reaction that for sufficiently small additional risk, the perpetrator ought to bear the (magnified) risk rather than his intended victims.