Deducing a Radical Utilitarian Conclusion From Very Plausible Axioms
Why you should inflict N units of suffering on an existing person to create a future person with more than N units of utility on net
The following three principles are plausible:

1. Creating a person with positive utility is good, all else equal.
2. If one can benefit someone and harm no one, one should.
3. If one can redirect a threat that one has created so that it will cause less harm, one ought to.
However, from these we can deduce that one should inflict N units of suffering on an existing person to create a future person with more than N units of utility. Here are the steps.
First, principle 1 implies that, all else equal, one should create a person with positive utility. So one should, all else equal, create a person with .00001 units of utility.
Second, suppose an action does two things: it increases the newly created person’s welfare by 50 units, and it creates a threat that will decrease their welfare by 49.9 units. By principle 2, this action is clearly good: it benefits the new person on net and harms no one.
Third, suppose one can redirect that threat, which would otherwise decrease the newly created person’s welfare by 49.9 units, onto a third party, such that it will cause only 49.8 units of harm (where a unit of harm is the same as a lost unit of welfare). This is clearly worth doing: by principle 3, one ought to redirect a threat one has created so that it causes less harm. Additionally, people generally think one has especially strong obligations to those one has created, so the case for redirecting the threat away from the new person is even stronger. Virtually everyone accepts principle 3; it explains, for example, why one should turn the trolley in the driver version of the case or flip the switch.
Fourth, suppose one can replace the created-and-then-redirected threat with a threat that is simply created and directed at the third party, and that will cause only 49.7 units of harm. This is an improvement by the Pareto principle: it reduces the harm to the third party and harms no one.
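To make the running arithmetic explicit, here is one way to tally the net effects of the four steps, using just the figures above:

$$
\text{new person: } 0.00001 + 50 = +50.00001 \text{ units}, \qquad \text{third party: } -49.7 \text{ units}, \qquad \text{everyone else: } 0.
$$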
But this means that one should create a person with 50.00001 units of utility at the cost of causing 49.7 units of suffering. And the 50.00001-versus-49.7 case generalizes: one should create a person with N units of utility at the cost of causing another person to lose any amount less than N units of utility.
One might reject the first principle. One popular view that does so, and which has the advantage of avoiding the repugnant conclusion, is the critical-level view, which says that it is only good to create a person whose utility is above some threshold (for example, ten units of utility). Now, I think this view is implausible, but even if one accepts it, we still get the result that one should inflict N units of suffering on an existing person to create a future person whose utility is more than N units above the threshold.
One could adopt a person-affecting view, on which creating a happy person is not in itself good. However, one should then accept that, if creating this person would also increase the welfare of the third party by .0000001 units, the creation would be worth doing, since it would benefit an existing person and harm no one. The remaining steps then go through as before, so we can still deduce the same result.
One could reject the second principle, which says that one should take an action if it benefits someone and harms no one. This principle might admit exceptions on deontological views: for example, it is plausible that, on deontology, one should not take an action that benefits a person if that person does not want it taken.
But remember, this newly created person is a baby. They cannot consent to anything. Thus, it’s very plausible that, in this case, one should just do what maximizes their welfare.
One could deny principle 3, but there do not seem to be plausible deontological views that do so. Surely, if one creates a threat and can then reduce the harm it will cause, one should do so.
Thus, each premise is very plausible. And they entail a very radical conclusion—one basically only accepted by utilitarians. Objections?
> "if creating this person would also increase the welfare of the third party by .0000001 units, then it would be worth doing."
This is a good point, and suggests a stronger version of the argument based on just two premises (your 2nd and 3rd principles).
> "Objections?"
I think the commonsense view here is to embrace time-inconsistency due to value changes. We should care more about existing people, and so reject (at time t1) the prospect of harming an existing person merely to bring a better new life into existence. If it was predictable that creating the new life would result in our doing the subsequent transfer, then we shouldn't create the new life. But if we don't anticipate the transfer, we could rationally follow a sequence of steps that yields this result.
We should create the beneficial life (when it has no apparent downside). And then, having done so, our values must change (at t2) to give full weight to this newly-existing person. Given our new values, we should then endorse the transfer. But it doesn't follow that the combination act of harming + creating is one we should regard positively, from the perspective of our t1-values. So the argument is invalid.
The statement "you should inflict N units of suffering on an existing person to create a future person with more than N units of utility on net" on its own isn't really radical. It's radical if we assume one of the following conditions:
* One is *obligated* to create the future person in this way.
* The existing person is harmed in a rights-violating way (e.g., used as a mere means).
In order to deduce a radical conclusion, you need to reformulate your premises so that they explicitly reference obligations, or you need to specify that the harm happens in a way that is plausibly rights-violating. But then the premises are not very plausible axioms. For example, it is not a very plausible axiom that we are obligated to create persons with positive utility (even extremely high positive utility). Likewise, while it may be plausible that one may redirect threats onto third parties in order to minimize overall harm, it is not plausible that one may harm third parties as a means to minimizing overall harm (or in whatever other way counts as a rights violation according to the deontologist); after all, deontologists are generally fine with redirection (e.g., 70% of deontologists support switching the trolley from the five onto the one, according to PhilPapers surveys).
If the prescription is reformulated to explicitly exclude the two conditions I mentioned above, it is not radical at all: "you are permitted to do an action which has the side effect of creating a future person with >N units of net utility, but which also has the side effect of inflicting <N units of harm on an existing person." In fact, most would probably accept that you are permitted to do this action even if the new person has less net utility than the harm caused to the existing person.