
> "if creating this person would also increase the welfare of the third party by .0000001 units, then it would be worth doing."

This is a good point, and suggests a stronger version of the argument based on just two premises (your 2nd and 3rd principles).

> "Objections?"

I think the commonsense view here is to embrace time-inconsistency due to value changes. We should care more about existing people, and so reject (at time t1) the prospect of harming an existing person merely to bring a better new life into existence. If it were predictable that creating the new life would result in our making the subsequent transfer, then we shouldn't create the new life. But if we don't anticipate the transfer, we could rationally follow a sequence of steps that yields this result.

We should create the beneficial life (when it has no apparent downside). And then, having done so, our values must change (at t2) to give full weight to this newly existing person. Given our new values, we should then endorse the transfer. But it doesn't follow that the combined act of creating-and-harming is one we should regard positively from the perspective of our t1-values. So the argument is invalid.


The statement "you should inflict N units of suffering on an existing person to create a future person with more than N units of utility on net" isn't really radical on its own. It becomes radical only if we assume one of the following conditions:

* One is *obligated* to create the future person in this way.

* The existing person is harmed in a rights-violating way (e.g., used as a mere means).

In order to derive a radical conclusion, you need to reformulate your premises so that they explicitly reference obligations, or specify that the harm happens in a way that is plausibly rights-violating. But then the premises are no longer very plausible axioms. For example, it is not a very plausible axiom that we are obligated to create persons with positive utility (even extremely high positive utility). Likewise, while it may be plausible to redirect a threat onto a third party in order to minimize overall harm, it is not plausible to harm a third party as a means to minimizing overall harm (or in whatever other way counts as a rights violation according to the deontologist); after all, deontologists are generally fine with redirection (e.g., 70% of deontologists support switching the lever from 5 to 1 in the trolley problem, according to PhilPapers surveys).

If the prescription is reformulated to explicitly exclude the two conditions I mentioned above, it is not radical at all: "you are permitted to do an action that has the side effect of creating a future person with >N units of net utility, but which also has the side effect of inflicting <N units of harm on an existing person." In fact, most people would probably accept that you are permitted to do this action even if the new person's net utility is less than the harm caused to the existing person.


I reject 3. It's good to give someone 50 units of utility along with −49.9 units of harm, but plausibly bad if that harm is redirected onto someone else. It's the difference between adding effectively no new negative utility to the world and adding −N utility (among the action's other effects), since individual agents, rather than collections of agents, are the fundamental locations of value.
