Uh Oh! The Person Affecting View Has THIS New FATAL Problem
It has to think that morally irrelevant interactions are morally relevant!
(The title is a joke—mostly making fun of the titles of David Pakman videos.) Also, a warning: this will get complicated and technical. I think this is one of my favorite arguments I've ever come up with, and probably the second or third cleverest.
The person-affecting view is perhaps the worst widely believed view in ethics. There are a million utterly decisive objections to it, and yet it still keeps chugging along. In 2016, Trump was called Teflon Don, because so many of the attacks people made against him seemed to bounce off him. The person-affecting view may as well be the Teflon moral theory—despite there being about a googol decisive objections, it's still around, somehow! Buoyed to relevancy by its pleasant-sounding slogan, the person-affecting view has a fantastic ability to shrug off any criticism, however devastating, at least in the minds of its fanatical supporters.
The person-affecting view claims that it's good to make people happy but not to make happy people. That sounds sort of nice. Specifically, it claims that our moral obligations to promote welfare concern existing people—so we should make existing people better off, but we have no moral reasons, at all, to create people, no matter how happy they'd be. One has no reason, for example, to create utopia on Mars, even if they could do so with the press of a button. Somehow, though, its supporters don't find that to be a decisive objection. Hopefully, this argument will change their minds (actually, there are two similar versions of the argument; hopefully both will be convincing).
Here's a plausible principle: if you should do A, should do B after having done A, and should do C after having done A and B, then if action D has the effect of doing A, B, and C together, you should do action D. So if you should save James, Lily, and Olivia, and pressing one button does all three of those things, you should press that button.
But now suppose that there are 4 buttons. Button 1 would create Steve in a cave—he'd live a good life—and would also provide Quan with 1 dollar's worth of benefits. Button 1 would be worth pressing—benefitting Quan, who already exists, is good, and creating Steve is not bad. (Suppose that none of these people will interact with anyone else—they'll each just be born in an isolated cave.) Button 2 would benefit Steve to some degree and create Brenda. This would be good! Button 3 would create Romero, benefit Brenda to some great degree, and get rid of the benefit to Quan. That would be worth pressing—the benefit to Brenda would dramatically outweigh the cost to Quan. Button 4 would have the effect of pressing all of the other buttons. Given that button 4 has the effect of bringing about 3 good actions, pressing it is good according to the earlier principle. But button 4 just creates a happy Steve, Brenda, and Romero—after all, the benefit to Quan from button 1 is canceled out by button 3. So it's worth pressing a button that just creates happy people.
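To make the bookkeeping explicit, here's a quick tally of the four-button case (a minimal sketch; the argument doesn't specify the sizes of Steve's and Brenda's benefits, so the 5 and 50 below are illustrative placeholders, and only Quan's 1-unit benefit and its later cancellation matter):

```python
# Net effect of pressing button 4, i.e., of buttons 1-3 together.
created = set()          # people brought into existence (each with a good life)
welfare = {"Quan": 0.0}  # welfare changes in arbitrary units; Quan already exists

def button1():
    created.add("Steve")
    welfare["Steve"] = 0.0
    welfare["Quan"] += 1       # 1 dollar's worth of benefit to Quan

def button2():
    welfare["Steve"] += 5      # placeholder for "some degree" of benefit
    created.add("Brenda")
    welfare["Brenda"] = 0.0

def button3():
    created.add("Romero")
    welfare["Romero"] = 0.0
    welfare["Brenda"] += 50    # placeholder for a "great degree" of benefit
    welfare["Quan"] -= 1       # gets rid of the earlier benefit to Quan

def button4():
    # Button 4 just does what buttons 1-3 do.
    button1(); button2(); button3()

button4()
print(sorted(created))  # ['Brenda', 'Romero', 'Steve']
print(welfare["Quan"])  # 0.0 -- the only pre-existing person nets exactly nothing
```

Whatever placeholder numbers you pick, Quan nets zero, so button 4's entire net effect is the creation of happy, benefited people.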
Of course, you might object that what matters is not the effects of the actions but that they benefitted people. Sure, they benefitted people who wouldn’t have existed in the absence of the actions. But nonetheless, the collective effect of the actions involved separately creating and then benefitting the people.
However, it's very obvious that this is not morally relevant. Suppose that in the earlier case, the way the people are benefitted is that a basket of gifts appears in their isolated caves. Thus, button 4—which is worth pressing—has the effect of creating Steve, Brenda, and Romero, each of whom has a basket of gifts. This view has to hold that, to determine whether creating a happy Steve, Brenda, and Romero—each of whom has a basket of gifts—is good, one has to know whether the appearance of the gift baskets is causally connected to the creation of the other people. But that's totally irrelevant—changing the way button 4 works so that the gift baskets are causally connected to the creation of other people doesn't matter. The alternative view is bizarre—if an act has various effects, the causal connections between the various subparts of the act are clearly morally irrelevant.
One could object by appealing to incomparability: one could claim that creating a new person is neither good nor bad. Crucially, on this view, when things are neither good nor bad, improving them slightly might still leave them neither good nor bad. The believer in incomparability claims that A and B might be incomparable, such that A is not better than B, B is not better than A, and even if you improved A slightly, it would still not be better than B. For example, going to Yale may be no better than going to Harvard, going to Harvard may be no better than going to Yale, but even if you were offered an extra 10 dollars to go to Harvard, it would still be no better than Yale.
The believer in incomparability would thus say that button 1 is neither worth pressing nor not worth pressing, and so the argument fizzles—without button 1 being worth pressing, the argument cannot succeed. Of course, we have good reasons to accept comparability, but deniers of comparability obviously aren't convinced by this reasoning. Fortunately, though, we can rescue the argument in a way that works even for those who deny comparability.
Believers in incomparability generally claim that there's some threshold, such that if A were improved by a sufficiently huge amount, it would be better than B, even if the two began incomparable. If Yale and Harvard are incomparable, but someone promised you a quadrillion dollars to go to Yale, then you should pick Yale.
Suppose that the threshold is 10 units of well-being, such that creating a happy person only becomes worth doing if it increases the well-being of an existing person by 10 units. As we’ll see, the argument can be made no matter what the threshold is.
Suppose that there are 20 initial buttons. The first one creates a happy person and gives 10 units of well-being to an existing person named Frankenstein. The next 19 each create a happy person and give 10 units of well-being to the previously created person. Each button is worth pressing because it clears the threshold. Then, the next ten buttons each take 1 unit of well-being away from Frankenstein while giving 1 unit of well-being to each of the 20 people who have been created by the buttons. Each of those actions is worth taking. But collectively, these 30 actions just create 20 people with good lives plus additional benefits (the 10 units given to Frankenstein are exactly undone by the ten 1-unit deductions). One button that did all those things would just create 20 people and then benefit all of them. But, as was established earlier, if that's good, then creating 20 people with good lives—who are born with the benefits—is good. So the argument works even if we believe in incomparability.
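Again, a tally makes the net effect explicit (a minimal sketch using the assumed 10-unit threshold; the labels P1 through P20 for the created people are mine):

```python
# A tally of the 30-button case. Frankenstein is the only pre-existing person.
welfare = {"Frankenstein": 0.0}
created = []

# Buttons 1-20: each creates a happy person and gives 10 units to someone who
# already exists (Frankenstein first, then the most recently created person).
for i in range(1, 21):
    beneficiary = "Frankenstein" if i == 1 else created[-1]
    welfare[beneficiary] += 10             # clears the 10-unit threshold
    person = f"P{i}"
    created.append(person)
    welfare[person] = 0.0

# Buttons 21-30: each takes 1 unit from Frankenstein and gives 1 unit to
# every one of the 20 created people.
for _ in range(10):
    welfare["Frankenstein"] -= 1
    for person in created:
        welfare[person] += 1

print(welfare["Frankenstein"])           # 0.0 -- Frankenstein's benefit is fully undone
print(len(created))                      # 20 people created
print(min(welfare[p] for p in created))  # 10.0 -- every created person nets a benefit
```

The only lasting effects are the 20 happy people and their benefits; no pre-existing person ends up better or worse off.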
Suppose you irrationally reject this argument. Perhaps you think that the principle that says "if you should do A, should do B after having done A, and should do C after having done A and B, then if action D has the effect of doing A, B, and C together, you should do action D" is false. Fortunately, we can reconstruct the argument without that principle.
Here's another plausible principle: suppose an action has three effects, each of which is good both axiologically and deontically, and has no other effects. Then the action is worth taking. Talking about a deontically good effect may sound strange—let me clarify what I mean. Some effect is deontically good if an action that brought about only that effect would be worth taking. An effect is axiologically good if it improves the world.
This principle is extremely plausible. It's very intuitive, I cannot think of a single counterexample, and it's supported by lots of cases (e.g., if you can save 3 people, where saving each of them is individually worth doing, then you should do so). Note that when I describe an action as being worth taking, I mean that it's better to take that action than to do nothing—there might, of course, be other actions that are even more worth taking.
Well, suppose that an action creates three people and gives them each benefits. That can be decomposed into three actions, each of which creates one person and benefits one person who will exist. So each of those actions is worth taking. But then the single action is worth taking—meaning you should create three happy people with gift baskets in caves. By the earlier reasoning, because the causal connections between sub-acts aren't relevant, this means that you should create happy people.
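The decomposition is easy to spell out (a minimal sketch; the 10-unit gift-basket benefit is an illustrative placeholder):

```python
# Decomposing the composite action into three sub-actions, each of which
# creates one person and benefits one person who will exist.
def sub_action(person, welfare, created):
    created.append(person)   # create the person, who will live a good life
    welfare[person] = 10.0   # placeholder benefit: the gift basket

def composite_action(welfare, created):
    for person in ["Steve", "Brenda", "Romero"]:
        sub_action(person, welfare, created)

welfare, created = {}, []
composite_action(welfare, created)
print(created)  # ['Steve', 'Brenda', 'Romero'] -- three people, each benefited
print(welfare)  # {'Steve': 10.0, 'Brenda': 10.0, 'Romero': 10.0}
```

Since each sub-action is worth taking on its own, the principle says the composite action is too.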
One last objection you might have is that what matters is that you create an additional benefit. Maybe it's not good to create a person who will naturally be happy, but it is good to create a person whom you will then benefit. But this is very implausible. Suppose that you can create one of two people: the first will naturally be born without arms, but you'll somehow give him arms; the second will simply be born with arms. On this account, because you take an additional action to benefit the first, creating the first would be better. But that's obviously false.
I think these arguments are pretty decisive. Objections?
Surely no coherent ethical system relies on thinking about impossible buttons that create impossible people, either for or against it. We have something a bit like a button that creates people, and in fact living long enough to push it a bunch is our prime evolutionary goal. There's no telling if the people created will be happy, though, and this analog of a button very much affects the persons involved. A thought experiment that starts by skipping over the actual hard parts is on the same level as 'assume a can opener'.
This seems to assume that units of well-being can be created out of nothing indefinitely, but is that the case? Since humans are social animals, I don't think solitary confinement in isolated caves will lead to happiness. But if we imagine them all in one large cave, then eventually we will run into overcrowding, and adding more people will reduce each person's share of the cave's finite resources. Perhaps we should imagine that there is a finite limit to the units of well-being? Then it would be good to keep creating people until all the units of well-being are accounted for, but not so good once that threshold has been passed and creating more people requires removing units of well-being from the existing people. Is there a minimum level of well-being needed for survival? If so, then I suppose there is another threshold where creating just one more person would reduce the entire population to below the minimum survival level. That sounds bad to me.