Never mind, I'll find someone like you
I wish nothing but the best for you, too
—Adele, summarizing Caspar Hare’s morphing argument.
Caspar Hare is a genius! He has devised an extremely clever argument from which one can deduce almost all of the controversial claims of utilitarianism, all the while relying on extremely modest premises. I think his argument is pretty much a decisive proof that utilitarianism is correct unless the soul theory is correct. Thus, as the title suggests, those who reject utilitarianism are either souls or incorrect.
Richard has a good summary of Hare’s argument, though there are different versions depending on what he’s trying to prove. In *The Limits of Kindness*, Hare keeps the things he derives pretty minimal, but I think one can go further and prove all of the controversial bits of utilitarianism.
One of Hare’s targets is the claim, made by those who bite the bullet on the non-identity problem, that if one is deciding between having a child now who will be somewhat well off or a child later who will be very well off, it doesn’t matter which one chooses. On this view, choosing the child now is bad for no one—the child who is created still lives a good life, so they’re not made worse off.
Hare argues for the following plausible principles (paraphrased):
Transitivity: if you prefer A to B and B to C, you are rationally required to prefer A to C.
Rational People are Guided by their Preferences: if there are two options, A and B, open to you and you prefer A to B, you will actualize A rather than B. For example, if you’d rather there be a gold bar in the middle of Times Square than a genocide, then when deciding between bringing about one of the two, you should pick the gold bar over the genocide.
Minimal Benevolence: one should prefer state of affairs A to state of affairs B if state of affairs A has all of the same people as state of affairs B but some of them are better off and none are worse off.
These are all pretty trivial. There are some people who reject transitivity because we live in a fallen world where sin is rampant, but it’s still pretty trivial. The others are obvious too! But from these, one can prove that one should create the happier person rather than the less happy person.
Call the happier person Todd and the less happy person Edward. Now imagine a sequence of the following form: P(1) is Edward; P(2) is Edward made slightly better off and changed very slightly to be a bit more like Todd; P(3) is Edward made even better off and changed slightly more to be like Todd…P(5 billion) is Todd. Minimal Benevolence requires that one prefer P(n) to P(n−1), because at each step of the way people are getting better off while keeping their identity. But the end of the sequence is just Todd. So Minimal Benevolence combined with Transitivity requires that one prefer creating Todd to creating Edward, and Rational People are Guided by their Preferences then requires that one create Todd over Edward.
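Schematically, the argument runs as follows (my own shorthand, not Hare’s notation, writing “≻” for “is rationally preferred to”):

```
P(1) = Edward,   P(k+1) = P(k) made slightly better off and slightly more Todd-like,   P(5 billion) = Todd

Minimal Benevolence:   P(n) ≻ P(n−1)   for every step n in the sequence
Transitivity:          chaining the steps gives P(5 billion) ≻ P(1),  i.e.  Todd ≻ Edward
Guidance:              since one prefers creating Todd, one creates Todd rather than Edward
```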
Note that this argument requires the falsity of the soul theory. If the soul theory is true, then one is essentially a soul—as a result, no matter how much one changes Edward’s features, he will remain the same soul and thus Edward cannot gradually morph into Todd. But this isn’t a huge deal for the argument because the soul theory is probably false—though that’s a topic for another article and I don’t have much that’s interesting to say about it!
From this, however, we can draw other more extreme conclusions. Suppose I want to show that I should harm someone with whom I have a special relationship to provide a greater benefit for another. Clearly, I should harm my friend by some amount to provide a greater benefit to that same friend. But through the morphing sequence, we can gradually change the beneficiary from my friend to a stranger, improving the action at each step of the way, so the action at the end of the sequence must also be desirable.
Or suppose I want to prove that one should replace people with other better-off people, ceteris paribus. Well, that’s easy to prove—you gradually morph people, until they’re different people, and they’re better off at each step of the way. Therefore, you have an all-things-considered preference for their replacement.
I basically think that this is a proof of pretty radical full-blooded utilitarianism. Utilitarian extraordinaire Richard Chappell disagrees, thinking that Hare’s argument is ultimately unconvincing. Here, I’ll address what Richard says in response to Hare’s arguments.
Richard argues that we can be justified in having personal concerns—one can be justified in being partial to one’s son Bob, for example. The partialist would thus favor every step of the sequence until the one where Bob stops existing. Hare worries that this violates a plausible anti-haecceitist principle according to which whether you prefer A to B can’t depend on what’s actual. I agree with Richard that partialists have no reason to accept such a constraint.
But the view still produces a few distinct varieties of weird conclusions. For one, it denies the transitivity of active favoring, where one actively prefers A to B if, were one unsure whether A or B was actual, one would hope that A was actual. I actively prefer that my bathroom is as I left it to its being full of snakes: I don’t know which it is (though I have a hunch), and I have a strong preference that it not be full of snakes. But it just seems like a plausible requirement of rationality that the active-favoring relation is transitive.
Second, this produces troubling results that violate Minimal Benevolence. Suppose that there are 5 million buttons, each of which would make John slightly better off and more like Tim. If all 5 million are pressed, then John would simply become Tim. On Richard’s view, it would be worth pressing all of the buttons except one. But once you’ve pressed all the buttons except the last one, why not press the last one? The new person who started out as John but has been morphed would like you to—he’d be better off. “The original John is no more, so why not press the button? I am the same person as John—the only one that exists—and I’d be made better off,” he’d proclaim. If he is the only one who exists, it seems worth pressing the button—it would make him better off and harm no one.
Now perhaps it could be objected that one shouldn’t press the earlier buttons in the first place, because doing so puts one in a position where the later buttons are worth pressing. If one presses the early buttons then, acting rightly, one will end up in a situation where John is replaced. But this seems flagrantly irrational: it requires thinking that the fact that some action gives you further options that will be worth taking can count against that action. To see why this is irrational, imagine the following dialogue:
Genie: here are 4,999,999 buttons. If you press them all, John will just be made better off.
Button presser: Okay, I’ll press them.
Genie: But before you do, know that if you do, you’ll have the option to press another button that is worth pressing. This will make another change.
Button presser: Wow, that’s a great button that’s worth pressing, though I won’t have to press it. But as a result of its addition, I will now not press the earlier buttons that would enable me to press it.
Genie: You won’t have to press the final button.
Button presser: I know. But it’s worth pressing. That’s why I won’t press the earlier buttons—though pressing them would itself be a worthwhile sequence of actions, it would unlock a further worthwhile action that I want to avoid.
Genie: I’ll cut you a deal! I’ll make it so that you can’t press the final button if you press the first 4,999,999.
Button presser: Deal!
Clearly, this is irrational! A perfectly benevolent and rational agent shouldn’t bind their hands, removing their ability to take future worthwhile acts. An agent’s will shouldn’t be at war with itself, with their hoping that they won’t take future actions that are worth taking. The addition of future worthwhile actions can’t count against an action!
Third, it seems plausible that personal identity is vague. There isn’t a precise fact about how many changes make one no longer the same person. But if it’s vague whether a person is replaced, and you shouldn’t replace a person, then there is a range of actions whose desirability is vague—where there isn’t a precise fact about it. But how should one act in such cases? If one accepts that there are precise facts about what one should do in various circumstances—which one should—then one can’t hold both that personal identity is vague and that it determines what one should do.
Notably, even if one accepts Richard’s view that what one should prefer depends on where they are in modal space, Hare’s argument still gives one reason not to bite the bullet on the non-identity problem. Whether you support A over B might depend on whether A or B is actual, but if one is deciding when to procreate, neither child is actual.
Thus, one who is rational is either a soul or a utilitarian.
I also accept Hare's argument.
Like many intuitions that run against consequentialist conclusions, the one giving people issues here seems to rely upon an ambiguity, in this case with respect to "replacing". The sense of "replace" relevant to the argument would involve the *theoretically previous* (not temporally previous!) person being substituted in their entirety, including all knowledge, memories and hopes others have concerning them. The sense of "replace" making the conclusion counterintuitive is the sense of one person simply disappearing and another appearing, with the grief, confusion and costly adjustments by others remaining as part of the picture.
Perhaps the problem here is a core difference people see between existence/nonexistence and utility/harm, somewhat parallel to how people don't accept the conclusion that a certain number of toe stubbings over a lifetime is a greater harm than death. If you value the existence of a person (in any state, positive or negative) as a matter of first-order moral significance, you might object to this hypothetical in all but the most extreme pleasure/pain differentials, distorting the applicability to other scenarios.