18 Comments
Dec 5, 2023 · Liked by Bentham's Bulldog

I also accept Hare's argument.

Like many intuitions that run against consequentialist conclusions, the one giving people issues here seems to rely upon an ambiguity, in this case with respect to "replacing". The sense of "replace" relevant to the argument would involve the *theoretically previous* (not temporally previous!) person being substituted in their entirety, including all knowledge, memories, and hopes others have concerning them. The sense of "replace" making the conclusion counterintuitive is the sense of one person simply disappearing and another appearing, with the grief, confusion, and costly adjustments by others remaining as part of the picture.


Perhaps the problem here is a core difference people see between existence/nonexistence and utility/harm, sort of parallel to how people don't accept the conclusion that a certain number of toe stubbings over a lifetime is a greater harm than death. If you value the existence of a person (in any state, positive or negative) as a matter of first-order moral significance, you might object to this hypothetical in all but the most extreme pleasure/pain differentials, distorting the applicability to other scenarios.

Oh and change the subheading from “Carpar” to “Caspar” 😁


> For one, it denies the transitivity of active favoring. One actively prefers A to B if, were one unsure whether A or B was actual, they’d hope A was actual.

I don't see how the transitivity of active preferences is denied if one favors every step up until the one where Bob stops existing. I'm not sure what the argument is supposed to be for that.

It looks like you need some kind of linking premise which asserts that one's normal (not active) preferences should stand in some kind of relationship to their *active* preferences. Without such a linking premise, one can just say "Sure, if I didn't know what was actual, I would prefer P(n) to P(n-1) for all n and therefore (by transitivity) I would prefer P(n) to P(m) for all n > m. However, given that I know what is actual, I prefer P(n) to P(n-1) only if n doesn't breach the threshold determined by what is actual"
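
Put slightly more formally (the subscripts and the threshold function τ below are just my own labels for the structure, not anything from the post): the first line is all that transitivity of *active* favoring gives you, the second is the knowledge-relative pattern I'm describing, and only something like the third line would connect the two.

```latex
% A rough sketch of the gap; \tau and the subscripts are my own notation.
\begin{align*}
\text{Active (not knowing what is actual):}\quad
  & P(n) \succ_{\mathrm{act}} P(n-1)\ \text{for all } n
    \ \Longrightarrow\ P(n) \succ_{\mathrm{act}} P(m)\ \text{for all } n > m,\\
\text{Given the actual world } a\text{:}\quad
  & P(n) \succ_{a} P(n-1) \iff n \le \tau(a),\\
\text{Missing linking premise:}\quad
  & P(n) \succ_{\mathrm{act}} P(m)\ \Longrightarrow\ P(n) \succ_{a} P(m).
\end{align*}
```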

> Third, it seems plausible that personal identity is vague. There isn’t a precise fact about how many changes make one no longer the same person. But if it’s vague whether a person is replaced, and you shouldn’t replace a person, then there are a range of actions whose desirability is vague—where there isn’t a precise fact about it. But how should one act in such cases? If one accepts that there are precise facts about what one should do in various circumstances—which they should—then they can’t think that personal identity is both vague and determines what one should do.

If one accepts this, then this undermines much of the intuitive force for Minimal Benevolence (at least for non-Utilitarians). Much of the plausibility of that principle stems from how well it explains our intuitions about treating persons. Without the force of person-affecting intuitions, Minimal Benevolence is much less obvious.

I do however think that the vagueness of personal identity is a problem for one who believes both that (1) moral realism is true and (2) our treatment of persons is a fundamental morally relevant consideration, apart from just aggregate utility (e.g., if you think we should respect the rights of other persons, we should keep our promises to other persons, we should give persons what they deserve, we should be partial towards persons that we stand in special relationship with, we should give special priority to persons who are worse-off, we should care about how utility is distributed among persons, etc. -- in fact, I think almost all of the proposed non-Utilitarian morally relevant factors depend on persons in some way). If one thinks that vagueness doesn't exist objectively and/or thinks that the contours of personal identity depend on social conventions (as I do), then it's not clear why or how a non-objective factor (i.e. personal identity) would have objective fundamental moral relevance.


Some nitpicks on the first two principles:

> Transitivity: if you prefer A to B and B to C you are rationally required to prefer A to C.

I don't think you accept this given your realist perspective. I believe you've said in the past that you take "one is rationally required to do X" to be analytically equivalent to "one should do X". If so, then I think you should mean something like this:

"If you SHOULD prefer A to B and B to C, then you should prefer A to C".

But you wouldn't say:

"If yo DO prefer A to B and B to C, then you should prefer A to C"

Unless you've had some fairly big change in your views recently.

> Rational People are Guided by their Preferences: if there are two options, A and B, open to you and you prefer A to B, you will actualize A rather than B. For example, if you’d rather there be a gold bar in the middle of Times Square than a genocide, then if you are deciding between bringing about one of the two, you should pick the gold bar over the genocide.

Again, here presumably what you mean is "If you SHOULD prefer X, then you should do X". Otherwise, I don't think you accept this.


I think vagueness of personal identity can also cause problems for Utilitarians. Many of the arguments for e.g. transitivity of preferences derive from money pumps--it's irrational to put yourself in a position to lose money while someone cycles through your preferences.

But if it's vague whether the person who faces the choice between Z and A at the end of a cycle is the same as the person who faced the choice between B and A at the beginning, the argument loses some of its force.

If it's someone else who loses the dollar to get back to the choice that I had initially, it's much less compelling than if it's *me specifically* who loses the dollar.
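
For concreteness, here's the bare form of the pump I have in mind; the particular outcomes, the $1 fee, and the code are just my own illustration, not anything from the post.

```python
# Toy money pump: an agent with cyclic preferences A < B < C < A.
preferences = {("B", "A"): True, ("C", "B"): True, ("A", "C"): True}

def prefers(x, y):
    """True if the agent strictly prefers x to y (per the cyclic table above)."""
    return preferences.get((x, y), False)

holding, wallet = "A", 0.0
fee = 1.0  # the agent pays a small fee for each trade it prefers

# Offer trades the agent prefers, cycling A -> B -> C -> A.
for offer in ["B", "C", "A"]:
    if prefers(offer, holding):
        holding = offer
        wallet -= fee

print(holding, wallet)  # back to "A", but $3 poorer
```

The charge of irrationality lands because the *same* agent ends up back at A, three dollars poorer; if identity across the cycle is vague, that's exactly the premise that gets blurry.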


Great discussion! I agree with your conclusion re: non-identity cases (where neither A nor B is antecedently actual).

Self-binding is only "clearly irrational" if we should never do things that change our values/preferences, like falling in love. But if it's possible that you should do something that will change your preferences, then your (current) preferences could be better achieved by binding your future self. This would seem to make such self-binding rational (if your current preferences are themselves reasonable).

And, of course, wanting your child not to be replaced by a completely different person seems pretty reasonable, on its face.

To avoid implausibly sharp discontinuities, it may be that you should discount benefits to the counterfactual person the more different they become from your actual child. There will be some optimal point X that best balances benefits to your child vs changes to their psychological identity. After you adjust to that change, and come to care about the new person Child(X), you'll now be tempted to make further changes for *their* sake. But the prospect of this further change would make it *not* worth it, for Child(1)'s sake, to press X buttons in the first place. It will only be worth it if you can self-bind your future self to stop at X.
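
To illustrate the shape of this with made-up numbers (the linear benefit and quadratic identity-change cost below are just a toy of my own, not anything from the post):

```python
# Toy model: each button press adds a fixed benefit, and psychological drift
# away from whichever child you currently care about (the "anchor") is costed
# quadratically. All numbers are illustrative.

def value_for(anchor, n):
    """Value of stopping at n presses, judged from Child(anchor)'s standpoint."""
    benefit = 10 * n                    # benefit grows with presses
    identity_cost = (n - anchor) ** 2   # cost of drifting away from Child(anchor)
    return benefit - identity_cost

candidates = range(1, 101)

# Best stopping point X from Child(1)'s standpoint:
X = max(candidates, key=lambda n: value_for(1, n))       # -> 6

# Once you've adjusted and care about Child(X), the optimum recedes:
X_next = max(candidates, key=lambda n: value_for(X, n))  # -> 11

# And from Child(1)'s standpoint, letting the drift run is far worse than
# stopping at X, which is why pressing at all is only worth it with self-binding:
print(X, X_next, value_for(1, X), value_for(1, X + 25))  # 6 11 35 -590
```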
