4 Comments
Richard Y Chappell:

Thanks for this! Three main thoughts:

(1) What counts as a "big decision"? I say: one on which *distinct* big values are in conflict. It's not enough to just have big value involved. You shouldn't feel deeply torn about whether to claim a $1 billion prize in $100 bills or in $20 bills. As you say, these are "functionally identical" -- but that sounds like just another word for *fungible*, i.e. making no difference that it makes sense to care about.

So there is clearly a distinction to be drawn between equal-valued choices that make no difference worth caring about, vs. equal-valued choices that *are* meaningfully different. And that's just what I'm trying to capture.

(2) I agree that equal-valued choices within a life can be meaningfully different. And this isn't limited to timeslices. We may hold that different types of pleasure, or different types of objective value, warrant valuing "separately" from each other in this way. That's not an objection to my view, but an application of it.

(3) Parfit's deflationary view of identity is compatible with seeing something in the vicinity (e.g. "relation R" / psychological continuity) as having much the same significance as we ordinarily attribute to identity. See: https://rychappell.substack.com/i/56234504/does-anything-matter-in-survival

Bentham's Bulldog:

(1) A big decision is a decision that will have a significant impact on your life. Your example is a good one -- I think for the intuition to kick in it would both have to be a big decision and have to change the distribution of value atoms in some way (by changing when they happen, by changing which ones they are (e.g. changing pleasure to friendship if one is an objective list theorist), or by changing which pleasures one experiences).

(2) We could hold that, but this seems incompatible with valuing either individuals or timeslices. If we accept

(A) We should care about good things just because they're good for individuals,

then we should accept

(B) If two things are equally good for the same individual, we should care about them equally.

Thus, I think that the intuitions are better explained by it just seeming reckless to make big changes lightly, rather than by there being distinct sources of final value.

(3) I agree with that. However, I think various thought experiments by Parfit show that relation R is not what matters.

Richard Y Chappell:

re: (B): What do you mean by "care about them equally"? I think this is only true if "equally" means "with equal strength" (just as we should care about equally good outcomes for *different* individuals with equal strength). If you mean that you should have no cares (desires) that distinguish the two outcomes, then I think this is false. You should want Fred to have great relationships, and you should *separately* want Fred to avoid physical suffering, both for Fred's sake. If you had to trade off between the two, you should feel torn, and not as though the two are fungible means to the single goal of Fred's welfare.

Note that you can get the relevant conflict even in "small stakes" cases, where it doesn't seem that the charge of "recklessness" has bite. Suppose that either of two people could suffer a mild papercut. You should separately desire that each person not (mildly) suffer, and so feel (mildly) torn in the forced choice. I don't see any reason to deny this.

re: (3): what cases do you have in mind?

Bentham's Bulldog:

(B) I think this is right. But then this doesn't reflect caring about them for different reasons; it just reflects different things -- even when they matter for the same reason -- seeming non-fungible. The objection to utilitarianism from value receptacles seems to be:

(C) Utilitarianism wrongly holds that there's nothing regrettable about anything that doesn't change the total welfare. It does this because it says that the only reason to care about goods is because they contribute to welfare.

However, we have reason to accept

(D) Even if we care about something (X) merely because it contributes to something else (Y), that doesn't mean we shouldn't feel some regret when X is replaced with Z which contributes equally to Y.

(D) is supported by various intuitions about welfare. If the only reason to care about a person being happy is that we care about things going well for them, and we should likewise care about their friendships only because we care about things going well for them, then (absent (D)) we should feel no regret when their friendship is replaced by a nice foot massage. This is, however, implausible.

The point about small stakes is a good one. I think that the more significant the decision is, the more reckless it seems. However, if distinct goods are traded off, there still seems something regrettable in small stakes decisions.

Re (3): I think various branch line cases, for example, show that relation R is not what matters. To use the example from your article, one's reasons would be no different if one were on a psychological branch line, between one's self a minute from now and one's self upon waking in the morning.
