One of my favorite philosophers, Richard Yetter Chappell, wrote a paper about value receptacles several years back. While I think a lot of it is right, I have some significant disagreements. Yetter Chappell, I think, decisively refutes the value receptacles objection: he identifies several assumptions that seem true to anyone who finds the objection persuasive, and then shows that those very assumptions allow one to easily avoid the objection. However, while this should convince one that the objection doesn't work, I think many of the assumptions end up not being correct.
After explaining why some traditional versions of the value receptacles objection don't work, Yetter Chappell presents what he takes to be the best version of the objection.
Recall Singer (1993, 121)’s evocative explanation of the ‘value receptacle’ metaphor: “It is as if sentient beings are receptacles of something valuable and it does not matter if a receptacle gets broken, so long as there is another receptacle to which the contents can be transferred without any getting spilt.” The worry here is that there’s an important sense in which utilitarianism fails to treat us as individuals. It takes our interests into account, perhaps even as interests, but not in a way that appreciates the normative distinctness of my interests and yours. We are all melded together, into a kind of unstructured, undifferentiated welfare soup.

To make the problem vivid, imagine that Connie the Consequentialist is faced with two poison victims, and just enough anti-venom to save one of them. And suppose that, faced with their pleading faces, but realizing that it makes no difference to the total utility which person she saves, Connie finds herself feeling completely indifferent about her choice. It’s as if she had to choose between a $20 bill or two tens. Now, it seems that Connie is making a deep moral mistake here. She’s treating the two people’s interests as completely fungible, like money, and neglecting what we might call the “separateness of persons”—the fact that each person is of distinct intrinsic importance, in their own right, and not merely a fungible means to aggregate welfare.

As this case illustrates, we often imagine the utilitarian agent as having but a single ultimate desire: to maximize aggregate welfare. They thus see different individuals as interchangeable. It makes no difference, to such an agent, which of several people is helped (or indeed whether one person is helped a lot or several people each helped a little), so long as the impact on aggregate welfare would be the same in either case. To bring out why this is so objectionable, note that fungibility is, in general, the mark of the instrumental.
Money is fungible precisely because we do not value the possession of particular bills: replacing two tens with a twenty would serve my ends just as well. For another example, if my sole ultimate desire is to slake my thirst, then I will be indifferent between two equally effective means to satisfying this goal. If someone switches my glass of water for another that’s qualitatively identical, this is not a change that’s normatively significant to me. I do not desire that glass in particular, so it may just as well be replaced by any other that would do the job. On the other hand, if I had (bizarrely) desired the original glass in its particularity, then the substitution would be of significance to me: it would thwart one of my non-instrumental desires. This connection between fungibility and merely instrumental valuation explains why the above objection to utilitarianism seems so forceful. It seems perverse to treat individuals as replaceable or fungible, because such treatment constitutes a failure to intrinsically value individuals in their particularity. The correct moral theory, we feel, must attribute intrinsic value to particular individuals and not just to the general welfare (cf. Cohen 2011).
Thus, Yetter Chappell presents the objection as a problem about what it's fitting to care about rather than as an axiological problem. While axiology says that saving Jim is just as good as saving Jane, it would be perverse to be totally indifferent between those two choices. He gives another example later:
An intuitive example of this is pleasure: I’m completely indifferent between the prospects of a massage for my left foot or my right, assuming that either would be similarly pleasant.
But I think there's a better explanation of this: it just seems reckless not to care very much about the results of big decisions. If I were deciding whether to have my left or right leg chopped off, it would seem bizarre and perverse to be indifferent. It seems bad and reckless to be indifferent between the options in a big decision.
To consider another example, suppose that I were deciding whether to marry someone. Even if I were assured that all lives would go precisely as well whether or not I married them, it would still seem bizarre to flip a coin to decide.
There's an obvious heuristic reason for this: in the real world, it makes a lot of sense to think hard about big decisions. Our intuitions about how we should approach decisions largely track the habits that actually produce good decisions in the world.
If what we care about is just how well people's lives go (if that's the only intrinsic value), then we should be indifferent between two states of affairs in which everyone's life goes equally well. But again, it would seem perverse not to care about whether to move to another country and start a totally different life, with a totally different distribution of pleasure and objective-list goods, even if the two lives would be equally good.
Next, Yetter Chappell describes what would allegedly be fitting to care about.
There are distinct reasons pulling you in either direction, corresponding to the distinct values served by either choice. But these reasons are equally weighty, so the agent is torn rather than pulled without resistance towards one choice over the other.
But the same seems true of the other cases: the fitting response is to be torn between staying put and moving to a different country to start a different life.
The reader should now have an intuitive grasp of the distinction between (equally-weighty) distinct final values and (equally effective) mere means to a single final value. I’ve suggested that one way this distinction might play out is that in the second case the two options are perfect substitutes, and hence the fitting attitude for an agent to take towards them is indifference. In the former case, by contrast, the two options are not substitutes; they serve different ends, albeit equally worthy ones. This naturally suggests that the fitting attitude to take is ambivalence, rather than indifference.
I think that the distinction between these is only apparent. If there are two distinct things that are equally valuable but valuable in different ways, as is true of any two distinct sources of pleasure, it's not clear that there's a fact of the matter about whether they're substitutes. This will, I think, become clearer with analysis of further points made in the paper.
If there is a set of distinctly valuable things, I don't think there's a fact of the matter about whether one cares about those things themselves or about the fact that they're members of the valuable set. More generally, if a property is necessarily associated with membership in some set, there's no fact of the matter about whether one cares about the property or the membership; there is, after all, no possible world in which the two come apart.
We are now in a position to evaluate the objection that utilitarianism treats people, and their interests, as fungible. This is, as we have seen, equivalent to interpreting utilitarianism as the view that only one token thing, namely aggregate welfare, has intrinsic value. Call this view token-monistic utilitarianism. This view really does neglect the separateness of persons, for it attributes intrinsic value merely to the whole, and not to each of us in our particularity. As a consequence, the token-monistic utilitarian mindset involves but a single desire—to maximize welfare—and treats our individual interests and concerns as mere (constitutive) means to the satisfaction of this more global goal. This is, I agree, morally perverse.

But there is no reason why utilitarianism must take this monistic form. There is a very natural alternative view, call it token-pluralistic utilitarianism, on which each particular person’s interests are (separately) accorded final value. There is not just one thing, the global happiness, that is good. Instead, there is my happiness, your happiness, Bob’s, and Sally’s, which are all equally weighty but nonetheless distinct intrinsic goods. What this means is that the morally fitting agent should have a corresponding plurality of non-instrumental desires: for my welfare, yours, Bob’s, and Sally’s. Tradeoffs between us may be made, but they are acknowledged as genuine tradeoffs: though a benefit to one may outweigh a smaller harm to another, this does not cancel it. The harm remains regrettable, for that person’s sake, even if we ultimately have most reason to accept it for the sake of more greatly benefitting another.
The comments above address a lot of this. I don't think there's a fact of the matter about whether a combination of an offsetting good and bad involves two distinct attitudes, regret and desire pulling in opposite directions, or simply nets out to indifference. The good and bad seem to offset precisely by virtue of being regrettable and desirable in opposite directions.
Contrast this with the case of money: If you have to invest $5 to earn $10, there is nothing to regret. The $5 is a “cost” merely in the sense that it would have been even better if you could have attained the $10 payoff without having to pay the $5. But given that this is not an option, there is nothing regrettable about the deal as a whole, the way that there is something regrettable about benefitting one person greatly at lesser cost to another.
But this is explainable by its being a small decision and by two five-dollar bills being functionally identical to a ten-dollar bill. There is something regrettable about moving from one valuable house to another, equally valuable house; our intuitions about these cases seem to be tracking the magnitude of the decision.
One other reason to think that this is true relates to worries about personal identity. I agree with Parfit: I don't think there's a precise, fundamental distinction between persons. Thus, I think it makes sense to think of ethics as being about the interests of an all-experiencing super-person; after all, there's no robust sense in which our future selves remain us.
An interesting implication of my account is that we may find that we actually treat our interests-at-a-time as fungible. While we might initially have assumed that our momentary interests have final value, we may find on reflection that we consider our interests across time, unlike interests across people, to be properly fungible. As in the case of fungible pleasures, this view can easily be incorporated into my framework by positing that individuals’ interests-at-times are mere constitutive means to the final good of their timeless welfare. Alternatively, you might opt for the view that it’s fitting to consider tradeoffs between timeslices to be just as emotionally fraught as tradeoffs between persons, and so assign final value to each momentary self individually. For purposes of this paper, I can remain neutral on this question of whether to attribute final value to momentary welfare, or only to timeless welfare.
But I think there's a deeper problem: it's not just about interests of time slices; it's also about the distinct ways in which things benefit people. Consider the moving example from before: moving to a totally different place with identical pleasure and objective goods at each moment seems like something one should care about, unlike trading a ten-dollar bill for two five-dollar bills. Thus, this seems to require regarding the causes of welfare as ends in and of themselves; after all, even if they all provide equal welfare, we should find something regrettable about changing them, though that regret is offset by excitement.
Here's one intuition pump that shows the view doesn't have anything to do with caring about time slices. Suppose that one were choosing between either
A) Having 10 minutes of a massage and then 10 minutes of reading.
or
B) Having 10 minutes of reading and then 10 minutes of a massage.
Suppose the reading brings them more pleasure than the massage. If time slices were ends in and of themselves, there should be something regrettable about either choice, since which time slice gets the more pleasant experience differs between the two. This does not, I believe, seem right. The experiences seem to be the same across the two cases, like trading a ten-dollar bill for two five-dollar bills.
Thus, I think that overall, Yetter Chappell's intuitions play on two features. First, it seems reckless to have an indifferent, apathetic attitude towards big decisions. Second, when the things at stake are genuinely different, it seems one shouldn't have an indifferent, apathetic attitude. But any time benefits are distributed across different people, the things at stake are different. When the people are different, just as when the benefits to the same person are different, it seems one shouldn't be indifferent about switching them around.
Thus, I think the utilitarian has a decent response to the value receptacles objection. First, insist that there's no precise fact of the matter about whether we care about things because they're good for people or because they're just good; after all, being good is necessarily the same as being good for someone. The two can't be distinguished any more than maximizing the conscious experience of light can be distinguished from maximizing people's conscious experience of light. Then, apply the account I've given: one shouldn't be indifferent towards exchanging very different things, or towards big decisions.
Do people think this account is successful? I'm pretty uncertain about my thoughts here, so I'd be curious to hear what people think. I'm most confident in the Parfit-style considerations about personal identity: do people think there's a way to reconcile those with the account given by Yetter Chappell?
At the start of writing this, I felt more confident in the account than I do now. But maybe that’s a mistake.
Thanks for this! Three main thoughts:
(1) What counts as a "big decision"? I say: one on which *distinct* big values are in conflict. It's not enough to just have big value involved. You shouldn't feel deeply torn about whether to claim a $1 billion prize in $100 bills or in $20 bills. As you say, these are "functionally identical" -- but that sounds like just another word for *fungible*, i.e. making no difference that it makes sense to care about.
So there is clearly a distinction to be drawn between equal-valued choices that make no difference worth caring about, vs. equal-valued choices that *are* meaningfully different. And that's just what I'm trying to capture.
(2) I agree that equal-valued choices within a life can be meaningfully different. And this isn't limited to timeslices. We may hold that different types of pleasure, or different types of objective value, warrant valuing "separately" from each other in this way. That's not an objection to my view, but an application of it.
(3) Parfit's deflationary view of identity is compatible with seeing something in the vicinity (e.g. "relation R" / psychological continuity) as having much the same significance as we ordinarily attribute to identity. See: https://rychappell.substack.com/i/56234504/does-anything-matter-in-survival