
Another long response, unfortunately! But I think the common themes in our disputes can be boiled down to the two main points I give at the bottom.

> Remember, the dialectical context is that this is being trotted out as a counterexample to an otherwise plausible moral theory. Thus, whether it’s an effective counterexample will hinge on whether it is hard to reject. If it turns out that we have good reason to reject it, or at least that it’s reasonable to reject, it is not a good counterexample. This is especially true because of the meta reasoning about apparent counterexamples that I describe here. If utilitarianism were correct, we’d expect it to sometimes go against our intuitions, but we’d expect that, when it does, there are other plausible principles that must be denied to accommodate the original intuition. This is exactly what we find in this case.

I'm not sure what much of this means, but it sounds like a lot of realist talk. I don't know what I would expect about the relation between utilitarianism and our intuitions if utilitarianism were "correct", since I don't know what you mean by "correct". If you mean "correct" in some realist sense, then I just don't understand what it would mean for a moral theory to be correct in a realist sense. If you mean "correct" in some anti-realist sense, then I *might* have a notion of what to expect, but it would depend on the flavor of anti-realism.

> I also find the second principle that I laid out to be far more obvious than the intuition about Haydn—particularly when we take into account the other reasons to reject the Haydn intuition. Given the tentativeness of the Haydn intuition, is it really less plausible than the notion that making your average utility per day a million times greater and making your utility spread evenly distributed wouldn’t make you better off?

I touch on this below, but I take it that claims about an agent's (non-moral) reasons for action are just claims about what is conducive to that agent's values. So I don't think any fact can be a reason to reject the Haydn intuition unless that fact explains why the intuition doesn't track my values (or the values of the agent who is considering becoming the oyster).

> Positing that only suffering can outweigh involves a puzzling sort of asymmetry, which becomes especially hard to maintain given the possibility of trading off pain avoidance for some amount of pleasure. If only self aware beings’ utility matters, this would exclude many infants.

I'm not sure if I value pleasure/suffering in an asymmetric way (probably not), but I do believe that the pleasure needs to be coextensive with the other values that I mentioned (and perhaps more) before it can outweigh all other considerations.

Also, regarding infants, the claim is not that pleasure needs to be coextensive with other values in order to matter. Rather, the claim is that the pleasure needs to be coextensive with other values in order to *breach the utility threshold*. Of course, pleasure is good for infants, but the prospect of future infant pleasure for myself wouldn't be sufficient to outweigh all non-utility considerations.

> I would defend the view that it would be irrational to not sacrifice oneself for the oyster, but it’s not entailed by the previous discussion—I may discuss that view at some future juncture. Specifically, the claim is that in moral tradeoffs, one has most moral reason to take action X if action X produces enough utility. For the present purposes, I was assuming that the oyster was somehow you, in at least the same sense that Haydn would be you. Whether that’s possible comes down to questions about personal identity, which are not worth delving into. However, if it is inconceivable, we can replace the oyster with a person “floating very drunk in a warm bath, forever.”

For the record, I interpreted the previous post to be about non-moral reasons, since moral reasons are presumably intrinsically other-regarding, whereas we're concerned with the reasons that an agent has when only considering himself.

Regarding the bath hypothetical, that avoids the problems with suicide. But the same basic point applies: one might have a threshold where utility considerations outweigh non-utility considerations, but it might be that the *quantity* of utility alone is not enough to breach the threshold, i.e. other conditions must be met for pleasure to breach the threshold. I don't have a general theory or framework outlining these conditions because I think an agent's (non-moral) reasons vary based on their values. I have given some conditions that must be met for (I would assume) most people.

> I think it would be irrational not to plug in, and our contrary intuitions can be easily explained away. Given that uniformity is never conducive, in the real world, to overall utility, it’s unsurprising that we’d have the intuition that a totally uniform life would be worse.

This is similar to a point I made above. I don't know what it means to "explain away" an intuition. If you're just saying you can provide evidence that our intuitions are not tracking the objective reasons, then I would not actually disagree with you. But that's because I don't think *any* of our intuitions are tracking the objective reasons. Again, I take claims about an agent's (non-moral) reasons to be claims about what accords with their values. On my view, the only way you could "explain away" my intuition is if you provided evidence that my intuition did not track my values.

> I take X to be good for you iff you have self interested reason to promote X, so the second claim seems definitionally false.

"Self-interested" reasons probably wasn't the best term, since I'm sensing some ambiguity here. On one sense, self-interested reasons are just reasons based on one's self-interest. And self-interest is plausibly just synonymous with one's good, so self-interested reasons in this sense would be definitionally identical to goodness.

But what I meant by "self-interested reasons" was a broader notion. I meant the reasons that one has based on *their* values. This is a broader notion because presumably a person could value things other than their self-interest (e.g., they may value the self-interest of others, or they may value impersonal things such as knowledge, art, etc.). While this sense of "self-interested reasons" is broader than the self-interest sense, I included the qualifier "self-interested" to exclude reasons that aren't based on an agent's values (e.g., presumably an agent's moral reasons are not based on an agent's values, which is why I switched to saying "non-moral reasons" in this response).

> But self awareness plausibly would exclude infants. I also don’t think that there being vagueness in reality is possible—there has to be some fact of the matter about whether a being is self aware; vagueness is a part of language not reality. The map, not the territory, is vague. Additionally, if it’s vague, there’s no firm cutoff, which this argument requires.

> However, even if we grant this, this would mean that as long as the oyster was self aware, they’d be better off than Haydn. This conclusion is bolstered by the many biases that undermine the reliability of our intuitions, combined with the other arguments I’ve presented.

Regarding infants, this is similar to my earlier point. The point here isn't that utility without self-awareness lacks value. The point is just that all else must be equal in order for it to be plausible to say that a large increase in duration is worth more than a small sacrifice in average utility per unit of time.

Regarding the self-aware oyster, self-awareness was just one example. Another might be a persistent sense of personal identity. You could respond with the eternal bath example, which does involve a persistent sense of personal identity. But I would give a similar response: depending on one's values, there may be some goods which cannot be maintained as one's experiential quality inches closer to that of the eternal oyster (or the eternal bath). As a result, there will be some line past which the "all else equal" condition that I mentioned will not be satisfied. Where that line falls will vary based on one's values.

---

This is (again) a long post, but I think the main points are:

1. You seem to be invoking a lot of realist ideas or ideas that only seem plausible if realism is true. As an anti-realist and someone who doesn't even understand realism, I think we're probably talking past each other.

2. When I mention the importance of other values such as self-awareness, I'm not saying that these other values are necessary for the pleasure to matter. Rather, I'm making narrower claims, e.g., the pleasure must be coextensive with self-awareness to breach one's utility threshold, self-awareness must be held fixed for it to be plausible that "living for 77 years as Haydn currently would be less good than living for 87 years with utility of 95,000", etc.
