Jay M has written a long comment on my previous post, one which I’ll respond to here.
> 1. If you make every day of a person’s life better and then add new good days at the end of their life, you make the person’s life better. This is obvious.
> 2. If one life has a million times higher average utility per day than another, a million times higher total utility, and a more equal distribution of utility, it is a better life. This is also obvious.
> 3. If A>B and B>C, then A>C. I’ve defended transitivity here, while insulting those who criticize transitivity.
> All plausible principles. But this isn’t sufficient to make it reasonable to accept the conclusion, i.e. that it’s better to live as the oyster than as Haydn. What you’ve really shown is that there are 4 plausible propositions that are mutually inconsistent: the 3 principles you just mentioned and the proposition that it’s better to be Haydn than the oyster. When met with a set of mutually inconsistent propositions, the rational decision is to abandon the least plausible proposition. It seems to me that the *least* plausible proposition here is your second principle. Prior to seeing this argument, I would have given the second principle maybe ~80% credence, but I’d give the other 3 propositions (your other 2 principles and the proposition that it’s better to be Haydn) >90% credence.
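To make the decision rule in this passage concrete, here is a rough sketch (mine, not Jay M’s) of how those credences would bear on which proposition to give up. The 0.90 and 0.80 figures stand in for ">90%" and "~80%", and the independence and "exactly one proposition is false" assumptions are simplifications I am adding:

```python
# Illustrative sketch only: given the stated credences, which of the four
# jointly inconsistent propositions is most likely the false one?
# Assumes independent credences and that exactly one proposition is false
# (both simplifications added by me).
credences = {
    "Principle 1 (better days + added good days)": 0.90,
    "Principle 2 (million-fold average/total, more even)": 0.80,
    "Principle 3 (transitivity)": 0.90,
    "Haydn's life is better than the oyster's": 0.90,
}

weights = {}
for name, c in credences.items():
    w = 1 - c  # probability this one is false...
    for other, c_other in credences.items():
        if other != name:
            w *= c_other  # ...while the rest are true
    weights[name] = w

total = sum(weights.values())
for name, w in weights.items():
    print(f"P({name} is the culprit) ~ {w / total:.2f}")
```

On those stipulated numbers, the second principle comes out as the proposition most likely to be the one that has to go, which is just Jay M’s point restated; the response below disputes the credences rather than the rule.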
Remember, the dialectical context is that this is being trotted out as a counterexample to an otherwise plausible moral theory. Thus, whether it’s an effective counterexample will hinge on whether it is hard to reject. If it turns out that we have good reason to reject it, or at least that it’s reasonable to reject, it is not a good counterexample. This is especially true because of the meta reasoning about apparent counterexamples that I describe here. If utilitarianism were correct, we’d expect it to sometimes go against our intuitions, but we’d expect that, when it does, there are other plausible principles that must be denied to accommodate the original intuition. This is exactly what we find in this case. I also find the second principle that I laid out to be far more obvious than the intuition about Haydn—particularly when we take into account the other reasons to reject the Haydn intuition. Given the tentativeness of the Haydn intuition, is rejecting it really less plausible than accepting that making your average utility per day a million times greater, with your utility evenly distributed, wouldn’t make you better off?
I'm going to assume that when you say utility considerations "outweigh" other considerations, you just mean we have most *reason* to promote utility rather than whatever other non-utility considerations are present.
You assume correctly.
> I suppose this statement of yours would be compelling to someone who believed that there is a threshold at which *the quantity of utility alone* can outweigh other considerations. But I don’t think most people value utility in this way. Someone might have a utility threshold based on something other than just the _quantity_ of utility. For example, someone might think that the disvalue of suffering, but not the value of pleasure, can outweigh other considerations. Or one might think the utility threshold can be breached only if the utility is coextensive with other values (e.g., one might think that self-awareness must be present, that one must maintain one’s sense of personal identity while experiencing the utility, that one’s memories persist, that average utility is sufficiently high, etc.).
Positing that only suffering can outweigh other considerations involves a puzzling sort of asymmetry, which becomes especially hard to maintain given the possibility of trading off pain avoidance for some amount of pleasure. If only self-aware beings’ utility matters, this would exclude many infants.
> So one might think that even an infinite quantity of utility doesn’t necessarily cross their threshold, since the infinite quantity of utility might not be generated in the right way. In any event, you would need an argument explaining why, if someone adopts a threshold at which utility can outweigh other considerations, they should adopt a threshold at which *quantity of utility alone* must outweigh other considerations. Without such an argument, there is no compelling reason to accept your statement here.
> Note that if one believes that the utility threshold can always be satisfied by quantity of utility alone, then that would imply that the threshold can be breached even if the utility isn’t *theirs*. When I make a sacrifice now in order to promote my happiness in the future, what is it that makes the happiness in the future "mine"? Well, presumably it’s a sense of personal identity, psychological continuity, and maybe some other mental processes (self-awareness?). But if none of these mental processes are present, then there is no sense in which the future utility is mine (even if the future utility is nevertheless "good"). In fact, it seems that if I perform an action that creates a lot of utility for my body in the future but the aforementioned kinds of mental processes are absent, then I’ve actually committed suicide for the sake of some other being who will use my body. If that’s right, then if the utility threshold can always be breached by the quantity of utility alone, it would be irrational not to commit suicide to allow for a new being to be produced (so long as the new being experienced a sufficiently large amount of utility). In other words, it would be irrational not to commit suicide for the eternal oyster. But this is implausible.
I would defend the view that it would be irrational to not sacrifice oneself for the oyster, but it’s not entailed by the previous discussion—I may discuss that view at some future juncture. Specifically, the claim is that in moral tradeoffs, one has most moral reason to take action X if action X produces enough utility. For the present purposes, I was assuming that the oyster was somehow you, in at least the same sense that Haydn would be you. Whether that’s possible comes down to questions about personal identity, which are not worth delving into. However, if it is inconceivable, we can replace the oyster with a person “floating very drunk in a warm bath, forever.”
> I’m going to translate all talk about "value/goodness/badness" to talk about (self-interested) "reasons". That said, while it may seem that *goodness* is aggregative in the sense you allude to here, it does not seem that our *reasons* are aggregative in this same sense. A good example of this concerns aggregating experiences that are qualitatively identical. Let’s say a man is stuck in a time loop where he repeats the same day over and over (like Groundhog Day). Fortunately for him, he doesn’t know that he’s in a time loop. Also, the day is actually relatively good. It might seem plausible that experiencing the day N times is N times as good as experiencing the day once (all else equal).
> But now consider someone else who has the option of _entering_ the time loop for N days (followed immediately by his death), where the quality of life of each day is greater than the average quality of life that he would have if he refrains from entering. Despite the fact that N days in the time loop is N times as "good" as just 1 day, it does not seem that he necessarily has N times as much reason to *enter* the time loop as he does to experience the day just once (even if we assume all else equal). To say otherwise would imply that there is some number of days N in the time loop such that it would be irrational not to enter. But this seems implausible. It seems like it may be rational to choose to, say, pursue other achievements in life rather than living the same (happy) day over and over.
> This also implies that what a person has most self-interested *reason* to do can diverge from what is most *good* for them, something that I find plausible for other reasons.
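To spell out the aggregative arithmetic behind the time-loop step above (my gloss; the symbols are not Jay M’s): if each repeated day is worth some positive amount $u$ and the ordinary remainder of one’s life is worth $V$ in total, then a purely aggregative view of reasons says entering the loop for $N$ days beats not entering whenever

$$
N \cdot u > V, \quad \text{i.e.} \quad N > \frac{V}{u},
$$

so some finite $N$ would make entering rationally mandatory, which is exactly the implication Jay M finds implausible.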
I think it would be irrational not to plug in, and our contrary intuitions can be easily explained away. Given that uniformity is never conducive, in the real world, to overall utility, it’s unsurprising that we’d have the intuition that a totally uniform life would be worse.
I take X to be good for you iff you have self-interested reason to promote X, so the second claim seems definitionally false.
> Suppose that every day Haydn has 100,000 utility and the oyster has .1, as was stated above. Presumably living for 77 years as Haydn currently would be less good than living for 87 years with utility of 95,000, which would be less good than 150 years with utility of 90,000, which would be less good than 300 years with utility of 85,000, which would be less good than 700 years with utility of 80,000… which is less good than one person having utility of .1 for 100000000000000000000000000000 years, which is clearly inferior to the oyster. By transitivity and this analysis, the oyster’s life is better.
> The central premise underlying each step in this reasoning is that a much longer life with slightly lower average utility is better. Note that this is plausible only if we assume all other values are held constant when comparing the shorter and longer life. But there will certainly be some point between Haydn’s life and the oyster’s life where all other values are *not* equal. For example, there will be some point (which is vague and probably undefinable) where one loses self-awareness. Assuming that one values self-awareness in the same way that (I assume) most people do, it does not always seem better to greatly extend one’s lifespan in exchange for inching one’s experiential quality closer to that of the oyster.
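For concreteness, here is roughly what the figures in the quoted chain come to in total, on the assumptions (mine, suggested by the wording above) that the stated utilities are per day and that years have 365 days; the script and its labels are just an illustration:

```python
# Rough totals for the lives in the quoted chain, assuming the stated utilities
# are per day and using 365-day years (illustrative assumptions, not claims
# made in the original argument). The written-out huge number is approximated
# here as 10**29 years.
lives = [
    ("Haydn, 77 years at 100,000/day", 77, 100_000),
    ("87 years at 95,000/day", 87, 95_000),
    ("150 years at 90,000/day", 150, 90_000),
    ("300 years at 85,000/day", 300, 85_000),
    ("700 years at 80,000/day", 700, 80_000),
    ("~10**29 years at 0.1/day", 10**29, 0.1),
]

for label, years, per_day in lives:
    total = years * 365 * per_day
    print(f"{label}: total utility ~ {total:.3g}")
```

Each life in the chain has a larger total than the one before it, and the eternal oyster, at 0.1 per day forever, eventually exceeds any of these finite totals; the dispute is over whether that aggregative fact settles which life is better.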
But a self-awareness requirement would plausibly exclude infants. I also don’t think vagueness in reality is possible—there has to be some fact of the matter about whether a being is self-aware; vagueness is a part of language, not reality. The map, not the territory, is vague. Additionally, if it’s vague, there’s no firm cutoff, yet this argument requires one.
However, even if we grant this, it would mean that as long as the oyster was self-aware, it would be better off than Haydn. This conclusion is bolstered by the many biases that undermine the reliability of our intuitions, combined with the other arguments I’ve presented.
---

Jay M’s reply:
Another long response, unfortunately! But I think there are some common themes in our disputes, which can be boiled down to the 2 main points that I give at the bottom.
> Remember, the dialectical context is that this is being trotted out as a counterexample to an otherwise plausible moral theory. Thus, whether it’s an effective counterexample will hinge on whether it is hard to reject. If it turns out that we have good reason to reject it, or at least that it’s reasonable to reject, it is not a good counterexample. This is especially true because of the meta reasoning about apparent counterexamples that I describe here. If utilitarianism were correct, we’d expect it to sometimes go against our intuitions, but we’d expect that, when it does, there are other plausible principles that must be denied to accommodate the original intuition. This is exactly what we find in this case.
I'm not sure what much of this means, but it sounds like a lot of realist talk. I don't know what I would expect about the relation between utilitarianism and our intuitions if utilitarianism were "correct", since I don't know what you mean by "correct". If you mean "correct" in some realist sense, then I just don't understand what it would mean for a moral theory to be correct in a realist sense. If you mean "correct" in some anti-realist sense, then I *might* have a notion of what to expect, but it would depend on the flavor of anti-realism.
> I also find the second principle that I laid out to be far more obvious than the intuition about Haydn—particularly when we take into account the other reasons to reject the Haydn intuition. Given the tentativeness of the Haydn intuition, is rejecting it really less plausible than accepting that making your average utility per day a million times greater, with your utility evenly distributed, wouldn’t make you better off?
I touch on this below, but I take it that claims about an agent's (non-moral) reasons for action are just claims about what is conducive to that agent's values. So I don't think any fact can be a reason to reject the Haydn intuition unless that fact explains why the intuition doesn't track my values (or the values of the agent who is considering becoming the oyster).
> Positing that only suffering can outweigh other considerations involves a puzzling sort of asymmetry, which becomes especially hard to maintain given the possibility of trading off pain avoidance for some amount of pleasure. If only self-aware beings’ utility matters, this would exclude many infants.
I’m not sure if I value pleasure/suffering in an asymmetric way (probably not), but I do believe that the pleasure needs to be coextensive with the other values that I mentioned (and perhaps more) before it can outweigh all other considerations.
Also, regarding infants, the claim is not that pleasure needs to be coextensive with other values in order to matter. Rather, the claim is that the pleasure needs to be coextensive with other values in order to *breach the utility threshold*. Of course, pleasure is good for infants, but the prospect of infant-like pleasure in my own future wouldn’t be sufficient to outweigh all non-utility considerations.
> I would defend the view that it would be irrational to not sacrifice oneself for the oyster, but it’s not entailed by the previous discussion—I may discuss that view at some future juncture. Specifically, the claim is that in moral tradeoffs, one has most moral reason to take action X if action X produces enough utility. For the present purposes, I was assuming that the oyster was somehow you, in at least the same sense that Haydn would be you. Whether that’s possible comes down to questions about personal identity, which are not worth delving into. However, if it is inconceivable, we can replace the oyster with a person “floating very drunk in a warm bath, forever.”
For the record, I interpreted the previous post to be about non-moral reasons, since moral reasons are presumably intrinsically other-regarding, whereas we’re concerned with the reasons that an agent has when only considering himself.
Regarding the bath hypothetical, that avoids the problems with suicide. But the same basic point applies: one might have a threshold where utility considerations outweigh non-utility considerations, but it might be that the *quantity* of utility alone is not enough to breach the threshold, i.e. other conditions must be met for pleasure to breach the threshold. I don't have a general theory or framework outlining these conditions because I think an agent's (non-moral) reasons vary based on their values. I have given some conditions that must be met for (I would assume) most people.
> I think it would be irrational not to plug in, and our contrary intuitions can be easily explained away. Given that uniformity is never conducive, in the real world, to overall utility, it’s unsurprising that we’d have the intuition that a totally uniform life would be worse.
This is similar to a point I made above. I don’t know what it means to "explain away" an intuition. If you’re just saying you can provide evidence that our intuitions are not tracking the objective reasons, then I would not actually disagree with you. But that’s because I don’t think *any* of our intuitions are tracking the objective reasons. Again, I take claims about an agent’s (non-moral) reasons to be claims about what accords with their values. On my view, the only way you could "explain away" my intuition is if you provided evidence that my intuition did not track my values.
> I take X to be good for you iff you have self-interested reason to promote X, so the second claim seems definitionally false.
"Self-interested" reasons probably wasn't the best term, since I'm sensing some ambiguity here. On one sense, self-interested reasons are just reasons based on one's self-interest. And self-interest is plausibly just synonymous with one's good, so self-interested reasons in this sense would be definitionally identical to goodness.
But what I meant by "self-interested reasons" was a broader notion. I meant the reasons that one has based on *their* values. This is a broader notion because presumably a person could value things other than their self-interest (e.g., they may value the self-interest of others or they may value impersonal things such as knowledge, art, etc.). While this sense of "self-interested reasons" is broader than the self-interest sense, I included the qualifier "self-interested" to exclude reasons that aren’t based on an agent’s values (e.g., presumably an agent’s moral reasons are not based on an agent’s values, which is why I switched to saying "non-moral reasons" in this response).
> But a self-awareness requirement would plausibly exclude infants. I also don’t think vagueness in reality is possible—there has to be some fact of the matter about whether a being is self-aware; vagueness is a part of language, not reality. The map, not the territory, is vague. Additionally, if it’s vague, there’s no firm cutoff, yet this argument requires one.
> However, even if we grant this, it would mean that as long as the oyster was self-aware, it would be better off than Haydn. This conclusion is bolstered by the many biases that undermine the reliability of our intuitions, combined with the other arguments I’ve presented.
Regarding infants, this is similar to my earlier point. The point here isn't that utility without self-awareness lacks value. The point is just that all else must be equal in order for it to be plausible to say that a large increase in duration is worth more than a small sacrifice in average utility per unit of time.
Regarding the self-aware oyster, self-awareness was just one example. Another might be a persistent sense of personal identity. You could respond with the eternal bath example, which does have a persistent sense of personal identity. But I would give a similar response: depending on one’s values, there may be some goods which cannot be maintained as one’s experiential quality inches closer to that of the eternal oyster (or the eternal bath). As a result, there will be some line along the way where the "all else equal" condition that I mentioned will no longer be satisfied. Where that line is will vary based on one’s values.
---
This is (again) a long post, but I think the main points are:
1. You seem to be invoking a lot of realist ideas or ideas that only seem plausible if realism is true. As an anti-realist and someone who doesn't even understand realism, I think we're probably talking past each other.
2. When I mention the importance of other values such as self-awareness, I’m not saying that these other values are necessary for the pleasure to matter. Rather, I’m making narrower claims, e.g., the pleasure must be coextensive with self-awareness to breach one’s utility threshold, self-awareness must be held fixed for it to be plausible that "living for 77 years as Haydn currently would be less good than living for 87 years with utility of 95,000", etc.