6 Comments

> 1 If you make every day of a person’s life better and then add new days of their life which are good at the end of their life, you make the person’s life better. This is obvious.

> 2 If one life has a million times higher average utility per day than another, a million times higher total utility, and more equal distribution of utility, it is a better life. This is also obvious.

> 3 If A>B and B>C then A>C. I’ve defended transitivity here, while insulting those who criticize transitivity.

All plausible principles. But that isn't sufficient to make it reasonable to accept the conclusion, i.e. that it's better to live as the oyster than as Haydn. What you've really shown is that there are 4 plausible propositions that are mutually inconsistent: the 3 principles you just listed and the proposition that it's better to be Haydn than the oyster. When faced with a set of mutually inconsistent propositions, the rational response is to abandon the least plausible member of the set. It seems to me that the *least* plausible proposition here is your second principle. Prior to seeing this argument, I would give the second principle maybe ~80% credence, but I'd give each of the other 3 propositions (your other 2 principles and the proposition that it's better to be Haydn) >90% credence.

> If one adopts any threshold at which utility can outweigh other things, then they’d have to accept that the utility is so great in this case that it outweighs any other non utility considerations.

I'm going to assume that when you say utility considerations "outweigh" other considerations, you just mean we have most *reason* to promote utility rather than whatever other non-utility considerations are present.

I suppose this statement of yours would be compelling to someone who believed that there is a threshold at which *the quantity of utility alone* can outweigh other considerations. But I don't think most people value utility in this way. Someone might have a utility threshold based on something other than the mere _quantity_ of utility. For example, someone might think that the disvalue of suffering, but not the value of pleasure, can outweigh other considerations. Or one might think the utility threshold can be breached only if the utility is accompanied by other values (e.g., one might think that self-awareness must be present, that one must maintain one's sense of personal identity while experiencing the utility, that one's memories are preserved, that average utility is sufficiently high, etc.).

So one might think that even an infinite quantity of utility doesn't necessarily cross their threshold, since the infinite quantity of utility might not be generated in the right way. In any event, you would need an argument explaining why, if someone adopts a threshold at which utility can outweigh other considerations, they should also adopt a threshold at which the *quantity of utility alone* can outweigh other considerations. Without such an argument, there is no compelling reason to accept your statement here.

Note that if one believes that the utility threshold can always be satisfied by quantity of utility alone, then that would imply that the threshold can be breached even if the utility isn't *theirs*. When I make a sacrifice now in order to promote my happiness in the future, what is it that makes the future happiness "mine"? Well, presumably it's a sense of personal identity, psychological continuity, and maybe some other mental processes (self-awareness?). But if none of these connections hold, then there is no sense in which the future utility is mine (even if the future utility is nevertheless "good"). In fact, it seems that if I perform an action that creates a lot of utility for my body in the future while severing all of these connections, then I've actually committed suicide for the sake of some other being who will use my body. If that's right, and the utility threshold can always be breached by quantity of utility alone, then it would be irrational not to commit suicide to allow a new being to be produced (so long as the new being would experience a sufficiently large amount of utility). In other words, it would be irrational not to commit suicide for the eternal oyster. But this is implausible.

> If you think that oyster pleasure is a little bit good and Haydn’s life is very good, if we multiply the goodness of oyster pleasure by a large enough number, we’ll get the badness of torture. There must be some number of days of oyster pleasure that can outweigh the goodness of Haydn’s life.

I'm going to translate all talk about "value/goodness/badness" into talk about (self-interested) "reasons". That said, while it may seem that *goodness* is aggregative in the sense you allude to here, it does not seem that our *reasons* are aggregative in the same sense. A good example of this concerns aggregating experiences that are qualitatively identical. Let's say a man is stuck in a time loop where he repeats the same day over and over (like Groundhog Day). Fortunately for him, he doesn't know that he's in a time loop. Also, the day is actually a relatively good one. It might seem plausible that experiencing the day N times is N times as good as experiencing it once (all else equal).

But now consider someone else who has the option of _entering_ the time loop for N days (followed immediately by his death), where the quality of life on each day in the loop is greater than the average quality of life he would have if he refrained from entering. Even though N days in the time loop is N times as "good" as just 1 day, it does not seem that he necessarily has N times as much reason to *enter* the time loop as he has to experience the day just once (even assuming all else is equal). To say otherwise would imply that there is some number of days N in the time loop such that it would be irrational not to enter. But this seems implausible. It seems like it may be rational to choose to, say, pursue other achievements in life rather than live the same (happy) day over and over.

This also implies that what a person has most self-interested *reason* to do can diverge from what is most *good* for them, something that I find plausible for other reasons.

> Suppose that every day Haydn has 100,000 utility and the oyster has .1, as was stated above. Presumably living for 77 years as Haydn currently would be less good than living for 87 years with utility of 95,000, which would be less good than 150 years with utility of 90,000, which would be less good than 300 years with utility of 85,000, which would be less good than 700 years with utility of 80,000… which is less good than one person having utility of .1 for 100000000000000000000000000000 years, which is clearly inferior to the oyster. By transitivity and this analysis, the oyster's life is better.

The central premise underlying each step in this reasoning is that a much longer life with slightly lower average utility is better. Note that this is plausible only if we assume all other values are held constant when comparing the shorter and the longer life. But there will certainly be some point between Haydn's life and the oyster's life where all other values are *not* equal. For example, there will be some point (which is vague and probably undefinable) where one loses self-awareness. Assuming one values self-awareness in the way that (I assume) most people do, it does not always seem better to greatly extend one's lifespan in exchange for inching one's experiential quality closer to that of the oyster.
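For concreteness, here is a rough tally of the totals implied by each step of the quoted chain. The lifespans and per-day figures come from the quote; treating a year as 365 days is my own simplifying assumption, and the point is only that the total climbs at every step even as the average falls.

```python
# Back-of-the-envelope totals for the quoted chain. The per-day utility
# figures and lifespans come from the quote; "365 days per year" is my
# own simplifying assumption.
steps = [
    ("Haydn: 77 years at 100,000 per day", 77, 100_000),
    ("87 years at 95,000 per day", 87, 95_000),
    ("150 years at 90,000 per day", 150, 90_000),
    ("300 years at 85,000 per day", 300, 85_000),
    ("700 years at 80,000 per day", 700, 80_000),
    ("10**29 years at 0.1 per day (near-oyster)", 10**29, 0.1),
]

for label, years, utility_per_day in steps:
    total = years * 365 * utility_per_day
    print(f"{label}: total utility ~ {total:.3g}")
```

On those figures the totals rise from roughly 2.8 × 10^9 for Haydn's 77 years to roughly 3.7 × 10^30 for the near-oyster, which is why each individual link looks plausible *if* total utility is the only thing that changes. My objection is that, somewhere along the chain, it isn't.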


I doubt it. Util says that we have to maximize desirable mental states.

Even if you live "forever," you don't necessarily have infinitely many mental states. Repeating the *exact* same mental state for 10,000 years is still only ONE mental state. A minimal amount of utility.
