Caplan (2008) gives the following scenario:
“Suppose you were offered the following gamble:
“1. With probability p, you will live forever at your current age.
“2. With probability (1-p), you instantly, painlessly die.
“What is your critical value of p? If you combine expected utility theory with the empirical observation that happiness is pretty flat over time, it seems like you should be willing to accept a very tiny p. But I can’t easily say that I’d accept a p<1/3.
“Perhaps the main reason is that all the people I care about would suffer a lot more from my instant death than they’d gain from my immortality. But even if I were fully selfish, I wouldn’t be enthusiastic even at p=.5. Should I get my head examined?”
I wouldn’t want to comment on whether Caplan should get his head examined. However, Caplan should accept the gamble even at a small value of p, assuming that his life would continue to be good indefinitely. The following very plausible principles, when combined with transitivity, are sufficient to justify accepting a low chance of living forever.
The first principle is the Lengthening Principle: let M be a number of years lived and N be a greater number of years lived, and let P be some probability less than 100%.
For some values of N and P, a P chance of living N years is better than a 100% chance of living M years.
The second principle is modified independence: if A is better than B, then a P chance of A is better than a P chance of B, where P is the same probability in both cases.
To see how these principles are sufficient to justify the conclusion, suppose one is deciding between a 100% chance of 60 years and a 99.9% chance of 200 years. Clearly the 99.9% chance of 200 years of life would be better.
By the second principle, a 99.9% chance of 200 years is worse than a 99.8001% chance of 400 years, provided that a 100% chance of 200 years is worse than a 99.9% chance of 400 years. After all, 99.8001% is 99.9% squared. By modified independence, if a 100% chance of 200 years is less good than a 99.9% chance of 400 years, then a 99.9% chance of (a 100% chance of 200 years) is less good than a 99.9% chance of (a 99.9% chance of 400 years), which is just to say that a 99.9% chance of 200 years is worse than a 99.8001% chance of 400 years. We can keep repeating this process, each time doubling the length of life and multiplying the probability by 99.9%; by transitivity, a low chance of a very long life ends up better than a certainty of a shorter life.
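The chaining can be made concrete with a minimal sketch, here in Python purely for illustration (the 60-year baseline and the 99.9% discount per step are just the numbers used above):

```python
# Illustrative sketch of the chained argument. Step 0 is the Lengthening
# Principle comparison from the text: a 99.9% chance of 200 years beats a
# 100% chance of 60 years. Each further step doubles the years and multiplies
# the probability by 99.9% (modified independence); transitivity then links
# every later option back to the original certainty of 60 years.

years, prob = 200.0, 0.999

print(f"beats a 100% chance of 60 years: {prob:.4%} chance of {years:,.0f} years")
for step in range(1, 11):
    years *= 2
    prob *= 0.999
    print(f"beats the previous option:      {prob:.4%} chance of {years:,.0f} years")
```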
One could reject the modified independence principle. However, this principle is widely accepted; denying it seems to require a confusion akin to the one involved in rejecting transitivity. To see why the principle is obvious, compare a 99% chance of a 99% chance of X to a 99% chance of Y, in a scenario where we have already concluded that a 99% chance of X is better than a certainty of Y. The 1% chance of getting neither X nor Y obtains regardless of which option is chosen, so it doesn’t affect the decision at all: there is a 1% chance that nothing is had no matter what one chooses. Thus, that 1% of the probability space should be ignored in decision making.
In the remaining 99% of the probability space, the choice is just between a 99% chance of X and a 100% chance of Y. But we’ve already granted that a 99% chance of X is better than a 100% chance of Y. Thus, the principle follows as a basic piece of logic once the terms and ideas are properly considered.
Rejecting this requires us to reject dominance. Consider the problem in the following way. Imagine there are two random number generators, each of which generates a number between 1 and 100. One can choose between two offers.
Offer 1: Use random number generator 1. If it generates any number other than 100, you get Y.
Offer 2: Use random number generator 1. If it generates any number other than 100, use random number generator 2. If that generates any number other than 100, you get X.
Random number generator 1 will either generate the number 100 or it will not. If it does, the offers are equal. If it does not, then one is choosing merely between a 99% chance of X and a 100% chance of Y. Given that we’ve already stipulated that a 100% chance of Y is less good than a 99% chance of X, the offers are either equal or offer 2 is better, so offer 2 dominates.
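As a purely illustrative sketch, the dominance reasoning can be checked by enumerating every pair of outcomes of the two generators (the prizes X and Y are just labels here):

```python
# Enumerate every pair of outcomes of the two generators and compare what
# the offers yield for the same draws.

outcomes = [(d1, d2) for d1 in range(1, 101) for d2 in range(1, 101)]

def offer_1(d1, d2):
    return "Y" if d1 != 100 else None                # Offer 1 ignores generator 2

def offer_2(d1, d2):
    return "X" if d1 != 100 and d2 != 100 else None  # Offer 2 needs both to miss 100

# Conditional on generator 1 not showing 100, Offer 1 yields Y for certain,
# while Offer 2 yields X on 99 of the 100 possible draws of generator 2 --
# exactly the comparison (a certainty of Y vs. a 99% chance of X) that we
# have already settled in favor of the 99% chance of X.
survive = [o for o in outcomes if o[0] != 100]
print(sum(offer_1(*o) == "Y" for o in survive) / len(survive))   # 1.0
print(sum(offer_2(*o) == "X" for o in survive) / len(survive))   # 0.99
```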
Thus, modified independence should be accepted. I’ve defended transitivity previously in this chapter. So in order to hold the view espoused by Caplan, one would have to reject the lengthening principle. However, rejecting that principle is not plausible.
Even if we accept that utility has diminishing marginal moral value, the lengthening principle remains true so long as the value of additional years never diminishes so quickly that the total value of a life is bounded. Even if one accepts that the moral value of an amount of utility is a logarithmic function of that utility, the lengthening principle would still be true, since a logarithm grows without bound.
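To make the logarithmic case concrete, here is a minimal illustrative sketch; the assumption that a life of N years is worth log(N), with instant death worth 0, and the 60-year baseline are mine and chosen only for the sake of the example:

```python
# Illustrative sketch: even if the moral value of a life of N years is only
# log(N) (with the value of instant death set to 0), the lengthening principle
# still holds. Any unbounded value function behaves similarly. We want
# p * log(N) to exceed log(M), which just requires N > M ** (1 / p).

M = 60.0                                   # the guaranteed lifespan, in years

for p in (0.999, 0.5, 0.01):
    N = M ** (1 / p)                       # threshold beyond which the gamble wins
    print(f"p = {p:>5}: any lifespan above {N:.3g} years makes a p chance of it "
          f"better than a certainty of {M:.0f} years")
```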
Thus, one would have to hold that additional utility beyond a certain point either has zero moral value or has a moral value that diminishes so quickly that the total is bounded. However, holding such a view is deeply implausible.
First, suppose you came across some entity that had been around since the beginning of the universe. Additionally, suppose that the universe is far older than we think, so that the entity is really 100^100^100 years old. The entity has lived a pretty good life up until this point. It’s not at all plausible that the moral value of benefits to this entity would be near zero.
Suppose you can either give a slight benefit to a human or give benefits to that entity equivalent to the sum total of all well-being experienced during human history. Rejecting the lengthening principle on the grounds that the moral value of happiness beyond a certain threshold is near zero would lead to the conclusion that one should give the slight benefit to the human (say, equivalent to eating one chocolate chip) instead of producing for that very old entity the sum total of all joy experienced during the history of the world. This is deeply implausible.
Another related implausible implication follows from this judgment. Suppose that one had a dream during which they subjectively experienced a vast amount of time. To them, the dream felt like living 100^100^100 years; they had experiences for eternities, though in the external world only a few hours passed. This account would say that, after they woke up, making them well off would be almost entirely morally irrelevant. This is not at all plausible. Long dreams shouldn’t rob a being of its moral worth.
If one bites the bullet on these cases, even larger problems arise relating to the experience of pain. If pleasure has diminishing marginal value, does pain also have such diminishing marginal value? Whether the answer is yes or no, the consequences are very implausible.
First, suppose the answer is yes: pain has diminishing marginal value, so the moral badness of pain diminishes the more pain one has already experienced. If this is true then, returning to the being that has lived 100^100^100 years, it would be worth torturing such a being to prevent a pinprick. They’ve lived so long and experienced so much pain that, if pain has decreasing marginal value, the marginal disvalue of their being tortured would be nearly zero. This, however, is deeply implausible. It is still bad to torture beings that have been around for cosmic timescales for the sake of slight benefits to others.
One might appeal to rights, arguing that torturing them to prevent other slight pains would be wrong simply because it would be a rights violation, while still maintaining that the disvalue of their pain diminishes. This, however, runs into similar problems. The scenario can be modified so that, instead of torturing them to prevent a pinprick, one can either prevent their torture or prevent a pinprick. The person who holds that pain has declining marginal value would have to say that one should prevent the pinprick. This, however, is deeply implausible.
One could argue that there’s only declining marginal value of pain if their life is painful overall. This, however, is deeply implausible. It would hold that if a person has been miserable for 100^100^100 years, then torturing them wouldn’t be very bad--and it would be less bad than a pinprick. This is ludicrous!
One might object that after being in pain for 100^100^100 years, they are less harmed by the pain, so the judgment isn’t counterintuitive. This, however, rests on a confusion. For the purposes here, we’re stipulating that their subjective experience of disliking the pain is equivalent to that of an average person being tortured. While experiencing lots of pain might make one less bothered by it, such considerations are already accounted for by the utilitarian account I defend. Thus, in the hypothetical, it is stipulated that they react to the pain much as most of us would, finding it equally unpleasant.
Thus, this account which holds that the more pain one experiences, the less bad their pain is, will not work.
Now suppose the answer is no: pain does not have decreasing marginal value. Pain is just as bad regardless of how much pain one has experienced previously. This judgment produces implausible results when combined with belief in the declining marginal value of pleasure.
If pleasure above a certain threshold has virtually no moral value, but pain continues to have the same value, then from the standpoint of this entity who has been around for 100^100^100 years, they should not be willing to trade off any amount of pain for any amount of pleasure. If pleasure produces virtually zero value for them, but pain still produces disvalue, then they should prioritize minimizing pain over pursuing pleasure. This would produce the following implausible verdicts.
The being can undergo a hundred years of unfathomable bliss, each experience as pleasurable as the sum total of all joys in human history. However, if they do this, they’ll stub their toe once. This account would say they shouldn’t make that tradeoff. Such a judgment is ridiculous.
Nor should the being, on this view, be willing to accept a papercut in exchange for having its pleasure increased a hundred-thousand-fold. One might object that increasing their pleasure a hundred-thousand-fold would reduce their future pain by more than the papercut would cause. However, we can stipulate that the being will experience no future pain, so no pain is being reduced.
Suppose the being knows that in the future they’ll experience one pinprick but that, for the rest of their existence, they’ll be unfathomably well off. This view would hold that they should commit suicide. This is deeply implausible.
Such judgments are hard to stomach. However, additional problems present themselves for such a view.
Suppose you personally found out that there was a 50% chance that you’d been around for 100^100^100^100 years previously and a 50% chance that you hadn’t. Then a genie appears to you and gives you two offers.
Offer 1: If you’ve been around for 100^100^100^100 years you’ll get a vast amount of utility--as much as has been experienced previously throughout all of human history.
Offer 2: If you haven’t been around for 100^100^100^100 years, you’ll get a tasty chocolate bar.
It seems clear that one should take offer 1. However, offer 1 would only benefit you if you’ve already experienced vast amounts of utility, which, on the view under discussion, would mean the extra utility has virtually no value, so that view implies you should take offer 2. Yet it’s hard to believe that taking offer 2 is the right choice.
One might object that having no memories of your previous lives means that the people in them were not you. We can modify the scenario to avoid this. Suppose that right before you gain the utility, you’d have all of the memories of the 100^100^100^100 years of existence. This still doesn’t seem to change the verdict.
Additionally, such a view about personal identity is deeply controversial, and most people likely don’t accept it. It’s hard to imagine that having your memories erased would turn you into a different person. After all, when you dream, you often don’t have access to your memories. Despite that, your dream self still seems like you.
Finally, we can modify the scenario so that you still have access to the memories but are simply not thinking about them. Strangely, you haven’t thought about your previous lifetimes’ worth of memories during your entire current lifetime. You still have access to them; you just haven’t thought about them, and you don’t think about them when presented with the offer.
This scenario may seem bizarre. However, not focusing on particular memories is not contradictory, merely implausible in the scenario. Yet a scenario being implausible does not mean that it has no normative implications.
It’s clear that even if you don’t think about particular memories, that doesn’t mean you aren’t the person who featured in those memories. I am not currently thinking about memories from when I was 8, but that doesn’t mean, even on the memory-based theory of personal identity, that I’m not the same person as my 8-year-old self. The memory theory is about the memories we have access to, not the ones we happen to be recalling at any particular moment.
Finally, suppose that I have memories of when I was a child, during which time I had memories of my previous lives, during which I had memories of the lives before those, and so on. If you remember events during which you remembered certain earlier events, then, on the memory theory, you are still the same person who was around during those original events. I am, for that reason, still the same person as my two-year-old self, despite not having many memories of being 2.
Thus, with questions about whether one remains the same person as one’s much younger self, and whether one would be the same person one was in previous lives, out of the way, we now turn to the specific moral verdict and why it’s counterintuitive. Is it really plausible that the value of making me well off hinges on whether I experienced lots of well-being in previous lives? The answer seems to clearly be no. Whether or not one experienced lots of happiness in past lives or dreams doesn’t seem to influence the value of making them better off. Finding out that I’d been happy in previous lives wouldn’t make me any less concerned about how well my life goes.
It’s hard to maintain that things that happened to me decades or more ago really affect how good my happiness is for me now. Events I have no memory of wouldn’t seem to undermine whether my happiness makes me better off. It seems that each moment of experience has a value which is not affected by causally inert previous moments.
A final thought experiment can elucidate the unintuitiveness of rejecting the lengthening principle. Suppose that you are offered a 1% chance of eternal life or a 100% chance of 100 years of life, with two additional modifications:
First, after each 100-year period, your memories will be erased and you’ll be placed into a different situation.
Second, if you take the guaranteed 100 years, you have no guarantee that the person who gets to live will be the being you currently think of as yourself. The life you are currently living is just a random 100-year period out of the first 100^100^100 lifetimes. Thus, the probability that, if you take the guaranteed 100 years, the life lived will be the one you are currently living is 1/100^100^100, while if you accept the 1% chance of eternal life, the odds are 1%.
In this case, it seems intuitive that we should take the 1% chance of eternal life. After all, the guaranteed 100 years means that the lifetime we currently occupy will almost certainly never exist. Thus, once our current life is no longer guaranteed, and is in fact almost certain not to exist, the intuition seems to flip. This shows that the original judgment is rooted in status quo bias: if we conceived of our current life as a random 100-year period, the intuition would go away.
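As a small illustrative aside, the two probabilities can at least be compared on a log scale, since 100^100^100 is far too large to write out:

```python
from math import log10

# Under the guaranteed-100-years offer, the chance that the specific life you
# are now living is the one that gets lived is 1 / 100**(100**100); under the
# gamble, your current life gets lived whenever the gamble pays off, i.e. with
# probability 0.01. We only compare orders of magnitude here.

log10_lifetimes = (100 ** 100) * log10(100)   # log10 of 100**(100**100)

print(f"guaranteed 100 years: current life exists with probability 10**(-{log10_lifetimes:.3g})")
print("1% chance of eternal life: current life exists with probability 0.01")
```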
This verdict about living forever is somewhat unintuitive. However, there are lots of reasons to distrust our intuitions about such cases.
Status quo bias. Humans have a status quo bias. It is no coincidence that, when deciding on the length of life beyond which diminishing marginal returns supposedly kick in, most people’s intuitions land near the current average lifespan. Humans often favor status quo lifetimes, even if longer ones would be better. I recall, when I was ten or so, thinking that I’d accept a certainty of living to 30 over the expected trajectory of my life at the time. Now, being 18, that seems obviously foolish. It’s much easier to think that an age is the optimal life length when one is far from approaching that age.
Large number biases. Humans are quite bad at reasoning about large numbers, as was argued in part 3 regarding the repugnant conclusion. It’s hard to conceptually grasp the difference between a thousand years of life and a billion years. Thus, our intuitions about such cases would be expected to be unreliable, particularly given the background information about the unreliability of intuitions involving large numbers.
The scenario is hard to imagine. Humans cannot adequately conceive of what it would be like to live for a billion years, much less forever. Thus, our intuitions here are likely to be far off. When imagining it, we might picture billions of years of life as tedious; it’s hard to imagine enjoying a trillion years of life.
To be clear, there is a zero percent chance that I would accept a gamble on living forever unless I were guaranteed by God himself that it would be happy; it seems overwhelmingly likely to me that living forever, starting now, would cause an infinite amount of suffering. The argument above applies only under the assumption, granted at the outset, that one’s life would continue to be good indefinitely.