Going All the Way to Crazy Town
Sam Atis gets off the train to crazy town before it reaches some seemingly absurd utilitarian conclusions. I do not. Here, I'll explain why.
Introduction
Sam Atis is a superforecaster, Substacker, friend, and all-around super insightful guy. You have to be pretty insightful to get Scott Alexander to recommend your blog. He is the reason I applied to Emergent Ventures (I got a grant from it, so it worked out). If you’re not subscribed to his blog, you should be: it’s maybe one of the two or three best relatively small blogs out there, and perhaps my second favorite Substack after Good Thoughts.
Saint Petersburging oneself into nonexistence
Sam has an article in which he describes where he gets off the train to crazy town. He’s a pretty hardcore utilitarian, but he gets off the train before he starts getting mugged by Pascal or having to accept the very repugnant conclusion. I basically never get off the train to crazy town, so I thought I’d explain why it’s worth accepting the extreme, expected-value-maximizing, utilitarian conclusion in all of these cases. His first worry is this:
So far, I’m a satisfied bullet biter. So, where do I get off the proverbial train to crazy town? Tyler Cowen’s variant of the St. Petersburg paradox is one objection to utilitarianism that I accept as a serious problem. Suppose you are offered a deal - you can press a button that has a 51% chance of creating a new world and doubling the total amount of utility, but a 49% chance of destroying the world and all utility in existence (let’s assume that there are no aliens in the universe, or alternatively that the button also doubles the number of aliens or something). If you want to maximise total expected utility, you ought to press the button - after all, the button is rigged in your favour and so pressing the button has positive expected value.
But the problem comes when you are asked whether you want to press the button again and again and again - at each point, the person trying to maximise expected utility ought to agree to press the button, but of course, eventually they will destroy everything. I’m not happy to almost certainly destroy all utility in existence because utilitarianism tells me to. My friend Eli Lifland (who I believe does bite this bullet) has a useful objection though - are there any odds that you would take?
Suppose that rather than there being a 49% chance you lose everything, there’s a one-in-a-trillion chance. It seems like you ought to push the button over and over again, although of course if you press it enough times you run into the same problem as with the original odds: you will almost certainly eventually destroy everything. Most ordinary people I’ve spoken to about this say ‘I would just press the button a ton of times until I feel like I’ve done a load of good, and then I would take my winnings’, which seems irrational but also appealing. I’m not sure what to do about this one.
My solution to this modified Saint Petersburg paradox is the following: accept that if a strategy has a zero percent chance of leaving you with any utility, its expected utility is zero, no matter how large the potential payoff grows. Maybe this is sort of cheating as a mathematical point, but it just seems obviously right, and it lets us say that pressing the button an infinite number of times is the worst possible strategy, because it produces, with 100% certainty, the destruction of all value in the universe. So one thing is very clear: you should not just keep pressing the button indefinitely, because a strategy that guarantees you end up with zero utility cannot maximize expected utility.
But how many times should you press it? Well, there isn’t really a right answer. Each additional press is better than the last, but pressing indefinitely is the worst option of all. If we’re scalar utilitarians, and there are infinitely many options ranging from worse to better, this is not a problem: we just say that there is no best answer, though each answer is better than the one before. Imagine a sheet of paper on which you could write down any number, and someone would generate that much utility. Obviously, there’s no best answer; writing down the previous number plus one is always an improvement. But there’s no paradox there: it just turns out that when there are infinitely many options, there won’t always be a best one, just better and worse ones. This also explains what you should do in the case where there’s only a one-in-a-trillion chance of destroying everything: keep pressing, as many times as you like. There is no best strategy, but the ranking, from worst to best, is: press indefinitely < press once < press twice < press thrice < …
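To see the trade-off in numbers, here is a minimal sketch, assuming an arbitrary starting utility of 1, of how expected utility and the probability of keeping anything at all change with the number of presses:

```python
# Cowen's button: 51% chance of doubling total utility, 49% chance of wiping
# everything out. Starting utility of 1 is an arbitrary stand-in.

p_win = 0.51
for n in [1, 10, 100, 1000]:
    survive = p_win ** n          # probability you never hit the 49% outcome
    expected = (2 * p_win) ** n   # expected utility after n presses: 1.02^n
    print(f"{n:>5} presses: P(anything left) = {survive:.3g}, expected utility = {expected:.3g}")

# Expected utility climbs without bound as n grows (1.02^n), while the chance
# of having anything left collapses toward zero (0.51^n). Committing to press
# forever sends that chance to exactly zero, which is why it's the one
# strategy ruled out above.
```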
As an upside, this also allows us to avoid the Saint Petersburg Paradox with ease!
Now, maybe this seems unintuitive. It’s not paradoxical, but it does seem implausible that you should gamble away your life for a low probability of arbitrarily large amounts of utility. But I think that this result is pretty trivial: you can deduce far more radical conclusions in similar domains from super plausible first principles. It just seems like our failure to bite the bullet here is some combination of status quo bias, inability to grasp large numbers, and the fact that our brains just round small probabilities down to zero.
Take my money, Pascal
Sam’s next worry is Pascal’s mugging.
I’m also unwilling to be a victim of Pascal’s mugging. Here is the description of the problem from Wikipedia:
Blaise Pascal is accosted by a mugger who has forgotten their weapon. However, the mugger proposes a deal: the philosopher gives them his wallet, and in exchange the mugger will return twice the amount of money tomorrow. Pascal declines, pointing out that it is unlikely the deal will be honoured. The mugger then continues naming higher rewards, pointing out that even if it is just one chance in 1000 that they will be honourable, it would make sense for Pascal to make a deal for a 2000 times return. Pascal responds that the probability for that high return is even lower than one in 1000.
The mugger argues back that for any low probability of being able to pay back a large amount of money (or pure utility) there exists a finite amount that makes it rational to take the bet—and given human fallibility and philosophical scepticism a rational person must admit there is at least some non-zero chance that such a deal would be possible. In one example, the mugger succeeds by promising Pascal 1,000 quadrillion happy days of life. Convinced by the argument, Pascal gives the mugger the wallet.
While the offer of coming back with money probably fails because of the diminishing marginal utility of money (at some point, getting extra cash just doesn’t make you any happier), the example where the mugger claims to be able to create simulations of huge numbers of people and torture them does seem to pose a problem for utilitarians. Should you give your money away? Of course you shouldn’t, but I think this probably does indicate that trying to maximise expected utility does fail when it comes to extremely low probabilities of an extremely large reward. In fact, that standard problem of Pascal’s wager remains fairly serious.
I think that in the real world, one should not give money to people mugging them. The odds that the person mugging you is actually a super wizard are so low that they’re swamped by other probabilities. For example, maybe donating a bit to charity will slightly increase the odds of infinite pleasure, which seems like much less of a long shot than Pascal’s mugging. Or maybe giving in to the mugging will make more people take it up, such that when the real wizard comes, no one will give him money. Or maybe the real omnipotent being will get mad. Or maybe handing over the money will take away money that could have been spent on a medical bill that would have saved your life, and you’d have gone on to invent technology that allows us to live forever. The mugger’s offer is such a long shot that it’s swamped by all the other ways our actions constantly and slightly affect the odds of infinite value.
But if you have some way of guaranteeing that some very destructive action will slightly increase the overall odds of infinite utility, I’ll take it! Again, I think that this is pretty trivial: you can deduce it from the following two axioms.
The first axiom is the Utility Increasing Low Principle: Let M be an amount of utility, N a greater amount of utility, and P some probability less than 100%.
For some values of N and P, a P chance of N is better than a 100% chance of M.
So basically, for any amount of utility, there is a much greater amount of utility such that a less-than-certain chance of getting it is better than a guarantee of the original amount. For example, if you are guaranteed 1,000 utility, a 99.99999999% chance of 1000000000000000000000000000000000 utility is better. It doesn’t matter what the numbers are; combined with the next principle, this implies that one should gamble away any amount of value for arbitrarily low probabilities of infinite utility.
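For concreteness, here is a minimal sketch of the expected-value comparison. The sure 1,000 and the 99.99999999% figure come from the example above; the 10**33 prize and the tiny-probability case at the end are just illustrative stand-ins:

```python
# A toy expected-value check of the Utility Increasing Low Principle.
# The sure thing (1,000 utility) and the probability match the example above;
# the 10**33 prize is an illustrative stand-in for the huge number.

sure_thing = 1_000
p = 0.9999999999
big_prize = 10 ** 33

print(p * big_prize > sure_thing)  # True: the gamble has higher expected utility

# The same holds for tiny probabilities, so long as the prize scales up enough:
tiny_p = 1e-30
print(tiny_p * 10 ** 40 > sure_thing)  # True: expected utility of 10^10 beats 1,000
```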
The second principle is modified independence: if A is better than B, then a P chance of A is better than a P chance of B, where P is the same probability in both cases. This is both intuitive and can be supported by money pump arguments. If you’re curious how these entail the result, and for more detailed arguments as to why these principles are basically undeniable, see here, and see here for more arguments for independence.
For this reason, Pascal can take my money.
Very repugnant, you say?
The last stop where Sam gets off the train before it reaches crazy town is this one:
The ‘Very Repugnant Conclusion’ is another such problem (which is pretty similar to Ursula Le Guin’s famous story ‘The Ones Who Walk Away from Omelas’). Here, the point is total utilitarians need to accept not only the original Repugnant Conclusion, but also a world with a huge number of people living lives that are barely worth living that also contains a smaller number of people who live a life that is filled only with torture and extreme suffering. Here is a brief explanation, from the EA Forum:
There seems to be more trouble ahead for total [symmetric] utilitarians. Once they assign some positive value, however small, to the creation of each person who has a weak preference for leading her life rather than no life, then how can they stop short of saying that some large number of such lives can compensate for the creation of lots of dreadful lives, lives in pain and torture that nobody would want to live? (Fehige, 1998, pp. 534–535.)
The very repugnant conclusion
Budolfson and Spears show that every plausible axiology entails the very repugnant conclusion, at least the additive version, which says that it’s sometimes better to create a very large number of people with slightly positive welfare and a large number with terrible lives than 10 billion people with great lives. I’ll merely provide the demonstration for average utilitarianism. On average utilitarianism, if there are currently 10^30 people with utility of -5, then it would be better to create 10^40 people with utility of 1 and 1 billion people with utility of -100,000 than to create 10 billion people with utility of 1,000,000. Average utility would be higher if the first option is taken than if the second is.
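If the arithmetic isn’t obvious, here is a minimal sketch that checks the two averages, using only the numbers from the paragraph above:

```python
# Checking the average-utilitarian arithmetic from the paragraph above.
# Base population: 10^30 people at utility -5.

base_n, base_u = 10**30, -5

def average(extra):
    """Average utility after adding (count, utility) groups to the base population."""
    total_u = base_n * base_u + sum(n * u for n, u in extra)
    total_n = base_n + sum(n for n, _ in extra)
    return total_u / total_n

# Option 1: 10^40 people at +1 plus 10^9 people at -100,000.
very_repugnant = average([(10**40, 1), (10**9, -100_000)])
# Option 2: 10^10 people at +1,000,000.
ten_billion_great = average([(10**10, 1_000_000)])

print(very_repugnant)     # ~ +1.0
print(ten_billion_great)  # ~ -5.0, so the "very repugnant" option wins on average
```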
Budolfson and Spears provide another argument for accepting the very repugnant conclusion. They show that one must accept the very repugnant conclusion if one accepts three things:
1 Transitivity
2 Convergence in signs (informal statement). If enough identical lives, at a utility level u, are added to any base population, eventually (possibly in a very large population) the result is a combined population that is overall just as good as some perfectly equal population of the same size as the combined population, in which every person has a utility of the same sign as u.
3 Extended egalitarian dominance. If population A is perfectly equal-in-welfare and is of greater size than population B, and every person in A has higher positive welfare than every person in B, then A is better than B.
Each of these is very plausible.
Another argument can be made for accepting the very repugnant conclusion. It goes through if we accept the following:
Transitivity.
The diminishment principle: For any number of people with positive utility P, there can be a better state of the world with some greater number of people each experiencing some amount of utility less than P (e.g., 50 people with utility of 100 is less good than 100,000 people with utility of 99).
The reverse diminishment principle: For any number of people with negative utility N, there can be a worse world with some greater number of people each experiencing a milder negative utility than N (e.g., 10 people with utility of -100 is less bad than 30 people with utility of -90).
Some number of lives with lots of positive utility are worth creating, even at the cost of creating some number of lives with slight negative utility.
The diminishment principle combined with transitivity shows that for any large number of people with very high positive utility, there can be a better state of affairs with a much larger number of people whose utility is sufficiently low that their lives are barely worth living. For example, 10 billion people with utility of 100,000 is less good than 10^40 people with utility of 1.
The reverse diminishment principle shows that for any number of people with negative utility, there is a worse world containing more people whose negative utility is milder. By transitivity, then, a world with 10 people at utility -10,000 is less bad than one with 15 people at -9,000, which is less bad than one with 30 people at -8,000… which is less bad than one with some vast number of people at utility -1.
Thus, some number of people with high positive utility is good enough to outweigh the harm of creating some number of people with slightly negative utility. That slightly negative population is worse than some much smaller population with very negative utility, so by transitivity, the people with high positive utility are also enough to outweigh the creation of the people with very negative utility. But some much larger number of people with lives barely worth living produces a better state of affairs than the very large population of people with excellent lives, which means that, by transitivity again, creating some number of people with lives barely worth living is sufficiently good to offset the creation of large numbers of people with extremely negative utility.
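To make the reverse diminishment chain concrete, here is a toy sketch. Two things in it are illustrative assumptions rather than part of the argument: it reads ‘worse’ in total-utility terms, and it uses a made-up stepping rule (soften each life by 1,000 units, then take the smallest population at the milder level whose total badness still exceeds the current total):

```python
import math

# A toy walk through the reverse diminishment chain. Illustrative assumptions:
# "worse" is read in total-utility terms, and each step softens every life by
# 1,000 units while picking the smallest population at the milder level whose
# total badness still exceeds the current total.

people, utility = 10, -10_000

while utility < -1:
    next_utility = min(utility + 1_000, -1)
    next_people = math.floor(people * utility / next_utility) + 1
    print(f"{people:,} people at {utility:,} is less bad than "
          f"{next_people:,} people at {next_utility:,}")
    people, utility = next_people, next_utility

# Each printed step is licensed by the reverse diminishment principle, and
# chaining them with transitivity takes you from 10 people at -10,000 to a
# much larger population of people whose lives are each only barely negative.
```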
Another argument can be given for accepting the very repugnant conclusion:
1 A person existing with a life barely worth living is good
2 For any number N and event M, N instances of M are N times as good as M, provided the instances of M don't exert a causal impact on each other
3 Infinitely many people with lives barely worth living don't exert a causal impact on each other
4 If something is infinitely many times as good as something good, it is infinitely good. Therefore, infinitely many people with lives barely worth living is infinitely good
5 10 billion people with awesome lives isn't infinitely good
6 10 billion people living horrible lives isn’t infinitely bad
7 If reality contains infinite goodness and not infinite badness, it is infinitely good
8 Things that are infinitely good are better than things that are not infinitely good
Therefore, a world with infinitely many people with lives barely worth living and 10 billion people living terrible lives is better than one with 10 billion people with awesome lives.
Biases and debunkings
Our anti-repugnant conclusion intuitions fall prey to a series of biases, as Huemer shows.
1 We are biased, having won the existence jackpot. A non-existent person who could have lived a marginally worthwhile life would perhaps take a different view.
2 We have a bias towards populations roughly the size of the one that exists today. This is shown by how different our intuitions are when we consider a world with just one person with a great life versus a vast number of people with lives barely worth living.
3 Humans are bad at conceptualizing large numbers. It’s really hard to conceptualize the difference between 1 million years and 10 billion years of good life, even though 10 billion years is 10,000 times longer.
4 Humans are bad at compounding small numbers. It’s hard intuitively to see how lots of very small things could add up to being as good as a very good thing.
The basic problem
Sam sort of admits that just jettisoning utilitarianism when he doesn’t like a particular result is not the most responsible thing.
So, I do get off the train to crazy town eventually. If the price of a train ticket is that I accept that all utility is virtually guaranteed to be destroyed, or that I have to give thousands of pounds to a mugger who says he will simulate miserable existences should I not give him my money, or if I have to accept that many people will live awful lives but many more people will be able to eat potatoes and listen to muzak, I’m not going to ride the train. But I suppose the more meta question is: what are the principles by which we should decide when to get off the train? I guess the guiding principle for me is that I ought to get off when my intuition to get off is stronger than the intuitions that drew me towards utilitarianism in the first place.
But I think that there’s a problem with getting off the train using this principle. Sometimes I imagine talking to someone who gives large amounts of their money to local animal shelters, and telling them that they ought to give their money to effective charities instead (although I don’t actually criticise peoples’ charitable giving in reality). What if they invoke this principle to defend their ineffective giving? If we’re back to intuitions about what seems crazy to us, why shouldn’t they get off the train to crazy town at the point where it asks them to donate to AMF rather than local animal shelters? Let me know what you think in the comments or message or @ me on Twitter.
I think that when one has strong enough intuitions against a particular result, one should just slightly tweak utilitarianism. But a few stray intuitions diverging from utilitarianism aren’t good evidence that utilitarianism is wrong, because we’d expect utilitarianism to be unintuitive sometimes even if it were true. So I think that unless utilitarianism produces results so obviously absurd that it’s basically self-evident we should reject them, a few seeming counterexamples hanging around shouldn’t threaten our confidence in the theory. I’m happy to sort of, kind of, slightly modify the view to avoid the conclusion that you should follow a strategy that guarantees you zero utility, but for the others, I’m happy biting the bullet. And in each case, I think there are good reasons to bite the bullet, even if one were not antecedently committed to utilitarianism.
So no need to get off the train. Ride all the way to crazy town.