Two people have written replies to me on Pascal’s wager: Richard Hanania and Dylan. I enjoyed Richard’s piece, and felt annoyed by Dylan’s piece. I thought it would be worth explaining why they are both wrong!
Hanania and extortion
Hanania has a piece called Pascal’s Wager as Spiritual Extortion where he argues against my post. His core argument is that taking Pascal’s wager amounts to giving in to extortion. He writes:

I’m not sure about this logic. I don’t know how to think about probabilities at the level of 1/30,000 or 1/300,000. At some point, these numbers cease to have meaning. A rule of thumb that says go through life just ignoring 1/300,000 probabilities seems like a good one to me. But putting that aside, I would give Christianity maybe a 2% chance of being true, almost exclusively on the grounds that a lot of smart people have believed in it, plus it was the faith of the civilization that conquered the world. Christian ethics has never made sense to me, nor have I found the alleged historical evidence for miracles or the Resurrection to be compelling. But I have enough intellectual humility to say that if many people as intelligent as I am have believed something, I’ll give it more than 1/300,000 odds, even if I’m pretty certain in my own opinion.
So the probabilities are high enough for me that it seems like I should take Pascal’s Wager. Yet something about this feels like extortion.
Now, it isn’t true that you can ignore probabilities of 1/300,000 when the stakes are sufficiently great. Suppose that there is a 300,000-sided die. It will be rolled. If it comes up 1, you get one util; if 2, you get 2 utils; and so on (or, if you have util-derangement syndrome, replace util with something else of value, like a well-off person created or a person spared from suffering). On the ignore-low-probabilities view, you’d value this die at zero, because every one of its outcomes has probability 1/300,000. There are more sophisticated ways of patching the “ignore low probabilities” view, but as I explain here, they all face devastating problems.
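For concreteness, here’s a minimal sketch of the arithmetic for that die, using only the numbers from the example above:

```python
# Expected value of the 300,000-sided die: face i pays i utils,
# and each face has probability 1/300,000.
N = 300_000
expected_utils = sum(range(1, N + 1)) / N
print(expected_utils)  # 150000.5

# A rule that ignores every 1/300,000 probability prices this die at 0 utils,
# despite its expected value of roughly 150,000 utils.
```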
But Hanania’s main argument is that accepting Pascal’s wager feels like giving in to extortion. If you wager on the religion with the most miserable hell, then you’ll be tricked into gambling on whichever religion is the most brutal. That rewards religions for adding extra brutality.
This is an interesting point but not ultimately persuasive.
First of all, sometimes you should give in to extortion. If someone were going to torture me unless I firmly committed to adopting some religion, I’d adopt that religion. I don’t want to be tortured! So even if it is extortion, you should take the gamble. Sometimes it is proper to be extorted.
Second, you can construct the wager purely with positive value! It’s good to be in heaven. Infinitely so. It makes sense to take the wager, not to avoid bad outcomes, but to procure good ones. Now, in response to this, Hanania writes:
The role of Heaven, providing a carrot in addition to the stick, is less salient. I’m inclined to be risk averse with my soul. Would you take a coin flip, with one side giving you eternal bliss and the other eternal torture? I would probably not flip that coin and take the choice of obliteration instead. Torture is bad, and bliss is good, but torture forever seems more bad than bliss forever seems good. So for the sake of Pascal’s wager, I focus on the downside of being wrong. I’d probably take the coin flip at 80/20 odds, but even then I’d be quite nervous.
But even if heaven is less salient than hell, heaven is still pretty good (or so I’ve heard). In fact, it is so mind-blowingly awesome that you’d be rational to forfeit all earthly goods for a single second of it. Heaven is supposed to be vastly better than hell is bad. So heaven should dominate your decision calculus.
Now, maybe if you’re risk averse you should fear hell more than you value heaven. But Hanania’s entire argument is that you shouldn’t act based on hell because that’s extortion. So then ignore the possibility of hell, but take the wager for the chance of heaven. Even if God doesn’t condition heaven on having the right beliefs, strengthening your connection with God might make heaven better in some way.
Third, it’s a bit different from extortion. Religious people don’t threaten to torture you unless you adopt their religion. What they say is that your prospects will be better if you adopt the religion, independent of anything they themselves do. Whether that’s true isn’t up to the person doing the alleged extorting.
But Christianity entails that those who go to hell all really deserve it. If infernalist Christianity is true, God isn’t doing anything wrong by sending them there. Thus, there’s no threat of evil being inflicted upon us, but rather a warning of the just consequences that will follow from non-Christianity. That is rather different. It is more like telling a criminal that the state will justly sentence them to death if they keep offending.
So overall, I don’t find the extortion worry that plausible, nor the other worries. Probably you should still gamble on God.
Tyranny of the mean
Dylan’s piece is called Tyranny of the Mean, with the subtitle Please Stop Abusing Expected Value. Dylan is correct that expected value has been abused; he is simply mistaken about who has done the abusing. Dylan’s piece amounts to little more than a simple exhortation not to use expected value, noting that doing so has some counterintuitive implications. He notes that it’s uncontroversial that you should use expected value reasoning if you’re in some repeated scenario where doing so guarantees you a better payoff, but claims that you shouldn’t use expected value in other cases, because doing so risks ruin.
If you play a game where you can flip a coin up to ten times, tripling your payout each time it comes up heads but losing everything the moment it comes up tails, then following expected value reasoning will almost guarantee that you end up with nothing. So if you use expected value reasoning you’ll be willing to risk everything for a very low chance of a sufficiently good payout. Dylan suggests: so just don’t use expected value reasoning.
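To spell out the numbers, here’s a minimal sketch; I’m assuming, purely for illustration, a starting stake of 1 and that each heads multiplies the payout by 3:

```python
# Dylan's coin game as described above: up to ten flips, each heads triples the
# payout, and a single tails wipes everything out. The starting stake of 1 is an
# illustrative assumption.
stake = 1.0
n_flips = 10

ev_if_you_keep_flipping = stake * 1.5 ** n_flips  # each flip multiplies EV by 0.5*3 = 1.5 -> ~57.7
chance_of_any_payout = 0.5 ** n_flips             # ~0.001, i.e. about 0.1%
payout_if_all_heads = stake * 3 ** n_flips        # 59,049

print(ev_if_you_keep_flipping, chance_of_any_payout, payout_if_all_heads)
# EV maximization says keep flipping, even though you walk away with nothing
# about 99.9% of the time -- which is exactly Dylan's point.
```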
He claims that EV reasoning leads to paradoxes like the St. Petersburg game, where an action has infinite expected value but is guaranteed to yield a finite payout. Now, I don’t quite know where the paradox is supposed to arise: that is an uncontroversial fact about math. The St. Petersburg game certainly has some weird implications, but there’s no out-and-out paradox (and for reasons I explain here in section 12, I think biting the bullet is the best option)!
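For readers who haven’t seen it, here’s the textbook version of the game (I’m assuming the standard payout scheme, which may differ in details from whatever version Dylan has in mind): flip a fair coin until it lands heads; if the first heads is on flip k, you win 2^k. The expected value diverges even though every actual payout is finite:

```python
# Textbook St. Petersburg game (assumed payout scheme): flip a fair coin until
# the first heads; if that happens on flip k, the payout is 2**k.
# Every term of the expected-value sum is (1/2**k) * 2**k = 1, so the partial
# sums grow without bound, yet any realized payout is some finite 2**k.

def partial_expected_value(n_terms: int) -> float:
    return sum((0.5 ** k) * (2 ** k) for k in range(1, n_terms + 1))

print(partial_expected_value(10))   # 10.0
print(partial_expected_value(100))  # ~100.0 -- keeps growing linearly, forever
```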
Dylan then says:
Okay, so putting all of that together, to use expected value to make intelligent decisions requires the following conditions must be met:
You need an accurate utility function that represents your interests
You need the risk of ruin to be nullified
You need to be able to play a similar game or make a similar decision sufficiently often that your achieved results actually trend towards the expected value
You need to be wary of using EV to make decisions with potentially extreme outcomes that can’t be repeated
You need to be wary of using EV in situations involving infinity
This is all rather silly! You don’t need a precisely accurate utility function to know that some action is very much worth taking despite having a low chance of payout (I may not know exactly what matters in the universe but might still know that infinite people spending forever in paradise has infinite expected value). And you shouldn’t be uniquely wary of using EV in infinite situations—otherwise you get the bizarre consequence that you should value a 1/2 chance of 100,000 utils using EV reasoning, but if the utils increase, so that you get a 1/2 chance of infinite utils, then you should stop using EV reasoning.
Also, notably, it makes no sense to say that you should only use EV if a scenario is repeatable, because every repeated game can just be transformed into a one-shot game (e.g. flipping a coin twice that creates 3 utils if heads and 0 if tails is just the same as a one-shot game with a 1/4 chance of creating 6 utils, a 1/2 chance of creating 3, and a 1/4 chance of creating none).
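A minimal sketch of that collapse, for the two-flip example just given:

```python
# Collapse the repeated game (two flips of a coin that creates 3 utils on heads,
# 0 on tails) into a single one-shot compound lottery.
from collections import Counter
from itertools import product

outcomes = Counter()
for flips in product("HT", repeat=2):
    outcomes[sum(3 for f in flips if f == "H")] += 1  # each sequence has prob 1/4

print({payout: count / 4 for payout, count in outcomes.items()})
# {6: 0.25, 3: 0.5, 0: 0.25} -- a 1/4 chance of 6 utils, a 1/2 chance of 3,
# and a 1/4 chance of nothing, exactly the one-shot gamble described above.
```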
He also seems to confuse money and utility. Everyone agrees that you shouldn’t gamble away any finite quantity of money for a tiny probability of infinite money. Money has declining marginal utility: after you have a quadrillion dollars, additional money does you essentially no good. But utility does not have declining marginal utility! And Pascal’s wager is over years in heaven and hell, which don’t have declining marginal utility. An extra year in hell is still very bad, even if you’ve already been there for a trillion years.
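To illustrate the difference with a toy example of my own (log utility of wealth is a standard stand-in for declining marginal utility; the gamble itself is made up), a bet can win in expected dollars while losing in expected utility:

```python
# Toy illustration (my own, not Dylan's): money's declining marginal utility,
# modeled with the standard stand-in of log utility of wealth.
import math

wealth = 100_000.0
# Hypothetical gamble: 50% chance of tripling your wealth, 50% chance of
# losing 90% of it.
gamble = [(0.5, wealth * 3), (0.5, wealth * 0.1)]

ev_dollars = sum(p * w for p, w in gamble)            # 155,000 > 100,000
ev_utility = sum(p * math.log(w) for p, w in gamble)  # ~10.91 < log(100,000) ~= 11.51

print(ev_dollars, ev_utility, math.log(wealth))
# The bet wins in expected dollars but loses in expected (log) utility -- which
# is why money-style "risk of ruin" worries don't transfer to utils, where an
# extra unit is, by stipulation, always worth the same.
```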
Moving beyond that morass of confusion, he declares:
Many people get this wrong. Bentham's Bulldog for example, often uses the concept of EV to make his arguments without double checking these conditions. Take his recent post defending Pascal’s Wager…
Oh silly me! I forgot to double-check these conditions! Or maybe, you know, I disagree with them. One way you could see this is that in the Pascal’s wager post that Dylan claims to be addressing, I have a section titled Discount low risks? where I explain why I don’t find this objection plausible. Perhaps, when responding to that piece, it would be reasonable to consider what I say about the objection, rather than assuming that the most obvious thought everyone has upon hearing about EV maximization is a decisive objection to which no response has ever been given. It is very easy to rebut someone if you do not consider what they say in response to your objection.
As it happens, I do have a response to this objection. In fact, I have several in the piece that Dylan was reportedly responding to, none of which he addressed. The first one goes: even if you buy the notion that tiny chances of infinite payouts shouldn’t always dominate your decision calculus, you should still take the wager. This is because there are views which are not astronomically implausible on which picking the right religion has infinite value (perhaps just by strengthening your connection to God). Such views have been believed by a sizeable portion of the people who have ever lived, and of philosophers, so you shouldn’t think their odds are something like one in a billion! And if their odds are merely low rather than negligible, they’ll be above whatever threshold you use for discounting tiny probabilities, and above that threshold the infinite payout is decisive.
Now, my other response, and the more important one for Dylan’s piece, is that there are very strong arguments for thinking that any chance of infinite payouts outweighs any finite good. This isn’t just my assessment: talk to anyone who does serious research in the field, and they will say the same thing. This paper by Beckstead and Thomas notes that there are intractable paradoxes for every view—that every view will have to say something extremely bizarre. Beckstead and Thomas don’t like fanaticism—the view that tiny chances of infinite payouts always outweigh finite goods—but because they know the topic, they recognize that avoiding fanaticism isn’t as easy as declaring that you don’t want to use expected value.
In my piece—the one Dylan was pretending to be addressing—I noted quite explicitly that alternative views have huge problems! In fact, I noted that I’ve already written a long post addressing this issue, and linked to that post. I won’t repeat all the things I say in that post, but in it, I note:
Views other than fanaticism (the view that any chance of an infinite outcome beats any guaranteed finite outcome)1 imply that you should sometimes turn down deals that multiply your payouts by a factor of googolplex while only reducing the odds of payout by a factor of .00000000000000000001%.
Non-fanaticism violates one of the two following principles. The first is transitivity, which says that if A is better than B and B is better than C, then A is better than C. The second is called Partial Dominance, which says “If there are two actions A and B which both bring about some gamble S, and A brings about, in addition to S, a higher chance of a better or equally good outcome relative to B or an equal chance of a better outcome relative to B, then A>B.” If you don’t think a tiny chance of infinite payouts outweighs any finite good, you have to give up one of them.
Non-fanatical views imply that when deciding whether to take a gamble, you should care about what totally causally isolated gambles are occurring elsewhere in the galaxy or in the distant past.2
Non-fanatical views imply that when deciding whether to take a gamble, either: 1) you should pick gamble A over B, even though there’s some question Q, such that for any answer Q could have, after learning it, you’d pick gamble B over A, or 2) that whether you should take a gamble will depend on how happy people are on Mars, or were in ancient Mesopotamia, even though you have absolutely no causal impact on that.
I also give various reasons why we shouldn’t trust our direct intuitions about low probabilities. Thus, contrary to Dylan, I do not “simply shrug and repeat that the hypothetical 1/300,000 chance that an atheist assigns to Christianity being correct rationally obligates him to devote his entire life to faith anyways- because the expected value is still infinity!” What I did was write a 15,000-word post arguing for this view and link to it in the piece that Dylan is responding to; Dylan then ignores it and just repeats that the view is somewhat unintuitive. Yes, we know it’s unintuitive—everyone in the world who adopts fanaticism acknowledges that. The reason to adopt the view is that there are arguments for it, and if one wants to speak sensibly on the subject, one should say something about those arguments. Especially if those arguments are contained in the post one purports to be addressing.
You should not write a post claiming to provide the quick easy fix for paradoxes of low probability if you don’t address any of those paradoxes. If you can’t even be bothered to read the post you’re ostensibly addressing, and aren’t aware that there are arguments for accepting the view you reject, then you have no business speaking about the subject. To the extent you speak about the subject, you are actively misinforming people. It’s incredibly irresponsible, particularly on a subject that is potentially infinitely weighty.
I’ve been pretty snarky so far, so let me defend my snark. This is the philosophical equivalent of malpractice. It is a piece which actively serves only to make people less informed. It would be rather like a junior mathematician who claimed to have solved some complex mathematical puzzle, and then misinformed the public about this by not telling them (likely because of their own ignorance) any of the reasons the puzzle is difficult. It would be like a scientist schoolmarmishly declaring that there is some obvious solution to a difficult puzzle, and then simply ignoring why there’s a puzzle in the first place.
If you’re going to claim to have solved some paradox, and chide others for their alleged silly errors, you should know what the paradox is! If your piece simply consists of saying things that are obvious to everyone, and then asserting your view to be correct without acknowledging the reasons that puzzle is hard, your piece serves simply to make people less informed. It would be as if someone claimed they had a nice solution to all repugnant-conclusion-related puzzles, but weren’t even aware of the mere addition paradox—and then on top of this had the gall to claim to be debunking a piece that presents the mere addition paradox. One wonders: did they even read it?
Dylan’s errors probably come from conflating how one should treat financial risk with how one should treat gambles over utility. But financial risk is very different. Money has declining marginal utility. Utility doesn’t, and so talk about “risk of ruin” doesn’t advance the subject at all. It’s old news that fanaticism implies you should give everything up for a tiny chance of infinite value. The only question is: is that rational?
Lastly, Dylan declares:
Decision-making requires bounded utilities, else the tiniest probability of infinity overrides everything else, no matter how terrible or silly the alternative.
The bounded utility idea is that as the amount of stuff of value approaches infinity, its value approaches a finite bound. Now, as anyone familiar with the subject would know, this view is not without problems. Aside from the objections I gave above, which are fully general to any non-fanatical view, bounded utility implies:
Which actions to take will sometimes depend on what’s happening in distant galaxies. If there’s enough value in far-away galaxies, then this takes us closer to the bound. Thus, because how much you value an outcome depends on what’s happening in far-away galaxies, those goings-on sometimes affect which gamble you should take.3 (A concrete sketch follows this list.)
If there’s already infinite value, bounded utility implies that nothing matters, because marginal value asymptotically approaches zero as the total amount of value approaches infinity.
Bounded utility views sometimes imply that you should take a tiny chance of a tiny amount of value over a near guarantee of a ton of value. For example, suppose we reach a future state where we are 99.999% sure that the world has been and is very good. However, there’s a tiny chance (.001%) that some conspiracy theory is true on which the world has actually been very bad. You have two options, A and B. A brings about great good if the world has been very good. B brings about a tiny good if the conspiracy theory is true. Thus, A has a 99.999% chance of bringing about tons of good, while B has a .001% chance of bringing about a tiny amount of good. However, if A brings about good, there’s already been a lot of good. Bounded utility implies that, so long as there’s enough good if the conspiracy theory is false, the amount of extra value A adds is so tiny that you should pick B over A (for as the amount of previous value approaches infinity, the goodness of a large amount of extra stuff of value approaches zero). But this is absurd!
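For a concrete sense of how these reversals go, here’s a minimal sketch using the illustrative capped-utility model from footnote 3 (value counts up to a cap of 100,000 utils, and anything beyond that counts for nothing); the gambles and background amounts are the ones given there:

```python
# Illustrative capped-utility model from footnote 3: value counts up to a cap
# of 100,000 utils; anything beyond the cap adds nothing.
CAP = 100_000

def bounded_value(total_utils: float) -> float:
    return min(total_utils, CAP)

def ev_added(background: float, gamble: list[tuple[float, float]]) -> float:
    """Expected bounded value a gamble adds, given background utils elsewhere."""
    return sum(p * (bounded_value(background + x) - bounded_value(background))
               for p, x in gamble)

gamble_a = [(0.5, 10), (0.5, 0)]  # 1/2 chance of 10 utils, 1/2 chance of nothing
gamble_b = [(1.0, 1)]             # a guaranteed 1 util

print(ev_added(0, gamble_a), ev_added(0, gamble_b))            # 5.0 vs 1.0 -> take A
print(ev_added(99_999, gamble_a), ev_added(99_999, gamble_b))  # 0.5 vs 1.0 -> take B
# The ranking flips depending on how many utils already exist in far-away
# galaxies -- value over which you have no causal influence.
```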
Bizarrely, after this point, Dylan wrote another post defending his right to confidently expound on topics without knowing anything about them. He still just repeats the same points, ignores every objection, falsely claims he’s addressed my criticisms without being able to name a single argument for fanaticism that he’s addressed, and doubles down on still more ridiculous claims. After falsely claiming that Ethan and Amos are undergraduates, falsely claiming I misrepresented him (by construing my statement as the opposite of what I actually said), and ludicrously analogizing the expectation that he address counterarguments in a post he is responding to with a demand that one engage in depth with the highly specific details of conspiracy theories, he says:
The problem is that the “read the literature bro” sentiment has no downside and so it gets regularly abused. You can borrow any argument from any paper, claim that the literature robustly supports it, and sit tight with an easy air of superiority. The amount of work it takes to go and prove these claims wrong is far more than to just throw the claim out there, so they can usually skate by without being challenged.
But I’m not just throwing out claims from random papers. There are well-known paradoxes for decision-theoretic views that don’t imply fanatical results, and these will be discussed in any piece seriously addressing fanaticism. If you claim to have solved the issues in some domain, and then ignore all the issues for your own view—the 101-style ones that you’d read about in an introduction to the subject—even ones that have been known about for quite a while and are mentioned in the post you are pretending to respond to, you are full of shit.
It also entails that for any guaranteed finite outcome and any probability P, there is some vastly better outcome such that a P chance of it beats that guaranteed finite outcome.
Technically you can also give up dominance, but that’s crazy. Dominance says that gamble A is better than gamble B if it provides some chance of better outcomes and no chance of worse outcomes.
For a simple model, assume that there’s a view that says that utility up until 100,000 utils is valuable and then after that point extra utility has no value (note: this example is just illustrative, similar points will apply for other views). You can take a gamble that gives you a 1/2 chance of 10 utils and a 1/2 chance of nothing or one with a guarantee of one util. If there are no utils in far-away galaxies, then this view implies you should take the first gamble. If there are already 99,999 utils in other galaxies, then the second gamble becomes better—it guarantees you reach the maximum util cap rather than just having a 50% chance of doing so.