The "Suggesting Some Value Has Some Chance of Being Low In Response to Claims That It Is Large In Expectation," Fallacy
I think I'm getting the hang of this whole coming up with catchy names thing!
Fallacy talk is, to use a technical term, giga-cringe. An entire generation was raised on fallacies, and came to believe that the hallmark of true wisdom was having a collection of random informal fallacies that could be brusquely tossed out mid-argument. Fallacy talk and its consequences have been a disaster for the human race.
But if we’re going to have fallacies be part of the public lexicon, then I’d like to add my own. It’s called the “Suggesting Some Value Has Some Chance of Being Low In Response to Claims That It Is Large In Expectation” fallacy, or SSVHSCOBLIRTCTIILIE. Very concise. Unlike many “fallacies,” this one actually is 1) a kind of inference people sometimes make and 2) an error.
You can probably guess what the fallacy is from its name. It consists in treating an argument that some value is probably low as a rebuttal of the claim that it is large in expectation. Something’s expected value is the sum, over each value it might take on, of that value multiplied by its probability. For example, if I flip a coin that will give me $2 if heads and $1 if tails, the expected value is $1.50: you multiply 1/2 by $2 and then add 1/2 times $1.
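In symbols (this just restates the coin arithmetic): if the possible values are v_i and their probabilities are p_i, then

```latex
\mathbb{E}[X] = \sum_i p_i \, v_i
\qquad\text{e.g.}\qquad
\tfrac{1}{2}\cdot \$2 \;+\; \tfrac{1}{2}\cdot \$1 \;=\; \$1.50
```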
It should then be clear why merely establishing that a value might be low doesn’t establish that it’s very low in expectation. There can be a 99% chance that a course of action will achieve nothing and its expected value can still be very high. So merely establishing that a value is likely or plausibly zero doesn’t do anything to refute arguments from its expected value being high.
Here’s an analogy for the error: imagine that there was a button which, if pressed, would make you get punched in the face one time for every grain of sand that exists. Pressing the button would also give someone a sandwich. Someone is going to press the button, and argues this is a great idea because they think mereological nihilism is true, and so probably there are no grains of sand. Even if they’re pretty sure of mereological nihilism—even if they’re 99.9% sure—they shouldn’t press the button! Merely establishing that pressing the button is probably harmless doesn’t establish that it’s harmless in expectation.
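To put rough numbers on it (the grain count is a placeholder; ballpark estimates for Earth’s beach sand are on the order of 10^18 grains, and the exact figure doesn’t matter): at 99.9% confidence in nihilism,

```latex
\underbrace{0.001}_{P(\text{sand exists})} \times \underbrace{10^{18}}_{\text{punches if it does}} \;=\; 10^{15}\ \text{expected punches}
```

A quadrillion expected punches is a bad trade for one sandwich.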
I see this error committed a lot in discussions of effective altruism. Some examples of the sorts of statements that make this error:
“There’s no case for giving to shrimp welfare. Plausibly shrimp aren’t conscious at all. If they are conscious, their expected sentience probably correlates with their neuron counts. They have only a few hundred thousand neurons, so probably they’re just barely conscious, and giving to the Shrimp Welfare Project—which makes more than 10,000 shrimp deaths painless per dollar—isn’t a good thing.”
“Longtermism is silly. It’s plausible we’ll go extinct soon, so the future isn’t larger than the present. Thus, the argument for Longtermism from most people being in the future doesn’t go through.”
“Insect welfare isn’t valuable. Plausibly insects aren’t even conscious, and if they are, they’re most likely barely conscious.”
“God doesn’t exist because of the problem of evil, so Pascal’s wager doesn’t work. If there’s no God then it doesn’t make sense to wager on him.”
These are all errors. Even if you think insects and shrimp probably are not conscious, or are only minimally conscious, they might still matter a lot in expectation. Even if your best theory is that consciousness correlates with something like neuron count (and it shouldn’t be), neuron counts give such low estimates for the consciousness of simple creatures that most of their expected consciousness comes from worlds where neuron counts are not good proxies.
Thus, even if you think neuron counts are probably a good proxy, they’re a terrible proxy given uncertainty, particularly for creatures with very few neurons. Neuron counts as a proxy imply that humans are about twenty-one thousand five hundred times more conscious than house geckos. So to buy this as a proxy given uncertainty, you must be extremely certain that house geckos are barely conscious at all compared to humans—well above 99% confident that when you step on a house gecko’s tail, it feels no more than 10% of the pain you would if someone stepped on your leg with similar force. Even though the gecko will act distressed, as if it’s in a lot of pain, you must be nearly certain that it’s in barely any pain compared to your own.
And maybe that’s true. But it’s not obviously true. The odds it’s false aren’t on the order of .1%. So given uncertainty, you should assign way more weight to the house gecko’s interests. To justify using neuron counts, you must be nearly certain that creatures with few neurons barely suffer at all—but you obviously shouldn’t be.
This is a particularly good example of the fallacy because it illustrates how the reasoning goes wrong. People assume that because they have a model on which a value is small, and some arguments for that model, the value must be small in expectation. But that’s totally wrong. Expected value ranges over your uncertainty across models, including models on which the value is a lot larger.
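Here is a minimal sketch of that averaging, with made-up numbers (the 90/10 split and the 1/10 weight are assumptions for illustration, not estimates anyone has defended): you average the gecko’s moral weight over your credence in each model, not just under your favorite model.

```python
# Expected moral weight of a gecko relative to a human, averaging over
# uncertainty across models of consciousness. All numbers are illustrative
# placeholders.
models = [
    # (probability the model is right, gecko's weight relative to a human under that model)
    (0.90, 1 / 21_500),  # neuron counts are a decent proxy
    (0.10, 1 / 10),      # neuron counts are a bad proxy; gecko pain is within an order of magnitude
]

expected_weight = sum(p * w for p, w in models)

print(f"expected weight: {expected_weight:.5f}")            # ~0.01004
print(f"vs. proxy alone: {expected_weight * 21_500:.0f}x")  # ~216x the neuron-count estimate
```

Even with 90% confidence in the neuron-count proxy, the expected weight comes out a couple of hundred times larger than the proxy alone would say, because the expectation is dominated by the worlds where the proxy is wrong.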
Similarly, saying “here are reasons why extinction soon is likely” doesn’t rebut the case for Longtermism. Longtermism is the idea that making the future go well is extremely important. Strong Longtermists think it’s a lot more important than making the present go well. But you could be a Longtermist even if you were 90% sure there wouldn’t be many future people.
The case for Longtermism rests on there being lots of expected people, meaning the number of future people in each possible scenario weighted by that scenario’s probability. It doesn’t require that it be very likely that there will be a lot of people. High expected value doesn’t imply a high probability of high value. The expected monetary value of a 1/1,000 chance of getting a trillion dollars is a billion dollars, even though you probably get nothing.
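Spelled out:

```latex
\tfrac{1}{1{,}000} \times \$10^{12} \;=\; \$10^{9}\ \text{(a billion dollars in expectation)}
```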
I think this fallacy is a version of failing to distinguish between confidence levels inside and outside an argument. Scott Alexander explains the error well:
Suppose the people at FiveThirtyEight have created a model to predict the results of an important election. After crunching poll data, area demographics, and all the usual things one crunches in such a situation, their model returns a greater than 999,999,999 in a billion chance that the incumbent wins the election. Suppose further that the results of this model are your only data and you know nothing else about the election. What is your confidence level that the incumbent wins the election?
Mine would be significantly less than 999,999,999 in a billion. When an argument gives a probability of 999,999,999 in a billion for an event, then probably the majority of the probability of the event is no longer in “But that still leaves a one in a billion chance, right?”. The majority of the probability is in “That argument is flawed”. Even if you have no particular reason to believe the argument is flawed, the background chance of an argument being flawed is still greater than one in a billion.
…
If you can only give 99% probability to the argument being sound, then it can only reduce your probability in the conclusion by a factor of a hundred, not a factor of 10^20.
This is why it’s usually difficult to derail arguments with potentially astronomical stakes. If an argument has astronomical stakes, then if it is correct, some action has enormous expected value. Suppose that you come up with a great argument against it—an argument for why, with 99% probability, the first argument is wrong. Well, this only reduces the expected value of the first action by two orders of magnitude. So if the first argument established an expected value so large you’d need many exponents to write it out, it can survive a two-orders-of-magnitude hit with ease.
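To see the magnitudes (the 10^20 is a placeholder for “an expected value you need many exponents to write out,” and this assumes the action isn’t hugely harmful if the first argument fails):

```latex
\mathbb{E}[\text{value}] \;\approx\; \underbrace{0.01}_{P(\text{argument sound})} \times 10^{20} \;=\; 10^{18}
```

Two orders of magnitude gone, astronomically large all the same.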
And normally counterarguments aren’t good for even a two-orders-of-magnitude drop in expected value. That requires that you be 99% sure the original argument is wrong, and you generally shouldn’t be: humans are famously, wildly overconfident in their 99% guesses.
So not only is this reasoning strictly wrong; it makes people’s thinking more confused. It leads people to underrate important conclusions, thinking that raising a not-crazy objection is enough to dismiss them. Unlike most named fallacies, then, this one might actually be conceptually useful!


"Rule High Stakes In, Not Out" might be a catchier slogan for the key idea here :-)
https://www.goodthoughts.blog/p/rule-high-stakes-in-not-out
I think you're right about this, which does indeed make it hard for me to think insect welfare isn't very important (I'm still undecided on this). I'd also note that it's extra-annoying when this fallacy is used with "there's no evidence that X", instead of "probably not X".