Comments on Torture vs. Dust Specks
Responding to both the serious and unserious objections
Some people flatter their audiences and are captured by their audience's opinions. I aim to be like Richard Hanania, having nothing but scorn and disdain for my audiences when appropriate. So I will say: a lot of comments on the recent torture vs. dust specks article were pretty silly, though some were good. Here I'll respond to both.
For a brief summary, in that piece I made five arguments for the conclusion that some number of dust specks are worse than one torture:
Imagine taking a torture, making it a bit less bad, and inflicting it on more people. Then do that again and again and again. At each step of the way, the situation gets worse overall. But by the end, each individual pain is only as bad as a dust speck. So some number of dust-speck-level pains must be worse than a single torture.
If a torture is worse than any number of dust specks, certain other principles imply that no number of dust specks can be as bad as even the tiniest risk of torture (otherwise you could stack together many tiny risks of torture, each traded against some dust specks, and end up trading a high likelihood of torture against a large number of dust specks).
One dust speck is bad, infinite dust specks are infinitely worse than a single dust speck, so infinite dust specks are infinitely bad. Tortures, plausibly, are only finitely bad, so infinite dust specks are worse than one torture.
Our intuitions about infinite dust specks aren’t trustworthy because we don’t have trustworthy intuitions about big numbers.
Our intuitions about single cases are less trustworthy than intuitions about principles. Principles apply to many cases. So if a principle is right, we’d expect it to occasionally appear unintuitive. In contrast, if a case judgment is right, we wouldn’t expect it to conflict with a plausible principle.
These arguments are all pretty straightforward and powerful, and they appeal to quite modest premises. Yet many people gave pretty confused replies. I appreciate Ibrahim Dagher, who admitted that many of these are genuine problems but finds the alternative view so counterintuitive that he still thinks a torture cannot be outweighed by any number of dust specks. I'll start with the comments that I think missed the mark before addressing the ones raising good points. Neo left the following comment:
Never understood this argument, always seemed extremely fanatical.
I like cute dogs. I want there to be more cute dogs with happy lives in the universe. Is this mathematically justifiable under util? No. I just want it.
I would rather have a bunch of cute happy dogs existing than tile the same amount of space with hyper-efficient pleasure-generating-cells. Similarly, I really despise the idea of one person being literally tortured to avoid a much less significant pain inflicted on many others.
Must we all accept utilitarianism until we bite the bullet on claims like this? I certainly do not. Seems like a complete denial of your identity and complex preferences.
But there wasn't a single "this argument" to fail to understand. There were five arguments. And none of them was:
Utilitarianism is true.
If utilitarianism is true, then some number of dust specks are worse than a torture.
So some number of dust specks are worse than a torture.
That would obviously not be persuasive to non-utilitarians. But it's not what I argued. I appealed to very modest premises that non-utilitarians would accept in other contexts. So why did so many people act like I had assumed utilitarianism, or go off on irrelevant screeds about utilitarianism without addressing the arguments?
(Also worth noting: utilitarians can think that tortures are worse than any number of dust specks.)
Nathan Worsley said:
The clear logical error here is the assumption that suffering behaves like maths. Ie 2 suffering + 2 suffering = 4 suffering
If you don’t accept the first principle that mathematical logic applies cleanly to ethical reasoning then the entire article falls apart immediately
Tarang Sinha said:
Once more with your arguments, it seems the problem is that the moral calculus (if there is one) is not the same as simple adding and multiplying.
But that's not an assumption in any of the arguments. The closest I come is the premise that infinite instances of a bad thing are infinitely bad. But that doesn't assume that morality always obeys expected utility axioms or anything like that.
I think the error people made was assuming that the first argument requires pain to be precisely ranked on a cardinal scale. I talked about taking a pain, making it a bit less intense, inflicting it on many more people, and then repeating this. But that doesn't require some precise mathematical quantification of pain. You don't need to think painfulness can be precisely quantified to think it's coherent to talk about repeatedly making a pain a bit less intense.
The underlying value in question doesn't even need to be painfulness. We can just use some other quantity that correlates with pain. Suppose that you are crushed for ten minutes under a weight that doesn't quite kill you but is extremely painful. 100,000 people being crushed under a weight .0001 g lighter would be worse, and 10 trillion crushed under a weight .0001 g lighter than that would be worse than that. Iterating this process, we end up with the result that some number of people being under a tiny amount of pressure that's just a bit uncomfortable is worse than one torturous, bone-breaking crushing. Or, to give an example I gave in the piece:
Here’s a concrete way to set up the scenario. Let’s say that being boiled in 200-degree water is torture. Well surely 100 people being boiled in 199.99 degree water is worse. And 10,000 people being boiled in 199.98 degree water is worse than that. The process can continue until we arrive at the conclusion that a bunch of people being in water at some temperature where it’s just mildly uncomfortable is worse than one person being painfully boiled.
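To get a feel for how quickly this chain of trades bottoms out, here's a minimal sketch in Python. The 0.01-degree step, the 100x population multiplier, and the 120-degree stopping point are illustrative numbers of mine; nothing in the argument depends on them.

```python
# Walk down the temperature spectrum: each step cools the water by 0.01
# degrees while multiplying the number of people suffering in it by 100.
# Temperature is tracked in integer hundredths of a degree so thousands of
# subtractions introduce no floating-point drift.
temp_hundredths = 20000            # start: 200.00 degrees (torture-level)
exp10_people = 0                   # number of people, as a power of ten
while temp_hundredths > 12000:     # stop at 120.00 degrees (merely unpleasant)
    temp_hundredths -= 1           # make the water imperceptibly cooler...
    exp10_people += 2              # ...for 100 times as many people
print(f"{temp_hundredths / 100:.2f} degrees, ~10^{exp10_people} people")
# -> 120.00 degrees, ~10^16000 people: a vast but finite number suffices.
```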
Now, lots of people for some reason had gripes about this example. For instance, functional decision-theory’s fiercest advocate Hein de Haan wrote:
This seems like a bad example to support the point you want to make. The water being a bit less hot doesn’t imply the suffering is *any* less extreme!
True enough. But the argument doesn't require that! If the suffering is just as extreme (or more so) after some of the temperature reductions, then of course some number of people at the lower temperature is worse than one person at the higher temperature. All the argument needs, to conclude that a bunch of people in slightly uncomfortable water is worse than one person being boiled, is that for each temperature increment, some number of people at a slightly lower temperature is worse.
And again, we can replace the water temperature example with any other example of pain correlating with some degreed property—like increments of force applied to a person’s testicles—and the point still works fine.
Okay, that’s all for the comments that miss the point. What about the more serious ones?
Friend of the blog Elliott Thornley had a nice point about the risk argument:
There are other good risk-based arguments along with Huemer’s. They’re all Harsanyi derivatives. The basic thought is that -- behind a veil of ignorance -- the choice between dust specks and torture is a choice between (1) having each of an enormous number of people suffer a dust speck for sure, and (2) having each of those same people undergo a tiny risk of torture. For a tiny enough risk, they’d all be better off if you chose (1). So choose (1)!
https://www.jstor.org/stable/48799008?seq=1
https://www.cold-takes.com/defending-one-dimensional-ethics/
However, Ibrahim Dagher disagreed:
I agree the additive view has lots of good arguments in its favor — as someone who argues for bounded aggregation, I still have a very high credence (0.4?) in the additive view. For me, the argument that I really get worried about is the risk one. On risk, the way I try to salvage the bounded view is this: imagine you're entering a world and you don't know who you're going to be. There are Rayo's number of people, who each experience a dust speck. There is also one person who experiences torture. So, my chance of being the person who experiences torture is 1/(Rayo+1), and my chance of a dust speck is near-certain. Yet, I don't think I'd prefer entering this world over a 100% chance of a slightly-worse dust speck.
I don't really share this intuition. When I stub my toe, that seems worse for me than odds of being tortured that are pretty much indistinguishable from zero—where even if everyone took this risk of torture every millisecond until the end of the universe, across Graham's number of universes, the odds are basically zero that anyone would be tortured. But intuitions diverge, I guess. I'd also note that this view implies that by far the worst thing about mild pains—say, toe stubs—is the non-zero chance that they will culminate in torture-level pains.
This also has somewhat radical practical implications. If any risk of extreme pain dominates mild pain, then surely it also dominates mild pleasure. But this would mean that it's bad for you to get ice cream, assuming doing so mildly increases your odds of being tortured. Even something as simple as scratching your nose when it itches would be a bad idea if it increased your net odds of torture by one in googolplex—odds much lower than those of selecting some particular atom from across the observable universe.
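For what it's worth, under ordinary expected-value reasoning the comparison isn't remotely close. Here's a back-of-the-envelope check in log space; all the magnitudes below are hypothetical stand-ins I've chosen, and they're generous to the torture:

```python
# Compare orders of magnitude (log base 10), since 1/googolplex is far too
# small to represent directly as a float.
log10_itch = 0                # one unscratched itch as the unit of badness
log10_torture = 100           # suppose a torture is 10^100 times worse (generous)
log10_odds = -(10 ** 100)     # a one-in-googolplex chance of that torture
log10_expected_torture = log10_torture + log10_odds

print(log10_expected_torture < log10_itch)  # True
# The torture risk's expected badness sits roughly 10^100 orders of magnitude
# below the itch; only a lexical view can make the risk dominate anyway.
```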
This view also implies that ridding the world of mild pain would be categorically dominated by preventing very low risks. The odds are non-zero, though quite low, that one second from now, a torture device will spontaneously fizz into existence and torture me brutally. This view implies that preventing all mild pains on Earth and across the closest googol galaxies would be less good than reducing the risk of the torture device fizzing into existence by 1%.
Ibrahim has an interesting view called bounded aggregation. The idea is basically that as you get more and more instances of pain at some level of intensity, the marginal badness of each additional instance goes down. So the first guy stubbing his toe is somewhat bad, the next guy stubbing his toe is less bad, and so on. As the number of toe stubs approaches infinity, the total badness approaches an asymptote. This also gets you out of some weirdness with unbounded utilities.
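As a toy model of how this works (my gloss for illustration; Ibrahim's actual proposal may differ in its details), let each successive toe stub count for a fixed fraction of the badness of the previous one, so the total is a geometric series that approaches an asymptote:

```python
# Bounded aggregation, toy version: the n-th toe stub adds first * r**(n - 1)
# badness, so n stubs total a geometric series capped at first / (1 - r).
first, r = 1.0, 0.5

def total_badness(n: int) -> float:
    """Total badness of n toe stubs under geometric discounting."""
    return first * (1 - r ** n) / (1 - r)

for n in (1, 10, 100, 10 ** 6):
    print(n, total_badness(n))
# 1 -> 1.0; 10 -> ~1.998; 100 -> 2.0; 10^6 -> 2.0 (the cap first / (1 - r))
# The flip side: once many stubs exist, each further instance adds ~nothing,
# which is what the torture objection below exploits.
```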
Nonetheless, I think it’s very likely false.
I mean, for one, it delivers insane-seeming verdicts in the cases I described. It implies that there is some level of pain intensity such that if you replace all the pains at that level with milder pains inflicted on googolplex times more people, things are left better overall. That seems very implausible.
A bigger worry is that if the view implies bounded aggregation for all kinds of pains, then it implies that as the number of tortures approaches infinity, the marginal badness of extra tortures approaches zero. So if a lot of people were already being tortured far away, a lot of people on Earth being tortured would be barely bad at all. Its badness would be less than that of a single dust speck, if that were the only dust speck anyone had ever had. That seems to be just about the least plausible result ever.
Relatedly, the view violates separability. It implies that the badness of your pain depends—to an arbitrary degree—on whether other people in far off galaxies are suffering causally isolated pains. But that doesn’t seem true. The badness of me having a headache, say, doesn’t depend on how many headaches aliens 500 billion galaxies away have!
There's also a sense in which I don't feel the view really accommodates the intuition it's designed to preserve. If you have the intuition that no number of dust specks can outweigh a torture, then I think you'll have the following intuition:
No number of things that are each as bad as the first dust speck can outweigh a torture.
But this view doesn’t secure that. It just says that the dust specks drop off in badness. But if they really maintained the badness of the first dust speck, they’d be able to outweigh a torture.
Ibrahim is very smart, so if you present these objections to him in conversation, he’ll say smart things about them. But I find the view totally nuts.
Linch (my coworker and friend!) left the following comment:
So I probably agree with the thrust of your argument but I think it has a few holes.
First of all, I don't think a continuous chain of imperceptibly lower suffering is mathematically possible, fwiw. I think you can probably still rescue your argument without the "imperceptible" modifier, though it's tricky.
See https://linch.substack.com/i/182589405/the-intermediate-value-theorem
Also Eliezer’s original post was on 3^^^3 dust specks. Literal infinities break everything so not having a good answer to infinite ethics doesn’t strike me as a major knock-down on any moral theory or intuition.
I used infinity as a shorthand so I didn’t have to specify a number big enough. But all the arguments establish that there is some finite number big enough to get the job done—infinity not needed.
I think there are imperceptible differences in pains. For example, the amount of pain I’ll be in if thrown in 200 degree water is imperceptibly different from the amount if I’m thrown in 199.99999999999999999999999999999999999 degree water. It’s imperceptible not in the sense that there’s no difference but in the sense that the difference is too small to notice.
But in any case, not much hinges on this. All we need is that some number of people thrown into 199.99999999999999999999999999999999999 degree water is worse than one person in 200 degree water, that some number at 199.99999999999999999999999999999999998 degrees is worse than that, and so on. Even if the differences are perceptible, the principle still holds. And, as I'll discuss a bit later, we can make do with something even more moderate than that.
Skeptasmic (who has some really good posts providing skeptical explanations of eucharistic miracles) writes:
If I had the choice to have a dust speck in my eye to prevent any chance of someone else being tortured, I would take the dust speck. And I think just about anyone would. It doesn't matter how many people you multiply by: even with no veil, knowing 100% that they personally will be the ones with dust in their eye and that there is a 0% chance they would be the ones tortured, people would still choose the dust.
I assume that is because at some threshold, the utility of knowing there’s a chance, however small, of saving someone from torture will outweigh the negative of the pain.
To avoid that you could assume that the large number of people with dust in their eyes are blinded to the knowledge that it is preventing torture. But intuitively, if essentially everyone involved would choose dust given full information, it doesn’t make sense to choose torture on their behalf just because they don’t have full information.
I don't think the move about full information works. When you say things like "everyone would approve of X if they had full information," I think you need to stipulate that the process of acquiring the information is neither harmful nor beneficial to them. Otherwise you get results like: if everyone had full information that two elderly people with oozing pus wounds were sleeping together, they'd want them to cut it out, so those people have a reason to cut it out. Because I think infinite dust specks are worse than a torture, I think that if people were given full information (stipulating that having it wouldn't harm them in any respect), they would prefer the torture.
Raph left the following comment (there's a bit of invective at the beginning, but the comment is quite substantive, so I thought it was worth addressing):
You say:
> “Let me be clear on the minimal commitments. They are simply: 1. Replacement; 2. Transitivity
Both of these are extremely intuitive premises that we’d accept in any other domain.”
I disagree. I don’t find replacement “extremely intuitive”, and we wouldn’t automatically accept it in most math-adjacent domains.
Moreover — and that’s probably the main issue — Replacement as you phrased it, with “unit of suffering” comprises an implicit assumption.
Indeed, for the premise to be intuitive, you have to commit to a certain view of what suffering is: basically, that the space of possible suffering intensities can be cleanly mapped onto some numerical axis (an Archimedean structure, to be precise). Even if that is plausible, it is definitely a strong metaphysical claim, which is far from "extremely intuitive". And I feel like it basically becomes circular reasoning, as you kind of smuggled your conclusion into the premise.
Similarly, friend of the blog, coworker, and space savant Avi Parrack argued:
One thing I always think when presented with arguments of the flavor "but imagine changing X by some minuscule amount epsilon," used as an intuition pump that we should reason about things in a continuous way: I'm suspicious. In nature, for example, this is untrue in most cases.
Suppose I have a water molecule in my thimble. That is a little bit of water! But now imagine warming the thimble by epsilon—an imperceptible increase—and adding 100,000 times as many molecules. Surely I have more water. Now warm the thimble again increasing temperature by merely epsilon for 100,000 times as many water molecules. Seems like I have yet more water.
And yet we know that after doing this many many times, I will eventually look in my thimble to find… What? No water?! It has evaporated away into the air.
These kinds of phase transitions are everywhere. They run deep at the fundaments of our universe, e.g. in electroweak symmetry breaking, superconductivity, and Bose-Einstein condensation. They also emerge across every scale from the molecular to the sociological to the cosmological: e.g. magnets snapping into alignment, swarming/flocking behaviour, and cosmic inflation.
The intuition in question is not “nothing in nature ever depends on a small change.” Of course lots of things do. The intuition is simply that an arbitrarily large fluctuation in badness shouldn’t hinge on tiny and plausibly even imperceptible differences. It isn’t plausible that no number of tortures at a slightly lower intensity can outweigh tortures at a higher intensity.
Here’s one way to see the difference: the strangeness in the torture vs. dust specks cases arises from a kind of hypersensitivity—huge changes in badness hinging on small differences. And unlike in cases like the water boiling at some temperature, this isn’t because there’s some discrete threshold at which a process is triggered, making things much worse. Morality isn’t causal in that way. Instead, the torture outweighing dust specks view must simply assign arbitrary significance to a very small difference in intensity.
Additionally, even if you think there is a threshold somewhere, this doesn’t help much with the duration spectrum argument. Quoting my earlier piece:
In addition, we can turn the screws on the denier of Replacement by varying a number of the features in question. The denier of replacement must think that there’s a pain at some amount of intensity so that any number of pains at lower intensity is less bad than that single pain at the higher level of intensity.
But then let’s imagine taking that pain and varying its duration rather than its intensity. Assume the pain in question lasts 10 minutes. Surely replacing each of those pains with 100,000 pains of equal intensity that last 9 minutes and 59 seconds is worse. And replacing those with pains that last 9 minutes and 58 seconds is worse. At the end of this road, we’re left with the conclusion that a very large number of second-long pains at that level of intensity is worse than the ten minute pain at that level. But then replace each of those pains which last only a second with a pain that lasts 10 minutes at a slightly lower level of intensity. Surely that is worse. But then by transitivity, some number of pains at the lower intensity level must be worse than the pains at the higher intensity level.
Thus, the denier of Replacement must think something stronger. They must think that either:
There exists some level of pain intensity, so that if you make the pain imperceptibly less intense, and last 100,000 times as long, things haven’t gotten worse.
or
There exist pains at some levels of intensity so that shortening their duration by an imperceptible amount and inflicting them on 100,000 times more people doesn’t make things worse.
Continuing from Raph’s comment:
Then you say:
> “The denier of replacement must think that there’s a pain at some amount of intensity so that any number of pains at lower intensity is less bad than that single pain at the higher level of intensity.”
No, not necessarily. See: https://centerforreducingsuffering.org/lexical-views-without-abrupt-breaks/
The linked piece distinguishes between two kinds of inferiority:
Strong Inferiority: An object e is strongly inferior to an object e′ if and only if e is worse than any number of e′-objects.
Weak Inferiority: An object e is weakly inferior to an object e′ if and only if for some number m, m e-objects are worse than any number of e′-objects.
It notes (correctly) that those who reject the replacement principle I gave only have to believe in Weak Inferiority, not Strong Inferiority. In other words, you don't have to think one torture outweighs any number of less intense tortures, only that some number of tortures does.
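One way to formalize the distinction (my notation; the linked piece states it in prose), writing x ≺ y for "x is worse than y" and n · e for n instances of e:

```latex
% Writing x \prec y for ``x is worse than y'' and n \cdot e for n instances of e:
\text{Strong inferiority:}\quad \forall n \in \mathbb{N}:\ e \prec n \cdot e'
\text{Weak inferiority:}\quad \exists m \in \mathbb{N}\ \forall n \in \mathbb{N}:\ m \cdot e \prec n \cdot e'
% Strong is the special case m = 1; rejecting Replacement requires only Weak.
```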
Fair enough, but I don’t think this helps much. If you believe in Weak Inferiority, you think that if you have a sequence of tortures, and they’re each replaced with a billion slightly less intense tortures, then things haven’t gotten worse. That is very hard to believe.
Similarly, if you believe in Weak but not Strong Inferiority, you’ll have to give up separability. You’ll have to think, in other words, that the badness of replacing one torture with a bunch of other tortures depends on how many other causally-isolated tortures are going on elsewhere in the galaxy. That is very hard to believe!
Next Raph argues that there’s a plausible point where the threshold could be. It could simply arise when the pain becomes unbearable, so that the sufferer would do anything to make it stop. It doesn’t, Raph claims, seem too bad to hold that replacing one unbearable pain with a bunch of barely bearable pains would leave things better.
I don't find this plausible. Insofar as the thing separating bearable from unbearable pains is a quintillionth of a degree of temperature, it seems clear that replacing each unbearable pain with googolplex bearable pains would make things a lot worse. Barely bearable pains don't seem substantially different from unbearable pains. Certainly not so different that replacing each barely unbearable pain with googolplex barely bearable ones wouldn't leave things worse.
The other thing is that the bearability of pain seems to depend on lots of morally insignificant factors. Whether a pain is bearable might depend on, say, the color of the light in the room (if that affects the threshold at which the person gives in for random psychological reasons). But then if you think that the bearable pains can’t be outweighed, then changing the room color would be worse than causing googolplex times more people to be tortured. That doesn’t seem right!
Raph continues:
I believe your section 3 about risk is completely irrelevant. It confuses a discussion about value theory with one about decision theory. You are suggesting that a difficulty about decision-making implies something about axiology.
I definitely agree with the difficulties that are raised by very small likelihood of high stakes, but I don’t think it can imply any axiological claim.
> “Together, then, these principles imply that some number of dust specks are worse than a torture.”
No. They imply something about *decision*, but nothing about *axiology*.
From what I’ve skimmed, Huemer’s original argument does not make this mistake.
For the record, Huemer doesn't think his argument is just about risk. He thinks it's a general argument against lexical superiority. The way I set it up was in terms of lotteries rather than decisions. If there are a lot of galaxies, each with a one-in-Rayo's-number chance of a person being tortured, that is better than if, in each of them, 100 quintillion people get dust specks in their eyes. Yet all those galaxies together (assuming there are at least Rayo's number of them) come close to guaranteeing that someone will be tortured. Huemer's original argument is about decision-making, but it can be suitably modified to be about lotteries. Then the only separability principle you need is that the badness of each lottery doesn't hinge on the other causally isolated lotteries.
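To see why the stacking works, here's a quick sketch of the chance that at least one torture occurs across many such galaxies. The per-galaxy risk below is a tiny stand-in I chose, since 1/Rayo's number won't fit in a float; only the shape of the result matters:

```python
import math

def p_at_least_one_torture(per_galaxy_risk: float, num_galaxies: float) -> float:
    """P(at least one torture) = 1 - (1 - p)^G, computed in log space for stability."""
    return 1 - math.exp(num_galaxies * math.log1p(-per_galaxy_risk))

p = 1e-12  # tiny stand-in for a one-in-Rayo's-number per-galaxy risk
for multiple in (1, 10, 100):
    galaxies = multiple / p  # 'multiple' times the reciprocal of the risk
    print(f"{multiple}x: {p_at_least_one_torture(p, galaxies):.6f}")
# 1x: 0.632121 (= 1 - 1/e); 10x: 0.999955; 100x: 1.000000 (to six places)
# Once the galaxy count comfortably exceeds 1/p, a torture is near-certain.
```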
> “could it really be that there’s some small jump in pain-level that spikes things from a finite level to an infinite level.”
Nope nope nope. Clear logical mistake here to imply any discontinuity. The graph would NOT have to look like this. It could totally be continuous (and would certainly be).
See https://forum.effectivealtruism.org/posts/je5TiYESSv53tWHC9/utilitarians-should-accept-that-some-suffering-cannot-be-1 (look at the graphs in section 7, but also the whole thing).
Here are the graphs in question from Aaron Bergman’s piece:
I agree that it was imprecise to suggest this must be discontinuous; the graph can be continuous. But still, on Aaron's graph there is some value N such that the Y value at N is finite while the Y value at N + .001 is not. That's what I was getting at, though I agree it was phrased poorly.
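For concreteness, here's one toy curve with exactly that shape (my example, not Aaron's): continuous and finite everywhere below the threshold, but blowing up as the threshold is approached.

```latex
% Badness as a continuous function of pain intensity x, with a vertical
% asymptote at some threshold intensity x^*:
B(x) = \frac{1}{x^{*} - x}, \qquad 0 \le x < x^{*}
% B is continuous and finite at every x < x^*, yet B(x) \to \infty as x \to x^*,
% so intensities at or beyond x^* count as lexically worse with no jump
% discontinuity anywhere below the threshold.
```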
All in all, then, it seems pretty clear to me that the case for thinking some number of dust specks outweighs a torture is strong. A number of modest premises, each of which we'd accept in other settings, collectively entail this conclusion. There are also reasons not to trust our intuitions about numbers this big. Our moral intuitions are far from infallible, and this seems like one of the clearest cases where an intuition simply has to be revised in light of argument.