0
The brain is a strange instrument. It evolved, from the brains of fish, early mammals, and apes, to improve our survival in hunter-gatherer times. It is certainly not without upside. In the absence of brains, none of you would be reading this excellent blog, and I wouldn’t be writing this excellent blog.
The brain is an imperfect device. It does some things very well; it’s quite good at addition and basic reasoning[1]. However, the flaws that infect human reasoning are quite pernicious, producing systematic errors and error-ridden verdicts that ripple through the whole edifice of mental machinery, spreading like a cancer. The poor reasoning of the average person is not born merely of an inability to reason; it is also caused by particular failure modes that infect the reasoning they do.
One[2] of the most crucial defects in the human brain[3] is the inability to multiply. One particular fascination of mine, other than utilitarianism, relates to large numbers. No, not large like a million or a billion. Large like Graham’s number. Numbers that put a googolplex to shame.
The immensity of these numbers truly defies comprehension. They operate like Eldritch horrors, as far beyond the realm of anything we understand intuitively as Cthulhu or Azathoth. Like Cthulhu and Azathoth, picturing these numbers in your head would cause your brain to break[4]. They’re totally opaque to the human mind, which is left merely groping around the faint edges of their immensity.
And though we can’t understand their enormity, we can still make judgements, based on rationality, about how one ought to act in relation to them. Rationality dictates that if a single instance of something is a little bit bad, then multiplying its prevalence by a factor of Graham’s number turns it into something very bad. Very bad understates it: something worse by a factor of Graham’s number would be the worst thing ever, by unfathomable orders of magnitude.
Our brains resist the demands of rationality, demands which require that we shut up and multiply. And yet this injunction to follow the dictates of rationality, to shut up and multiply, is not just an optional extra. It is required to come to the correct moral conclusions. It is indispensable to justified reasoning the way oxygen is indispensable to fire. And like fire, we shy away from it. Some use its cruel radiance, its unnerving danger, to burn their enemies, calling them immoral for their willingness to sit by the fire and bask in its warmth. Yet if we are to progress, we must sit by the fire and use it.
Time and time again, psychological experiments have borne out the inability of the human brain to multiply: its inability to appreciate how much larger a billion is than a million, or the sheer quantitative gulf between the age of the universe and the time that has passed since the middle of the Miocene.
Enough specks of dust possess a collective size that rivals Mount Everest. More dust than that surpasses the size of Everest a millionfold. Why would we expect morality to be any different?
When thinking through ethical problems, you, dear reader, almost certainly rely on what seems to be the case. Whatever your verdict on the trolley problem, it was probably reached by thinking about the scenario and drawing conclusions from what seemed to be the case: whether you had a positive or negative emotional reaction to the trolley problem when you first heard about it.
Perhaps, over time, you gradually changed your view about some cases. Reflecting on other analogous cases can change your emotional reaction, resulting in a shift in your reasoned judgement, but it usually doesn’t. As Haidt argues, much of reasoning is post hoc justification of intuitive judgements.
The trap of rationalizing initial intuitions is one that ensnares us all. Even minds as great as Parfit’s were held back by an inability to accept the repugnant conclusion. I recall a quote of his, which I can’t find, along the lines of “I’d rather reject ethics as a whole than accept the repugnant conclusion.”
Rationality can be quite demanding, forcing us to shed judgements we once held dear. However, we must abide by its dictates if we are to be ethical. This article is not just an argument against a particular thought experiment deployed against utilitarianism, although it certainly serves that role. It is intended to break your blind faith in the reliability of your intuitions, to show how a judgement you likely hold as dear as any in your arsenal is rationally indefensible. It cannot withstand philosophical scrutiny. There are no plausible counterarguments to defend the initial judgement. It is wholly unreasonable.
You have probably already accepted that lots of other people have intuitions about morality that are disastrously wrong. Wherever you fall on the trolley problem, abortion, or population ethics, you probably think that lots of other people are getting things disastrously wrong. You also no doubt recognize that throughout most of human history, most humans have gotten things disastrously wrong. Previous societies have tolerated slavery, genocide, the repression of gay people, and many other horrors.
As Ozy says:
“But there are lots of issues where I am right and Greek and Roman philosophers are wrong.
And these are not especially controversial moral issues. Here’s a quick list of examples of beliefs I’m talking about:
You shouldn’t keep slaves.
You shouldn’t torture people.
You should also care about people who live in a different city than the one you live in, who were born in a different city than the one you live in, or who speak a different language than you.
You should let women vote, leave the house, choose who they marry, and generally exercise some amount of self-determination over their own lives.
Your national sport shouldn’t be watching people get murdered.
You shouldn’t rape pubescent children, especially if they are also your slaves.
If a country consists exclusively of former child soldiers with CPTSD and the slaves they can randomly murder on a whim, that is in fact bad, and not good.”
Parfit said that the three main moral views were climbing the same mountain on three sides. If this is true, it still means that lots of moral views throughout history have involved falling off the mountain headfirst, to immediate death. The best theories may converge, but many theories are wrong on a scale rivaled by little else.
But it’s very easy to accept that other people have crazy moral views, especially dead people who are no longer here to argue. It’s a very different thing to believe that you have crazy moral views: views that would be viewed (pun intended) by a perfectly rational and moral community the way we view support for torturing people for no reason.
If we really think it through, our confidence in the correctness of our moral views begins to seem deeply unjustified. No doubt the Greeks would have agreed that past moral views were crazy. You, dear reader, are like the Greeks. You most likely hold immense confidence in the correctness of your moral views and the folly of previous ones. Yet what are the odds that yours is the first generation not to have crazy moral views, the first generation to have reliable moral intuitions about the world? They are pretty damn low.
Maybe you’re not convinced. It’s very hard for people to shake firmly held convictions about the correctness of their moral views. Yet I hope this article plants a seed of doubt, showing that at least one of your moral intuitions is very wrong. This also serves as an argument for utilitarianism. If, every time our intuitions depart from the utilitarian line, there are independent reasons to support the utilitarian conclusion, that means utilitarianism reaches the answer we would eventually reach upon reflection, prior to needing any reflection. Much like scientific theories, ethical theories make predictions: they predict that their judgements will be borne out. Utilitarianism’s judgements always are. Every single time I’ve reflected on a thought experiment, I’ve found the utilitarian line to be rationally inescapable. This has happened upwards of 30 times. Every…single…time, utilitarianism turned out to be correct.
But I’m getting ahead of myself. What is this terrifying thought experiment, on which our moral intuitions are systematically wrong? One where the utilitarian verdict seems disastrously wrong but turns out to be undeniably correct? It is the following.
1
If given the choice, which should one prevent: one innocent person being horrendously tortured, or 10!!! people (each exclamation point denotes an iterated factorial, so 10!!! = ((10!)!)!) getting slightly irritating dust specks in their eyes that are forgotten about 5 seconds later? The common sense view, espoused even by allegedly serious philosophers, is that one should prevent the torture. This view is wrong. Very wrong. So wrong that the English language struggles to express the sheer folly of it. If one reflects for five minutes, this becomes very obvious.
David Friedman says the following:
“Economists are often accused of believing that everything—health, happiness, life itself—can be measured in money. What we actually believe is even odder. We believe that everything can be measured in anything. My life is much more valuable than an ice cream cone, just as a mountain is much taller than a grain of sand, but life and ice cream, like mountain and sand grain, are measured on the same scale. This seems plausible if we are considering different consumption goods: cars, bicycles, microwave ovens. But how can a human life, embodied in access to a kidney dialysis machine or the chance to have an essential heart operation, be weighed on the same scale as the pleasure of eating a candy bar or watching a television program? The answer is that value, at least as economists use the term, is observed in choice. If we look at how real people behave with regard to their own lives, we find that they make trade-offs between life and quite minor values. Many smoke even though they believe that smoking reduces life expectancy. I am willing to accept a (very slightly) increased chance of a heart attack in exchange for a chocolate sundae.”
This basic principle, that bad things are commensurable, that there are no evils with a different qualitative texture beyond the reach of any number of more minor ills, is a necessary feature of badness. In no other domain do we think that no number of minor instances of some event can add up to a greater impact than an upscaled version. Whales are large, but their size can be surpassed by vast numbers of amoebas. Fires are hot, but their heat can be surpassed by vast numbers of ice cubes. Arsenic is far more unhealthy than ice cream, but the detrimental health effects of everyone eating 30 billion gallons of ice cream would surpass those of one person eating an iota of arsenic. Monkeys typing out Shakespeare is far less likely than flipping heads on a single coin, but there is some number of consecutive heads that is conjunctively less likely than monkeys typing out Shakespeare.
If you think that dust specks are a little bit bad and torture is very bad, then if we multiply the badness of dust specks by a large enough number, we’ll get the badness of torture. There must be some number of dust specks whose collective badness outweighs the badness of torture.
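In symbols, this is just an Archimedean property of badness. Here is a minimal formalization, assuming badness can be represented by a real-valued, additive function (the function $B$ is my notation, not anything from the original debate):

$$B(\text{speck}) > 0 \;\implies\; \exists\, n \in \mathbb{N} \,:\, n \cdot B(\text{speck}) > B(\text{torture})$$

The anti-aggregation view must deny this, either by making torture’s badness lexically greater than any amount of speck-badness or by denying that badness adds up at all.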
2
The problem gets worse for those who think that one should prevent the torture over the dust specks. To think this, they must accept that some types of suffering are so heinous that no number of lesser sufferings can ever outweigh them.
However, this view is mistaken. Suppose we were deciding between 1 torture causing 1000 units of pain (which we’ll take to be the amount of pain caused by tortures on average) and 1000 tortures each causing 999 units of pain. It seems clear that the thousand tortures would be worse. Now we can repeat the process: which would be worse, 1000 tortures with 999 units of pain each, or 1 million tortures with 998 units of pain each? This process can continue until we conclude that some vast number of “tortures,” each inflicting as much misery as a speck of dust, is worse than 1 torture causing 1000 units of pain. To hold that there’s a lexical difference between types of pain, one would have to hold that there’s some threshold of pain with the odd characteristic of being the cutoff point: any tiny amount of suffering above the cutoff outweighs any amount of suffering below it. For example, if one claims that the cutoff sits at an amount of pain equivalent to stubbing one’s toe, then one must claim that infinitely many people experiencing pain one modicum below a toe stub is less bad than 1 person having a 1 in 100 quadrillion chance of experiencing pain one modicum above a toe stub.
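Here is a minimal sketch of this descending chain in Python, assuming badness aggregates additively; the 1000× multiplier and one-unit pain decrements are the ones used above, while the simple count × pain measure of total badness is my toy assumption:

```python
# Toy model of the descending chain: at each step, the number of victims
# multiplies by 1000 while per-victim pain drops by one unit.
count, pain = 1, 1000
for step in range(6):
    total = count * pain  # aggregate badness, assuming simple addition
    print(f"step {step}: {count:,} victims at pain {pain} -> total badness {total:,}")
    count *= 1000
    pain -= 1
```

Each step trades roughly a 0.1% drop in per-victim pain for a thousandfold increase in victims, so the aggregate strictly grows; repeating the step walks all the way down to dust-speck-level pain.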
One could hold a separate, liberty-based view, according to which the torture outweighs because it involves violating people’s rights, which categorically matters more. Yet this view is wrong. Given the choice between preventing one person from having 5 dollars stolen, which would clearly be a liberty violation, and getting rid of all deadly diseases, it seems clear that preventing the diseases would matter more. There’s a reason we spend more money trying to eradicate disease than trying to prevent single individuals from having their stuff stolen. If we held the view that preventing theft mattered more than eradicating disease, then we should devote no money to preventing disease until all theft had been eradicated. Thus, if natural phenomena like disease can matter more than liberty violations, other natural phenomena like dust specks can also matter more.
One could hold the view that these types of suffering are so different that they can’t be compared, yet this view is also wrong. The pain of losing a loved one is of a very different type from the pain of a toe stub, yet it’s clear that losing a loved one is worse. Difference does not entail incomparability.
3
The gradual-diminishment-in-pain objection to the clear-cutoff view is decisive. So was the first objection: lots of bad things can, by necessity, add up to one very bad thing. One final issue with the anti-torture radicals is that their view is not reflective of anyone’s revealed preferences.
Lots of dust specks can obviously possess enough collective badness to outweigh the benefits of going to the pub for an hour. Preventing a googolplex of people from getting dust specks in their eyes would be far better than taking an action with a 1 in a billion chance of letting one extra person go to the pub. So we know that dust specks and journeys to the pub are measured on the same qualitative scale, even if going to the pub is more valuable. Maybe going to the pub is as good as preventing 1000 dust specks; the precise ratio isn’t clear.
Well, people often go to the pub. If people stayed at home, their odds of being kidnapped and tortured would be much lower. If people really held the view that torture was so horrendous that it couldn’t be outweighed by any positive benefits, they wouldn’t risk being kidnapped and brutally tortured just for a trivial pleasure. It’s much easier to think torture is infinitely disvaluable when one is pontificating philosophically. No one applies that view in their actual life.
If you drive to buy ice cream, you’re risking your life for ice cream. Lives are not infinitely more valuable than ice cream cones. They’re just a lot more valuable.
4
Maybe you still hold the intuition and feel like I must be engaging in some trickery: no plausible ethical principle, you think, could justify lots of dust specks being collectively worse than torture. Well, remember that your brain has no ability to conceive of large numbers and is unable to multiply well in moral matters.
The number of dust specks is 10!!!. Take a moment to consider just how vast a number this is. 10! = 3,628,800. This is already an immense number. Now we take that number’s factorial and get 10!! ≈ 9.051994 × 10^22,228,103. This number is beyond immense. It’s more than the number of atoms in the universe; 10!!!, the factorial of it, is more than the number of atoms in the universe raised to its own power. The seconds of pain that this many dust specks would inflict outnumber, by orders of magnitude, the milliseconds of pain humans have experienced in all of history. This number defies comprehension.
There are between 10^78 and 10^82 atoms in the observable universe. To write those numbers, you’d need only 79 to 83 digits. The number 10!! would require over 22 million digits to write. And that is with only two factorial signs.
With three factorial signs, the number is about 10^(2.012087 × 10^22,228,111). You couldn’t write this number down in the universe; it wouldn’t fit. Not even close. Not even if you wrote every digit on its own atom. Not only does this number defy comprehension, the size of the paper needed to write it down defies comprehension.
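These magnitudes are easy to verify. Here’s a minimal sketch in Python, assuming the article’s iterated-factorial notation (10!! = (10!)! and 10!!! = ((10!)!)!); the helper function and variable names are mine:

```python
import math

LOG10_E = math.log10(math.e)

def log10_factorial(n: int) -> float:
    """log10(n!) computed via the log-gamma function."""
    return math.lgamma(n + 1) / math.log(10)

f1 = math.factorial(10)               # 10! = 3,628,800
log10_f2 = log10_factorial(f1)        # log10(10!!) ~= 22,228,103.96
digits_f2 = math.floor(log10_f2) + 1  # ~22.2 million digits

# 10!!! = (10!!)! is far too large to feed to lgamma, so use Stirling's
# leading term, log10(m!) ~= m * (log10(m) - log10(e)), and keep the
# result in log-of-log form.
log10_log10_f3 = log10_f2 + math.log10(log10_f2 - LOG10_E)

print(f"10!   = {f1:,}")
print(f"10!!  ~ 10^{log10_f2:,.2f} ({digits_f2:,} digits)")
print(f"10!!! ~ 10^(10^{log10_log10_f3:.4f})")  # ~ 10^(2.01 x 10^22,228,111)
```

The output reproduces the figures above: 10!! ≈ 9.05 × 10^22,228,103 and 10!!! ≈ 10^(2.01 × 10^22,228,111).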
In terms of total pain experienced, the difference between 10!! dust specks and 10!!! dust specks is greater than the sum total of all pain that will ever be experienced by humans, unless we achieve immortality and survive forever. And yet to our intuitions, it just looks like an extra exclamation point.
So, dear reader, wouldn’t you think that maybe, just maybe, the vastness of these numbers has moral significance? That a number of very minor bad things so vast it couldn’t be imagined without a brain the size of the universe collapsing into a black hole might produce enough total badness to outweigh more salient very bad things like torture, things we can get our minds around?
A day of torture is no doubt dreadful. I’ve had the excellent fortune of never having been brutally tortured[5]. However, I’ve also had the good fortune of never having experienced so many lifetimes’ worth of dust-speck-induced pain and irritation that the universe would burn out before one could write down the number of years. Contemplation of the sheer horror of torture leaves me with the very firm conviction that torture is very, very bad. But no amount of abstract contemplation about the horrors of torture should leave one with the conviction that it’s infinitely worse than trivial ills.
If X is infinitely worse than Y, then however low the odds of X, the mere possibility of X is worse than the certainty of Y. So let’s apply this principle to torture versus dust specks. If you think that torture is infinitely worse than dust specks, or at least enough worse that 10!!! dust specks are less bad than a single torture, you’d have to accept that curing the world of irritating and painful dust specks would not be worth even a 1 in (10^22)^(10^34) chance, that is, a 1 in 10^(2.2 × 10^35) chance, of one person being tortured. Take a moment to consider how low these odds are.
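In expected-value terms, here is a sketch of what the lexical view commits one to; the badness function $B$ and probability $p$ are my notation:

$$B(\text{torture}) = \infty \;\implies\; p \cdot B(\text{torture}) > B(\text{all the world's dust specks}) \quad \text{for every } p > 0$$

Any nonzero probability of the torture, however microscopic, must dominate the certain prevention of every dust speck.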
The brilliant Scott Alexander gives us a point of apt comparison. “Or second, that despite our best efforts, a research institute completes an unfriendly superintelligence. They are seconds away from running the program for the first time when, just as the lead researcher’s finger hovers over the ENTER key, a tornado roars into the laboratory. The researcher is sucked high into the air. There he is struck by a meteorite hurtling through the upper atmosphere, which knocks him onto the rooftop of a nearby building. He survives the landing, but unfortunately at precisely that moment the building is blown up by Al Qaeda. His charred corpse is flung into the street nearby. As the rubble settles, his face is covered by a stray sheet of newspaper; the headline reads 2016 PRESIDENTIAL ELECTION ENDS WITH TRUMP AND SANDERS IN PERFECT TIE. In small print near the bottom it also lists the winning Powerball numbers, which perfectly match those on a lottery ticket in the researcher’s pocket. Which is actually kind of funny, because he just won the same lottery last week.
Well, the per-second probability of getting sucked into the air by a tornado is 10^-12; that of being struck by a meteorite 10^-16; that of being blown up by a terrorist 10^-15. The chance of the next election being Sanders vs. Trump is 10^-4, and the chance of an election ending in an electoral tie about 10^-2. The chance of winning the Powerball is 10^-8 so winning it twice in a row is 10^-16. Chain all of those together, and you get 10^-65. On the other hand, Matthews thinks it’s perfectly reasonable to throw out numbers like 10^-66 when talking about the effect of x-risk donations. To take that number seriously is to assert that the second scenario is ten times more likely than the first!”
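Chaining the per-event probabilities from the quoted passage is a quick check in Python; the labels are just abbreviations of the quote’s events:

```python
import math

# Per-event probabilities quoted above; multiplying them chains the
# independent events into one combined scenario.
probs = {
    "sucked up by tornado (per second)":  1e-12,
    "struck by meteorite (per second)":   1e-16,
    "blown up by terrorist (per second)": 1e-15,
    "Sanders vs. Trump election":         1e-4,
    "electoral tie":                      1e-2,
    "winning Powerball twice in a row":   1e-16,
}
combined = math.prod(probs.values())
print(f"combined probability ~ {combined:.0e}")  # ~ 1e-65
```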
This deeply implausible scenario has odds of about 10^-65. That is a vanishing scintilla of the vastness of 10!!!; 10^65 doesn’t even come close to the number of digits in 10!!!. So, for those of you who would prevent the torture rather than the dust specks: would you hold that eradicating all of the world’s painful dust specks, at the cost of one torture occurring should the following scenario come to pass, would be worth it? This is an obvious modification of the Alexander case.
A particular person is seconds away from running the program for an AGI for the first time when, just as the lead researcher’s finger hovers over the ENTER key, a tornado roars into the laboratory. The researcher is sucked high into the air. There he is struck by a meteorite hurtling through the upper atmosphere, which knocks him onto the rooftop of a nearby building. He survives the landing, but unfortunately at precisely that moment the building is blown up by Al Qaeda. His charred corpse is flung into the street nearby. As the rubble settles, his face is covered by a stray sheet of newspaper; in small print near the bottom, it lists the winning Powerball numbers, which perfectly match those on a lottery ticket in the researcher’s pocket. Which is actually kind of funny, because he just won the same lottery last week.
Meanwhile, a two-hour drive away, another particular person is also in the midst of building the first AGI when they face an identical scenario to the first: winning the lottery for the second time, being sucked up by a tornado, struck by a meteorite, and then killed by Al Qaeda.
Meanwhile, a third person, chosen at the beginning, flips 5000 fair coins, which all come up heads. Meanwhile, all men in North America have simultaneous heart attacks and die. Meanwhile, ten beginners simultaneously beat Stockfish 10 by selecting random moves.
I think you would agree that it makes sense to eradicate the world’s dust specks and accept the risk. But if dust specks are infinitely less bad than torture, you cannot accept this view. It would be risking infinite badness to prevent merely finite badness.
Conclusio latior articulus
So what have we learned from this[6]?
Our intuitions are not reliable sources of truth. They’re sometimes right, and they provide weak evidence for a conclusion. But the truth often diverges from our intuitions.
Much like the truths of physics, which are so mind-bending that people find quantum mechanics hopelessly confusing, the truths of ethics are often deeply counterintuitive. Attempting to reconcile our starting intuitions with the truths of physics is no way to do physics. It’s similarly no way to do economics. Is it a way to do ethics?
The answer is, of course, a resounding no. Previous articles have provided numerous examples of when our intuitions are just backwards. This article was not unique—it just provided a particularly obvious and salient example.
The other interesting implication is that intuitions can cause us to support things that are truly evil. If you accept that evils are commensurable, that some number of dust specks (say 1 million) is as bad as a torture, then the intuition that one should prevent the one torture rather than the 10!!! dust specks isn’t just wrong. If one were actually in a situation with those two options and chose to prevent the torture, one would have committed the single most horrific act in history.
The murder of millions of people is not in the same ballpark of badness as 10!!! dust specks. History’s worst ills, the starvation brought about by the Russian revolution, the millions murdered by the Nazis: all of these would be child’s play compared to the evil brought about by good-hearted people relying on their intuitions and refusing to shut up and multiply.
Fortunately, no one is in a position to make that choice. But it’s telling that this immense reliance on intuitions results in otherwise moral people being willing to commit history’s worst act. Most of you probably would have done so before reading this article; some probably still would.
Many would find this conclusion revolting. If a politician publicly wrote in favor of preventing the dust specks over the torture, they’d be seen as sociopathic. Yet it is really common sense morality that is sociopathic, devoid of empathy. Each dust speck is not just a number on a page; it is a life made worse. The common sense view holds that the badness of the dust specks caps out at some point, ignoring the difference between 10!! and 10!!!. Yet that difference amounts to more pain than all the pain in human history, and ignoring it is a massive ethical failure.
Much like the correct ethical view must care more about a billion deaths than a million, and more about a million than a thousand, it must also care more about 10^(2.012087 × 10^22,228,111) dust specks than about 9.051994 × 10^22,228,103 dust specks. Small irritations add up.
The utilitarian devotion to multiplying is born of caring equally about all moments of existence, even when some are more salient than others.
Wrong acts often have a superficial plausibility, caused by our empathy being skewed towards only some of the affected parties. It is impossible for the human mind to imagine what the sum total of conscious experience will be like.
Perhaps, leaving this article, even if you’re convinced of the thesis about torture versus dust specks, your faith in intuitions is only slightly undercut. Surely, one instance of intuitions failing is inadequate to undermine them across the board.
My reply to Huemer pointed out many more intuitions that are wildly unreliable. This article is not intended to single-handedly prove that our intuitions are very often systematically wrong. It’s just part of the cumulative case for utilitarianism. But three things are worth taking away.
First, the torture versus dust specks argument against utilitarianism does not work.
Second, utilitarianism got the right answer to the torture versus dust specks problem immediately, rendering the correct judgement rather than needing to shift its verdict after the results of careful reflection came in.
Third, the way most everyone, probably including you, makes decisions would lead them to conclude that they ought to commit the worst act in human history. Even when the reasoning bears out that it’s wrong, it’s still hard to accept that one ought to prevent the dust specks. The hypothetical worst act in history remains surprisingly alluring, even after reasoning reveals how awful an act it is.
These are perhaps not enough to undermine your conviction in the project of using intuitions to throw out utilitarianism. But they should leave some cracks in the foundations. Cracks large enough to fit universes’ worth of dust specks.
[1] Very basic reasoning. Most people reason at the level of the average person.
[2] Of many.
[3] Construing defects broadly, so as to include virtue ethics and deontology.
[4] Okay, technically “break” in different senses: in one sense we’re talking about breaking as in turning into a black hole, and in the other about going insane, but the basic idea is the same.
[5] With the possible exception of when I was compelled to read modern libertarian arguments for argumentation ethics.
[6] Other than, of course, that Latin always sounds impressive.