See here for part 1
A Bayesian Analysis Of When Utilitarianism Diverges From Our Intuitions
Why we shouldn't worry too much about the cases in which utilitarianism goes against our intuitions
“P(B|A) = P(B) × P(A|B) / P(A)”
—Words to live by
Lots of objections to utilitarianism, like the problem of measuring utility, rest on conceptual confusions. However, most of the objections that don’t rest on such confusions rely instead on the notion that utilitarianism is unintuitive. Utilitarianism entails the desirability of organ harvesting, yet some people have strange intuitions that oppose killing people and harvesting their organs (I’ll never understand such nonsense!).
In this post, I will lay out some broad considerations about utilitarianism’s divergence from our intuitions and explain why such divergences are not very good evidence against utilitarianism.
The fact that there are lots of cases where utilitarianism diverges from our intuitions is not surprising on the hypothesis that utilitarianism is correct. This is for two reasons.
There are enormous numbers of possible moral scenarios. Thus, even if the correct moral view corresponds to our intuitions in 99.99% of cases, it still wouldn’t be too hard to find a bunch of cases in which the correct view doesn’t correspond to our intuitions.
Our moral intuitions are often wrong. They’re frequently affected by unreliable emotional processes. Additionally, we know from history that most people have had moral views we currently regard as horrendous.
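To see how quickly even near-perfect reliability produces divergences, here is a rough illustration of my own (using the 99.99% per-case agreement figure above and assuming, purely for illustration, a space of a million distinct moral scenarios):

```python
# Illustrative sketch: even 99.99% per-case agreement implies many divergent cases overall.
agreement_rate = 0.9999            # per-case reliability of our intuitions (figure from the text)
num_scenarios = 1_000_000          # assumption: a large space of distinct moral scenarios

expected_divergences = (1 - agreement_rate) * num_scenarios
prob_never_diverging = agreement_rate ** num_scenarios

print(expected_divergences)        # ~100 -- about a hundred cases where the correct view looks unintuitive
print(prob_never_diverging)        # ~3.7e-44 -- essentially no chance the correct view never diverges
```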
Because of these two factors, our moral intuitions are likely to diverge from the correct morality in lots of cases. The probability that the correct morality would always agree with our intuitions is vanishingly small. Thus, given that this is what we’d expect of the correct moral view, the fact that utilitarianism frequently diverges from our moral intuitions isn’t evidence against utilitarianism. To assess the evidence properly, let’s consider some features we’d expect the correct moral view to have.
The correct view would likely be able to be proved from lots of independent plausible axioms. This is true of utilitarianism.
We’d expect the correct view to make moral predictions far ahead of its time—such as discerning the permissibility of homosexuality in the 1700s.
While our intuitions would diverge from the correct view in a lot of cases, we’d expect careful reflection about those cases to reveal that the judgments given by the correct moral theory are hard to resist without serious theoretical cost. We’ve seen this over and over again with utilitarianism, with the repugnant conclusion, torture vs dust specks, headaches vs human lives, the utility monster, judgments about the far future, organ harvesting cases and other cases involving rights, and cases involving egalitarianism. This is very good evidence for utilitarianism. We’d expect incorrect theories to diverge from our intuitions, but we wouldn’t expect careful reflection to lead to the discovery of compelling arguments for accepting the judgments of the incorrect theory. Thus, we’d expect the correct theory to be able to marshal a variety of considerations favoring its judgments, rather than just biting the bullet. That’s exactly what we see when it comes to utilitarianism.
We’d expect the correct theory to do better in terms of theoretical virtues, which is exactly what we find.
We’d expect the correct theory to be consistent across cases, while other theories have to make post hoc changes to the theory to escape problematic implications—which is exactly what we see.
There are also some things we’d expect to be true of the cases where the correct moral view diverges from our intuitions. Given that in those cases our intuitions would be making mistakes, we’d expect there to be some features of those cases which make our intuitions likely to be wrong. There are several of those in the case of utilitarianism’s divergence from our intuitions.
A) Our judgments are often deeply affected by emotional bias
B) Our judgments about the morality of an act often overlap with other morally laden features of a situation. For example, in the case of the organ harvesting case, it’s very plausible that lots of our judgment relates to the intuition that the doctor is vicious—this undermines the reliability of our judgment of the act.
C) Anti-utilitarian judgments get lots of weird results and frequently run into paradoxes. This is more evidence that they’re just rationalizations of unreflective seemings, rather than robust reflective judgments.
D) Lots of the cases where our intuitions lead us astray are cases in which a moral heuristic has an exception. For example, in the organ harvesting case, the heuristic “Don’t kill people” has a rare exception. Our intuitions, formed by reflecting on the general rule against murder, are thus likely to be unreliable.
Conclusions
Suppose we had a device that had a 90% accuracy rate in terms of identifying the correct answer to math problems. Then, suppose we were deciding whether the correct way to solve a math problem was to use equation 1 or equation 2. We use the first equation to solve 100 math problems, and the result is the same as the one given by the device 88 times. We then use equation 2 and find the results correspond with those given by the device in all 100 cases.
We know one of them has to be wrong, so we look more carefully. We see that the 12 cases in which the first equation gets a result different from that of the second equation are really complex cases in which the general rules seem to have exceptions—so they’re the type of cases in which we’d expect the equation that’s merely a heuristic to get the wrong result. We also look at both of the equations and find that the first equation has much more plausibility; it seems much more like the type of equation that would be expected to be correct.
Additionally, the second equation has lots of subjectivity—some of the values for the constants are chosen by the person applying it. Thus, there’s some room for getting the wrong result based on bias and the assumptions of the person applying it.
We then see a few plausible-seeming proofs of the first equation and see that the second equation isn’t able to make sense of lots of different math problems, so it has to attach auxiliary methods to solve those. We then hear that previous students who have used the first equation have been able to solve very difficult math problems—ones that the vast majority of people (most of whom use the second equation) have almost universally gotten wrong. We also see that equation 2 results in lots of paradoxical-seeming judgments—we have to call in our professional mathematician friend (coincidentally named Michael Huemer) to find a way to make them not straightforwardly paradoxical, and the judgment he arrives at requires lots of implausible stipulations to rescue equation 2 from paradox. Finally, we find that in all 12 of the cases in which the first equation’s answer diverges from the answer given by the device, there are independent, pretty plausible proofs of the results gotten by equation 1, and they’re more difficult problems—the type that we’d expect the device to be less likely to get right.
In this case, it’s safe to say that equation 1 would be the equation that we should go with. While it sometimes gets results that we have good reason to prima facie distrust (because they diverge from the method that’s accurate 90% of the time), that’s exactly what we would expect if it were correct, which means, all things considered, that’s not evidence against it. Additionally, the rational judgment in this case would be that equation 2—like many moral systems—is just an attempt to mirror our starting beliefs, rather than an attempt to figure out the right answer, which would force revision of our beliefs.
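To put rough numbers on the analogy (an illustrative sketch of mine, assuming the device errs independently on each problem):

```python
# How surprising is each agreement pattern if the device is independently right 90% of the time?
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

p_device = 0.9

# If equation 1 is correct, it agrees with the device exactly when the device is right:
print(binom_pmf(88, 100, p_device))   # ~0.10 -- 88/100 agreement is about what we'd expect
# If equation 2 is correct and independent of the device, perfect agreement is very unlikely:
print(p_device ** 100)                # ~2.7e-5 -- which suggests equation 2 was built to mirror the device
```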
Our moral intuitions are the same—they’re right most of the time, but on the hypothesis that they’re usually right, we’d still expect there to be lots of cases in which our moral intuitions are wrong. When we carefully reflect on the things we’d expect if utilitarianism were correct, they tend to match exactly what we see.
Torres Is Wrong About Longtermism
Very, very wrong
Phil Torres has written an article criticizing longtermism. Like most articles criticizing longtermism, it does so very poorly, making arguments that rely on some combination of question begging and conceptual confusion. One impressive feature of the article is that the cover image managed to make Nick Bostrom and Will MacAskill look intimidating—so props for that I guess.
So-called rationalists have created a disturbing secular religion that looks like it addresses humanity’s deepest problems, but actually justifies pursuing the social preferences of elites.
This is the article’s subtitle. Let’s see if, throughout the article, Torres is able to substantiate such claims.
The religion claim is particularly absurd. Religion in such contexts is just used as a term of abuse—it has no meaning beyond “X is a group doing things we don’t like.” As Huemer points out, to be a religion an organization needs to have many of the following: faith, supernaturalism, a worldview, a source of meaning, self-support, religious emotions, ingroup identification, a source of identity, and organization. Longtermism has very few of these—far fewer than the Democratic Party.
I won’t quote Torres’ full article for word count purposes—I’ll just quote the relevant parts.
In a late-2020 interview with CNBC, Skype cofounder Jaan Tallinn made a perplexing statement. “Climate change,” he said, “is not going to be an existential risk unless there’s a runaway scenario.”
So why does Tallinn think that climate change isn’t an existential risk? Intuitively, if anything should count as an existential risk it’s climate change, right?
Cynical readers might suspect that, given Tallinn’s immense fortune of an estimated $900 million, this might be just another case of a super-wealthy tech guy dismissing or minimizing threats that probably won’t directly harm him personally. Despite being disproportionately responsible for the climate catastrophe, the super-rich will be the least affected by it.
But I think there’s a deeper reason for Tallinn’s comments. It concerns an increasingly influential moral worldview called longtermism.
The reason Tallinn thinks climate change isn’t an existential risk is that climate change isn’t an existential risk—at least not a significant one. To quote FLI:
An existential risk is any risk that has the potential to eliminate all of humanity or, at the very least, kill large swaths of the global population, leaving the survivors without sufficient means to rebuild society to current standards of living.
Global warming is likely to be very bad, but it will not do that. It is thus not an existential risk. This is not to downplay it—crime, disease, and poverty are all also very bad but not existential risks. Thus, Torres’ first objection can be summarized as:
Premise 1: Longtermism claims that climate change isn’t an existential risk absent a runaway scenario
Premise 2: Climate change is an existential risk absent a runaway scenario
Conclusion: Therefore, longtermism is wrong.
However, both premises are suspect. Finding one longtermist making a claim doesn’t mean it’s the position accepted by all longtermists. And I accept that climate change will likely increase international instability, leading to somewhat greater existential risk even absent cataclysm. However, it is not likely to end the world absent a runaway scenario.
Next, Torres says
At the heart of this worldview, as delineated by Bostrom, is the idea that what matters most is for “Earth-originating intelligent life” to fulfill its potential in the cosmos. What exactly is “our potential”? As I have noted elsewhere, it involves subjugating nature, maximizing economic productivity, replacing humanity with a superior “posthuman” species, colonizing the universe, and ultimately creating an unfathomably huge population of conscious beings living what Bostrom describes as “rich and happy lives” inside high-resolution computer simulations.
The points about space colonization are wrong. A vast number of beings living rich and happy lives would be good—happiness is good, so unfathomable happiness would be unfathomably good. Torres’ concerns about space colonization have been devastatingly taken down here. Putting rich and happy lives in scare quotes doesn’t undermine the greatness of rich and happy lives—ones unfathomably better than any we can currently imagine.
An existential risk, then, is any event that would destroy this “vast and glorious” potential, as Toby Ord, a philosopher at the Future of Humanity Institute, writes in his 2020 book The Precipice, which draws heavily from earlier work in outlining the longtermist paradigm.
Torres has correctly described existential risks.
The point is that when one takes the cosmic view, it becomes clear that our civilization could persist for an incredibly long time and there could come to be an unfathomably large number of people in the future. Longtermists thus reason that the far future could contain way more value than exists today, or has existed so far in human history, which stretches back some 300,000 years. So, imagine a situation in which you could either lift 1 billion present people out of extreme poverty or benefit 0.00000000001 percent of the 10^23 biological humans who Bostrom calculates could exist if we were to colonize our cosmic neighborhood, the Virgo Supercluster. Which option should you pick? For longtermists, the answer is obvious: you should pick the latter. Why? Well, just crunch the numbers: 0.00000000001 percent of 10^23 people is 10 billion people, which is ten times greater than 1 billion people. This means that if you want to do the most good, you should focus on these far-future people rather than on helping those in extreme poverty today. As the FHI longtermists Hilary Greaves and Will MacAskill—the latter of whom is said to have cofounded the Effective Altruism movement with Toby Ord—write, “for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1,000) years, focussing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.”
A few points are worth making.
The correct view will often go against our intuitions. Pointing out that this seems weird isn’t especially relevant.
The ratio of future to current humans is greater than 1 trillion to one, if we accept Bostrom’s assumptions. If there were 2 people and they knew that whether 2 trillion people existed depended on what they did, it would seem pretty intuitive that their primary obligation would be making sure the 2 trillion people had good lives. Presentism and status quo bias, combined with our inability to reason about large numbers, undermine our intuitions here.
The way to improve the future is often to improve the present. A world with war, poverty, disease, and violence will be worse at solving existential threats.
This brings us back to climate change, which is expected to cause serious harms over precisely this time period: the next few decades and centuries. If what matters most is the very far future—thousands, millions, billions, and trillions of years from now—then climate change isn’t going to be high up on the list of global priorities unless there’s a runaway scenario.
This is true, yet hard to see from our present location. Let’s consider past historical events to see if this is really unintuitive when considered rationally. The black plague very plausibly led to the end of feudalism. Let’s stipulate that absent the black plague, feudalism would still be the dominant system, the average income for the world would be 1% of what it currently is, and the average lifespan would be half of what it currently is. In such a scenario, it seems obvious that the world is better because of the black plague. It wouldn’t have seemed that way to people living through it, however, because it’s hard to adopt the perspective of the future.
In the same paper, Bostrom declares that even “a non-existential disaster causing the breakdown of global civilization is, from the perspective of humanity as a whole, a potentially recoverable setback,” describing this as “a giant massacre for man, a small misstep for mankind.” That’s of course cold comfort for those in the crosshairs of climate change—the residents of the Maldives who will lose their homeland, the South Asians facing lethal heat waves above the 95-degree F wet-bulb threshold of survivability, and the 18 million people in Bangladesh who may be displaced by 2050. But, once again, when these losses are juxtaposed with the apparent immensity of our longterm “potential,” this suffering will hardly be a footnote to a footnote within humanity’s epic biography.
Several facts are worth noting.
EAs are working on combating climate change, largely for the reasons Torres describes.
There are robust philosophical arguments for caring overwhelmingly about the far future. Just because future people don’t exist yet doesn’t mean they shouldn’t factor into our moral considerations. This temporal preference is as irrational as the preference for those who are geographically close over those who are far away.
Consider one such argument. (Note, I’ll use utility, happiness, and well-being interchangeably and disutility, suffering, and unpleasantness interchangeably).
1 It is morally better to create a person with 100 units of utility than one with 50. This seems obvious—pressing a button that would make your child only half as happy would be morally bad.
2 Creating a person with 150 units of utility and 40 units of suffering is better than creating a person with 100 units of utility and no suffering. After all, the one person is better off and no one is worse off. This follows from a few steps.
A) The expected value of creating a person with utility of 100 is greater than of creating one with utility zero.
B) The expected value of creating one person with utility zero is the same as the expected value of creating no one. One with utility zero has no value or disvalue to their life—they have no valenced mental states.
C) Thus, the expected value of creating a person with utility of 100 is positive.
D) If you are going to create a person with a utility of 100, it is good to increase their utility by 50 at the cost of 40 units of suffering. After all, 1 unit of suffering’s badness is equal to the goodness of 1 unit of utility, so they are made better off. They would rationally prefer 150 units of utility and 40 units of suffering to 100 units of utility and no suffering.
E) If one action is good and another action is good given the first action, then the conjunction of those actions is good.
These are sufficient to prove the conclusion. After all, C shows that creating a person with a utility of 100 is good, D shows that creating a person with utility of 150 and 40 units of suffering is better than that, so from E, creating a person with utility of 150 and 40 units of suffering is good.
This broadly establishes the logic for caring overwhelmingly about the future. If creating a person with any positive utility and no negative utility is good, and increasing their utility and disutility by any amounts such that their utility increases more than their disutility does is also good, then you should create a person if their net utility is positive. This shows that creating a person with 50 units of utility and 49.9 units of disutility would be good. After all, creating a person with .05 units of utility would be good, and increasing their utility by 49.95 at the cost of 49.9 units of disutility would also be good, so creating a person with 50 units of utility and 49.9 units of disutility is good. Thus, the moral value of increasing the utility of a future person by N is greater than the moral disvalue of increasing the disutility of a future person by any amount less than N.
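A quick numerical sketch of the chain just described (illustrative only, using the numbers from the paragraph above):

```python
# Step 1 (premises A-C): creating a person with a barely positive life is good.
# Step 2 (premise D): a change that adds more utility than disutility makes them better off.
# Step 3 (premise E): the conjunction of the two good acts is good, so the end state is good.
utility, disutility = 0.05, 0.0        # the barely positive starting life
delta_u, delta_d = 49.95, 49.9         # the proposed change

assert delta_u > delta_d               # the person is made better off
utility += delta_u
disutility += delta_d

print(round(utility, 2), round(disutility, 2), round(utility - disutility, 2))   # 50.0 49.9 0.1 -- net positive
```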
Now, let’s add one more stipulation: the moral disvalue of causing M units of suffering to a current person is equal to that of causing a future person M units of suffering. This is very intuitive. When someone lives shouldn’t affect how bad it is to make them suffer. If you could either torture a current person or a future person, it wouldn’t be better to torture the future person merely in virtue of the date of their birth. Landmines don’t get less bad the longer they’re in the ground.
From these we can get our proof that we should care overwhelmingly about the future. We’ve established that increasing the utility of a future person by N is better than preventing any amount of disutility for future people of less than N and that preventing M units of disutility for future people is just as good as preventing M units of disutility for current people. This would mean that increasing the utility of a future person by N is better than preventing any amount of disutility of current people of less than N. Thus, bringing about a person with utility of 50 is morally better than preventing a current person from enduring 48 units of suffering, by transitivity.
The common trope that it’s only good to make people happy, not to make happy people, is false. This can be shown in two ways.
It would imply that creating a person with a great life would be morally equal to creating a person with a mediocre life. This would imply that if given the choice between bringing about a future person with utility of 5 and no suffering or a future person with utility of 50,000 and no suffering, one should flip a coin.
It would say (ironically, given Torres’ pitch) that we shouldn’t care about the impacts of our climate actions on future people. For people who aren’t yet born, climate action will almost certainly change whether or not they exist. If a climate action changes when people have sex by even one second, it will change the identities of the future people that will exist. Thus, when we decide to take climate action that will help the future, we don’t make the future people better off. Instead, we make different future people, who will be better off than the alternative batch of future people would have been. If we shouldn’t care about making happy people, then there’s no reason to take climate action for the future.
These aren’t the only incendiary remarks from Bostrom, the Father of Longtermism. In a paper that founded one half of longtermist research program, he characterizes the most devastating disasters throughout human history, such as the two World Wars (including the Holocaust), Black Death, 1918 Spanish flu pandemic, major earthquakes, large volcanic eruptions, and so on, as “mere ripples” when viewed from “the perspective of humankind as a whole.” As he writes:
“Tragic as such events are to the people immediately affected, in the big picture of things … even the worst of these catastrophes are mere ripples on the surface of the great sea of life.”
In other words, 40 million civilian deaths during WWII was awful, we can all agree about that. But think about this in terms of the 10^58 simulated people who could someday exist in computer simulations if we colonize space. It would require trillions and trillions and trillions of WWIIs one after another to even approach the loss of these unborn people if an existential catastrophe were to happen. This is the case even on the lower estimates of how many future people there could be. Take Greaves and MacAskill’s figure of 10^18 expected biological and digital beings on Earth alone (meaning that we don’t colonize space). That’s still a way bigger number than 40 million—analogous to a single grain of sand next to Mount Everest.
This is true. It is only unintuitive if one considers things from their own perspective rather than from the standpoint of humanity broadly. When considered from the perspective of humanity broadly, it becomes clear that our impacts on whether we go extinct have a bigger impact on the future than the first humans’ actions had on us up until that point. It seems pretty intuitive that the first humans should have avoided getting wiped out, even if they didn’t desire to do so, given the vast positive potential of civilization.
If pushed, the first would save the lives of 1 million living, breathing, actual people. The second would increase the probability that 10^14 currently unborn people come into existence in the far future by a teeny-tiny amount. Because, on their longtermist view, there is no fundamental moral difference between saving actual people and bringing new people into existence, these options are morally equivalent. In other words, they’d have to flip a coin to decide which button to push. (Would you? I certainly hope not.) In Bostrom’s example, the morally right thing is obviously to sacrifice billions of living human beings for the sake of even tinier reductions in existential risk, assuming a minuscule 1 percent chance of a larger future population: 10^54 people.
Torres just goes on and on about how unintuitive this is, without giving any arguments against it. Remember: we are but the tiniest specks from the standpoint of humanity. If humanity were a person’s lifetime, we wouldn’t even be the first hour. So, much like it makes sense to plan for the future (to delay an hour to reduce the risk of dying by a small amount), it makes sense to undergo sacrifices (if they were necessary to reduce existential risks, which they aren’t really!) to improve the quality of the future.
Additionally, even if one is a neartermist, they should still be largely on board with reducing existential threats. Ord argues in his book that the risk of extinction is 1 in 6, Bostrom concludes risks are above 25%, Leslie concludes they’re 30%, and Rees says they’re 50%. Even if they’re only 1% (a dramatic underestimate), that still means existential risks will in expectation kill 79 million people, many times more than the Holocaust. Thus, existential risks are still terrible if one is a neartermist, so Torres should still be on board with the project.
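For the curious, the arithmetic behind that 79 million figure (a sketch assuming a present world population of roughly 7.9 billion, which is what the figure implies):

```python
# Expected deaths from a 1% chance of extinction (assumes ~7.9 billion people alive today).
world_population = 7.9e9
extinction_risk = 0.01                                # the deliberately low 1% figure from the text
print(f"{extinction_risk * world_population:,.0f}")   # 79,000,000
```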
If this sounds appalling, it’s because it is appalling. By reducing morality to an abstract numbers game, and by declaring that what’s most important is fulfilling “our potential” by becoming simulated posthumans among the stars, longtermists not only trivialize past atrocities like WWII (and the Holocaust) but give themselves a “moral excuse” to dismiss or minimize comparable atrocities in the future.
Longtermists don’t dismiss such atrocities. We merely say that the future is overwhelmingly important. Pointing out that there are things that are much bigger than the Earth doesn’t dismiss the size of the Earth—it just points out that there are other things that are bigger.
If future people matter at all, even if their well-being matters only .001% as much as that of current people, the far future would still dominate our moral considerations. We live in a morally weird world—one which makes our intuitions often unreliable. If our intuitions lead to the conclusion that we should ignore the billions of years of potential humans in favor of caring only about what affects current people, then our intuitions have gone wrong. It would be awfully suspicious if the correct morality happened to justify caring unfathomably more about what happens in the 21st century than in the 22nd, 23rd, 24th…through the 10 billionth century.
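To make the .001% point concrete, here is a rough illustration of my own (using the 10^18 future-beings figure quoted from Greaves and MacAskill above, and assuming roughly 10^10 people alive today):

```python
# Even a severe discount on future welfare leaves the far future dominant (illustrative sketch).
future_people = 1e18         # the Greaves and MacAskill figure quoted earlier in the post
current_people = 1e10        # assumption: rough order of magnitude of the present population
discount = 0.001 / 100       # future well-being weighted at only .001% of present well-being

weighted_future = future_people * discount
print(weighted_future / current_people)   # ~1000 -- the far future still carries about a thousand times the weight
```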
Torres has no argument against this thesis—all he has is a series of cantankerous squawks of outrage. His intuitions are not surprising and are unlikely to be truth-tracking. Americans care much more about American domestic policy, even though America’s effect on other countries is much more significant than the impact of our policies domestically. Given that we cannot talk to future people, it’s very easy to prioritize visible suffering. This is, however, unwise. There is no way to design a successful version of population ethics that does not care about the existence of 10^52 future people with excellent lives.
EAs don’t dismiss atrocities in the future. Every single historical atrocity has come from excluding beings from our moral circle. Caring about everyone is what has prevented atrocities—not ignoring 10^52 possible sentient beings. Utilitarians like Bentham and Mill have tended to oppose such atrocities. Bentham defended the permissibility of homosexuality in the 1700s. Thus, it’s Torres’ non-utilitarian nonsense—one which goes against every plausible view of population ethics—that is at the root of historical evils.
This is one reason that I’ve come to see longtermism as an immensely dangerous ideology. It is, indeed, akin to a secular religion built around the worship of “future value,” complete with its own “secularised doctrine of salvation,” as the Future of Humanity Institute historian Thomas Moynihan approvingly writes in his book X-Risk. The popularity of this religion among wealthy people in the West—especially the socioeconomic elite—makes sense because it tells them exactly what they want to hear: not only are you ethically excused from worrying too much about sub-existential threats like non-runaway climate change and global poverty, but you are actually a morally better person for focusing instead on more important things—risk that could permanently destroy “our potential” as a species of Earth-originating intelligent life.
Several points are worth making.
If people were looking for a philosophical excuse for not helping others, they’d choose objectivism. What type of person wants to take action on reducing existential threats so badly that they come up with a philosophical rationalization to justify it? People want to justify inaction—not action on a weird issue relating to the far future.
Longtermism is largely part of EA—a movement specifically built around using resources to do as much good as we can. One has to be delusional to think Ord or MacAskill is using longtermism as an excuse for not helping others, particularly when they donate everything they earn above roughly 35,000 dollars per year.
Even if one only cares about current people, reducing existential risks is still overwhelmingly important.
Calling it a religion is not an argument—it’s just invective.
To drive home the point, consider an argument from the longtermist Nick Beckstead, who has overseen tens of millions of dollars in funding for the Future of Humanity Institute. Since shaping the far future “over the coming millions, billions, and trillions of years” is of “overwhelming importance,” he claims, we should actually care more about people in rich countries than poor countries. This comes from a 2013 PhD dissertation that Ord describes as “one of the best texts on existential risk,” and it’s cited on numerous Effective Altruist websites, including some hosted by the Centre for Effective Altruism, which shares office space in Oxford with the Future of Humanity Institute. The passage is worth quoting in full:
Notice that there is no argument given by Torres against this conclusion. He just complains about unintuitive conclusions, particularly those that run afoul of social justice taboos, rather than offering real arguments. I’ve argued against Torres’ view here.
Additionally, EA’s actions on global health and development are entirely about improving the quality of life in poor countries. Thus, even if saving the life of a person in a rich country is intrinsically more important than saving the life of a person in a poor country, the best actions to take in terms of saving lives in the short term will be about saving lives in poor countries.
Never mind the fact that many countries in the Global South are relatively poor precisely because of the long and sordid histories of Western colonialism, imperialism, exploitation, political meddling, pollution, and so on. What hangs in the balance is astronomical amounts of “value.” What shouldn’t we do to achieve this magnificent end? Why not prioritize lives in rich countries over those in poor countries, even if gross historical injustices remain inadequately addressed? Beckstead isn’t the only longtermist who’s explicitly endorsed this view, either. As Hilary Greaves states in a 2020 interview with Theron Pummer, who co-edited the book Effective Altruism with her, if one’s “aim is doing the most good, improving the world by the most that I can,” then although “there’s a clear place for transferring resources from the affluent Western world to the global poor … longtermist thought suggests that something else may be better still.”
Torres pointing out irrelevant historical facts and putting scare quotes around value is, once again, not an argument. The Greaves quote describes why longtermists should focus mostly on longtermist causes rather than on combating global poverty, which is plausible even if we accept short-termism. Thus, Torres lies about what Greaves believes. He additionally fails to pinpoint any action being taken toward this end that he’d actually disagree with.
The reference to AI, or “artificial intelligence,” here is important. Not only do many longtermists believe that superintelligent machines pose the greatest single hazard to human survival, but they seem convinced that if humanity were to create a “friendly” superintelligence whose goals are properly “aligned” with our “human goals,” then a new Utopian age of unprecedented security and flourishing would suddenly commence. This eschatological vision is sometimes associated with the “Singularity,” made famous by futurists like Ray Kurzweil, which critics have facetiously dubbed the “techno-rapture” or “rapture of the nerds” because of its obvious similarities to the Christian dispensationalist notion of the Rapture, when Jesus will swoop down to gather every believer on Earth and carry them back to heaven. As Bostrom writes in his Musk-endorsed book Superintelligence, not only would the various existential risks posed by nature, such as asteroid impacts and supervolcanic eruptions, “be virtually eliminated,” but a friendly superintelligence “would also eliminate or reduce many anthropogenic risks” like climate change. “One might believe,” he writes elsewhere, that “the new civilization would [thus] have vastly improved survival prospects since it would be guided by superintelligent foresight and planning.”
Once again Torres has no argument—he just has slogans. Nothing that he has said should change anyone’s assessment of AI existential risks. The experts who have considered the issue rather than sneering at it tend to be pretty worried.
Torres’ article levies bad critiques, takes things out of context from hundred page books, and flagrantly misrepresents many points. It is not a serious objection to longtermism.
Addressing A Bad Objection To Utilitarianism
Yes, we can compare utility--even interpersonally.
One common yet poor objection to utilitarianism claims that we cannot make interpersonal comparisons of utility, so utilitarianism fails because it relies on an unmeasurable metric. This objection is nonsense.
For one, we frequently do make interpersonal comparisons of utility. We often make judgments that people tend to be worse off when they’re in poverty, subject to bombing, and when affected by pollution. Being precise about such matters poses difficult empirical questions, but interpersonal comparisons of utility are required in all political decisions.
If our set of values is to be consistent, we need to have a coherent utility function. Suppose one holds that one person being in poverty is less bad than one person dying, but refuses to give a number of people in poverty whose collective moral weight equals that of one death. Their view runs into a problem. Suppose one has the ability either to lift twenty people out of poverty or to prevent one death. In that situation, they have to come to a decision about whether preventing a death is more important than pulling twenty people out of poverty. If they would rather prevent the one death, then their utility function values preventing a single death more than pulling twenty people out of poverty. If they’d rather prevent twenty people from being put into poverty, the opposite would be true. If their answer is one of neutrality, then they would be indifferent between those two options. However, if their view is neutral between those two, then they must judge preventing twenty-one people from being in poverty as more important than preventing one death. If they remain indifferent even in the case of twenty-one people, then they judge preventing twenty people from being in poverty to be morally equivalent to preventing one death, which they judge to be morally equivalent to preventing twenty-one people from being put into poverty. Thus, by transitivity, they’d be indifferent between preventing twenty-one people from being in poverty and preventing twenty. This is an implausible view.
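Put schematically (my notation: write $D$ for preventing one death, $P_n$ for preventing $n$ people from being in poverty, $\sim$ for moral indifference, and $\succ$ for "morally better than"):

```latex
\big(D \sim P_{20}\big) \;\wedge\; \big(D \sim P_{21}\big)
\;\Longrightarrow\; P_{20} \sim P_{21} \quad \text{(by transitivity)},
\qquad \text{yet clearly } P_{21} \succ P_{20}.
```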
As has been well documented by economists, as long as one’s judgments meet certain minimal standards of rationality, they must be able to be modeled as optimizing some utility function. Thus, in order for a moral system to be robustly rational, it must make certain judgments about utility. If other moral systems cannot be modeled by a utility function, that means they are false.
To see this with a simple example: it is clear that setting someone on fire is considerably morally worse than shoving someone. However, it is not infinitely worse. A coherent utility function would assign some ratio N of the disutility caused by setting someone on fire to the disutility caused by shoving someone, such that one would be indifferent between a 1/N chance of setting someone on fire and a certainty of shoving someone.
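In symbols (a sketch, writing $D_{\text{fire}}$ and $D_{\text{shove}}$ for the two disutilities):

```latex
N = \frac{D_{\text{fire}}}{D_{\text{shove}}}
\quad\Longrightarrow\quad
\frac{1}{N}\, D_{\text{fire}} \;=\; D_{\text{shove}},
```

so a 1/N chance of the worse harm carries the same expected disutility as the certainty of the lesser one, which is the indifference point described above.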
Additionally, given that utility here describes a type of mental state, there’s nothing problematic about making interpersonal comparisons of utility. Much like it would be possible in theory to judge whether or not a particular action increases the total worldwide experience of the color red, the same is true of utility. A way to conceptualize the metric is to imagine that you will experience every single thing that will be experienced by anyone, and then act as if you were maximizing the expected quality of your experiences.
When it is conceptualized this way, it is evidently conceptually coherent. Judgments of the collective experience are logically no different from judgments of one's own experience. People very frequently make judgments about whether or not particular actions would increase their own happiness--for example when they decide upon a job, college, or choice of vehicle.
Additionally, every plausible theory will hold that the considerations of utilitarianism are true, except in particular cases. Every plausible moral view holds, for example, that if you could benefit one of two people of similar moral character, and you could benefit one of them more than the other, you should benefit the one that you could benefit more. Thus, issues surrounding evaluating consequences would plague all plausible moral theories.
Lots Of Bad Things Are Like Drunk Driving
Clearing up a confusion about utilitarianism
Content Warning: Sexual Assault
Consider the question: is drunk driving always bad? In a trivial sense, no—there have no doubt been some cases in history in which a person was slightly drunk but needed to drive, for example, to rush a wounded person to a nearby hospital. However, this question is really asking about negligent drunk driving, where someone drives drunk for no greater purpose, posing significant risks.
In this case, there seems to be a clear distinction that needs to be made between being bad and being wrong. Drunk driving for trivial gains is always wrong: it poses dramatic risks for little benefit. However, drunk driving isn’t always bad—if it harms no one, then it was an unnecessary risk, but it ended up not being bad.
Thus, we need to be clear about the distinction between badness and wrongness. An action can be called wrong if it shouldn’t have been done by the agent, knowing what they knew at the time. On the other hand, an action is bad if it ends up causing more harm than benefits, such that it would be better for it never to have taken place. Hitler’s grandmother having sex was bad, but it wasn’t necessarily wrong.
With this distinction in place, many of the counterexamples to utilitarianism dissipate. Worries that utilitarianism says failed attempted murders aren’t wrong turn out to be mistaken when wrongness is understood this way. It could turn out that such attempts don’t end up being bad. However, they are always bad in expectation, such that they’re the type of thing that ought not be done. Oftentimes, our judgments going against utilitarianism rely on oversimplifications of the real world, such that we imagine a more realistic real-world stand-in in place of the bizarre stipulations of the hypothetical.
Consider the case of the sexual abuse of a comatose patient who will never find out. In any realistic situation, there will always be some risk that the person will be harmed. First of all, there’s a high chance that you will be discovered sexually assaulting them. To rule this out, God himself would have to appear to you and guarantee that you won’t be found out. Second, there’s a high chance of inducing pregnancy or spreading an STD. Thus, there needs to be some 100% safe method of having sex that leaves no risk of any harm. Third, there must be total certainty that the person will never find out. Fourth, there must be a guarantee that it won’t have any negative effect on your character—something that would never be true in a realistic situation. Realistically, sexually assaulting a comatose patient would make someone a worse person. Fifth, there must be some absolute guarantee that you will never feel guilty about it. Sixth, there must be some absolute guarantee that violating the norms against such acts in this case won’t spill over to other acts. Seventh, there must be a guarantee that the subjective experience of every single person except you will be no different if you take the act—so no one else’s mental states will be any different. In such a bizarre case, involving divine revelation, the removal of guilt and other negative character effects, and much else besides, would we really expect our intuitions to translate over? Especially because taking such an act would be clear evidence of vicious character—something our moral intuitions tend to find objectionable.
This case is thus much like the drunk driving case. Drunk driving is wrong because it’s harmful in expectation—you shouldn’t do it. However, if we somehow stipulate with metaphysical certainty that no one will be harmed by your drunk driving, then it becomes unobjectionable. The act will still be morally wrong in any realistic situation—it just may not end up being bad. To be bad, it must be bad for someone.
To show the intuition, imagine if every second aliens were constantly sexually assaulting all humans, in ways that the humans never found out about. The aliens commit about 100^100^100 sexual assaults per second. In the absence of any sexual assault, the aliens would subjectively experience the sum total of all misery experienced during the Holocaust every single second. However, each assault (which no one ever finds out about) reduces their misery by a tiny amount, such that if they commit the 100^100^100 assaults per second, their subjective experience will be equivalent to the sum total of all human happiness ever experienced, experienced every second. In this case, the marginal utility from each assault is very small. Yet it seems quite intuitive that the aliens’ actions would be permissible. When it’s sufficiently divorced from the real world that we understand clearly the ways in which the act is optimific, it becomes quite clear that the act can be, in certain bizarre counterfactual cases, permissible.
Thus, we need to be much clearer when conceptualizing thought experiments. Sometimes bad things aren’t wrong and wrong things aren’t bad. When we mix those up, we get false beliefs and confusion. Lots of wrong things end up not being bad—in most cases drunk driving probably doesn’t cause harm, but it still shouldn’t be done, because it causes harm in expectation.
Utilitarianism Wins Outright Part 26
A Sidgwick Inspired Proof
Suppose we accept the following principles.
1. Morality describes what we’d do if we were fully rational and impartial.
2. If we were fully rational, we’d regard all moments of existence as equal, independently of when they occur.
3. If we were fully impartial, we’d regard benefits for all beings as equally important intrinsically.
4. If we were fully rational and impartial, we’d only care about things that make beings better off.
5. Only desirable mental states make people better off.
These are sufficient to derive utilitarianism. If we should only care about mental states, care about all mental states equally, care about all people equally, and maximize desirable mental states, that is just utilitarianism.
Premise 1 was defended here.
Premises 2 and 3 were defended extensively by (Singer and Lazari-Radek, 2014). Premises 4 and 5 were defended here and here.
Premise 2 seems to follow straightforwardly from rationality. If one is rational, they wouldn’t regard the time at which an event happens as relevant to how good it is. We think it foolish, for example, to procrastinate, to ignore one’s pains on a future Tuesday merely in virtue of their being on a Tuesday, and in other ways to care about the time at which particular events take place.
(Williams, 1976, p.206-209) objects to this notion, writing
“The correct perspective on one's life is from now.”
Williams claims that it’s rational to do what we currently desire to do, regardless of whether it would harm us later. However, as (Singer and Lazari-Radek, 2014) potently object, this would lead to it being possible for one to make fully rational decisions that they predict they will regret, ones that artificially discount the future. This seems absurd. Similarly, if a person endures infinite suffering tomorrow to avoid a pinprick now, that seems clearly irrational.
Singer and Lazari-Radek go on to describe another view, espoused by many, according to which the end of one’s life matters more. This argument usually involves appealing to the intuition that it’s worse for a life to start out good but then end badly than for the opposite to occur, even if the lives are equally good. They argue against this intuition, pointing out that one primary reason this seems true is that a life that starts out good before turning bad will in general be a worse life: the people will, in their old age, be disappointed about all that they’ve lost. However, if we stipulate that the quality of the two lives is exactly identical, the intuition becomes harder to maintain.
Several other objections can be given to this view. One of them is that it has strange implications about the significance of when experiences occur in a life. Suppose that one is born with memories of the end of their life, but at the end of their life they lose their memories of the earlier parts of their life. In that case, it seems like the earlier part of their life is more important, because in that part they remember life being unpleasant and appreciate the improvement. The point in time at which their life takes a turn for the better seems less important than facts about whether they remember the turn for the better, and that’s best explained by hedonism.
Additionally, according to the B theory of time, there is no objective present. Each point in time is a point on a four-dimensional spacetime block--there is no objective now. This would mean that while some points causally precede others, there is no objectively real earlier-than relation. The PhilPapers survey shows that the B theory of time is the consensus view among philosophers who work on time. Thus, it’s not clear that the earlier-than relation is ontologically real.
Even if it is, it only seems to matter if it affects experience. Imagine a scenario in which the world in the year 4000 is exactly the same as the world in the year 5000. Consider a scenario in which a person is created in the year 5000 as a full 60-year-old adult. They have memories of a previous life from before they were 60, but those memories were falsely implanted, given that they did not exist prior to being created at 60. Prior to their being created at 60, a philosophical zombie filled in for them.
Additionally, the person is created in the year 4000 as an infant and experiences the first 60 years of life, before disappearing and being replaced by a philosophical zombie. It seems intuitive that it’s more important for the 60-year-old to have a good life. However, the 60-year-old was created later in time and is technically younger. This scenario is analogous to the 60-year-old skipping the first 60 years of its life, going into cryogenic sleep, and then awakening with its aging reversed to the age of 1.
Such scenarios show that the precise temporal point at which experiences happen doesn’t matter. Rather, the significant thing is how those experiences relate to other experiences. Yet that is a hedonistic consideration.
Premise 3 seems to follow straightforwardly from impartiality. From a fully impartial view, there is no reason to privilege the good of anyone over the good of any other.
(Chappell, 2011) argues for value holism, according to which the value of lives should be judged as a whole, rather than merely by adding up the value of each moment. He first argues that directional trends matter--a claim addressed above.
Next (p.7) he cites Kahneman’s research, finding that people will often prefer additional pain as long as the experience ends less unpleasantly. People judge 60 seconds of very painful cold water followed by 30 seconds of less painful cold water to be less unpleasant than merely 60 seconds of very painful cold water. However, the fact that people retrospectively judge the longer experience to be less unpleasant does not mean that it is in fact less unpleasant. Additionally, if asked during the experience, people would clearly prefer not to have to undergo the extra 30 seconds.
Chappell responds to this (p.8), writing,
“Yet when making an overall judgment from ‘above the fray’, so to speak, the subjects express a conflicting preference, and merely noting the conflict does not tell us how to resolve it. As a general rule, we tend to privilege (reflective) global preferences over (momentary) local ones: such a hierarchy is, after all, essential for the exercise of self-control.”
This is true in general. However, some judgments can be unreliable. The judgment that extra pain makes an experience better seems very plausibly a result of biases, as people privilege the end of an experience over the beginning. The Kahneman research seems more like a debunking of the judgments Chappell appeals to.
Next, Chappell says (p.9)
“But for this to qualify as independent evidence of factual error, we must assume that subjects were interpreting ‘overall discomfort’ to mean ‘aggregate momentary discomfort’. This seems unlikely. It’s far more plausible to think that subjects were simply reiterating their holistic judgment that the longer trial was less unpleasant on the whole. So these considerations leave us at a dialectical impasse.”
Additionally, people are often unaware of their motivations and introspection is often unreliable (Schwitzgebel, 2008). We thus shouldn’t be overly deferential to people’s judgments of their own experience.
The principles Chappell appeals to are far less intuitive than the notion that previous events which are no longer causally efficacious cannot affect the goodness of a present experience. For example, if I spawned last Thursday with full memories, it would seem unintuitive that this undermines the value of my future experiences, even if they serve the same functional role.
In the case given by Kahneman, imagine that one was brought into existence after the minute of pain, with full memories as if they’d experienced the minute of pain, despite never having done so. In that case, it seems like it would be better for them to endure no suffering, rather than to endure the extra 30 seconds of suffering. To accommodate this intuition while retaining value holism, one would have to say that one of the experiences somehow changes the badness of the other one, in a way not achieved if it were replaced by a memory that’s functionally isomorphic--at least when it comes to their evaluation of the other experience.
(McNaughton & Rawling, 2009) additionally argue against this view, while still maintaining a version of value holism, arguing that the value of a collection of experiences can produce more momentary experience than the value of each part of it. They give (p.361) the following analogy.
“An analogy might help in drawing the distinction between our position and Moore’s. One might think of a state of affairs as in some ways like a work of art—say, Michelangelo’s David. (Moore discusses the value of a human arm, and our discussion here will draw on this to some extent.) On both our account and Moore’s, the value of David is significant. For Moore, however, it might well be that its entire value is its value as a whole. On this account of matters, any part taken in isolation (David’s nose, say) has zero value. We agree that any part taken in isolation has zero value—but we contend that this way of valuing the parts is simply irrelevant to the evaluation of the statue. Rather, the relevant value of David’s nose is the value of its contribution to the statue. Perhaps David’s hand contributes more than his nose, in which case the value of the former is more than the value of the latter.”
An additional worry with value holism is that it requires strong emergence--positing that the whole value of an experience is not reducible to its parts. Rather, its parts take on a fully different property when combined--a property that is not merely the collection of a variety of parts operating at lower levels. As (Chalmers, 2006) argues, we have no clear examples of strong emergence. The only potentially strongly emergent phenomenon is consciousness, which means we have good reason to doubt any theory that posits strong emergence. If value holism requires positing a property that exists nowhere else in the universe, that makes it extraordinarily implausible.
(Alm, 2006) presents an additional objection to value holism--defending atomism, writing (p.312)
“Atomism is defined as the view that the moral value of any object is ultimately determined by simple features whose contribution to the value of an object is always the same, independently of context.” Over the course of the article, (p.312) “Three theses are defended, which together entail atomism: (1) All objects have their moral value ultimately in virtue of morally fundamental features; (2) If a feature is morally fundamental, then its contribution is always the same; (3) Morally fundamental features are simple.”
The first premise is relatively uncontroversial. There are certain basic features that confer moral significance. Nothing other than those features could, even in principle, confer value on a state of affairs.
The second premise is likewise obvious. For a feature to be significant in one case but not in another, there would have to be some further element that varies across the cases. But then the genuinely fundamental feature is the original feature together with that element, and its contribution does not vary across cases. The rule “donate to charity only if it maximizes well-being” doesn’t vary across cases, because the rule is always the same, even though its application varies from case to case.
The third premise defines simple features as ones not composed of simpler parts. Several reasons are given to think the fundamental features will be simple (p. 324-326).
We have general prima facie reason to expect explanations to be simple, since simplicity is a theoretical virtue. A more complex account that has to posit more things is intrinsically less likely. In physics, for example, we take there to be strong reason to reject as fundamental complex features that operate at the level of organisms rather than at the level of fundamental physics.
Positing new properties gained when fundamental properties are combined involves immense metaphysical weirdness. It seems hard to imagine that pleasure would have no value and knowledge no value, but that pleasure and knowledge gain extra value when they combine--in virtue of no fact about either of them.
Utilitarianism Wins Outright Part 27
A Few More Cases That Other Theories Struggle To Account For
Introduction
Utilitarianism, unlike other theories, gives relatively clear verdicts about a variety of cases. This makes it easily criticizable—one can very easily find seemingly unintuitive things it says. Other theories, however, are far vaguer, making it harder to figure out exactly what they say.
There are lots of situations that utilitarianism adequately accounts for that other theories can’t account for. I’ve already documented several of them. I shall document more of them here.
1 Children
Consequentialism provides the only adequate account of how we should treat children. Several actions are widely regarded as justifiable when done to children, yet not when done to adults.
Compelling them to do minimal forced labor (chores).
Compelling them to spend hours a day at school, even if they vehemently dissent and would like to not be at school.
Forcing them to learn things like multiplication, even if they don’t want to.
Forcing them to go to bed when their parents think it will make things go best, rather than when they want to.
Not allowing them to leave your house, however much they protest.
Disciplining them in ways that cause them to cry, for example putting them on time-out.
Controlling the food they eat, who they spend time with, what they do, and where they are at all times.
However, lots of other actions are justifiable to do with adults, yet not with children.
Having sex with them if they verbally consent.
Not feeding them (i.e. one shouldn’t be arrested for failing to feed a nearby homeless person, but one should be for failing to feed one’s children. Not feeding one’s children is morally worse than beating them, while the same is not true of unrelated adults.)
Employing them in damaging manual labor.
Consequentialism provides the best account of these obligations. Each of these obligations makes things go best, which is why they apply. Non-consequentialist accounts have trouble with these cases.
One might object that children can’t consent to many of these things, which makes up the difference. However, consent fails to provide an explanation. It would be strange to say, for example, that what makes it permissible to stop a child from leaving your house is their lack of consent: after all, they positively object to being kept inside. Children are frequently forced to do things without consent, like learn multiplication, go to school, and even not put their fingers in electrical sockets. Thus, any satisfactory account has to explain why their inability to consent rules out only some of these things.
2 War
Consequentialism also provides the best account of when war is justified. Usually, it’s immoral to kill innocent people. However, in war there is an exception.
(Lazar 2016) provides an account based in just war theory of when war is justified. However, Lazar’s criteria are bizarre, ad hoc, and clearly derivative of more basic principles. Utilitarianism provides a much better account of just war.
On utilitarianism, war is justified if it maximizes desirability of mental states of sentient beings. If it makes sentient beings' lives better overall, then war is justified.
Lazar lays out several necessary criteria of just war.
“1. Just Cause: the war is an attempt to avert the right kind of injury.”
This diverges slightly from utilitarianism in two ways, which make the utilitarian account clearly better. First, the utilitarian account does not take into account intentions. If an actor achieves a desirable end, even if their aim was ignoble, the war would be good overall. For example, even if the U.S. decision to intervene in World War Two was driven by bad motivations, that would not prevent U.S. intervention from making the war good overall by decreasing the length of the holocaust and the war.
Second, the utilitarian account is proportional. Rather than saying that wars can only be justified to avert the right kind of injury, it would say that wars can only be justified if the benefits of the war outweigh the costs. Thus, even if the injury were very great, if the costs of the war were greater, war would not be justified. This is rather intuitive: if combatting a genocide going on in China would result in nuclear annihilation of the world, intervention would not be desirable. Similarly, it says that the bar for justifying a more modest intervention is much lower. This is also intuitive: if preventing several hundred thousand deaths required only a drone strike that cost a few hundred lives, intervening would be good. Rather than drawing arbitrary lines for how great the atrocity has to be to justify an intervention, utilitarianism rightly holds that the analysis should compare the expected benefits and costs of intervention.
When justifying this principle, Lazar appeals to the immense harms of war. However, analyzing the harms of war to justify treading lightly when it comes to war appeals to consequentialism, because it uses the negative consequences as a justification for being hesitant about going to war.
Next, Lazar says “Legitimate Authority: the war is fought by an entity that has the authority to fight such wars.”
The utilitarian account is once again better here. First, as I argued in part 1 of the chapter, utilitarianism provides the only adequate account of political authority. Second, it seems clear that the authority of the entity going to war is not relevant for evaluating the desirability of a war. If an illegitimate criminal enterprise went to war against the Nazis and thereby quickly ended the holocaust, the war would still be desirable, despite the enterprise’s illegitimacy. Third, this standard is clearly ad hoc. Utilitarianism provides the best grounding for the otherwise arbitrary list of requirements for war to be justified.
Lazar next says “Right Intention: that entity intends to achieve the just cause, rather than using it as an excuse to achieve some wrongful end.”
Intention is irrelevant, as I argued above. It’s also unclear how we ascribe intentions to a government. Different people in the government no doubt have different intentions relating to war. There is no single intention behind the decision to intervene.
Lazar’s fourth criterion is “Reasonable Prospects of Success: the war is sufficiently likely to achieve its aims.”
Once again, consequentialism provides a better account. First, even if a war isn’t likely to achieve its aims, if it ends up producing good outcomes, consequentialism explains why it’s justified. If an intervention intended to bring democracy to a region fails to do so, but ends up saving the world, that intervention would be good overall. Second, even if a war is unlikely to achieve its aims, if its aims are sufficiently important, the war can still be justified. If a war has a 3% chance of saving the world, it would be justified, despite being unlikely to succeed. Third, a war only gains justification from being likely to achieve its aims if those aims are desirable. Fourth, consequentialism provides an adequate account of why likelihood of success matters: wars that are likely to produce good outcomes bring about better consequences.
Fifth, Lazar says “Proportionality: the morally weighted goods achieved by the war outweigh the morally weighted bads that it will cause.”
This is true, yet perfectly in accordance with the consequentialist account. This criterion is essentially that war is justified only if the expected benefits outweigh the costs, which is identical to the utilitarian account of when war is justified.
Sixth, Lazar says “Last Resort (Necessity): there is no other less harmful way to achieve the just cause.”
This criterion is ambiguous. If the claim is merely that war should only be waged if it’s the best option, and that if other options are better they should be taken instead, that’s clearly true, and it’s accounted for by utilitarianism. However, the mere existence of less harmful ways of achieving the aim of war wouldn’t affect the morality of the war. If intervening would be desirable, but sanctions would be even more desirable, that’s a reason to choose sanctions if one has the option of selecting either sanctions or war. However, if one does not have the option of imposing sanctions, the existence of potentially desirable sanctions wouldn’t affect the desirability of going to war.
Lazar goes on to provide necessary criteria for the conduct practiced in war to be justified, even if the war itself is justified.
“Discrimination: belligerents must always distinguish between military objectives and civilians, and intentionally attack only military objectives.”
This is a good heuristic, but the utilitarian account is better. If intentionally targeting one enemy civilian would end a war and save millions of lives, doing so would be justified.
Additionally, this criterion seems to draw a line between intentionally targeting civilians and harming them as collateral damage. Utilitarianism adequately explains why that distinction is morally irrelevant. If given the choice between killing 100 enemy civilians as collateral damage or 10 intentionally, utilitarianism explains why killing the 10 would be preferable.
Second, Lazar says “Proportionality: foreseen but unintended harms must be proportionate to the military advantage achieved.”
Utilitarianism explains why this is false. First, it explains why there’s no morally relevant difference between a foreseen but unintended harm and an intended harm. It would be better to kill 10 people intentionally than 15 people in a way that’s foreseen yet not intended. Second, it explains why harm to civilians that doesn’t achieve a military advantage can still be desirable. If, for example, bombing of 15 civilians prevented 500 extra civilian deaths, even if no military advantage was achieved, the bombing would still be justified. Third, utilitarianism provides an account of why the harms and benefits should be proportional in war.
Third, Lazar says “Necessity: the least harmful means feasible must be used.”
Utilitarianism can account for why the least harm should be caused. However, it also accounts for why it can be good for an action to happen even if it’s not the least harmful. Even if there’s a better option than bombing people, if bombing people makes the world better, utilitarianism explains why it’s still justified.
3 Pollution
Some types of pollution should clearly be prohibited. (Slaper et al, 2013) argues that the Montreal Protocol, for example, prevented millions of cases of skin cancer per year. It did this at a relatively low cost. Most would agree that if a policy produced minimal economic harm and prevented millions of cases of cancer every year, it would be good overall.
However, some types of pollution clearly shouldn’t be prohibited. When a person lights a candle, the ash goes into the air, affecting people for miles around. When people exhale they release a bit of CO2, without the consent of anyone else. Consequentialism provides the best account of when pollution is justified.
Any time someone pollutes, they affect people who do not consent. Pollutants are breathed in by those who do not consent. However, it only makes sense to ban pollutants that are actually harmful. Consequentialism best accounts for this fact.
It seems clear that a pollutant should be banned only if it’s harmful. (Lewis, 2015) argues that PFAS pollutants are quite common, affecting vast numbers of people. Whether or not it makes sense to ban or regulate PFAS seems to hinge on whether or not they are harmful. If it turned out that PFAS were not harmful, there would be no rationale for banning them. Other theories have difficulty accounting for why a pollutant’s being harmful is the necessary and sufficient condition for it being ban-worthy. Indeed, the importance of banning a pollutant scales proportionally to how much harm the pollutant does.
4 Political Authority
(Huemer, 2013) has argued persuasively that government action looks suspiciously like theft. After all, governments take our money without consent. That tends to be how most people define theft. However, despite this, the notion that taxation is good if it produces greater overall benefits is quite intuitive. Thus, utilitarianism gives the best account of why political authority is not necessarily objectionable. If the government takes money to spend it to save the life of someone else, that is overall good.
Huemer considers a case wherein a person kidnaps people and puts them in their basement whenever they commit a crime, while demanding payments from people. This would clearly be unjust. However, a government is structurally similar, demanding payment without consent and imprisoning people. What would be the morally relevant difference between those two cases? The utilitarian can supply an adequate account: bands of roving vigilantes do not bring about good outcomes, while a state (plausibly) does. Theft causes harm in a way that taxation does not.
One might object that the legitimacy of political authority comes from implicit consent. However, Huemer explains why this does not work. For consent to be valid, people have to be able to opt out reasonably, explicit dissent has to trump implicit consent, consent has to be voluntary, and obligations have to be both mutual and conditional. Political authority lacks all of these features. One cannot easily opt out of government. Huemer quotes Hume, who said “We may as well assert that a man, by remaining in a vessel, freely consents to the dominion of the master; though he was carried on board while asleep, and must leap into the ocean, and perish, the moment he leaves her.”
Pushing The Guy Off The Bridge And Flipping The Switch Aren't Morally Different By More Than An Arbitrarily Small Finite Amount Of Utility
Assuming modest assumptions
People often hold two judgments which appear to be in tension with each other. In the case of the trolley problem, where there is a train going towards 5 people unless you flip the switch to kill one person, people think you should flip the switch. However, in a different case, where the only way to stop a train from killing 5 people is to push one person off a bridge, people tend to think you shouldn’t push the guy. As I shall argue here, the moral equivalence is not merely superficial—those two actions are not morally different from each other. One’s judgment should thus be consistent across the cases.
This follows from a very modest Pareto principle, which says that if one option makes all affected parties better off than another option does, it is the better option. This is a rather plausible principle.
Next, consider the following case. One person is on a track over a bridge. A train is coming. There is a different track that leads up to the person over the bridge. There are two ways of stopping the train.
Flip the switch to redirect the train to the one person above.
Push the person onto the track. Doing so would have the additional effect of very very slightly increasing the utility of both the person pushed and of all of the other people on the tracks, relative to redirecting the train.
It seems obvious that the better option is option 2. Option 2 is better than option 1 for all of the affected parties. However, option 2 is only an arbitrarily small amount better than the ordinary case in which one pushes the person off the bridge. Thus, ordinary pushing, which is only an arbitrarily small amount worse than option 2, can be worse than flipping the switch by at most an arbitrarily small amount.
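A minimal formalization of the argument may help (the value function V, the labels push⁺ for option 2 and push for the ordinary bridge case, and the small increment δ are all my own notation):

\[
V(\text{push}^{+}) > V(\text{switch}) \quad \text{(by the Pareto principle)}, \qquad V(\text{push}^{+}) - V(\text{push}) = \delta,
\]
\[
\text{so } V(\text{push}) > V(\text{switch}) - \delta \text{ for an arbitrarily small } \delta > 0.
\]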
My Issue With The Way Lots Of Utilitarians Argue For Utilitarianism
Against Huemerless utilitarianism
Philosophy rarely proceeds by way of knock-down deductive argument. Instead, a better way to proceed is to compare theories holistically and abductively, as explanations of phenomena. Thus, as a cautionary note to other utilitarians, I’d recommend that, rather than attempting to provide a single knockdown deductive argument, they proceed abductively and compare a wide range of verdicts. This is probably the biggest evolution in my thinking over the years.
Given that the deductive arguments are only as intuitive as the conjunction of all of the premises, even the deductive arguments proceed by analysis of the intuitive plausibility of certain notions. And yet if there’s a pretty intuitive premise, but it entails dozens of hideously unintuitive things, that premise should likely be rejected.
To illustrate with an example, the simplest application of the argument of (Harsanyi, 1975) would entail average utilitarianism, though it can certainly be employed to argue for total utilitarianism, if we include future possible people in our analysis. However, the reason I reject Harsanyi’s argument as showing average utilitarianism is not because I think the argument trivially provides greater support for average utilitarianism than for total utilitarianism. Instead, it’s because average utilitarianism produces wildly implausible results. Consider the following cases.
We have 1 billion people with -100^100^100^100^100^100^100^100 utility. You have the choice of bringing an extra 100^100^100^100^1000 people into existence with average utility of -100^100^100^100^100^100^100^99. Should you do it? It would increase average utility, yet it still seems clearly wrong—as clearly wrong as anything. Bringing miserable people into existence who experience more than the total suffering of the holocaust every second is not a good thing, even if there are existing slightly less miserable people.
There is currently one person in existence with utility of 100^100^100. You can bring a new person into existence with utility of 100^100^10. Average utilitarianism would imply that, not only should you not do this, doing it would be the single worst act in history, orders of magnitude worse than the holocaust in our world.
You are in the garden of Eden. There are 3 people, Adam (utility 5), Eve (utility 5), and God (utility 10000000000000000000000000000000000000000000000000000000000000000000000000000000000000^1000000000000000000000000000000000000000000000000000000000000000000000000000^100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000). Average utilitarianism would say that Adam and Eve existing was a tragedy and they should certainly avoid having children.
You’re planning on having a kid with utility of 100^100, waaaaaaaaaaaaaaaaaay higher than that of most humans. However, you discover that there are oodles of aliens with utility much higher than that. Average utilitarianism would say you shouldn’t have the child, because of the existence of the far away aliens who you’ll never interact with.
Average utilitarianism would say that if fetuses had a millisecond of joy at the moment of conception, this would radically decrease the value of the world, because fetuses would bring down average utility.
Similarly, if you became convinced that there were lots of aliens with bad lives, AU would say you should have as many kids as possible, even if they had bad lives, to bring up the average.
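The arithmetic driving these cases is the same throughout. As a sketch in my own notation, with n existing people at average utility a and m added people at average utility b:

\[
\text{new average} \;=\; \frac{na + mb}{n + m} \;>\; a \quad\iff\quad b > a.
\]

Adding people therefore raises the average whenever their average utility exceeds the existing average, however miserable they are in absolute terms, and lowers it whenever it falls short, however wonderful their lives are.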
These cases are why I reject average utilitarianism. If total utilitarianism had implications that were as unintuitive as those of average utilitarianism, I would similarly reject it, despite the deductive arguments. The deductive arguments count strongly in favor of the theory, but would not be enough to overcome the hurdles of the theory, if it were truly unintuitive across the board.
Utilitarians will often try to discredit intuitions as a way of gaining knowledge (e.g. Sinhababu 2012). They will often point out the poor track record of intuitions. This does mean that intuitions are less reliable than they would otherwise be, but it does not mean we should simply ignore intuitions. Absent relying on what seems to be the case after careful reflection, we could know nothing, as (Huemer, 2007) has argued persuasively. Several cases show that intuitions are indispensable to having any knowledge and doing any productive moral reasoning.
Any argument against intuitions is one that we’d only accept if it seems true after reflection, which once again relies on seemings. Thus, rejection of intuitions is self defeating, because we wouldn’t accept it if its premises didn’t seem true.
Any time we consider any view which has some arguments both for and against it, we can only rely on our seemings to conclude which argument is stronger. For example, when deciding whether or not God exists, most would be willing to grant that there is some evidence on both sides. The probability of existence on theism is higher than on atheism, for example, because theism entails that something exists, while the probability of God being hidden is higher on atheism, because the probability of God revealing himself on atheism is zero. Thus, there are arguments on both sides, so any time we evaluate whether theism is true, we must compare the strength of the evidence on both sides. This will require reliance on seemings. The same broad principle is true for any issue we evaluate, be it religious, philosophical, or political.
Consider a series of things we take to be true which we can’t verify. Examples include: that the laws of logic would hold in a parallel universe; that things can’t have a color without a shape; that the laws of physics could have been different; that implicit in any moral claim about x being bad there is a counterfactual claim that had x not occurred things would have been better; and that, assuming space is not curved, the shortest distance between any two points is a straight line. We can’t verify those claims directly, but we’re justified in believing them because they seem true--we can intuitively grasp that they are justified.
The basic axioms of reasoning also offer an illustrative example. We are justified in accepting induction, the reliability of the external world, the universality of the laws of logic, the axioms of mathematics, and the basic reliability of our memory, even if we haven’t worked out rigorous philosophical justifications for those things. This is because they seem true.
Our starting intuitions are not always perfect, and they can be overcome by other things that seem true. However, merely ignoring all intuitions will not do, if we want to justify utilitarianism.
How should we decide which intuitions to rely on? This is a difficult question that would require an immense amount of time to address. However, a few points are worth making here, as it relates to utilitarianism.
First, our intuitions are pretty unreliable (Beckstead, 2013). Thus, even if we have a few strongly held intuitions conflicting with utilitarianism, this should be insufficient to make us reject utilitarianism.
Second, if we can employ debunking accounts of our intuitions, we should trust them less. If the reason we have an intuition about a case is that we’re bad at reasoning about big numbers, that’s a reason to distrust that intuition.
Third, as (Ballantyne and Thurow, 2013) have argued, there are specific facts about moral reasoning that make it often unreliable. Those are partiality, bias, emotions, and disagreement. If any of these are present, they undermine the reliability of our judgements. However, utilitarianism falls prey to these far less than other theories.
The example of partiality is obvious. Utilitarianism is often objected to for being too demanding, so it clearly isn’t supported for reasons of self interest. In fact, utilitarianism is explicitly impartial, treating everyone’s interests equally.
The point about bias was supported here. When people reflect more, they’re more utilitarian. Many non utilitarian intuitions can be explained by reliance on heuristics and biases like risk aversion, large number biases, and many others.
The point about emotions is supported by (Greene 2007). Greene finds the following four things. First, inducing positive emotions leads to more utilitarian conclusions. This supports the dual process theory (DPT), because inducing positive emotions causes people to be less affected by the negative emotions that, according to the DPT, are largely responsible for non utilitarian responses to moral questions. Second, patients with Frontotemporal dementia are more willing to push the person in front of the trolley to save 5, in the trolley problem.
People with Frontotemporal dementia have emotional blunting, a phenomenon in which they’re less affected by emotions. Thus, people whose emotions are inhibited are more utilitarian.
Third, cognitive load makes people less utilitarian. Cognitive load relates to people being under mental strain. When people are under mental strain, they’re less able to carefully analyze a situation, and they are more affected by emotions.
Fourth, people with damage to the VMPC (ventromedial prefrontal cortex) are more utilitarian. The VMPC is a brain region responsible for generating emotions.
Disagreement is shared by all of our moral theories. However, many of the assumptions of utilitarianism have virtually no disagreement. Utilitarianism merely objects to adding extra things that other theories add. Additionally, as the points above have shown, our moral intuitions are often unreliable, so we would expect the correct theory to be disagreed with by many people.
Thus, while all intuitions are potentially error prone, utilitarianism’s are considerably less error prone than average. They represent the types of intuitions that we’d expect to be especially reliable.
So as utilitarians, we shouldn’t aim for the single master argument that will defeat all objections to utilitarianism. Instead, we should compare the theories holistically. There are reasons to favor utilitarian intuitions, and we can provide independent arguments for accepting them across a variety of cases. This is the best way to defend utilitarianism.
Longtermism Is Correct Part 1
Beginning a series of articles defending longtermism--hopefully this shall be a longterm project :)
Longtermism is, in my view, the most important project currently being undertaken by present people. This article series shall explain why I think this. A reasonable definition is the following.
'Longtermism' is the view that positively influencing the long-term future is a key moral priority of our time.
If we think that the well-being of future people matters, we should be longtermists.
Stage One Of The Argument
In a previous article, I’ve presented a case for this position.
Here is that argument in full. (Note, I’ll use utility, happiness, and well-being interchangeably and disutility, suffering, and unpleasantness interchangeably).
1 It is morally better to create a person with 100 units of utility than one with 50. This seems obvious—pressing a button that would make your child only half as happy would be morally bad. If some environmental policy would halve future well-being, even if it would change the distribution of future people, it would be bad.
2 Creating a person with 150 units of utility and 40 units of suffering is better than creating a person with 100 units of utility and no suffering. After all, the one person is better off and no one is worse off. This follows from a few steps.
A) The expected value of creating a person with utility of 100 is greater than of creating one with utility zero.
B) The expected value of creating one person with utility zero is the same as the expected value of creating no one. One with utility zero has no value or disvalue to their life—they have no valenced mental states.
C) Thus, the expected value of creating a person with utility of 100 is positive.
D) If you are going to create a person with a utility of 100, it is good to increase their utility by 50 at the cost of 40 units of suffering. After all, 1 unit of suffering’s badness is equal to the goodness of 1 unit of utility, so they are made better off. They would rationally prefer 150 units of utility and 40 units of suffering to 100 units of utility and no suffering.
E) If one action is good and another action is good given the first action, then the conjunction of those actions is good.
These are sufficient to prove the conclusion. After all, C shows that creating a person with a utility of 100 is good, D shows that creating a person with utility of 150 and 40 units of suffering is better than that, so from E, creating a person with utility of 150 and 40 units of suffering is good.
This broadly establishes the logic for caring overwhelmingly about the future. If creating a person with any positive utility and no negative utility is good, and then increasing their utility and disutility by any amount, such that their positive utility increases more than their disutility does, is good, then you should create a person if their net utility is positive. This shows that creating a person with 50 units of utility and 49.9 units of disutility would be good. After all, creating a person with .05 units of utility would be good, and increasing their utility by 49.95 at the cost of 49.9 units of disutility would also be good, so creating a person with 50 units of utility and 49.9 units of disutility is good. Thus, the moral value of increasing the utility of a future person by N is greater than the moral disvalue of increasing the disutility of a future person by any amount less than N.
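To make the chain explicit, here is a minimal sketch in my own notation, writing (u, d) for a created life with u units of utility and d units of disutility, and V for moral value:

\[
V(\text{create }(0.05,\ 0)) \;>\; V(\text{create no one}) \;=\; 0,
\]
\[
V(\text{create }(50,\ 49.9)) \;>\; V(\text{create }(0.05,\ 0)) \quad \text{since the added utility } (49.95) \text{ exceeds the added disutility } (49.9),
\]
\[
\therefore\ V(\text{create }(50,\ 49.9)) \;>\; 0.
\]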
Now, let’s add one more stipulation. The moral value of causing M units of suffering to a current person is equal to that of causing a future person M units of suffering. This is very intuitive. When someone lives shouldn’t affect how bad it is to make them suffer. If you could either torture a current person or a future person, it wouldn’t be better to torture the future person merely in virtue of the date of their birth. Landmines don’t get less bad the longer they’re in the ground.
However, even if you reject this, the argument still goes through as long as you accept a more modest principle: that the disvalue of the suffering of future people matters at least a little bit (say, at least .001% as much) as the disvalue of the suffering of current people. As we will show, there could be so many future people that these considerations will dominate if we care about future people at all.
From these we can get our proof that we should care overwhelmingly about the future. We’ve established that increasing the utility of a future person by N is better than preventing any amount of disutility for future people of less than N and that preventing M units of disutility for future people is just as good as preventing M units of disutility for current people. This would mean that increasing the utility of a future person by N is better than preventing any amount of disutility of current people of less than N. Thus, bringing about a person with utility of 50 is morally better than preventing a current person from enduring 48 units of suffering, by transitivity.
The common refrain that it’s only good to make people happy, not to make happy people, is false. This can be shown in two ways.
It would imply that creating a person with a great life would be morally equal to creating a person with a mediocre life. This would imply that if given the choice between bringing about a future person with utility of 5 and no suffering or a future person with utility of 50,000 and no suffering, one should flip a coin.
It would say that we shouldn’t care about the impacts of our climate actions on future people. For people who won’t be born yet, climate action will certainly change whether or not they exist. If a climate action changes when people have sex by even one second, it will change the identities of the future people that will exist. Thus, when we decide to take climate action that will help the future, we don’t make the future people better off. Instead, we make different future people who will be better off than the alternative batch of future people would have been. If we shouldn’t care about making happy people, then there’s no reason to take climate action for the future.
Stage 2
This logic has, so far, established that we ought to care a lot about bringing about lots of future people who will live good lives. However, we have not yet established either
We can affect future people a lot.
There could be lots of future people.
The first is relatively easy to show. There is a reasonably high chance that existential threats will end humanity in the next century. To quote my earlier article
“Additionally, even if one is a neartermist, they should still be largely on board with reducing existential threats. Ord argues in his book risk of extinction is 1 in 6, Bostrom concludes risks are above 25%, Leslie concludes they’re 30%, Rees says they’re 50%. Even if they’re only 1%—a dramatic underestimate, that still means existential risks will in expectation kill 79 million people, many times more than the holocaust. Thus, existential risks are still terrible if one is a neartermist, so Torres should still be on board with the project.”
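For reference, the 79 million figure is a simple expected-value calculation (assuming the roughly 7.9 billion people alive today is the population being used, which is my assumption):

\[
E[\text{deaths}] \;=\; 0.01 \times 7.9 \times 10^{9} \;\approx\; 7.9 \times 10^{7} \;=\; 79 \text{ million}.
\]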
There are lots of ways to reduce these existential risks. Examples include research into AI alignment to reduce AI risks, working in biosecurity, and a variety of actions to reduce nuclear risks. On the nuclear front, some plausible goals include:
Reducing arsenal sizes.
Removing particular destabilising weapons (or preventing their construction), such as nuclear-armed cruise missiles.
Committing to "No first use" policies.
Committing to not target communications networks, cities, or nuclear power stations.
Preventing proliferation of nuclear weapons or materials to additional countries.
Reducing stockpiles of fissile material.
Improving relations between nuclear powers.
There are lots of other options listed here. You can donate here to help reduce existential risks and take other actions to improve the long term future. Lots of smart people spending money and using their careers to try to prevent extinction and improve the future in other ways can plausibly reduce existential risks and improve the quality of the future.
How good could the future be? Very very good. As I’ve argued here
the scenario with the highest expected value could have truly immense expected value.
1) Number of people. The future could have lots of people. Bostrom calculated that there could be 10^52 people under reasonable assumptions. This is such a vast number that even a 1 in 10 billion chance of there being an excellent future produces in expectation 10^42 people in the future.
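The expected-value arithmetic here is straightforward:

\[
10^{52} \times \frac{1}{10^{10}} \;=\; 10^{42} \text{ people in expectation}.
\]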
Additionally, it seems like there's an even smaller but far from zero probability that it would be possible to bring about vastly greater numbers of sentient beings living very good lives. There are several reasons to think this.
1) Metaculus says as of 2/25/22 that the odds that the universe will end are about 85%. Even if we think that this is a major underestimate, if it's even 99%, then it seems eminently possible for us to have a civilization that survives either forever or nearly forever.
2) Given the large number of unsolved problems in physics, the correct model could be very different from what we believe.
3) Given our lack of understanding of consciousness, it's possible that there's a way to infinitely preserve consciousness.
4) As Inslor says on Metaculus "I personally subscribe to Everett Many Worlds interpretation of QM and it seems to me possible that one branch can result in infinitely many downstream branches with infinity many possible computations. But my confidence about that is basically none."
5) Predictions in general have a fairly poor track record. Claims of alleged certainty are wrong about 1/8th of the time. We thus can't be very confident about such matters relating to how we can affect the universe 10 billion years from now.
Sandberg and Manheim argue against this, writing "This criticism cannot be refuted, but there are two reasons to be at least somewhat skeptical. First, scientific progress is not typically revisionist, but rather aggregative. Even the scientific revolutions of Newton, then Einstein, did not eliminate gravity, but rather explained it further. While we should regard the scientific input to our argument as tentative, the fallibility argument merely shows that science will likely change. It does not show that it will change in the direction of allowing infinite storage."
It's not clear that this is quite right. Modern scientific theories have persuasively argued against previous notions of time, causality, substance dualism, and many others. Additionally, whether something is aggregative or revisionist seems like an ill-defined category; theories may have some aggregative components and other revisionist ones. Additionally, there might be interesting undiscovered laws of physics that allow us to do extra things that we currently can't.
While it's unlikely that we'll be able to go faster than light or open up wormholes, it's certainly far from impossible. And this is just one mechanism by which the survival of sentient beings could advance past the horizon imagined by Sandberg and Manheim. The inability of cavemen to predict what would go on in modern society should leave us deeply skeptical of claims relating to the possibilities of civilizations hundreds of millions of years down the line.
Sandberg and Manheim add "Second, past results in physics have increasingly found strict bounds on the range of physical phenomena rather than unbounding them. Classical mechanics allow for far more forms of dynamics than relativistic mechanics, and quantum mechanics strongly constrain what can be known and manipulated on small scales." This is largely true, though not entirely. The aforementioned examples show that more modern physics has sometimes overturned earlier views, rather than merely constraining them.
Sandberg and Manheim finish, writing "While all of these arguments in defense of physics are strong evidence that it is correct, it is reasonable to assign a very small but non-zero value to the possibility that the laws of physics allow for infinities. In that case, any claimed infinities based on a claim of incorrect physics can only provide conditional infinities. And those conditional infinities may be irrelevant to our decisionmaking, for various reasons."
I'd generally agree with the assessment. I'd currently give about 6% credence in it being theoretically possible for a civilization to last forever. However, the upside is literally infinite, so even low risks matter a great deal.
One might be worried about the possibility of dealing with infinities. This is a legitimate worry. However, rather than thinking of it as infinity, for now we can just treat it as some unimaginably big number (say Graham's number). This avoids paradoxes relating to it and is justified if we rightly think that an infinity of bliss is better than Graham's number years of bliss.
One might additionally worry that the odds are sufficiently low that this potential scenario can be ignored. This is, however, false, as can be shown with a very plausible principle called The Level Up Principle:
Let N be a number of years of good life, let M be a smaller number of years of good life (M < N), and let P be a probability less than 100%.
The principle states the following: for any state of the world with M years of good life guaranteed, there are some values of P and N for which a probability P of getting N years of good life is overall better than certainty of M.
This is a very plausible principle.
Suppose that M is 10 trillion. For this principle to be true there would have to be some much greater amount of years of happy life for which a 99.999999999999999999% chance of it being realized is more choice worthy than certainty of 10 trillion years of happy life. This is obviously true. A 99.9999999999999999999999999999999999999999999999999999999999999999999999999999999% chance of 10^100^100^100 years of happy life is more choice worthy than certainty of 10 trillion years of happy life. However, if we take this principle seriously then we find that chances of infinity or inconceivably large numbers of years of happy life dominate all else. If we accept transitivity (as we should), then we would conclude that each state of the world has a slightly less probable state of the world that's more desirable because the number of years of good life is sufficiently greater. This would mean that we can keep diminishing the probability of the event, but increasing the number of years of good life, until we get to a low probability of some vast number of years of good life (say Graham's number) being better than a higher probability of trillions of years of happy life. This conclusion also follows straightforwardly if we shut up and multiply.
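In symbols, on one natural expected-value reading of the principle (V for the value of a number of years of good life; the notation is mine):

\[
\forall M \;\; \exists N > M,\ \exists P < 1 \;:\; P \cdot V(N) \;>\; V(M).
\]

Iterating this and appealing to transitivity yields the chain described above: ever smaller probabilities of ever larger payoffs, each step at least as choice-worthy as the last.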
Other reasons to think that the far future could be very good.
2) The possibility of truly excellent states of consciousness.
We currently don't have a very well worked out theory of consciousness. There are lots of different scientific and philosophical views about consciousness. However, there are good reasons to be optimistic about the possibility of super desirable consciousness.
The immense malleability of consciousness. Our experiences are so strange and varied that it seems like conscious experience can take a wide variety of forms. One would be a priori surprised to find that experiences as horrific as brutal torture, as good as certain pleasurable experiences, or as strange and captivating as those people have when taking psychedelic drugs are able to actually exist in the real world. All of these are extremely strange contours of conscious experience, showing that consciousness is at least very malleable. Additionally, all of these experiences were produced by the blind process of darwinian evolution, meaning that the true possibilities of conscious experience opened up by AIs optimizing for good experiences are far beyond that which randomly emerged.
The fact that these experiences have emerged despite our relatively limited computational capacities. Consciousness probably has something to do with mental computation. The human brain is a relatively inefficient computational device. However, despite that, we can have very vivid experiences--ones that are extremely horrific. The experience of being fried to death in an iron bull, being beaten to death, and many others discussed here, show that even with our fairly limited computational abilities, we have the ability to have intensely vivid experiences. It seems like it should be possible to--with far more advanced computation--create positive experiences with hedonic value that far surpasses even the most horrific of current experiences. We don't have good reason to believe that there's some computational asymmetry that makes it more difficult to produce immensely positive experiences than immensely negative experiences. Darwinian evolution provides a perfectly adequate account of why the worst experiences are far more horrific than the best experiences are good, based on their impact on our survival: dying in a fire hampers passing on one's genes more than having sex one time enables passing them on. This means that the current asymmetry between the best and worst experiences shouldn't lead us to conclude that there's some fundamental computational difference between the resources needed to produce very good experiences and the resources needed to produce very bad experiences.
Based on the reasons given here--including people's descriptions of intense feelings of pleasure, which suggest a roughly logarithmic scale of pleasure and pain--it seems possible to create states of unfathomable bliss even with very limited human minds.
Even if we did have reason to think there was a computational asymmetry, there's no reason to think that the computational asymmetry is immense. No doubt the most intense pleasures for humans can be far better than the most horrific suffering is for insects.
I'd thus have about 93% credence that, if digital consciousness is possible, it is possible to create pleasure that's more intense than the most horrific instances of suffering are bad. Thus, the value of a future utopia could be roughly as great as the disvalue of a dystopia would be bad. This gives us good reason to think that the far future could have immense value if there is successful digital sentience.
This all relies on the possibility of digital sentience. I have about 92% confidence in the possibility of digital sentience, for the following reasons.
1 The reason described in this article, "Imagine that you develop a brain disease like Alzheimer’s, but that a cutting-edge treatment has been developed. Doctors replace the damaged neurons in your brain with computer chips that are functionally identical to healthy neurons. After your first treatment that replaces just a few thousand neurons, you feel no different. As your condition deteriorates, the treatments proceed and, eventually, the final biological neuron in your brain is replaced. Still, you feel, think, and act exactly as you did before. It seems that you are as sentient as you were before. Your friends and family would probably still care about you, even though your brain is now entirely artificial.[1]
This thought experiment suggests that artificial sentience (AS) is possible[2] and that artificial entities, at least those as sophisticated as humans, could warrant moral consideration. Many scholars seem to agree.[3]"
2 Given that humans are conscious, unless one thinks that consciousness relates to arbitrary biological facts relating to the fleshy stuff in the brain, it should be possible at least in theory to make computers that are conscious. It would be parochial to assume that the possibility of being sentient merely relates to the specific biological lineage that led to our emergence, rather than to more fundamental computational features of consciousness.
3 Consider the following argument, roughly given by Eliezer Yudkowsky in this debate.
P1 Consciousness exerts a causally efficacious influence on information processing.
P2 If consciousness exerts a causally efficacious influence on information processing, copying human information processing would give rise to digital consciousness.
P3 It is possible to copy human information processing through digital neurons.
Therefore, it is possible to generate digital consciousness. All of the premises seem true.
P1 is supported here.
P2 is trivial.
P3 just states that there are digital neurons, which there are. To the extent that we think there are extreme tail ends to both the quality of experiences and the number of people, this gives us good reason to expect the tail-end scenarios for the long term to dominate other considerations.
The inverse of these considerations obviously apply for a dystopia.
In later parts, we’ll explore more reasons for accepting the longtermist thesis.
Longtermism Is Correct Part 2
Defending the thesis more
1
This argument will show that we should care greatly about future people.
P1 If you could either create a person with a disability today that would decrease their quality of life or create a person in an alternative civilization of aliens with lower average quality of life than that of people with disabilities, you should create the person with the disability
P2 It would be moral to create a person in an alternative civilization of aliens with lower average quality of life than that of people with disabilities
Therefore, by transitivity, it would be moral to create a person with a disability, rather than creating no one.
P3 Causing a person to have a disability who would not otherwise would be very morally wrong
P4 Creating a person who does not have a disability with positive quality of life N and then causing someone else to have a disability would be morally equivalent to creating a person with a disability who would have had a positive quality of life N in the absence of the disability
P5 If A and B together are good and B is bad then A is at least good enough to offset the badness of B
Therefore, creating a person who would have a positive quality of life N is good enough to offset the badness of giving a person a disability
If A is good enough to offset B then the combination of creating any number of instances of A and B would be a good thing
Therefore, creating any number of future people with quality of life N would be worth giving an equal number of people a disability
Giving a number of future people a disability would be seriously morally wrong
Therefore, failing to create any number of future people with quality of life N would be seriously morally wrong
If there are 10^30 future people in expectation then failing to bring them about would be morally worse than giving 10^30 people a disability
2
Nick Beckstead provides an argument for caring overwhelmingly about the far future. He begins with the following assumption.
Period Independence: By and large, how well history goes as a whole is a function of how well things go during each period of history; when things go better during a period, that makes the history as a whole go better; when things go worse during a period, that makes history as a whole go worse; and the extent to which it makes history as a whole go better or worse is independent of what happens in other such periods
This assumption is hard to deny. Any other method for determining the value of history as a whole would seem unacceptably ad-hoc. It seems intuitive that two centuries of things going well would be better than only one century of things going well. Even if one ends up rejecting this view after careful reflection, it can’t be denied that it’s intuitive as a starting assumption.
Beckstead provides the following rationale for Period Independence:
To appreciate the rationale for Period Independence, consider the following scenario:
Asteroid Analysis: World leaders hire experts to do a cost-benefit analysis and determine whether it is worth it to fund an Asteroid Deflection System. Thinking mostly of the interests of future generations, the leaders decide that it would be well worth it.
And then consider the following ending:
Our Surprising History: After the analysis has been done, some scientists discover that life was planted on Earth by other people who now live in an inaccessible region of spacetime. In the past, there were a lot of them, and they had really great lives. Upon learning this, world leaders decide that since there has already been a lot of value in the universe, it is much less important that they build this device than they previously thought.
On some views in population ethics, the world leaders might be right. For example, if we believe that additional lives have diminishing marginal value, the total value of the future could depend significantly on how many lives there have been in the past. Intuitively, it would seem unreasonable to claim that how good it would be to build the Asteroid Deflection System depends on this information about our distant past. Parfit and Broome appeal to analogous arguments when attacking diminishing marginal value and average views in population ethics. See Parfit (1984, p. 420) and Broome (2004, p. 197) for examples.
This supporting argument seems even more intuitive and must be denied by deniers of period independence. If periods are dependent, then the previous history of the world will affect the desirability of creating a new person and of preserving the world. This, however, seems obviously false. No information about the quality of life of ancient Egyptians would affect the decision to continue the world.
Beckstead adds another assumption
Additionality: If “standard good things” happen during a period of history—there are people, the people have good lives, society is organized appropriately, etc.—that makes that period go better than a period where nothing of value happens.
This again seems intuitively plausible. Beckstead provides a justification, which I’ll quote in a minute, but first some background. Beckstead earlier gave an analogy about a computer program that was designed to figure out how good the world was, based on judging certain facts about the world.
What is the intuition behind Additionality? It goes back to our “computer program” analogy above. When we write the computer program that estimates the value of a possible future, it makes sense for that computer program to look at how well things go during each period in that possible future. And it seems that in order to know how well things go during a given period, the computer program should just have to look at qualitative facts about what happens during that period, such as what kind of “standard good things” are happening during that period. All the standard good things that are happening now are good, and better than a “blank” period where no standard good things happen. So if similar things happen in future periods, that should be good as well.
In my argument, Additionality rules out strict Person-Affecting Views. There might be other reasons people would deny Additionality, but I have no such reasons in mind. According to strict Person-Affecting Views, the fact that a person's life would go well if he lived could not, in itself, imply that it would be in some way good to create him. Why not? Since the person was never created, there is no person who could have benefitted from being created. On this type of view, it would only be important to ensure that there are future generations if it would somehow benefit people alive today, or people who have lived in the past (perhaps by adding meaning to their lives). If one does not accept a view of this kind, I see no reason to think that it doesn't matter whether “standard good things” happen in the future.
Beckstead’s next assumption is the following
Temporal Impartiality: the value of a particular period is independent of when it occurs.
This assumption is not very controversial among philosophers, but many economists reject it. On their view, we should count benefits that come in the future as intrinsically less important than benefits that come sooner, and the value of future benefits should decrease exponentially with time. Since Parfit (1984, Appendix F), Cowen and Parfit (1992), and Broome (1992) have convincingly argued against this position and few philosophers believe it anyway, I will only briefly explain why it should be rejected.
Some rather obvious examples suggest that there is no fundamental significance to when benefits and harms take place. To take an example from Parfit (1984), suppose I bury some broken glass in a forest. In one case, a child steps on the broken glass 10 years from now and is injured. In another case, a child steps on the broken glass 110 years from now and is injured in precisely the same way. If we discount for time, then we will count the first alternative as much worse than the second. If we use a 5% discount rate per year, we should count this alternative as over one hundred times worse. This is very implausible.
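The "over one hundred times" figure comes from compounding the discount rate over the 100-year gap between the two injuries:

\[
(1.05)^{100} \;\approx\; 131,
\]

so the later injury is counted as roughly 1/131 as bad as the earlier one, despite the harms being identical.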
Finally, Beckstead’s last assumption is
Risk Neutrality: The value of an uncertain prospect equals its expected value. This assumption is important because, in all probability, any given project will do very little to affect the long-term prospects of civilization. Therefore, my argument must proceed by arguing that the value of the future is extremely large, so that reducing existential risk by a small probability, or having some small probability of creating some other positive trajectory change, is also very large. The most straightforward way to do this is to use the Risk Neutrality assumption to argue that reducing existential risk by some fraction is as important as achieving that fraction of the potential value of the future.
Beckstead provides a variety of objections to person affecting views. He objects to the simplest person affecting view, according to which non-existence is incomparable to existence, such that neither creating nor not creating a person can be moral or immoral.
Having explained the argument, let's examine its problems. The most troubling issue is that the argument delivers a standard Person-Affecting View, rather than an asymmetric one. To see this, let's consider one of Parfit's cases: The Wretched Child: Some woman knows that, if she has a child, he will be so multiply diseased that his life will be worse than nothing. He will never develop, will live for only a few years, and will suffer from pain that cannot be wholly relieved. If she has this child, it will not be good or bad for anyone else. Of this case, Parfit says, “Even if we reject the phrase ‘worse than nothing,’ it is clear that it would be wrong knowingly to conceive such a child.” (Parfit, 1984, p. 391). Parfit is surely right about this. However, Incomparability of Non-Existence entails that non-existence cannot be better for the child than living this wretched life. By stipulation, the child's existence affects no one else, so the child's existence cannot make the outcome worse. Thus, the Person-Affecting Restriction implies that it cannot be worse if the Wretched Child exists.
Next, he objects to asymmetric person-affecting views, which say
it is bad to create people who would have miserable lives, but not good to create people who would have good lives
Beckstead objects, writing
The most obvious problem is that Strict Asymmetric Views cannot explain why, when choosing which of two “extra” people to create, it is better to create someone who would have an excellent life rather than someone who would have a pretty good life. Since the interests of all “extra” people are ignored, there is nothing to choose between these alternatives.8 But this is very implausible
He delivers another objection shortly thereafter
Though many people think that Asymmetric Views can best capture our thinking about the morality of having children, this is not true. According to common sense, it is not bad to have children under ordinary conditions, provided one can be reasonably confident that one's child will have a good life, one can fulfill one's duties to the child, and having the child does not interfere with one's pre-existing obligations. But if we accept a Strict Asymmetric View, if we create a happy child, we do something that is not good. However, if we create a person with a bad life, such as the Wretched Child, we do something bad. If some action could be bad, but could not be good, then it must be bad (in expectation).
Beckstead then argues that “Strict Asymmetric Views have their least plausible implications in cases of extinction.” He provides the following argument.
Voluntary Extinction: All people collectively decide not to have any children. No one is ever made upset, irritated, or otherwise negatively affected by the decision. In fact, everyone is made a little better off.
As Temkin (2000, 2008) points out, it would be bad if this happened, the benefits to present people notwithstanding.
On an Asymmetric Person-Affecting View, we must count the interests of "extra" people if they have bad lives, but not if they have good lives. This leads to another troubling conclusion:
Mostly Good or Extinction: In one future, all but a few people have excellent lives. But a very small percentage of these people suffer from a painful disease that makes life not worth living. In the other future, no people exist.9
Intuitively, the first future is better than the second. But, given an Asymmetric Person-Affecting View, this is not true. On that view, all the good lives are ignored but the bad lives are not, and that makes existence worse than extinction. Ordinarily, we believe that there is a trade-off between bad lives and good lives. But on an Asymmetric View, we give no weight to the good lives, so the trade-off is not made properly.
This trio of objections is enough to sink the view.
Next, Beckstead objects to moderate person affecting views.
According to Moderate Person-Affecting Views, we cannot ignore the well-being of "extra" people, but their well-being counts for less. On the simplest version of this view, we add up the well-being of everyone who exists in an alternative to determine how good it would be, giving slightly less weight to the "extra" people.
Beckstead then presents several objections.
Moderate Person-Affecting Views let us say some intuitive things about cases like Sight or Paid Pregnancy, but not all of them. On these views, it is good to create additional people, and it can be better to create additional people even if it means that existing people will be worse off. Therefore, in some versions of these cases, Moderate Person-Affecting Views will have some implausible implications, at least if Unrestricted Views do.
Moderate Person-Affecting Views also face difficulties in Parfit's pregnancy cases:
The Medical Programmes: There are two rare conditions, J and K, which cannot be detected without special tests. If a pregnant woman has Condition J, this will cause the child she is carrying to have a certain handicap. A simple treatment would prevent this effect. If a woman has Condition K when she conceives a child, this will cause this child to have the same particular handicap. Condition K cannot be treated, but always disappears within two months. Suppose next that we have planned two medical programmes, but there are funds for only one; so one must be canceled. In the first programme, millions of women would be tested during pregnancy. Those found to have Condition J would be treated. In the second programme, millions of women would be tested when they intend to try to become pregnant. Those found to have Condition K would be warned to postpone conception for at least two months, after which this incurable condition will have disappeared. Suppose finally that we can predict that these two programmes would achieve results in as many cases. [Either one will decrease the total number of children with disabilities by 1000.] Parfit (1984, p. 367)
Suppose we fund Post-Conception Screening. The children of women with Condition J would have existed regardless of what we did, but this is not true for the children of women with Condition K; this is because if we tell women with Condition K about their condition, they will wait to conceive, and different children will exist. Therefore, the children who would have existed if we funded Pre-Conception Screening are “extra” people; because of this, if we adopt any kind of Person-Affecting View, funding Post-Conception Screening was better than funding Pre-Conception Screening. Intuitively though, it seems that the two programs are equally good.10
Some people claim that when they consider this case, they find that Post-Conception Screening is better for precisely the reasons stated above. If they believe this, we can ask how much better they think it is. Suppose, for instance, Post-Conception Screening only prevented half as many handicaps as Pre-Conception Screening. Unless we assign very significant weight to "extra" people, all Moderate Person-Affecting theorists must hold that Post-Conception screening would be preferable even in this case. It is hard to believe that Person-Affecting considerations could make Post-Conception Screening that much better than Pre-Conception Screening. To accommodate this judgment, we must place very significant weight on the interests of "extra" people (at least 50%).
To be fair, we should admit that Unrestricted Views will have their share of problems in variations of this case. Rather than funding Pre-Conception or Post-Conception treatment, defenders of Unrestricted Views must claim it is better to pay 1000 women to have healthy, non-blind children, as in Sight or Paid Pregnancy. But we have already acknowledged this problem.
Another case is quite problematic for Moderate Person-Affecting Views. Consider:
Disease Now or Disease Later: A non-fatal disease will harm a large number of people. It will either do this now, or it will do it in the future. If the people are affected in the future, a greater number will be so affected; which future people exist will depend on our choice. Whatever we do, everyone will have a life that is worth living. (Doing it later will have no desirable compounding effects.)
According to Moderate Person-Affecting Views, both Symmetric and Asymmetric, it would be better to let future people face the disease. How much extra harm we are willing to tolerate will depend on our choice of weighting. Again, this puts pressure on us to make sure the weighting is fairly high. (The lower it is, the more "extra" people we will allow to suffer in order to protect people alive today.) The source of the problem here is that while it may be intuitive that it is better to help present people than to create additional happy people, it is not very plausible that it is less important to prevent harm to future people.
Finally, Beckstead says
Notice how the challenges facing Moderate Person-Affecting Views interact. To avoid implausible conclusions in The Medical Programmes and Disease Now or Disease Later?, the weight assigned to potential people must be reasonably high. To avoid implausible conclusions in Sight or Paid Pregnancy, the weight must be fairly low. Of course, this is unsurprising: having a very low weight for "extra" people is like accepting some kind of Strict Person-Affecting View, and accepting a very high weight is very much like accepting an Unrestricted View.
Often, in this kind of case, a moderate path will seem reasonable. But here, a moderate path seems to have little speaking in its favor. A moderate weighting (roughly 50%, say) would have implausible consequences in all of these cases.
Beckstead goes on to argue against views that deny period independence. One such view claims that value overall has declining marginal worthwhileness. On this view, adding extra value to a very valuable world is less important than adding extra value to a barely valuable world. However, this view has an unacceptable implication: whether or not we should add new people depends on causally disconnected people who happen to exist elsewhere. Thus, lots of happy aliens galaxies away would affect the desirability of creating future people. Even if these views say that world value is determined only by the value of current people, they still have an objectionable implication. On such a view, it would be more important to have a child if the world is going worse. Thus, when deciding whether to have a child who would live a good life in a cave that they'd never escape from, you should consider geopolitics. This is clearly ludicrous.
Beckstead additionally writes
Such an asymmetry may also have strange practical implications. Suppose that there were an upper bound on the value of ensuring the existence of additional people within periods but not across periods, and consider this case:
Sam's Delayed Birth: We have a sperm and an egg which we can use to create Sam now, or in the future. If we create Sam now, his life will go somewhat better, and the timing of his existence will not affect anyone else's well-being. However, an enormous number of people are alive right now. In the future, fewer people will be alive.
The implication seems to be that, because the present period is already so crowded, it would be better to delay Sam's birth even though his life would go worse, which is hard to accept.
Additionally, this view has another weird implication. Suppose you are uncertain about the value of the present. You think it may have 100^100^100 units of value, but it may have only 58 units of value overall. The odds of each of those are 50%.
Now suppose you are given the following offer:
Offer: Take option 1 which will cause you to have a child with 50 units of value if the world currently has 100^100^100 units of value, or option 2 which will cause you to have a child with 5 units of value if the world has 58 units of value.
On this view, option 2 would be better, because the value it adds would go to a world where value isn't already nearly capped out. However, this is clearly absurd.
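To make the diminishing-marginal-value reasoning concrete, here is a minimal sketch that models the view with a logarithmic world-value function; the functional form and the stand-in for 100^100^100 are my illustrative assumptions, not anything the view's defenders are committed to.

```python
import math

# Model a diminishing-marginal-value view with V(world) = log(world value).
def marginal_value(world_value, added_value):
    # Extra aggregate value from adding `added_value` to a world that
    # already contains `world_value`, under V(w) = log(w).
    return math.log1p(added_value / world_value)

HUGE = 1e300   # stand-in for 100^100^100, which is far too large to represent
SMALL = 58.0   # the "only 58 units of value" hypothesis

# Each hypothesis about the world's current value has probability 0.5.
ev_option_1 = 0.5 * marginal_value(HUGE, 50)   # 50-unit child in the huge world
ev_option_2 = 0.5 * marginal_value(SMALL, 5)   # 5-unit child in the tiny world

print(ev_option_1)  # ~2.5e-299: effectively nothing
print(ev_option_2)  # ~0.041: larger, so this view prefers option 2
```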
Beckstead provides more arguments—his Ph.D. thesis is well worth reading. He gives the subject a more thorough treatment than I can give it here. However, one thing has, I think, been made clear from consideration of these arguments: longtermism is correct. Denying the conclusions of longtermism commits people to ludicrous views—ones much more unintuitive than longtermism. We'll see more of those in the future.
Longtermism Is Correct Part 3
Arguing for truth, justice, and all that is good
Huemer has given three very plausible principles in arguing that we should accept the repugnant conclusion. However, they also have broader implications for population ethics.
“The Benign Addition Principle: If worlds x and y are so related that x would be the result of increasing the well-being of everyone in y by some amount and adding some new people with worthwhile lives, then x is better than y with respect to utility.7
Non-anti-egalitarianism: If x and y have the same population, but x has a higher average utility, a higher total utility, and a more equal distribution of utility than y, then x is better than y with respect to utility.8
Transitivity: If x is better than y with respect to utility and y is better than z with respect to utility, then x is better than z with respect to utility”
Huemer goes on to explain that this entails we accept
Total Utility Principle: For any possible worlds x and y, x is better than y with respect to utility if and only if the total utility of x is greater than the total utility of y
These principles yield even more interesting results under uncertainty. For the populations we'll discuss, let the first number be the number of people and the second number be their utility (e.g., (6,8) would be 6 people each with utility 8). Additionally, + denotes that we're talking about another part of the world (6,8 + 5,3 would mean 6 people have 8 utility and 5 people have 3 utility).
Now compare our world to a different world. Our world has about 7.8 billion people. Let’s say their average utility is 99. Thus, our world is worse than a world with the values (7.8 billion, 100). This in turn would be worse than one with (7.9 billion, 100 + 10^30, 1). This would be worse than one with (10^30, 2). Thus, from this we conclude that our current world is worse than one with 10^30 people with life value only 1/50th as good as our lives are. Let’s add a principle
Risk Making Principle: For any population (X, Y), a 1/N chance of a population (NX, 1.5Y) would be better. This would mean that, for example, a world with the values (50,50) would be less good than a 1/5 chance of one with (250, 75).
This is very intuitive: the expected number of people is the same, and each person would be better off, so the expected total utility is higher. However, this principle entails that, if we assume Bostrom is right and the future could have 10^52 people, but there's only a 1/10^10 chance of that, and we assume future people will have average utility only 1.5 times that of present people (an assumption I dispute strongly), the far future is more important than the present. Heck, even if the odds of that glorious future were only 1/10^20, it would still be a 1/10^20 chance of (10^52, 150), which is better, according to the risk making principle, than the current world. Thus, even if we take a very conservative estimate (1 in 10^10) of the odds of 10^52 future people, and ignore other future outcomes (treating any number of future people greater than 10^52 as equal to 10^52 and ignoring any smaller number of future people), the future would still be so valuable that a 1 in 10^10 chance of it is worth more than the current world. This shows the overwhelming value of the far future.
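Here is a minimal spot-check of that application of the risk making principle, using the post's own rough figures (the population and utility numbers are the stipulations from above, not precise estimates):

```python
# Risk Making Principle: a sure population (X, Y) is worse than a 1/N chance
# of a population (N*X, 1.5*Y). Check the 1-in-10^20 version of the gamble.
current_people, current_utility = 7.8e9, 100     # the post's stipulated present
N = 1e20                                         # the gamble pays off with probability 1/N
future_people, future_utility = 1e52, 150        # Bostrom-style future, 1.5x utility

print(future_people >= N * current_people)       # True: 1e52 vs the required 7.8e29
print(future_utility >= 1.5 * current_utility)   # True: 150 vs the required 150
```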
I've defended transitivity and the other premises elsewhere. Each of them is incredibly hard to deny.
In terms of the risk making principle, Tomasik provides additional arguments for accepting that we should maximize expected value—a principle far less modest than the risk making principle I outline. However, it shows how well supported the principle is—it's worth checking out if you have not already.
To recap—the principles that are obvious and entail that we should accept the repugnant conclusion also entail, when combined with other modest principles, that we should care overwhelmingly about the far future.
Longtermism Is Correct Part 4
Providing another argument
There are some radical deontologists who hold that you shouldn't kill one person, even to save several thousand. However, most people hold that when the utility considerations are substantial enough, they overpower deontic considerations. If we accept some moderate form of threshold deontology, then the conclusion that we should care overwhelmingly about the far future follows, because the stakes, in terms of utility, are so high.
As we've already seen, the future could contain a vast number of future people—perhaps infinite people with vast amounts of average utility. Even if we assume no significant changes to physics as we know it, there could be 10^54 years of future life, with unfathomable utility per year. This means that, if we give any weight to utility, it will dominate other considerations. Beckstead quotes Rawls as saying
All ethical doctrines worth our attention take consequences into account in judging rightness. One which did not would simply be irrational, crazy.
A view which ignores utility would have to hold that stealing to increase global utility a thousandfold would be bad. Given how vast the future could be, even if we stipulate that caring about the future entails violating some rights (it obviously doesn't), the ratio of rights violated to utility gained would be such that any plausible non-absolutist deontological view would hold that we should violate rights to reduce existential threats, and improve the quality of the future in other ways.
I’ve argued for utilitarianism elsewhere in great detail. But utilitarianism doesn’t have to be accepted to accept that we should care overwhelmingly about the future. As long as one holds that millions of centuries of unfathomable bliss for unfathomable numbers of people is a good thing, and that it’s really important to bring about good things, the longtermist conclusion follows.
This means that, for example, existential threats should wholly decide one's political decisions. After all, if a politician had a low chance of making the world 10 quadrillion times better, it would be worth voting for them. The same is true of considerations based on existential threats. The utility at stake is too great for anything else to matter practically.
Thus, contrary to what’s commonly believed, it’s non-longtermists who have to take the extreme view. The reason longtermism seems extreme is because of the empirical details. However, when confronted with extreme factual considerations, a good theory should get extreme results. If there was a reasonable chance that everyone would be infinitely tortured unless some action was taken, then that action would be worth taking, even if it seemed otherwise undesirable. Longtermism is just a response to an extreme factual situation—with a future that is vaster than anything we could imagine.
Compare this to the view that the primary facts worth considering when deciding upon policy would be the impact of some policy upon atoms. This seems like an extreme view—atoms are probably not sentient and if they are, we don’t know what effect our actions have on them. However, now imagine that we somehow did know what impact our actions would have on atoms. Not only that, it turns out that our actions currently are causing immense harm to 100,000,000,000,000,000,000,000,000,000,000 atoms. Well, at that point, while the whole “caring primarily about atoms,” thing seems a bit extreme, if our current policy is bad for 100,000,000,000,000,000,000,000,000,000,000 atoms, who are similarly sentient to us, a good theory should say that this is the primary consideration of relevance.
The general heuristic of “don't care too much about atoms” works pretty well most of the time. But this all changes when our choices start fucking over 100,000,000,000,000,000,000,000,000,000,000 of them—a number far vaster than the number of milliseconds in the universe so far. The same is true of the future—even if you're not a utilitarian, even if you think future people matter much less, when our choices harm 10^52 of them (which is conservative!!)—which can be written out as 10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000, this becomes the most important thing. This number defies comprehension—if each person who had ever lived were a whole world containing as many people as have ever lived, the total would still be fewer than 10^52 people.
If there are going to be a billion centuries better than this one, the notion that we should mostly care about this one starts to seem absurd. Much like it would be absurd to hold that the first humans’ primary moral concerns should have been their immediate offspring, it would be similarly ridiculous to hold that we should care more about the few billion people around today than the 10^52 future people.
This also provides a powerful debunking account of contrary views. Of course the current billions seem more important than the future to us today. People in the year 700 CE probably thought that the people alive at that time mattered more than the future. However, when considered impartially, from “the point of view of the universe,” this century is revealed to be obviously less important than the entire future.
Imagine a war that happens in the year 10,000 and kills 1 quadrillion people. However, after the war, society bounces back and rebuilds, and the war is just a tiny blip on the cosmic time-scale. This war would clearly be worse than a nuclear war that would happen today and kill billions of people. However, this too would be a tiny occurrence—barely worth mentioning by historians in the year 700,000—and entirely ignored in the year 5 million.
Two things are worth mentioning about this.
The future would be worthwhile overall even if this war happened. However, this war is worse than a global war that would kill a billion people now. Thus, by transitivity, maintaining the future would be worth a global war that would kill a billion people. And if it would be worth killing a billion people in a global nuclear war, it's really, really important.
It becomes quite obvious how insignificant we are when we consider just how many centuries there could be. Much like we (correctly) recognize that events that happened to 30 people in the year 810 CE aren't “globally” significant, we should similarly recognize that what happens to us is far less significant than the effect we have on the future. Imagine explaining the neartermist view to a child in the year 5 million—explaining how raising taxes a little bit or causing a bit of death by slowing down medicine slightly was so important that it was worth risking fifty-thousand centuries of prosperity—combined with all of the value that the universe could have after the year 5 million—with billions more centuries to come!
As I've said elsewhere, “The black plague very plausibly led to the end of feudalism. Let's stipulate that absent the black plague, feudalism would still be the dominant system, and the average income for the world would be 1% of what it currently is. Average lifespan would be half of what it currently is. In such a scenario, it seems obvious that the world is better because of the black plague. It wouldn't have seemed that way to people living through it, however, because it's hard to see from the perspective of the future.”
The people suffering through the black plague would have found this view absurd. However, to us it now seems obvious, and would seem more obvious a thousand centuries from now. The sacrifices longtermism demands currently aren’t even really sacrifices—it’s better for current people if we reduce existential threats which will kill millions of people in expectation, even ignoring the future.
So our case for prioritizing the present is infinitely weaker than the case that those living through the black plague could make for eradicating the black plague, even if it would lock in eternal feudalism. And if our position is weaker than the arguments for eternal feudalism, something has gone awry in our thinking.
Thus, even if we had to make immense sacrifices to reduce existential risks, it would be unambiguously worth it. And we don't! The actions taken to reduce existential risks will be better for both current and future people. Those actions are thus no-brainers.
Longtermism Is Correct Part 5
More considerations favoring longtermism
Epistemic Status: Pretty confident and almost entirely based on ideas expressed here
1. In the past, there was widespread, correlated, biased moral error, even on matters where people were very confident.
2. By induction, we make similar errors, even on matters where we are very confident.
Humans throughout history have made a truly mind-bending number of moral errors. Here is just a sample of things we know now that we didn't previously:
You shouldn’t keep slaves.
You shouldn’t torture people.
You should also care about people who live in a different city than the one you live in, who were born in a different city than the one you live in, or who speak a different language than you.
You should let women vote, leave the house, choose who they marry, and generally exercise some amount of self-determination over their own lives.
Your national sport shouldn’t be watching people get murdered.
You shouldn’t rape pubescent children, especially if they are also your slaves.
If a country consists exclusively of former child soldiers with CPTSD and the slaves they can randomly murder on a whim, that is in fact bad, and not good.
It was in the lifetime of my grandparents that Jim Crow laws were on the books. I heard a story from my grandfather of refusing to eat at an Italian restaurant because it wouldn’t serve black people. These egregious errors aren’t so far in the past, and we’re not barred from making them. So ideally, we want some moral system to mitigate these horrendous moral errors. Karnofsky argues
The most credible candidate for a future-proof ethical system, to my knowledge, rests on three basic pillars:
Systemization: seeking an ethical system based on consistently applying fundamental principles, rather than handling each decision with case-specific intuitions. More
Thin utilitarianism: prioritizing the "greatest good for the greatest number," while not necessarily buying into all the views traditionally associated with utilitarianism. More
Sentientism: counting anyone or anything with the capacity for pleasure and suffering - whether an animal, a reinforcement learner (a type of AI), etc. - as a "person" for ethical purposes. More
He additionally explains in an appendix why these are good ways of doing future based ethics.
Of course, these aren't the only two options. There are a number of other approaches to ethics that have been extensively explored and discussed within academic philosophy. These include deontology, virtue ethics and contractualism.
These approaches and others have significant merits and uses. They can help one see ethical dilemmas in a new light, they can help illustrate some of the unappealing aspects of utilitarianism, they can be combined with utilitarianism so that one avoids particular bad behaviors, and they can provide potential explanations for some particular ethical intuitions.
But I don’t think any of them are as close to being comprehensive systems - able to give guidance on practically any ethics-related decision - as the approach I've outlined above. As such, I think they don’t offer the same hopes as the approach I've laid out in this post.
One key point is that other ethical frameworks are often concerned with duties, obligations and/or “rules,” and they have little to say about questions such as “If I’m choosing between a huge number of different worthy places to donate, or a huge number of different ways to spend my time to help others, how do I determine which option will do as much good as possible?”
The approach I've outlined above seems like the main reasonably-well-developed candidate system for answering questions like the latter, which I think helps explain why it seems to be the most-attended-to ethical framework in the effective altruism community.
As Opotow says
Exclusion from the scope of justice, or moral exclusion, occurs when individuals or groups are seen as outside the boundary in which justice applies. As a result, moral values and rules and considerations of fairness do not apply to those outside the scope of justice.
Opotow adds
When I began research on moral exclusion, among my first empirical findings was a Scope of Justice Scale consisting of three attitudes toward others: (1) believing that considerations of fairness apply to them; (2) willingness to allocate a share of community resources to them; and (3) willingness to make sacrifices to foster their well-being (Opotow, 1987, 1993). This scale defines moral inclusion and operationalizes it for research.
Opotow is specifically discussing the Holocaust; however, the same broad principle applies to most historical atrocities. Karnofsky quotes (or perhaps paraphrases; the footnote may be an extrapolation from things Singer says) Singer:
At first [the] insider/outsider distinction applied even between the citizens of neighboring Greek city-states; thus there is a tombstone of the mid-fifth century B.C. which reads:
This memorial is set over the body of a very good man. Pythion, from Megara, slew seven men and broke off seven spear points in their bodies … This man, who saved three Athenian regiments … having brought sorrow to no one among all men who dwell on earth, went down to the underworld felicitated in the eyes of all.
This is quite consistent with the comic way in which Aristophanes treats the starvation of the Greek enemies of the Athenians, starvation which resulted from the devastation the Athenians had themselves inflicted. Plato, however, suggested an advance on this morality: he argued that Greeks should not, in war, enslave other Greeks, lay waste their lands or raze their houses; they should do these things only to non-Greeks. These examples could be multiplied almost indefinitely. The ancient Assyrian kings boastfully recorded in stone how they had tortured their non-Assyrian enemies and covered the valleys and mountains with their corpses. Romans looked on barbarians as beings who could be captured like animals for use as slaves or made to entertain the crowds by killing each other in the Colosseum. In modern times Europeans have stopped treating each other in this way, but less than two hundred years ago some still regarded Africans as outside the bounds of ethics, and therefore a resource which should be harvested and put to useful work. Similarly Australian aborigines were, to many early settlers from England, a kind of pest, to be hunted and killed whenever they proved troublesome.
So an important lesson of history seems to be “don't exclude lots of beings from your moral circle—exclusion involves thinking considerations of fairness don't apply, being unwilling to allocate community resources to them, and being unwilling to make sacrifices for them.” This is exactly what we do to the far future. There is a possibility of vast numbers of future humans, and yet, despite the compelling philosophical arguments for taking their interests into account, common sense often tells us not to. If history has taught us anything, it's that ignoring the interests of large numbers of beings, when we can't provide an adequate philosophical defense of doing so, is a very bad idea.
Much like a person’s spatial location shouldn’t determine their moral significance, neither should their temporal location, or when they exist. The two-hundred-thousandth century is intrinsically just as important as the twenty-first.
One final point worth noting made by Karnofsky is the following.
An interesting additional point is that this sort of ethics arguably has a track record of being "ahead of the curve." For example, here's Wikipedia on Jeremy Bentham, the “father of utilitarianism” (and a major sentientism proponent as well):
He advocated individual and economic freedom, the separation of church and state, freedom of expression, equal rights for women, the right to divorce, and the decriminalizing of homosexual acts. [My note: he lived from 1747-1832, well before most of these views were common.] He called for the abolition of slavery, the abolition of the death penalty, and the abolition of physical punishment, including that of children. He has also become known in recent years as an early advocate of animal rights.
Thus, the record of history shows that the moral circle exclusion required to reject longtermism has a poor track record, and is worth rejecting in general.
Contra Herok On MacAskill On Bentham On Homosexuality
Herok is wrong about the force of MacAskill's argument
You will know them by their fruits.
Matthew 7:16
One Tomasz Herok has written an article arguing against an argument made by MacAskill for utilitarianism. MacAskill said in a podcast
Will MacAskill: One [argument] that I think doesn’t often get talked about, but I think actually is very compelling is the track record. When you look at scientific theories, how you decide whether they’re good or not, well significant part by the predictions they made. We can do that to some extent, got much smaller sample size, you can do it to some extent with moral theories as well. For example, we can look at what the predictions, the bold claims that were going against common sense at the time, that Bentham and Mill made. Compare it to the predictions, bold moral claims, that Kant made.
When you look at Bentham and Mill they were extremely progressive. They campaigned and argued for women’s right to vote and the importance of women getting a good education. They were very positive on sexual liberal attitudes. In fact, some of Bentham’s writings on the topic were so controversial that they weren’t even published 200 years later.
Robert Wiblin: I think, Bentham thought that homosexuality was fine. At the time he’s basically the only person who thought this.
Will MacAskill: Yeah. Absolutely. Yeah. He’s far ahead of his time on that.
Also, with respect to animal welfare as well. Progressive even with respect to now. Both Bentham and Mill emphasized greatly the importance of treating animal… They weren’t perfect. Mill and Bentham’s views on colonialism, completely distasteful. Completely distasteful from perspective for the day.
Robert Wiblin: But they were against slavery, right?
Will MacAskill: My understanding is yeah. They did have pretty regressive attitudes towards colonialism judged from today. It was common at the time. That was not something on the right side of history.
Robert Wiblin: Yeah. Mill actually worked in the colonial office for India, right?
Will MacAskill: That’s right, yeah.
Robert Wiblin: And he thought it was fine.
Will MacAskill: Yeah, that’s right.
Robert Wiblin: Not so great. That’s not a winner there.
Will MacAskill: Yeah. I don’t think he defended it at length, but in casual conversations thought it was fine.
Contrast that with Kant. Here are some of the views that Kant believed. One was that suicide was wrong. One was that masturbation was even more wrong than suicide. Another was that organ donation is impermissible, and even that cutting your hair off to give it to someone else is not without some degree of moral error.
Robert Wiblin: Not an issue that we’re terribly troubled by today.
Will MacAskill: Exactly, not really the thing that you would stake a lot of moral credit on.
He thought that women have no place in civil society. He thought that illegitimate children, it was permissible to kill them. He thought that there was a ranking in the moral worth of different races, with, unsurprisingly, white people at the top. Then, I think, Asians, then Africans and Native Americans.
Robert Wiblin: He was white, right?
Will MacAskill: Yes. What a coincidence.
Herok gives a few objections to this.
One problem with this argument is that MacAskill conflates what philosophers think follows from their theories with what actually follows from their theories. It’s true that Kant held all sorts of weird moral views, however it’s often far from clear how those views – mostly expressed in his Metaphysics of Morals – can be justified with the theory expressed in his Groundwork for the Metaphysics of Morals. Many contemporary Kantians argue that Kant was simply wrong about the implications of his own theory for issues like women’s rights, animal rights, and many others.
Several things are worth noting.
It’s obviously more likely that a theory entails crazy things if its proponents use it to argue for crazy things. While not decisive, it’s certainly some evidence.
It's very easy to claim after the fact that a theory has implications different from what it was used to show at the time—especially if the theory is not well defined. However, if a theory is able to make correct moral “predictions” based on limited data, that's very good evidence.
Imagine if you had two ways of deciding the correct answer to a math problem—one of them used one formula and another used a different formula. However, both formulas involved lots of subjectivity—the numbers weren't strictly determined by the problem. Both of them involved looking at a data set and coming up with a hard-to-verify rough approximation, such that different observers would get different results based on different approximations. We then look at the track record of both of them. One of them is used to solve math problems that it takes us an extra 200 years to solve independently—and happens to be right. This extraordinary record of correctness happens repeatedly. The other one, however, gets results that are wrong—spectacularly so. It's off by orders of magnitude. However, two centuries later, now that everyone is in agreement, the proponents of equation two claim that people were just picking out bad numbers. Had they applied it correctly, theory two proponents claim, they would have gotten the right answer. They have some handwavey proof of this relying on a narrow range of numbers that no one chose at the time. I think it's safe to say this would be good evidence for equation 1. And yet, this is exactly the situation with morality. Morality is not a super precise formula that would lead to the same judgments for all people, so if a moral theory is able to independently get the right result over and over again, that's really good evidence for the theory.
But this is not even a very serious problem, compared to the main one, which has to do with what MacAskill means by “predictions”. If moral theories make predictions, they aren't about what people will consider morally acceptable in 200 or 300 years, they are about what actually is morally acceptable. So in order to evaluate the predictions, we need to first know what's morally right and wrong. How can we know that? How can we know that, for example, there's nothing wrong with gay marriage? There are two options: either it follows from utilitarianism, or from some other theory. If it follows from utilitarianism, then MacAskill's argument is circular. If it follows from another theory, then what's the point of justifying utilitarianism, given we already know that some non-utilitarian theory is correct? Moreover, MacAskill seems to believe that utilitarianism is generally incompatible with alternative theories. So if some non-utilitarian theory is correct, then utilitarianism cannot be correct.
In short, MacAskill is saying that utilitarianism is true because utilitarianism is true, or that utilitarianism is true because utilitarianism is false. Either way, his argument takes the biscuit.
This is badly confused about moral methodology. Morality doesn’t only work by deducing a normative ethic from first principles and then applying it to cases. Rather, we can employ other arguments establishing a normative conclusion. Totalist utilitarianism entails the repugnant conclusion, but many like Huemer have argued for it independently.
So utilitarianism said that slavery was wrong and homosexuality was permissible hundreds of years ago. Now, we can employ other arguments that don't rely on any single normative ethic to determine that slavery is in fact morally wrong and homosexuality is permissible. We can employ independent analysis to “double check” the claims made by the utilitarians. One would be hard pressed in the modern age to provide a rational defense of slavery or a criticism of homosexuality.
Additionally, assuming there's a non-trivial chance of moral progress, the probability of morality becoming more utilitarian over time, given utilitarianism, is higher than the probability of it doing so if utilitarianism were false.
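In Bayesian terms, that likelihood ratio is what does the work. Here is a minimal sketch; the specific probabilities are made up purely for illustration, since the argument only requires that the first likelihood exceed the second.

```python
# Illustrative Bayes update: if a utilitarian drift in moral views is more
# likely when utilitarianism is true than when it is false, observing that
# drift raises our credence in utilitarianism. All numbers are placeholders.
prior = 0.5                 # prior credence in utilitarianism
p_drift_if_true = 0.8       # P(moral views drift utilitarian | utilitarianism true)
p_drift_if_false = 0.3      # P(same drift | utilitarianism false)

posterior = (p_drift_if_true * prior) / (
    p_drift_if_true * prior + p_drift_if_false * (1 - prior)
)
print(round(posterior, 2))  # 0.73: the observed drift supports utilitarianism
```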
These points can easily be seen with an analogy. Darwin's theory made lots of predictions that were later confirmed. The process of confirming them didn't just involve plugging in the theory—it involved testing them. One could make a parallel statement about evolution to the one made by Herok.
But this is not even a very serious problem, compared to the main one, which has to do with what MacAskill means by “predictions”. If scientific theories make predictions, they aren't about what people will consider correct in 200 or 300 years, they are about what actually is scientifically correct. So in order to evaluate the predictions, we need to first know what's scientifically right and wrong. How can we know that? How can we know that, for example, there are transitional fossils? There are two options: either it follows from evolution by natural selection, or from some other theory. If it follows from evolution by natural selection, then MacAskill's argument is circular. If it follows from another theory, then what's the point of justifying evolution by natural selection, given we already know that some non-evolution by natural selection theory is correct? Moreover, MacAskill seems to believe that evolution by natural selection is generally incompatible with alternative theories. So if some non-evolution by natural selection theory is correct, then evolution by natural selection cannot be correct.
In short, MacAskill is saying that evolution by natural selection is true because evolution by natural selection is true, or that evolution by natural selection is true because evolution by natural selection is false. Either way, his argument takes the biscuit.
A Quick Account Of Why Temporal Discounting Makes No Sense
At least, intrinsically
A temporal discount rate involves holding that the further something is in the future, the less important it is. For example, holding that something which would kill 100 people gets 1% less bad for each year it is delayed. Discounting makes some sense on instrumental grounds. After all, given uncertainty, we can't be that confident about the far future, so the further in time we project, the less weight we should give to our projections.
However, intrinsic discounting makes no sense, as we'll see, despite economists' frequent fondness for discounting. Suppose your discount rate is only 1%—a very modest rate. Well, on this view, one person being killed in the year zero would be as bad as about 546,786,506 people being killed in the year 2022. This is clearly ludicrous.
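That figure is just the discount rate compounded over the intervening years; a minimal check:

```python
# With a 1% annual discount rate, a death in the year 2022 is weighted
# 1.01^-2022 as much as a death in the year 0, so one year-zero death is
# "worth" this many year-2022 deaths.
equivalent_deaths = 1.01 ** 2022
print(f"{equivalent_deaths:,.0f}")  # ≈ 546,786,506
```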
A Potent Objection To Desire Theory
One of the many reasons desire theory is untenable
The desire theory of well-being says that what makes a person well off is achieving their desires. Such a view has many powerful objections—many of which I spelled out here. However, I recently came across a paper that presents a powerful objection to desire theory—one that should be the nail in the coffin of desire theory. Spaid has a very simple thesis.
This dissertation argues that the desire satisfaction theory, arguably the dominant theory of well-being at present, fails to explain why depression is bad for a person. People with clinical depression desire almost nothing, but the few desires they do have are almost all satisfied. So it appears the theory must say these people are relatively well off. A number of possible responses on behalf of the theory are considered, and I argue that each response either fails outright, or requires modifications to the desire satisfaction theory which make the theory unattractive for other reasons.
It also had this amusing dedication
I dedicate this dissertation to my parents, without whom neither it nor I would exist.
One might object that an appeal to deeper desires can explain why this is not so. However, as Spade explains (p.36)
“The problem with this response is that empirical evidence indicates that, in many cases, depression eliminates even these deeper desires. Of the criteria listed by the DSMV for a diagnosis of a major depressive episode, one of two criteria that must be met is “markedly diminished interest or pleasure in all, or almost all, activities most of the day, nearly every day”32 This suggests that depression involves a general loss of interest in things, rather than the loss of merely superficial desires. Not only is the depressed person unmotivated to shower or go to work, but they are also unmotivated to spend time with friends and family and engage in leisure activities they once enjoyed.33.”
Spade adds (p.38)
“However, while it may be true that all depressed people continue to make evaluative judgments, there is evidence that, in some cases of depression, these judgments do not reflect the kind of deep desires which could explain why their life is not going well. Some depressed people say that nothing is worth doing, that there is no purpose or point in life, or that they feel empty.38 In an autobiographical account of his depression, author Andrew Solomon says that, in his depression, “...the meaninglessness of every enterprise and every emotion, the meaninglessness of life itself, becomes self-evident. The only feeling left in this loveless state is insignificance.”39 Computational neuroscientist Walter Pitts writes, “I have noticed in the last two or three years a growing tendency to a kind of melancholy apathy or depression. [Its] effect is to make the positive value seem to disappear from the world, so that nothing seems worth the effort of doing it, and whatever I do or what happens to me ceases to matter very greatly…”40 The evaluative judgments expressed in these claims appear to reflect either an absence of concerns altogether, or concerns of the wrong sort.”
Spade provides objections to a series of other ways of attempting to rescue the desire theory. He argues none of these are successful. Definitely worth reading if you’re interested in the subject of desire theory!
Utilitarianism Wins Outright Part 28: Responding To Objections
None of them succeed
Much to my surprise, shock, and horror, not everyone is a utilitarian. I've previously argued that the intuitions employed against utilitarianism backfire—see earlier posts in this series. However, people sometimes have other objections that are worth responding to.
One common objection to utilitarianism is that it is too demanding. Utilitarianism says that we should do whatever is best, whatever personal sacrifices are involved. However, many people argue that we can’t be obligated to endure sacrifices that are too terrible. This is a bad objection, amounting to little more than a complaint that it’s psychologically difficult to be perfectly moral.
First, utilitarianism is intended as a theory of right action, not as a theory of moral character. Virtually no humans always do the utility-maximizing thing; it would exact too great a psychological cost. Thus, it makes sense to have the standard for being a good person be well short of perfection. However, it is far less counterintuitive to suppose that it would be good to sacrifice oneself to save two others than it is to suppose that one is a bad person unless they sacrifice themselves to save two others. In fact, it seems that any plausible moral principle would say that it would be praiseworthy to sacrifice oneself to save two others. If a person sacrificed their life to protect the leg of another person, that act would be bad, even if noble, because they sacrificed a greater good for a lesser good. However, it's intuitive that the act of sacrificing oneself to save two others is a good act.
The most effective charities can save a life for only a few thousand dollars. If we find it noble to sacrifice one's life to save two others, we should surely find it noble to sacrifice a few thousand dollars to save another. The fact that there are many others who can be saved, and that utilitarianism prescribes that it’s good to donate most of one’s money doesn’t count against the basic calculus that the life of a person is worth more than a few thousand dollars.
Second, while it may seem counterintuitive that one should donate most of one's money to help others, this revulsion goes away when we consider it from the perspective of the victims. From the perspective of a person who is dying of malaria, it would seem absurd that a well-off westerner shouldn't give up a few thousand dollars to prevent their literal death. It is only because we don't see the beneficiaries that it seems too demanding. What is truly demanding is asking a child to die of malaria so that someone else doesn't have to donate.
If a privileged, wealthy aristocracy objected to a moral theory on the grounds that it requested they donate a small share of their luxury to prevent many children from dying, we wouldn't take that to be a very good objection to that moral theory. Yet the objection to utilitarianism is almost exactly the same—minus the wealthy aristocracy part. Why in the world would we expect the correct moral theory to demand we give up so little, when giving up a vacation could prevent a child from dying? Perhaps if we consulted those whose deaths were averted as a result of the foregone vacation or nicer car, utilitarianism would no longer seem so demanding.
Third, we have no a priori reason to expect ethics not to be demanding. The demandingness intuition seems to dissolve when we realize our tremendous opportunity to do good. The demandingness of ethics should scale with our ability to improve the world. Ethics should demand a lot from Superman, for example, because he has a tremendous ability to do good.
Fourth, the drowning child analogy from Singer (1972) can be employed against the demandingness objection. If, while wearing a two-thousand-dollar suit, we came across a drowning child, it wouldn't be too demanding to suggest we ruin our suit to save the child. Singer argues that this is analogous to failing to donate to prevent a child from dying.
One could object that the child being far away matters. However, distance is not morally relevant. If one could either save five people 100 miles away, or ten 100,000 miles away, they should surely save the ten. When a child is abducted and taken away, the moral badness of the situation doesn’t scale with how far away they get.
A variety of other objections can be raised to the drowning child analogy, many of which were addressed by Singer.
Another objection provided to utilitarianism is that it fails to respect the separateness of persons. This objection seems to be fundamentally confused. Utilitarianism obviously recognizes that persons are separate. The question is merely whether that fact has moral significance. Two things being different doesn't mean that we can't make comparisons between them. It is rational to endure five minutes of pain now to prevent thirty minutes of pain later. This does not fail to respect the “separateness of moments.”
We can make tradeoffs regarding separate things. My happiness is separate from the happiness of others, but we can obviously make tradeoffs. It would be wrong to save my life instead of the lives of hundreds, despite our separateness. The view that interpersonal tradeoffs in happiness are prima facie wrong runs afoul of Pareto optimality. Suppose we can take an action that gives person A five units of happiness and costs person B 2 units of happiness. If we take seriously the “separateness of persons,” we would perhaps not take that action. However, suppose additionally that we can take a second action that gives person B 5 units of happiness and costs person A 2 units of happiness. This act would similarly be wrong. Yet taken together, the two acts make everyone better off, so this view makes a combination of acts that leaves everyone better off morally wrong.
The believer in the separateness of persons has to provide a reason to assign normative significance to the separateness of persons in a way that undermines utilitarianism. Impartiality demands that we regard everyone as equal. This does not require that we lose sight of the fact that people are, in fact, different.
McCloskey (1973) presents two more objections to utilitarianism. The first is that utilitarianism denies the notion that ought implies can. McCloskey objects to the notion that if one takes an action, like bringing orphans to the beach, that was benevolent but had bad consequences, then they ought not to have done it, despite it being impossible to predict that the action would produce a bad outcome.
This is confused. On utilitarianism, what matters fundamentally is not whether or not people take moral actions. Rather, the aim of morality, conceived of broadly as governing the entire set of actions one will take, is to produce the best outcomes. Sometimes it may be right, for example, to put oneself in a situation in which one will take some wrong actions. It might be right to keep a promise, even if keeping that promise ends up having bad consequences.
Generally the term ought is used to denote blameworthiness. In this sense, ought clearly implies can. One isn’t blameworthy for failing to do things they couldn’t do. Similarly, the person who takes orphans to the beach isn’t blameworthy.
Utilitarianism says nothing about how we can use particular words. There is a strong consequentialist case for using the word ought to describe actions which are praiseworthy. Thus, semantically it makes little sense to say one ought to have done things that they couldn’t do or know about.
However, utilitarianism is concerned more fundamentally with what is important and worth aiming for. Thus, purely semantically, a better way of describing the case of the person who drives orphans to the beach would be along the lines of “it would be better if they hadn’t driven those orphans to the beach.” This is an utterly uncontroversial claim. When a utilitarian makes a claim that the person shouldn’t have driven the orphans to the beach, that is the sentiment they are expressing. This is only an apparent problem because of semantic ambiguity.
McCloskey’s second objection is that utilitarianism values the happiness and suffering of animals, such that it would say that if you could either save some large number of horses from burning to death or one human from burning to death, you ought to save the large number of horses.
As was shown in part two of chapter two, in the comparison of torture and dust specks, any plausible moral theory will hold that some large number of dust specks will be worse than one torture. If that is true, then unless one thinks that one dust speck is worse than a horse burning to death, or that no number of people undergoing brutal lethal torture can ever be as bad as one person burning to death, one has to accept that saving some sufficiently large number of horses from burning to death would be more important than saving one human from burning to death.
When McCloskey says (p.63)
“As Kant long ago pointed out, it is not morally reasonable to weigh the pleasure of animals (value) against the worth of a person as a person,”
it is his view that is morally unreasonable. If one thought that the pleasure of a human was categorically more important than anything that could happen to animals, then they should be willing to horrifically torture infinitely many animals to prevent one human death. This is absurd!
Several other problems arise for a view like McCloskey's. First, it's not at all clear what about humans makes them unfathomably more morally important than animals. It can't be intelligence, for there are some severely mentally disabled humans who are less intelligent than some animals. As many, such as Singer in Animal Liberation, have argued, no such distinction can establish a morally relevant difference between humans and animals of sufficient magnitude to justify not caring about animals.
Second, there are clear biases when it comes to our judgment of the moral status of animals. It is no surprise that human judgments will conclude that humans matter vastly more than animals. There are clear evolutionary reasons for such a prejudice. But unless that prejudice can withstand philosophical scrutiny, it should be discarded.
Third, consider a series of entities going back through evolutionary history, as humans slowly evolved from non-human animals. If, at each stage, the sentient creatures growing gradually more humanoid are not infinitely more important than the previous generation, then by transitivity humans would be logically barred from being infinitely more important than non-human animals.
Fourth, one clearly doesn’t gain moral value merely by virtue of being biologically human. If a person took a DNA test and was confirmed to be technically non-human, though still appeared human and was sentient, this would not rob them of their moral worth. Given that there are some humans who are mentally similar to non-human animals, unless merely being biologically human makes one morally relevant, non-human animals would have to have similar moral worth (intrinsically) to those mentally disabled humans.
A reasonable way to test if a particular entity has moral value is to imagine oneself being that entity and seeing if they would care about what would happen to that entity. This is the principle of empathy. If I slowly turned into a pig (assuming that transformation was able to preserve personal identity), I would care a great deal about what happened to me after I became the pig. I would undergo large sacrifices to avoid being burned alive while being a pig. If, however, I was turned into a non-sentient plant, I would not care what would happen to me after I became a plant. If we would care about a being were we in its shoes, but don’t care about the being currently, that is just bare prejudice, and is not justifiable philosophically.
Utilitarianism Wins Outright Part 29: Clearing Up Confusion
The pattern of counterintuitive implications of utilitarianism
As I’ve documented in great detail throughout the other 28 parts of this series and my response to Huemer, utilitarianism has many implications that seem unintuitive at first. This is the core of the objections to utilitarianism—at least, those that aren’t conceptually confused in some way1. This isn’t terribly surprising. Economists often remark about the unintuitiveness of economics—the truth is unlikely to be what we expect.
Of course, these implications don’t provide good evidence against utilitarianism, both because they’re eviscerated by prolonged reflection and they’re the type of unintuitiveness we’d expect on the hypothesis that utilitarianism is correct. But one can still ask whether there’s a common pattern to utilitarianism’s divergence from our intuitions—whether they all share something in common. And they do.
One thing that they have in common is that our non-utilitarian intuitions are false. This comment of mine is, however, not a shock if one has read any of my writings on the subject.
Yet there’s something else fundamental that they have in common that, once fully grasped, makes the “objections” to utilitarianism lose all of their force. When you can see the algorithm behind your non-utilitarian judgments, it becomes obvious that they’re off track in very similar ways. It’s easy to explain how they all go off track. This is not a single specific error, but a cluster of related cognitive errors.
Let’s begin with an example—the organ harvesting case. A doctor can kill a patient and harvest their organs to save five people. Should they? Our intuitions generally say no.
What’s going on in our brains—what’s the reason we oppose this? Well, we know that social factors and evolution dramatically shape our moral intuitions. So, if there’s some social factor that would result in strong pressure to hold the view that the doctor shouldn’t kill the person, it’s very obvious that this would affect our intuitions. Is there?
Well, of course. A society in which people went around killing other people for the greater good would be a much worse society. We have good reason to place strong prohibitions on murder, even murder committed for the allegedly greater good.
Additionally, it is a practical necessity that we accept, as a society, some doing/allowing distinction. Doing the maximally good thing all the time would be far too demanding, so, as a society, we treat there as being some fundamental distinction between doing and allowing. Society would collapse if we treated murder as being only a little bit bad, so it’s super important that we treat murder as very bad. But given that we can’t treat failing to do something unfathomably demanding as horrendous—equivalent to murder—we have to treat there as being some distinction between doing and allowing.
After this distinction is in place, our intuitions about organ harvesting are very obviously explainable. If killing is treated as unfathomably evil, while not saving isn’t, then killing to save will be seen as horrendous.
To see this, imagine if things were the other way. Imagine if we were living in a world in which every person kills one person per day, in an alternative multiverse segment, unless they fast during that day. Additionally, imagine that, in this world, each person saves dozens of people per day in an alternative multiverse segment, unless they take drastic action. In this world, it seems clear that failing to save would be seen as much worse than killing, given that saving is easy, but avoiding killing is very difficult. Additionally, imagine that these people saw those who they were saving, and felt empathy for them. Thus, not saving someone would provoke similar internal emotional reactions in that world as killing does in ours.
So what do we learn from this? Well, to state it maximally bluntly and concisely, many of our non-utilitarian intuitions are the results of social norms that we design to have good consequences, which we then take to be significant independently of their good consequences. These distinctions are never derivable from plausible first principles, never have clear delineations, and always result in ridiculous reductios. They are mere epiphenomena—an unnecessary byproduct of correct moral reasoning. We correctly see that society needs to enshrine rights as a legal concept, and then incorrectly feel an attachment to them as an intrinsic feature of morality.
When we’re taught moral norms as a child, we’re instructed with rigid norms like “don’t take other people’s things.” We try to reach reflective equilibrium with those intuitions, carefully reflecting until they form coherent networks of moral beliefs. Then, later in life, we take them as the moral truth, rather than derivative heuristics.
Let’s take another example: desert. Many people think others intrinsically deserve things. Well, there’s a clear social and evolutionary benefit to thinking that. Punishment effectively deters crime and prevents people from harming others, assuming they’re locked up. The callousness of means-ends reasoning, and the complexity of a moral calculus that might undermine itself if explicitly recognized, make this intuition very strong.
Unreliable emotional reactions play a major role in our moral intuitions—particularly when they’re non-consequentialist. Many of our moral intuitions don’t track what seem like bad states of affairs, but rather what makes a person seem like a bad person. Well, if we accept that a murderer is a bad person—which we surely need to—then it’s not at all surprising that we’d have the intuition that it’s bad to kill one to save five; after all, it turns you into a bad person. Good actions surely don’t make bad people!
But this is not the only thing behind our non-utilitarian intuitions. Many of them rely on a selective lack of empathy. It’s much harder to empathize with those who can’t talk—or who we don’t listen to. If you heard the screams of the children as they withered away from malaria, whose lives you could’ve saved by foregoing your vacation, it would seem much more intuitive that you should do so.
We know that humans have a specific moral circle—a limited range of entities that they care about. It was hard enough getting slave owners to include black people in the moral circle. Yet it’s much more difficult when the people are far away and can’t talk.
Some humans are mentally very similar to some non-human animals. There are severe mental disabilities that make people roughly as capable as cows, pigs, or chickens. And yet we all feel like Bree matters, while it’s much harder to empathize in the same way with a cow or a pig. The reason for this is simple: pigs can’t talk or advocate for themselves, and they don’t look like us. If pigs looked like people, we almost certainly wouldn’t eat bacon.
It’s hard to empathize with future people because they can’t talk to us. If we could talk with a merely possible person who would describe their life, we’d care much more about their interests. That’s why Bostrom’s letter from utopia was so compelling.
The ideal morality wouldn’t use our faulty system of empathy. Instead, when evaluating the importance of an entity, we’d ask whether we’d care about that entity’s interests if we were going to become that entity. If one would slowly turn into a pig, they’d care what happened to them after they became a pig. The same is not true of plants.
If you’re skeptical about this theory of empathy, consider the following question: which entities are part of our moral circle? Well, everyone who is in our moral circle either can speak up for themselves or looks like someone who can advocate for themselves2. Isn’t that funny? What are the odds that the beings that ultimately matter would just happen to mostly look like us—or at least be beings that can reason with us?
Up until this point, the moral circle has only expanded to those who have advocates. Yet that obviously is a morally arbitrary factor. If chickens could speak up for themselves, we almost certainly wouldn’t eat them.
There are a few small exceptions here. One of them relates to retributivism. We think that people who commit brutal crimes deserve to suffer, even if they can advocate for themselves. However, in this case, we have a blinding hatred for these people. It’s unsurprising that our moral circle wouldn’t include those who we hate with a passion. Additionally, this is easily explained by the heuristics account presented before.
But what about cases like torture vs dust specks, or the repugnant conclusion? In these cases, we have empathy for the people harmed. But nonetheless, our intuitions go off-track. What’s going on?
Well, for one, there’s an obvious reason that we have a social norm that involves treating torture as far worse than shutting off a sports game. Society would be worse if a careless utilities worker was treated worse than Jeffrey Dahmer.
Yet another important error that we make involves simple mathematical errors in reasoning. As Huemer points out
When we try to imagine a billion years, our mental state is scarcely different, if at all, from what we have when we try to imagine a million years. If promised a billion years of some pleasure, most of us would react with little, if any, more enthusiasm than we would upon being promised a million years of the same pleasure. Intellectually, we know that one is a thousand times more pleasure than the other, but our emotions and felt desires will not reflect this.
He later notes
In many cases, we make intuitive errors when it comes to compounding very small quantities. In one study, psychologists found that people express greater willingness to use seatbelts when the lifetime risk of being injured in a traffic accident is reported to them, rather than the risk per trip (Slovic, Fischhoff, and Lichtenstein 1978). This suggests that, when the very small risk per trip is presented, people fail to appreciate how large the risk becomes when compounded over a lifetime. They may see the risk per trip as ‘negligible’, and so they neglect it, forgetting that a ‘negligible’ risk can be large when compounded many times.
For an especially dramatic illustration of the hazards of trusting quantitative intuitions, imagine that there is a very large, very thin piece of paper, one thousandth of an inch thick. The paper is folded in half, making it two thousandths of an inch thick. Then it is folded in half again, making it four thousandths of an inch thick. And so on. The folding continues until the paper has been folded in half fifty times. About how thick would the resulting paper be? Most people will estimate that the answer is something less than a hundred feet. The actual answer is about 18 million miles.15
For a case closer to our present concern, consider the common intuition that a single death is worse than any number of mild headaches. If this view is correct, it seems that a single death must also be worse than any amount of inconvenience. As Norcross observes, this suggests that we should greatly lower the national speed limit, since doing so would save some number of lives, with (only) a great cost in convenience.16 Yet few support drastically lowering the speed limit. Indeed, one could imagine a great many changes in our society that would save at least one life at some cost in convenience, entertainment, or other similarly ‘minor’ values. The result of implementing all of these changes would be a society that few if any would want to live in, in which nearly all of life’s pleasures had been drained.
In all of these cases, we find a tendency to underestimate the effect of compounding a small quantity. Of particular interest is our failure to appreciate how a very small value, when compounded many times, can become a great value. The thought that no amount of headache-relief would be worth a human life is an extreme instance of this mistake—as is the thought that no number of low-utility lives would be worth as much as a million high-utility lives.
Thus, our intuitions about very big numbers are incredibly unreliable. When it comes to the utility monster, for example, we literally can’t imagine what it’s like to be a utility monster. When comparing a 1 in 1 billion chance of 100 quadrillion utility to a certainty of 10 utility, it’s obvious why we’d prefer the 10 utility. Given our inability to do precise intuitive calculations about such numbers, we just round the 1 in a billion chance to zero.
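Both the folding example quoted above and the gamble just mentioned are easy to check explicitly; a minimal sketch of the arithmetic:

```python
# Huemer's folded paper: a 0.001-inch sheet folded in half 50 times.
thickness_miles = 0.001 * 2 ** 50 / 12 / 5280    # 12 inches per foot, 5,280 feet per mile
print(f"{thickness_miles:,.0f} miles")           # about 17,769,885 -- roughly 18 million miles

# The utility monster gamble: a 1-in-a-billion chance of 100 quadrillion utility vs. a sure 10.
expected = (1 / 1_000_000_000) * 100_000_000_000_000_000
print(expected)                                  # 100000000.0 -- ten million times the certain 10
```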
Similarly, as Yetter-Chappell points out, there’s lots of status quo bias. A major reason why the repugnant conclusion seems repugnant, a world where the utility monster eats everyone seems wrong, and it seems wrong to push the guy off the bridge in the trolley problem is because that would deviate from the status quo. If we favor the status quo, then it’s no surprise that utilitarianism would go against our intuitions about favoring the status quo. Our aversion to loss also explains why we want to keep things similar to how they are currently.
The repugnant conclusion seems pretty unintuitive when we’re comparing 10 billion people with awesome lives to 100^100 people with lives barely worth living. However, if we compare 1 person with an awesome life to 100,000 people with lives barely worth living, the one person being more important seems much less intuitive. This shows that our judgments are dramatically shaped by status quo bias—when the number of people will be similar to the current number, it seems much more unintuitive.
Another related bias is the egocentric bias. Much of morality comes from empathy—imagining ourselves in the other person’s position. However, it’s nearly impossible to imagine being in the position of many people. We may have a bias against caring about harms that befall people who do something immensely stupid, because we think we’d never be that stupid. Similarly, we may be biased against future humans because it’s hard to imagine being a person far in the future, and the same goes for non-human animals. People are better at remembering things that may affect them in the future.
When we think about the repugnant conclusion, given the difficulty of imagining not existing, it’s very easy to think which world we’d rather exist in. Our quality of life is higher in the world with 10 billion people, so it’s unsurprising that we’d rather exist in that world, as Huemer notes.
Similarly, people care much more about preventing harms from foes with faces than foes without faces. So in the organ harvesting case, for example, when the foes have a face (namely, you) it’s entirely clear why the one murder begins to seem worse than the five deaths.
This is clearly a non-exhaustive list of cases in which our non-utilitarian judgments go awry. But once you feel the way the algorithm goes awry from the inside, the judgments fade.
Utilitarianism Wins Outright Part 30: The Trolley Problem
Flip the switch!
The trolley problem involves a train going down a track that will hit five people. However, if you flip the switch, it will be redirected onto a side track where it will hit only one person. Should you flip the switch? Yes! I shall present lots of reasons in this article.
Objection 1: The No Good Account Objection
According to this objection, there is no adequate account of why flipping the switch is objectionable that wouldn’t apply to many other cases.
I will defend the view that one ought to flip the switch in the trolley problem. The utilitarian defense of this is somewhat straightforward: flipping the switch saves the most lives, and thus makes the world maximally better. However, there are a series of principles to which one could appeal to argue against flipping the switch. In this section, I will argue against them.
The first view shall be referred to as view A.
According to view A, it is impermissible to flip the switch, because a person ought not take actions that directly harm another without their consent.
However, this view seems wrong for several reasons.
First, an entailment of this view is that one oughtn’t steal a dime in order to save the world, which seems intuitively implausible.
Second, this view is insufficient to demonstrate that one oughtn’t flip the switch unless one has a view of causality that excludes negative causality, i.e., a view which claims that harm caused by inaction is not a direct harm caused by action. Given that not flipping the switch is itself an action, one needs a view of direct causation that excludes failing to take an action. However, such a view is suspect for two reasons.
First, it’s not clear that there is such a view that is coherent. One could naively claim that a person cannot be held accountable for inaction, yet all actions can be expressed in terms of not taking actions. Murder could be expressed as merely not taking any action in the set of all actions that do not include murder, in which case one could argue that murder is permissible, given that punishing murder is merely punishing a person’s failure to take actions in the aforementioned non-murder set.
One could take a view called view A1 which states that
A person’s action is the direct cause of a harm if, had the person been unconscious during that action, the harm would not have occurred.
This view would allow a person to argue that one is the direct cause of harm in the trolley problem only if they flip the switch. However, this view has quite implausible implications.
It would say that if a person is driving and sees a child in front of them in the road, yet fails to stop their car and runs over the child, they would not be at fault, because failing to stop is merely inaction, not action. It would also imply that if a heavy shed was about to fall on five people, yet could be prevented by the press of a button, a person who failed to press the button would not be at fault. Thirdly, it would imply that people are almost never the cause of car-related mishaps, because things would surely have gone worse if the person had been unconscious while driving. Fourthly, it would entail that a person would not be at fault for failing to intervene during a trust fall exercise over metal spikes. Finally, it would entail that if a person was murdering another by using a machine, started by the press of a button, that slowly drives a metal spike into them until it is stopped, they would only be at fault for initially pressing the button, yet they would not be acting wrongly by allowing the machine to continue its ghastly act.
This view is also illogical. Inaction is merely not taking an action. It seems implausible that the moral status of an action would be determined by which harms would have been avoided had the agent been rendered unconscious.
Another view could claim that a person causes an event if, had that person not existed, that event wouldn’t have occurred. However, this is still subject to the aforementioned second objection. It is also subject to several other objections.
First, it would imply that an actress would be at fault if a person, after seeing her act in a movie, became enraged at the shallowness of their own life and, in that rage, committed a murder. It would also imply that Hitler’s mother’s classmates were almost all responsible for the Holocaust, assuming that each of them changed her life enough to make her baby sufficiently different to have prevented the Holocaust. Finally, it would imply that Jeffrey Dahmer would act wrongly by donating to charity, because his non-existence would avoid harm.
To resolve these issues, one could add in the notion of predictability. They could argue that a person is at fault if their non-existence could be predictably expected to prevent harm. This would still imply that if a heavy shed was about to fall on five people, yet could be prevented by the press of a button, a person who failed to press the button would not be at fault. Furthermore, it’s subject to two unintuitive modified trolley problems.
First, it implies that if a train were headed towards five people, and one could flip a switch such that it would run over zero people, they would not be at fault for not flipping the switch.
Second, there is a fairly unintuitive second modified trolley problem. Suppose one is presented with the initial trolley problem, yet trips and accidentally flips the switch. Should they flip the switch back to its initial state, such that five people die rather than the one? This view would imply that they should. However, this implication seems unintuitive; it seems not to matter whether the trolley’s position was caused initially by a person’s existence.
If one accepts the view that they should flip the switch back to its original position, they’d have to accept that in most situations, if a person’s non-existence would likely result in the same type of harm, they are not at fault. For example, if it were the case that most doctors would harvest the organs of one to save five, then a particular doctor would not be blameworthy for doing so, because if they didn’t exist, another doctor would harvest the organs anyway. It would also say that if the majority of people would flip the switch, and that, if the one person weren’t on the railroad, another would be, then one ought to flip the switch.
It would also imply that in a world where, as a child, I accidentally brought about the demise of Jeffrey Dahmer, it would be permissible to kill all the people whom Dahmer would have otherwise killed, assuming I could have knowledge that he would have gone on to murder many people and of whom he was going to murder.
Thus, it could be revised to the following:
A2: A person’s action is responsible for a harm if there is at least one person who is harmed in the actual world but who would not have been harmed in a world where the agent ceased to exist the instant before taking the action.
For example, on this view, Jim murdering a child would be immoral, because there is harm to at least one person (i.e., the child) which would have been prevented by Jim’s non-existence: had Jim stopped existing the instant before he took the action, the child would not have suffered.
Yet this view also seems implausible.
Firstly, it sacrifices a good deal of simplicity and elegance. The complexity and ad hoc nature of the principle gives us reason to distrust it relative to a simpler principle.
Secondly, it would entail that if person A came across person B murdering person C by using a machine to drive a metal spike slowly into them, a machine that keeps driving the spike forward until a button is pressed, person A would not be at fault for failing to press the button.
Thirdly, it would still entail the permissibility of person B failing to press the button, given that their non-existence the moment before would result in the same outcome.
Fourth, it would say that if a person saves someone’s life while simultaneously stabbing them, they haven’t violated anyone’s rights, because their non-existence the moment before would make the person stabbed worse off.
Therefore, we have decisive reason to reject this view.
On the other hand, utilitarianism provides a desirable answer to the trolley problem. One should flip the switch because it would maximize happiness. This is thus evidence for utilitarianism.
Objection 2: The Hope Objection
This objection is largely a repackaging of an objection given earlier in this series. It seems plausible that a perfectly benevolent third party observer should hope for the switch to flip naturally, of its own accord. If a benevolent third party observer should hope for X to happen, it seems plausible that it would be moral to do X. After all, they would only hope for it if it’s worth doing.
Objection 3: The Better World Objection
This one is closely related to the hope objection. If the switch flipped naturally of its own accord, the world would be better, with 1 person dying rather than 5. Thus, if we accept that
A. Giving a perfectly moral person control over whether or not something happens can’t make the world worse.
B. If a perfectly moral person wouldn’t decide to flip the switch, giving them control over whether or not something (i.e., the flipping of the switch) happens makes the world worse.
Then we’d have to accept
A perfectly moral person would flip the switch.
Then, if we accept
A perfectly moral person would only take good actions
We’d have to accept
Flipping the switch is a good action.
The advocate of not flipping the switch might object to A. However, this is a significant price to pay, as A is very plausible. The notion that giving more options to those who will always choose correctly won’t make things worse is very plausible. Indeed, it’s supported through the following argument.
All of the objections provided against rights in part 1 could also apply here, so there are lots of other objections; it would, however, be tedious and redundant to repeat all of them.
Objection 4: The Blindfolding Objection
Suppose all of the people in the trolley problem were blindfolded, and thus did not know which track they were on. In this scenario, they would all rationally hope for the switch to be flipped. It would mean they’d have an 83.3% chance of survival, rather than a 16.7% chance. However, this case seems to be morally equivalent to the trolley problem case.
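The percentages follow from each blindfolded person being equally likely to occupy any of the six positions; a quick check:

```python
# Six people: five on the main track, one on the side track.
# Behind the blindfold, each position is equally likely to be yours.
p_survive_if_flipped = 5 / 6       # only the person on the side track dies
p_survive_if_not_flipped = 1 / 6   # the five on the main track die
print(f"{p_survive_if_flipped:.1%} vs {p_survive_if_not_flipped:.1%}")   # 83.3% vs 16.7%
```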
If we accept
Morality describes what we’d do if we were fully rational and impartial.
This is plausible; it seems that all accusations of moral failing relate to people either arbitrarily prioritizing certain people or being incorrect about the things that matter. How could one be committing a moral failing if they care about the right set of entities and the right things for that set of entities?
Making the people in the trolley problem rational and blindfolded makes them impartial and rational.
We can stipulate that the people are rational in this case. Then we’d have to accept
Thus, the moral action would be that which would be consented to by the rational blindfolded people in the trolley problem.
Additionally, the following seems very plausible
The rational blindfolded people in the trolley problem would consent to flipping the switch.
We’d have to accept
Flipping the switch in the trolley problem is the moral action.
This seems plausible. A similar thing can be done for the bridge case involving pushing people off of a bridge to save 5 other people. Our judgment about the wrongness of flipping the switch evaporates when we consider it from the perspective of rational affected parties.
Objection 5: The Flip Objection
Suppose we modify the trolley problem slightly. The trolley will currently hit people A, B, C, D, and E. However, you can flip a switch which will cause the trolley to move onto a different track, hitting no one, but a different trolley will come out of nowhere and hit people A and B. This seems obviously good: it would save three people and be worse for no one. However, now you’ve created a new trolley which is being launched at two people. It seems like it would be better if this new trolley, which is being launched at two people, were instead launched at only a single different person, person F. Thus, if we accept that you should make the trolley be replaced by the other trolley, and that you should have the other trolley go towards a different person, then the overall state of affairs caused by these actions, each of which should be taken, is identical to that of the trolley problem.
Objection 6: Status Quo Bias Objection
As Chappell and Meisner (2022) argue, opposition to utilitarian tradeoffs is, in many cases, rooted in status quo bias. After all, if the switch were already flipped, no one would support flipping it back. Thus, it’s only because people want to maintain the status quo that they oppose flipping the switch. If the status quo were different, they wouldn’t support making changes. Thus, there’s a plausible debunking account of anti-trolley-flipping intuitions.
Objection 7: Billion Objection
Suppose we start with a trolley that will kill 5 billion people. It can be redirected to kill only a billion people. It seems intuitive that it should be redirected. Thus, our opposition to such things stems from higher order heuristics. When the stakes are sufficiently large, the intuition begins to flip.
Perhaps one doesn’t share the intuition. Well, if they don’t, this might be a fundamental divergence of intuitions. However, it seems hard to imagine that we’d really prefer the deaths of most people to the deaths of fewer than one seventh of people.
Utilitarianism Wins Outright Part 31: Stay In Omelas
A note on the story by Ursula Le Guin
"The Ones Who Walk Away From Omelas," is a short story by Ursula Le Guin which tells of a society called Omelas where everything goes well. Everything is nice; Le Guin spends pages describing just how nice everything is. However, there is a catch—in Omelas the idyllic society relies on the torture of a small child, who must not be helped in any ways for the society to function.
Whether Omelas was intended as a reductio to utilitarianism is subject to significant debate—see this blog post for example. However, regardless of whether it’s intended as a reductio to utilitarianism, it has been provided as an alleged counterexample, and it is thus worth addressing.
In the original short story, Le Guin doesn’t talk about destroying Omelas; instead, she talks about leaving Omelas, and based on the phrasing, seems to endorse walking away from Omelas. I’ve always found this quite baffling. After all, leaving Omelas does not benefit the child in the slightest. It merely causes people to be worse off for no reason. This view isn’t defensible.
However, the more common view that’s touted as a reductio to utilitarianism is that utilitarianism would deny the (allegedly) intuitive view that one should destroy Omelas, if given the opportunity. I’ve also found this view baffling, but perhaps far less so than the original “one should walk away” view.
Several things are worth noting. For one, even on plausible moderate threshold views like those of Huemer, one shouldn’t destroy Omelas. Given that the utility produced by Omelas is so enormous that it outweighs the harm of the rights violation involved in torturing one child, a moderate deontologist would have to hold the view that one shouldn’t destroy Omelas—or indeed, even the stronger view that one should create Omelas.
Secondly, our society currently is far worse than Omelas. There are lots of children currently being tortured in basements. Thus, if Omelas currently shouldn’t exist, neither should our society as it exists right now.
Third, one would choose to be in Omelas from behind the veil of ignorance. The expected value of a 1 in (some big number) chance of being tortured, combined with a (some big number minus 1) in (some big number) chance of having a great life, is clearly positive.
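The arithmetic behind this is simple; here is a minimal sketch in which the population size and the welfare levels are all made-up numbers of mine, used only to show the shape of the calculation:

```python
# Veil-of-ignorance expected value of being born into Omelas, with stipulated numbers.
N = 1_000_000               # population of Omelas (made up)
u_great_life = 100          # welfare of an ordinary citizen (made up)
u_tortured_child = -10_000  # welfare of the tortured child (made up)

expected_welfare = (1 / N) * u_tortured_child + ((N - 1) / N) * u_great_life
print(expected_welfare)     # ~99.99 -- overwhelmingly positive for any large N
```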
Fourth, we can arrive at the conclusion that Omelas is good. As I’ve already argued, there is some number of dust specks that is worse than a single torture. If that’s true, then to hold that Omelas is a bad society, one would have to hold that a utopia where enough people get dust specks in their eyes is bad overall, because of the dust specks.
Fifth, if we accept
A) A perfectly rational, omniscient, benevolent third party observer would, if choosing between two worlds, pick the better one.
B) A world where Omelas is caused to exist is better than a world where it isn’t.
C) One should take actions that a perfectly rational, omniscient, benevolent third party observer would pick
Then we’d have to accept
D) One should cause Omelas to exist.
Each premise is plausible. A is almost true by definition; what would it mean to be a better world if a perfectly moral, all-knowing third party would prefer the worse world? B is also plausible; nearly all consequentialists would agree that Omelas is good, and B only requires consideration of the consequences. C is also very plausible—it would be bizarre to hold that a perfect being would hope for you to act wrongly. If this were true, then putting perfect beings in charge of more things could make things worse.
Our anti Omelas intuitions can be explained away. For one, it’s very easy to have scope neglect, ignoring the benefits to truly vast numbers of people. All of the biases that I describe here and here also apply. It’s very difficult to think from the perspective of a society, particularly when an author has tried in maximally gory detail to describe just how bad life is for one person. As Huemer notes, humans are notoriously terrible at adding up vast numbers of small benefits.
On top of this, I just don’t think Omelas is that repugnant. If people appreciate that this society is much better than ours and has a minuscule fraction of the torture in our society (!), it begins to seem much more intuitive. It’s only Le Guin’s good writing that disguises the weakness of the intuition. To hold that Omelas is bad, one has to hold that our society would be sufficiently bad that it shouldn’t exist, even if we got rid of every single problem that exists in our society today, with the exception of the torture of one child. That view is the radically unintuitive one.
So my friends, stay in Omelas. Do not let Le Guin’s fearmongering propaganda scare you away! Don’t be the ones who walk away from Omelas.
Utilitarianism Wins Outright Part 32: Be An Eternal Oyster
Common advice
Introduction
"You are a soul in heaven waiting to be allocated a life on Earth. It is late Friday afternoon, and you watch anxiously as the supply of available lives dwindles. When your turn comes, the angel in charge offers you a choice between two lives, that of the composer Joseph Haydn and that of an oyster. Besides composing some wonderful music and influencing the evolution of the symphony, Haydn will meet with success and honour in his own lifetime, be cheerful and popular, travel and gain much enjoyment from field sports. The oyster's life is far less exciting. Though this is rather a sophisticated oyster, its life will consist only of mild sensual pleasure, rather like that experienced by humans when floating very drunk in a warm bath. When you request the life of Haydn, the angel sighs, ‘I'll never get rid of this oyster life. It's been hanging around for ages. Look, I'll offer you a special deal. Haydn will die at the age of seventy-seven. But I'll make the oyster life as long as you like...’"
Roger Crisp (Mill on Utilitarianism, 1997)
Those who are wrong often take the argument described above to be a good rejoinder to utilitarianism—more specifically, to hedonism. After all, who would want to be an oyster rather than Joseph Haydn??
A guy who looks happier than an oyster.
An oyster who looks just as happy as an oyster1.
However, rational analysis leads us inevitably to the conclusion that it’s better to be the oyster by orders of magnitude. The case for being an oyster is as overwhelming as any argument in philosophy.
Huemer Speaks Truth
This argument is a repackaging of the argument that Huemer made here. Suppose we accept the following three principles.
1 If you make every day of a person’s life better and then add new days of their life which are good at the end of their life, you make the person’s life better. This is obvious.
2 If one life has a million times higher average utility per day than another, a million times higher total utility, and more equal distribution of utility, it is a better life. This is also obvious.
3 If A>B and B>C then A>C. I’ve defended transitivity here, while insulting those who criticize transitivity.
If these are all true, then it’s better to be the oyster than Haydn. Why is this? Well, suppose Haydn has 100,000 utility per day and the oyster has .1 utility per day. Well, Haydn’s life would be better if he had 100,010 utility per day for 77 years and then .00000000000000000000000000000000000001 utility per day for another 10000^10000^10000 days, by the first axiom. However, this life has a million times lower average utility, a million times lower total utility, and a less equal distribution of utility than the oyster’s life, meaning it’s worse by the second axiom. Transitivity means that, in combination with these, the oyster life is better.
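To see the shape of this comparison concretely, here is a scaled-down numerical sketch; the padding length is a stand-in I have chosen, since the number in the text is far too large to compute with directly, and nothing in the argument turns on its exact size:

```python
# Scaled-down version of the proof: pad an improved Haydn life with a huge number of
# barely-good days, then compare it to the eternal oyster.
haydn_days = 77 * 365
padding_days = 1e60          # stand-in for the vastly larger number in the text
padded_total = 100_010 * haydn_days + 1e-38 * padding_days
padded_average = padded_total / (haydn_days + padding_days)

oyster_average = 0.1         # utility per day, forever
print(padded_average < oyster_average / 1_000_000)   # True: over a million times lower average
# The oyster's total utility grows without bound, so it also eventually exceeds any
# finite padded total, satisfying the second axiom's total-utility condition.
```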
One2 might object, claiming that, even if the oyster is better off than Haydn in terms of utility, Haydn has a better life for non utility related reasons, because he has a beautiful wife, lots of friends, and deep satisfying relationships.
His beautiful wife
However, the relevant insight which makes this response unsuccessful is that the Oyster is not merely better off with respect to utility—it is infinitely better off with respect to utility. If one adopts any threshold at which utility can outweigh other things, then they’d have to accept that the utility is so great in this case that it outweighs any other non utility considerations.
By the proof above, being an oyster forever is better than having 100^100^100 utility per year for 100^100^100 years. However, having 100^100^100 utility per year for 100^100^100 years would be worth having 100^100 years of torment, with 100^100 units of torment per year, such that the combination of those two would be good overall. Thus, by transitivity, being an oyster forever is sufficiently good that it would outweigh the badness of 100^100 units of torment per year for 100^100 years. However, no other non utility based consideration in Haydn’s life would be worth experiencing 100^100 units of torment per year for 100^100 years—this is, after all, considerably more torment than has existed throughout all of human history. Thus, by transitivity, the oyster’s utility dwarfs all other non-utility based good things in Haydn’s life.
This conclusion, like many supporting utilitarianism, has been derivable from some very basic, intuitive axioms. Surely the axioms appealed to above, particularly when combined with the considerations that will be presented later in this article, will outweigh in strength the intuition appealed to by Crisp.
Torture vs Dust Specks Considerations Apply
Given that I have already written an excellent article3 about torture vs dust specks, the considerations I shall now present are not especially new. They were present, albeit slightly modified, when describing the case of torture vs dust specks.
Friedman’s Insight
David Friedman says the following
“Economists are often accused of believing that everything—health, happiness, life itself—can be measured in money. What we actually believe is even odder. We believe that everything can be measured in anything. My life is much more valuable than an ice cream cone, just as a mountain is much taller than a grain of sand, but life and ice cream, like mountain and sand grain, are measured on the same scale. This seems plausible if we are considering different consumption goods: cars, bicycles, microwave ovens. But how can a human life, embodied in access to a kidney dialysis machine or the chance to have an essential heart operation, be weighed on the same scale as the pleasure of eating a candy bar or watching a television program? The answer is that value, at least as economists use the term, is observed in choice. If we look at how real people behave with regard to their own lives, we find that they make trade-offs between life and quite minor values. Many smoke even though they believe that smoking reduces life expectancy. I am willing to accept a (very slightly) increased chance of a heart attack in exchange for a chocolate sundae.”
This basic principle, that good things are commensurable, that there are no goods that take on a different qualitative texture beyond the reach of any number of more minor goods, is a necessary feature of goodness. In no other domain do we think that no number of minor instances of some phenomenon can add up to have a greater impact than an upscaled version. Whales are large—but their size can be surpassed by vast numbers of amoebas. Fires are hot, but their heat can be surpassed by vast numbers of ice cubes. Arsenic is far more unhealthy than ice cream, but the detrimental health effects of everyone eating 30 billion gallons of ice cream would surpass those of one person eating an iota of arsenic. Monkeys typing out Shakespeare is far less likely than flipping a heads, but there is some number of coin flips all landing heads which is conjunctively less likely than monkeys typing out Shakespeare.
If you think that oyster pleasure is a little bit good and Haydn’s life is very good, then if we multiply the goodness of oyster pleasure by a large enough number, it will surpass the goodness of Haydn’s life. There must be some number of days of oyster pleasure that can outweigh the goodness of Haydn’s life.
Diminishment Is Decisive
Suppose that every day Haydn has 100,000 utility and the oyster has .1, as was stated above. Presumably living for 77 years as Haydn currently would be less good than living for 87 years with utility of 95,000, which would be less good than 150 years with utility of 90,000, which would be less good than 300 years with utility of 85,000, which would be less good than 700 years with utility of 80,000…4 which is less good than one person having utility of .1 for 100000000000000000000000000000 years, which is clearly inferior to the oyster. By transitivity and this analysis, the oyster’s life is better.
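Read as a claim about total utility, each step in this chain is indeed an improvement; a rough tally (using 365-day years, purely for illustration):

```python
# Total utility at each step of the diminishment chain (years, utility per day).
steps = [(77, 100_000), (87, 95_000), (150, 90_000), (300, 85_000), (700, 80_000),
         (10 ** 29, 0.1)]
for years, per_day in steps:
    print(f"{years} years at {per_day}/day -> total {years * 365 * per_day:,.0f}")
# Each step's total is higher than the last, and the eternal oyster, at 0.1 per day forever,
# eventually surpasses even the final finite total.
```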
Forever is a long time
It’s hard to get a grip on just how long forever is. The oyster will, over the course of its infinite existence, experience infinitely more pleasure than has been experienced previously throughout all of human history. Think about just how much better your life is currently than it would be if you only lived to one year of age. Well, the oyster’s life’s vastness surpasses yours far more than yours surpasses that of those with illnesses that cause their life to end at one year of age.
When you realize that the oyster will live long enough to observe5 the death of every star in the universe, the slow decay of black holes brought about by Hawking radiation, the end of the universe, and still, after all that, the oyster will be chugging away, continuing to have a good time, it becomes much more intuitive that the oyster is far, far better off than Haydn. When one realizes that the oyster will experience more “mild, sensual pleasure” by itself than all humans ever, the conclusion that the oyster is better off than Haydn isn’t just true—it’s obviously true. Denying the superiority of the oyster life should be the reductio!!
We Suck At Reasoning About Such Cases
Humans are terrible at shutting up and multiplying. We have no intuitive grasp of the difference between a million and a billion, or between a billion and infinity. I elaborate much more on this in part zero of my article here. As Huemer notes
2.1 The egoistic bias When comparing worlds A and Z, we may find ourselves imagining what it would be like to live in each, and asking ourselves which we would prefer.13 Even if we consciously realise that this is not the relevant question, our intuitive evaluation may still be influenced by our preferences. We would prefer a world in which we are ecstatically happy to one in which we are barely content. Thus, we tend to evaluate A more positively than Z. But the fact that we would prefer to occupy world A hardly shows that A is better
This applies to this case because it’s hard to imagine being an oyster, so we may prefer the world that we can imagine being part of. This influences our judgments in ways that are not conscious, but it affects our judgments nonetheless—proof that this happens can be seen here, here, and here.
Huemer continues
2.2 The large numbers bias As Broome (2004, pp. 57–9) observes, we should be wary of intuitions whose reliability turns on our appreciating large numbers. This is because, beyond a certain magnitude, all large quantities strike our imagination much the same. A popular joke illustrates this: An astronomer giving a public lecture mentions that the sun will burn out in five billion years. An audience member becomes extremely agitated at the news. The lecturer tries to reassure him: ‘No need to worry, it will not happen for another five billion years.’ The audience member breathes a sigh of relief, explaining, ‘Oh, five billion years. I thought you said five million years!’ When we try to imagine a billion years, our mental state is scarcely different, if at all, from what we have when we try to imagine a million years. If promised a billion years of some pleasure, most of us would react with little, if any, more enthusiasm than we would upon being promised a million years of the same pleasure. Intellectually, we know that one is a thousand times more pleasure than the other, but our emotions and felt desires will not reflect this.
Later he says
2.3 Compounding small numbers In many cases, we make intuitive errors when it comes to compounding very small quantities. In one study, psychologists found that people express greater willingness to use seatbelts when the lifetime risk of being injured in a traffic accident is reported to them, rather than the risk per trip (Slovic, Fischhoff, and Lichtenstein 1978). This suggests that, when the very small risk per trip is presented, people fail to appreciate how large the risk becomes when compounded over a lifetime. They may see the risk per trip as ‘negligible’, and so they neglect it, forgetting that a ‘negligible’ risk can be large when compounded many times. For an especially dramatic illustration of the hazards of trusting quantitative intuitions, imagine that there is a very large, very thin piece of paper, one thousandth of an inch thick. The paper is folded in half, making it two thousandths of an inch thick. Then it is folded in half again, making it four thousandths of an inch thick. And so on. The folding continues until the paper has been folded in half fifty times. About how thick would the resulting paper be? Most people will estimate that the answer is something less than a hundred feet. The actual answer is about 18 million miles.15 For a case closer to our present concern, consider the common intuition that a single death is worse than any number of mild headaches. If this view is correct, it seems that a single death must also be worse than any amount of inconvenience. As Norcross observes, this suggests that we should greatly lower the national speed limit, since doing so would save some number of lives, with (only) a great cost in convenience.16 Yet few support drastically lowering the speed limit. Indeed, one could imagine a great many changes in our society that would save at least one life at some cost in convenience, entertainment, or other similarly ‘minor’ values. The result of implementing all of these changes would be a society that few if any would want to live in, in which nearly all of life’s pleasures had been drained. In all of these cases, we find a tendency to underestimate the effect of compounding a small quantity. Of particular interest is our failure to appreciate how a very small value, when compounded many times, can become a great value. The thought that no amount of headache-relief would be worth a human life is an extreme instance of this mistake—as is the thought that no number of low-utility lives would be worth as much as a million high-utility lives.
Continuing
2.4 Underrating low-quality lives When we imagine a low-quality life, even if we fill in a great many factual details, we may easily be unsure what its utility level is. When we imagine any realistic sort of life, we must be able to weigh complex combinations of goods and bads of various different kinds in order to arrive at any overall assessment of the life’s utility level. Because of difficulties involved in judging such things as the weighting of values of very different kinds and whether and how values combine to form organic unities,17 we may easily mistake a life with welfare level -1, for example, for one with welfare level 2. According to the advocates of RC, the ability to distinguish such alternatives would be crucial for intuitively evaluating an imagined world of low average utility. To avoid this problem, we might try imagining unrealistically simple lives, such as a life containing no evaluatively significant experiences or activities other than a uniform, mild pleasure. However, even the evaluation of a very simple life may be a complex matter. Our sense that we would be bored by experiencing a lifetime of such uniform, mild pleasure; that such a life would be meaningless; and that we would have to be seriously mentally defective to have no evaluatively significant other activities or states than this single pleasure, all may combine to give us a negative reaction to what we intended to be a slightly positive state. For these reasons, it is not clear that our intuitions can be expected to reliably distinguish very slightly good lives from neutral or slightly bad lives.
Conclusion
Overall, we have lots of good reasons to distrust the Haydn favoring intuitions and several independent ways of deriving that it’s better to be the oyster. This oyster case is, like what Huemer said of the repugnant conclusion, “one of the few genuine, nontrivial theorems of ethics discovered thus far.”
Response to Jay M on Oysters
You should still be an oyster
Jay M has written a long comment on my previous post, one which I’ll respond to here.
> 1 If you make every day of a person’s life better and then add new days of their life which are good at the end of their life, you make the person’s life better. This is obvious.
> 2 If one life has a million times higher average utility per day than another, a million times higher total utility, and more equal distribution of utility, it is a better life. This is also obvious.
> 3 If A>B and B>C then A>C. I’ve defended transitivity here, while insulting those who criticize transitivity.
> All plausible principles. But this isn't sufficient to make it reasonable to accept the conclusion, i.e. that it's better to live as the oyster than as Haydn. What you've really shown is that there are 4 plausible propositions that are mutually inconsistent: the 3 principles you mentioned just here and the proposition that it's better to be Haydn than the oyster. When met with a set of mutually inconsistent propositions, the rational decision is to abandon the least plausible proposition. It seems to me that the *least* plausible proposition here is your second principle. Prior to seeing this argument, I would give the second principle maybe ~80% credence, but I'd give the other 3 propositions (your other 2 principles and the proposition that it's better to be Haydn) >90% credence.
Remember, the dialectical context is that this is being trotted out as a counterexample to an otherwise plausible moral theory. Thus, whether it’s an effective counterexample will hinge on whether it is hard to reject. If it turns out that we have good reason to reject it, or at least that it’s reasonable to reject, it is not a good counterexample. This is especially true because of the meta reasoning about apparent counterexamples that I describe here. If utilitarianism were correct, we’d expect it to sometimes go against our intuitions, but we’d expect that, when it does, there are other plausible principles that must be denied to accommodate the original intuition. This is exactly what we find in this case. I also find the second principle that I laid out to be far more obvious than the intuition about Haydn—particularly when we take into account the other reasons to reject the Haydn intuition. Given the tentativeness of the Haydn intuition, is it really less plausible than the notion that making your average utility per day a million times greater and making your utility spread evenly distributed wouldn’t make you better off?
> I'm going to assume that when you say utility considerations "outweigh" other considerations, you just mean we have most *reason* to promote utility rather than whatever other non-utility considerations are present.
You assume correctly
> I suppose this statement of yours would be compelling to someone who believed that there is a threshold at which *the quantity of utility alone* can outweigh other considerations. But I don't think most people value utility in this way. Someone might have a utility threshold based on something other than just the _quantity_ of utility. For example, someone might think that the disvalue of suffering, but not the value of pleasure, can outweigh other considerations. Or one might think the utility threshold can be breached only if the utility is coextensive with other values (e.g., one might think that self-awareness must be present, that one must maintain their sense of personal identity while experiencing the utility, that one's memories are persisted, that average utility is sufficiently high, etc.).
Positing that only suffering can outweigh other considerations involves a puzzling sort of asymmetry, which becomes especially hard to maintain given the possibility of trading off pain avoidance for some amount of pleasure. If only self-aware beings’ utility matters, this would exclude many infants.
> So one might think that even an infinite quantity of utility doesn't necessarily cross their threshold, since the infinite quantity of utility might not be generated in the right way. In any event, you would need an argument explaining why, if someone adopts a threshold at which utility can outweigh other considerations, then they should adopt a threshold at which *quantity of utility alone* must outweigh other considerations. Without such an argument, there is no compelling reason to accept your statement here.
> Note that if one believes that the utility threshold can always be satisfied by quantity of utility alone, then that would imply that the threshold can be breached even if the utility isn't *theirs*. When I make a sacrifice now in order to promote my happiness in the future, what is it that makes the happiness in the future "mine"? Well, presumably it's a sense of personal identity, psychological continuity, and maybe some other mental processes (self-awareness?). But if none of these mental processes are met, then there is no sense in which the future utility is mine (even if the future utility is nevertheless "good"). In fact, it seems that if I perform an action that creates a lot of utility for my body in the future but lack the aforementioned kind of mental processes, then I've actually committed suicide for the sake of some other being who will use my body. If that's right, then if the utility threshold can always be breached by the quantity of utility alone, then it would be irrational to not commit suicide to allow for a new being to be produced (so long as the new being experienced a sufficiently large amount of utility). In other words, it would be irrational to not commit suicide for the eternal oyster. But this is implausible.
I would defend the view that it would be irrational to not sacrifice oneself for the oyster, but it’s not entailed by the previous discussion—I may discuss that view at some future juncture. Specifically, the claim is that in moral tradeoffs, one has most moral reason to take action X if action X produces enough utility. For the present purposes, I was assuming that the oyster was somehow you, in at least the same sense that Haydn would be you. Whether that’s possible comes down to questions about personal identity, which are not worth delving into. However, if it is inconceivable, we can replace the oyster with a person “floating very drunk in a warm bath, forever.”
> I'm going to translate all talk about "value/goodness/badness" to talk about (self-interested) "reasons". That said, while it may seem that *goodness* is aggregative in the sense you allude to here, it does not seem that our *reasons* are aggregative in this same sense. A good example of this concerns aggregating experiences that are qualitatively identical. Let's say a man is stuck in a time loop where he repeats the same day over and over (like Groundhog's day). Fortunately for him, he doesn't know that he's in a time loop. Also, the day is actually relatively good. It might seem plausible that experiencing the day N times is N times as good as experiencing the day once (all else equal).
> But now consider someone else who has the option of _entering_ the time loop for N days (followed immediately by their death), where the quality of life of each day is greater than the average quality of life that he would have if he refrains from entering. Despite the fact that N days in the time loop is N times as "good" as just 1 day, it does not seem that he necessarily has N times as much reason to *enter* the time loop as he does to experience the day just once (even if we assume all else equal). To say otherwise would imply that there is some number of days N in the time loop such that it would be irrational to not enter. But this seems implausible. It seems like it may be rational to choose to, say, pursue other achievements in life rather than living the same (happy) day over and over.
> This also implies that what a person has most self-interested *reason* to do can diverge from what is most *good* for them, something that I find plausible for other reasons.
I think it would be irrational not to plug in, and our contrary intuitions can be easily explained away. Given that uniformity is never conducive, in the real world, to overall utility, it’s unsurprising that we’d have the intuition that a totally uniform life would be worse.
I take X to be good for you iff you have self-interested reason to promote X, so the second claim seems definitionally false.
> Suppose that everyday Haydn has 100,000 utility and the oyster has .1, as was stated above. Presumably living for 77 years as Haydn currently would be less good than living for 87 years with utility of 95,000, which would be less good than 150 years with utility of 90,000, which would be less good than 300 years with utility of 85,000, which would be less good than 700 years with utility of 80,000…4 which is less bad than one person having utility of .1 for 100000000000000000000000000000 years, which is clearly inferior to the oyster. By transitivity and this analysis, the oyster’s life is better.
The central premise underlying each step in this reasoning is that a much longer life with slightly lower average utility is better. Note that this is plausible only if we assume all other values are held constant when comparing the shorter and longer life. But there will certainly be some point between Haydn's life and the oyster's life where all other values are *not* equal. For example, there will be some point (which is vague and probably undefinable) where one loses self-awareness. Assuming that one values self-awareness in the way that (I assume) most people do, it does not always seem better to greatly extend one's lifespan in exchange for inching one's experiential quality closer to that of the oyster.
But self-awareness plausibly would exclude infants. I also don't think vagueness in reality is possible—there has to be some fact of the matter about whether a being is self-aware; vagueness is a feature of language, not of reality. The map, not the territory, is vague. Additionally, if the cutoff is vague, then there's no firm cutoff, which this argument requires.
However, even if we grant this, it would only mean that as long as the oyster was self-aware, it would be better off than Haydn. This conclusion is bolstered by the many biases that undermine the reliability of our intuitions, combined with the other arguments I've presented.
Contra Bergman on Suffering Focused Ethics
They're wrong
My friend Aaron Bergman has recently written an article, playing devil's advocate1 against the claim that we should try very hard to prevent the end of the world. In it, Bergman makes the case for some form of suffering-focused ethics—a view I've previously argued against2. This article shall criticize Bergman's view. Bergman intends to argue for the following thesis3.
Under a form of utilitarianism that places happiness and suffering on the same moral axis and allows that the former can be traded off against the latter, one might nevertheless conclude that some instantiations2 of suffering cannot be offset or justified by even an arbitrarily large amount of wellbeing.
Nearly every view can be coherently held. The more important question is whether a view can be held without having deranged or implausible implications. This more daunting requirement is one which Bergman’s view is unable to meet, as we shall see. Bergman writes
For myself and I believe many others, though, I think these “preferences” are in another sense a stronger claim about the hypothetical preferences of some idealized, maximally rational self-interested agent. This hypothetical being’s sole goal is to maximize the moral value corresponding to his own valence. He has perfect knowledge of his own experience and suffers from no cognitive biases. You can’t give him a drug to make him “want” to be tortured, and he must contend with no pesky vestigial evolutionary instincts.
Those of us who endorse this stronger claim will find that, by construction, this agent's preferences are identical to at least one of the following (depending on your metaethics):
One’s own preferences or idealized preferences
What is morally good, all else equal
For instance, if I declare that “I’d like to jump into the cold lake in order to swim with my friends,” I am claiming that this hypothetical agent would make the same choice. And when I say that there is no amount of happiness you could offer me in exchange for a week of torture, I am likewise claiming this agent would agree with me.
Consider a particular instance of egregious torture—one which Bergman wouldn't trade for the world. Surely one instance of this torture is less bad than two instances of slightly less gruesome torture, each of which is less bad than two instances of torture slightly less gruesome still… until we get, by transitivity, the conclusion that a very horrific form of torture is less bad than a vast number of instances of mild pain. However, a vast number of instances of mild pain can clearly be traded off against enormous pleasures. By the above reasoning, one torture is less bad than a ton of pinpricks, but pinpricks can be weighed against positive pleasures. Thus, by transitivity, so can torture.
I’ve discussed the matter with Aaron briefly and his response was primarily to say that on a utility scale, horrific torture is infinitely awful. This is, however, false. When Aaron says that it’s infinite on the negative utility scale, this is, in one sense, repeating the original claim, namely, that horrific torture can’t be outweighed by finite goods. However, when I described scaling down the misery, I wasn’t describing utility in the VNM sense of how aversive a rational being would find an experience. Instead, I was describing it in the far more trivial sense of just how much pain was caused. While the precise numbers we assign are arbitrary, there are clearly more painful experiences, and there are also clearly less painful experiences. When we describe the suffering diminishing, we can just think of making the experience slightly less awful. For any awful torture, there are a possible number of diminishments that would make it not awful. Surely this is a concept that we can all conceive of quite readily. However, if one makes an experience less awful enough times, it will eventually not be awful at all.
So, while I talked a lot about utility assignments in my conversation with Aaron, we don't even need to attach a utility scale. All we have to accept is that very awful things can be made slightly less awful, until they're barely awful at all. This is a very plausible claim; it is almost impossible to deny. If we accept that making the experience less awful while inflicting it on more people brings about something overall worse, and we accept transitivity, then we arrive inescapably at the conclusion that suffering of the most extreme variety can eventually be outweighed by garden-variety pleasures.
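To make the structure of this spectrum argument concrete, here is a minimal sketch with entirely made-up numbers (the 1% severity reduction and tenfold multiplication per step are illustrative assumptions, not anything Aaron or I have committed to):

```python
# Toy illustration of the spectrum argument (all numbers are made up).
# Each step trades the current experience for one that is 1% less awful
# but inflicted on 10 times as many people, which we stipulate is worse overall.

severity = 1_000_000.0  # badness of one horrific torture (arbitrary units)
steps = 0

while severity > 1.0:   # stop once each experience is merely a mild pain
    severity *= 0.99    # slightly less awful...
    steps += 1          # ...inflicted on 10x as many people each step

print(f"After {steps} steps: 10**{steps} people each suffering severity {severity:.2f}")
# By transitivity, the original torture is less bad than this vast number of
# mild pains, and mild pains can clearly be outweighed by ordinary pleasures.
```

Nothing hinges on the particular numbers; any fixed small decrement in severity and any fixed multiplier in the number of sufferers yields the same structure.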
This conclusion is exactly what we would expect at first blush. The earth weighs a lot, but its weight can be surpassed by enough feathers. I am very tall (a whole whopping 5'7!), but my height can be outweighed by enough combined ant heights. Generally, things that are immense in a particular respect can be surpassed in that respect by a vast number of things that are meager in the same respect.
One other slightly troubling implication of Aaron's view, which he admitted freely, is that—if we assume one is more likely to be kidnapped and thus heinously tortured if they go outside, an assumption Aaron was uncertain about—a perfectly rational agent would never go outside for trivial reasons. For example, if one were perfectly rational, they would never go to the pub with friends, given that there is a non-zero risk of being kidnapped and horrifically tortured. Even one who has initial suffering-focused intuitions will plausibly be put off by this rather troubling implication of the view. In fact, Aaron goes so far as to agree that this would hold even if the difference in kidnap and torture rates were one in 100000000000000000000000000000000000000000000000000000000000000000000^100000000000000000000000000000000000000000000000000^10000000000000000000000000000000000000000000000000000000000000000000000000000000, meaning the universe would almost certainly end before there is a difference in torture rates of even one person. This is an enormous bullet to bite.
In defending this fairly radical implication, Aaron says
But this all seems beside the point, because we humans are biological creatures forged by evolution who are demonstrably poor at reasoning about astronomically good or bad outcomes and very small probabilities. Even if we stipulate that, say, walking to the grocery store increases one's risk of being tortured and there exists a viable alternative, the fact that a person chooses to make this excursion seems like only very weak evidence that such a choice is rational.
It's true that all of our views are somewhat influenced by evolution, and thus susceptible to distorting factors. Yet the biases relating to our inability to conceive of large numbers seem, on balance, to favor my view. Additionally, despite evolutionary forces undermining the reliability of our intuitions, people are still (reasonably) good judges of what makes them well off. A view that entails that walking outside is foolish most of the time will need some other way to offset its wild unintuitiveness. Most people are right about basic prudential matters most of the time. Pious moral proclamations are generally worse guides to the things that make people well off than what people do in their everyday lives. But even if we think how people in fact act is largely irrelevant, the hypotheticals I've given with greatly diminished risk should still hold force.
Aaron formalizes his argument as follows.
(Premise) One of the following three things is true:
One would not accept a week of the worst torture conceptually possible in exchange for an arbitrarily large amount of happiness for an arbitrarily long time.6
One would not accept such a trade, but believes that a perfectly rational, self-interested hedonist would accept it; further this belief is predicated on the existence of compelling arguments in favor of the following:
Proposition (i): any finite amount of harm can be “offset” or morally justified by some arbitrarily large amount of wellbeing.
One would accept such a trade, and further this belief is predicated on the existence of compelling arguments in favor of proposition (i).
(Premise) In the absence of compelling arguments for proposition (i), one should defer to one’s intuition that some large amounts of harm cannot be morally justified.
(Premise) There exist no compelling arguments for proposition (i).
(Conclusion) Therefore, one should believe that some large amounts of harm cannot be ethically outweighed by any amount of happiness.
I reject the first disjunct and would accept the trade, largely for the reasons described above. I similarly reject premise 3, having, in my view, just given several compelling arguments for proposition (i). Aaron describes indifference curves, the second of which matches his own.
While you can have an indifference curve like this, it is prone to some bizarre conclusions. The point at the vertical asymptote does not seem infinitely more horrible than a point a tiny bit away from the vertical asymptote; these are really very small differences. Asymptotically approaching a particular point gives the appearance of greater reasonability, but that appearance is illusory. As long as we accept that lessening the pain by some very small amount, say 1/100,000,000th the badness of torture, at no point4 makes the situation less than half as bad, then we get the conclusion that things are not incommensurable5. Bergman's indifference curve would have to deny this very reasonable assumption. It only has the appearance of reasonability because exponential growth curves are hard to follow and look like they asymptotically approach a point, even when they, in fact, do not. Bergman would have to accept that a certainty of the suffering at the highest point on the graph as presented—very near the asymptote—would be less bad than a one in a billion chance of having a one in a billion chance of having a one in a billion chance… and so on for a billion more iterations of this, of having the suffering that's right at the asymptote.
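To see the point about steep curves numerically, here is a minimal sketch; the two functions are my own made-up stand-ins, not Aaron's actual curves. A genuinely asymptotic disvalue function and a merely steep exponential one look similar over most of the range, but only the former actually diverges.

```python
import math

def asymptotic(x):    # a true vertical asymptote: blows up as x approaches 1
    return 1.0 / (1.0 - x)

def exponential(x):   # steep, but finite everywhere, including at x = 1
    return math.exp(10 * x)

for x in [0.0, 0.5, 0.9, 0.99, 0.999999]:
    print(f"x = {x:<8}  asymptotic = {asymptotic(x):>12.1f}  exponential = {exponential(x):>10.1f}")
# Near x = 1 the asymptotic curve exceeds any bound, while the exponential
# curve never rises above exp(10), roughly 22,026 -- eyeballing a steep graph
# cannot tell you which kind of curve you are looking at.
```

Aaron continues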
To rephrase that more technically, the moral value of hedonic states may not be well-modeled by the finite real numbers, which many utilitarians seem to implicitly take as inherently or obviously true. In other words, that is, we have no affirmative reason to believe that every conceivable amount of suffering or pleasure is ethically congruent to some real finite number, such as -2 for a papercut or -2B for a week of torture.
While two states of the world must be ordinally comparable, and thus representable by a utility function, to the best of my knowledge there is no logical or mathematical argument that “units” of utility exist in some meaningful way and thus imply that the ordinal utility function representing total hedonic utilitarianism is just a mathematical rephrasing of cardinal, finite “amounts” of utility.
Well, the VNM theorem shows that one's preferences can be represented by a utility function if they satisfy a few basic axioms. Let's say a papercut is -n and torture is -2 billion. To find n, we just need to find a number for which we'd be indifferent between a certainty of a papercut and an n/2 billion risk of torture. Bergman is aware of this fact. I think his point here is that there's no in-principle reason why certain things can't be incommensurable—having, for example, torture be infinitely worse than a papercut. I agree with Aaron here—I think, for example, infinite suffering should be modeled as negative infinity. So Aaron and I agree that one can have a perfectly coherent set of assumptions under which we don't just assign finite numbers to every experience. Our disagreement is about whether finite amounts of suffering should be assigned negative infinite utility.
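Here is a minimal sketch of that calibration recipe; the specific indifference probability is a hypothetical chosen purely for illustration:

```python
# Sketch of the calibration described above. Stipulate u(torture) = -2 billion,
# and suppose (hypothetically, purely for illustration) that you are indifferent
# between a certain papercut and a 1-in-10,000,000 risk of that torture.

u_torture = -2_000_000_000
p_indifferent = 1 / 10_000_000   # hypothetical indifference probability

# Indifference means u(papercut) = p * u(torture) + (1 - p) * 0
u_papercut = p_indifferent * u_torture
print(u_papercut)                # -200.0, i.e. n = 200 on this calibration

# Equivalently, an n/2-billion risk of torture is exactly as bad as a certain
# papercut when n = 200, which is the "find n" recipe in the text.
```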
Aaron’s case for the view that negative experiences should dominate our considerations is largely based on intuitions about how horrible negative experiences are. Several things are worth noting
First, as Huemer argues in his paper in defence of repugnance6, we’re very bad at reasoning about big numbers; the types of intuitions that Aaron appeals to are precisely the ones we should be debunking.
Second, we have the ability to grasp, to some degree, just how horrible the worst conceivable experiences are. We can, for example, gain some intuitive appreciation of how horrific it would be to be devoured by a school of piranhas or to burn to death. We cannot, however, adequately appreciate the unimaginably good experiences that could exist in our posthuman future. It seems quite possible, indeed likely, that once we can modify our consciousness, we'd be able to have experiences that are as desirable as being tortured is undesirable. An experience that feels as good as burning to death feels bad, sustained for a thousand years, would make every experience humans have had so far look like a meagre scintilla of what could be to come. We in fact have good reason to think that pleasure and suffering are logarithmic7. Thus, it's quite plausible that extreme experiences should dominate our ethical calculus, on both the positive and the negative end, even if the intuitions to which Aaron appeals should be believed. This provides a more parsimonious explanation of the overwhelming badness of suffering, while avoiding a puzzling asymmetry.
Third, when one really considers their intuitions about the badness of vicious torture, those who conclude that it is infinitely worse than pinpricks seem to me to be going very far astray. Considering how bad torture is can, in my view, give one the belief that torture is very, very bad, but I'm unsure how, even in principle, it can give the belief that it's infinitely bad. This intuition seems much like the intuition that monkeys will never type out Shakespeare. It will, no doubt, take them a supremely long time to do so, just as it would take an obscenely large number of dust specks to outweigh a torture, but it seems a mistake to believe that there's some firm, uncrossable barrier in either case. Bergman's belief in this barrier, based merely on a series of flimsy intuitions which clash with several converging, much stronger intuitions, seems to be a grave mistake.
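The monkey analogy can be made concrete with a quick back-of-the-envelope calculation; the phrase and the 27-key typewriter are my own illustrative assumptions:

```python
# Random typing: the expected wait is astronomical, but finite -- there is no
# uncrossable barrier, just a very large number.
phrase = "to be or not to be"
keys = 27                                   # 26 letters plus a space bar (assumed)
p_match = (1 / keys) ** len(phrase)         # chance one random window matches
print(f"expected attempts: about {1 / p_match:.2e}")  # ~5.8e+25 attempts
```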
Opening Statement For My Debate With Arjun Panickssery
Part 1
Here I will present a series of arguments favoring utilitarianism, in a debate over whether utilitarianism is the truth, the way, and the light.
1 Theoretical Virtues
When deciding upon a theory, we want something with great explanatory power, scope, simplicity, and clarity. Utilitarianism does excellently by these criteria. It's incredibly simple, requiring just a single moral law saying one should maximize the positive mental states of conscious creatures; it explains all of ethics, applies to all of ethics, and has perfect clarity. Thus, utilitarianism starts out ahead based on its theoretical virtues. Additionally, it does well in terms of prior plausibility, being immensely intuitive. It just seems obvious that ethics should be about making everyone's life as good as possible. There are other theoretical virtues, detailed here, in which utilitarianism excels, though I haven't room to go into detail.
2 History As A Guide
History favors utilitarianism. If we look at historical atrocities, they were generally opposed by utilitarians. Utilitarian philosophers were often on the right side of history. Bentham favored decriminalizing homosexuality, abolition of slavery, and protection for non human animals. Mill was the second member of parliament to advocate for women's suffrage and argued for gender equality. In contrast, philosophers like Kant harbored far less progressive views, supporting killing people born to unmarried parents, favoring racial supremacy, and believing masturbation to be a horrifically wrong “unmentionable vice.”
Additionally, the atrocities of slavery, the holocaust, Jim Crow, and all others have all come from excluding a class of sentient beings from moral consideration, something prevented by utilitarianism.
If utilitarianism were not the correct moral view, it would be a bizarre coincidence that it both has the mechanism to rule out every historical atrocity and that utilitarians are consistently hundreds of years ahead of their time when it comes to important moral questions.
3 A syllogism
These premises, if true, prove utilitarianism.
1 A rational egoist is defined as someone who does only what produces the most good for themselves
2 A rational egoist would do only what produces the most happiness for themselves
3 Therefore only happiness is good (for selves who are rational egoists)
4 The types of things that are good for selves who are rational egoists are also good for selves who are not rational egoists unless they have unique benefits that only apply to rational egoists
5 happiness does not have unique benefits that only apply to rational egoists
6 Therefore only happiness is good for selves who are or are not rational egoists
7 All selves either are or are not rational egoists
8 Therefore, only happiness is good for selves
9 Something is good if and only if it is good for selves
10 Therefore only happiness is good
11 We should maximize good
12 Therefore, we should maximize only happiness
I shall present a defense of each of the premises.
Premise 1 is true by definition.
Premise 2 states that a rational egoist would do only what produces the most happiness for themselves. This has several supporting arguments.
1 When combined with the other arguments, this entails that what a self-interested person would pursue for themselves is what should be maximized generally. However, it would be extremely strange to maximize other things, like virtue or rights, and no one holds that view.
2 Any agent that can suffer matters. Imagine a sentient plant, who feels immense agony as a result of their genetic formation, and who can't move or speak. They're harmed by their pain, despite having no rights violated and no virtues. Thus, being able to suffer is a sufficient condition for moral worth.
We can consider a parallel case of a robot that does not experience happiness or suffering. Even though this robot acts exactly like us, it would not matter absent the ability to feel happiness or suffering. These two intuitions combine to form the view that hedonic experience is a necessary and sufficient condition for mattering. This serves as strong evidence for utilitarianism—other theories can’t explain this necessary connection between hedonic value and mattering in the moral sense.
One could object that rights, virtue, or other non-hedonistic goods are an emergent property of happiness, such that one only gains them when they can experience happiness. However, this is deeply implausible, requiring strong emergence. As Chalmers explains, weakly emergent properties are reducible to interactions of lower-level properties. For example, chairs are reducible to atoms, given that we need nothing more to explain the properties of a chair than knowing the ways that atoms function. Strongly emergent properties, by contrast, are not reducible to lower-level properties. Philosophers tend to think there is at most one strongly emergent thing in the universe, so if deontology requires strong emergence, that's an enormous cost.
3 As we'll see, theories other than hedonism are just disastrously bad at accounting for what makes someone well off. However, I'll only attack them if my opponent presents one, because there are too many to criticize.
4 Hedonism seems to unify the things that we care about for ourselves. If someone is taking an action to benefit themselves, we generally take them to be acting rationally if that action brings them joy. This is how we decide what to eat, how to spend our time, and who to be in a romantic relationship with—and it is the reason people spend their time doing things they enjoy rather than picking grass.
The rights that we care about are generally conducive to utility: we care about the right not to be punched by strangers, but not the right not to be talked to by strangers, because only the first right is conducive to utility. We care about beauty only if it's experienced; a beautiful unobserved galaxy would not be desirable. Even respect for our wishes after our death is something we only care about if it increases utility. We don't think that we should light a candle on the grave of a person who's been dead for 2000 years, even if they had a desire during life for the candle on their grave to be lit. Thus, it seems like for any X, we only care about X if it tends to produce happiness.
5 Consciousness seems to be all that matters. As Sidgwick pointed out, a universe devoid of sentience could not possess value. The notion that for something to be good it must be experienced is a deeply intuitive one. Consciousness seems to be the only mechanism by which we become acquainted with value.
6 Hedonism seems to be the simplest way of ruling out posthumous harm. Absent hedonism, a person can be harmed after they die, yet this violates our intuitions.
7 As Pummer argues, non-hedonism cannot account for lopsided lives.
If we accept that non-hedonic things can make one's life go well, then their life could have a very high welfare despite any amount of misery. In fact, they could have an arbitrarily good life despite any arbitrary amount of misery. Thus, if they had enough non-hedonic goodness (e.g., knowledge, freedom, or virtue), their life could be great for them, despite experiencing the total suffering of the holocaust every second. This is deeply implausible.
8 Even so much as defining happiness seems to require saying that it’s good. The thing that makes boredom suffering but tranquility happiness is that tranquility has a positive hedonic tone and is good, unlike boredom. Thus, positing that joy is good is needed to explain what joy even is. Additionally, we have direct introspective access to the badness of pain when we experience it.
9 Only happiness seems to possess desire independent relevance. A person who doesn’t care about their suffering on future Tuesdays is being irrational. However, this does not apply to rights—one isn’t irrational for not exercising their rights. If we’re irrational to not care about our happiness, then happiness has to objectively matter.
10 Sinhababu argues that reflecting on our mental states is a reliable way of forming knowledge as recognized by psychology and evolutionary biology—we evolved to be good at figuring out what we’re experiencing. However, when we reflect on happiness we conclude that it’s good, much like reflecting on a yellow wall makes us conclude that it’s bright.
11 Hedonism is very simple, holding that there's only one type of good thing, making it prima facie preferable.
Let's take stock. So far we've established that hedonism is intuitively prima facie plausible and extremely simple; that beings matter if and only if they have the capacity for hedonic value; that other theories need to posit strong emergence, which exists virtually nowhere else in the universe; that hedonism unifies the things we care about, such as knowledge, friendship, and virtue; that we have a reliable mechanism for identifying the goodness of pleasure, unlike the goodness of other things; that there's a plausible evolutionary explanation of the goodness of pleasure, unlike of other things; that positing the goodness of pleasure is necessary to explain what pleasure even is; that only hedonic value seems to have desire-independent relevance; that consciousness seems intuitively to be all that matters; that hedonism explains why one can't be harmed or benefited after death; that pleasure is the most obvious value; that experiences seem valuable if and only if they produce pleasure, while other theories have to posit other strange things that make people better off; and that when the hedonic values are sufficiently great, they dominate all other considerations. The non-hedonist has to deny that pleasure is the good, while accepting that pleasure is a prerequisite for there being good and dominates goodness considerations at the extremes, no matter what other facts are present.
Premise 3 says therefore only happiness is good (for selves who are rational egoists). This follows from the previous premises.
Premise 4 is trivial. For things to be better for person A than for person B, they must have something about them that produces extra benefits for person A, by definition.
Premise five says happiness does not have unique benefits that only apply to rational egoists. This is obvious.
Premise six follows from the previous premises.
Premise seven is trivial.
Premise eight says therefore, only happiness is good for selves. This follows from the previous premises.
Premise nine says something is good if and only if it is good for selves.
It seems hard to imagine something being good, but being good for literally no one. If things can be good while being good for no one, there would be several difficult entailments that one would have to accept, such as that there could be a better world than this one despite everyone being worse off.
People only deny this premise if they have other commitments, which I’ll argue against later.
Premise 10 says Therefore only happiness is good. It follows from the previous premises.
Premise 11 says we should maximize good
First it’s just trivial that if something is good we have reason to pursue it, so the most good thing is the thing we have the most reason to pursue.
Second, this is deeply intuitive. When considering two options, it is better to make two people happy than one, because it is more good than merely making one person happy. Better is a synonym of more good, so if an action produces more good things it is better that it is done.
If there were other considerations that counted against doing things that were good, those would be bad, and thus would still relate to considerations of goodness
Third, as Parfit has argued, the thing that makes things go best is the same as the thing that everyone could rationally consent to and that no one could reasonably reject.
Fourth, an impartial observer should hope for the best state of the world to come into being. However, it seems clear that an impartial observer should not hope for people to act wrongly. Therefore, the right action should bring about the best world.
Fifth, as Yetter-Chappell has argued, agency should be a force for good. Giving a perfectly moral agent control over whether some action happens shouldn't make the world worse. In the trolley problem, for example, the world would be better if the switch flipped as a result of random chance, divorced from human action. However, if it is wrong to flip the switch, then giving a perfectly moral person control over whether the switch flips by accident would make the world actively worse. Additionally, it would be better for a perfectly moral person to have a muscle spasm which results in the switch flipping than to have total control over their actions. It shouldn't be better, from the point of view of the universe, for perfectly benevolent agents to have muscle spasms resulting in them taking actions that would have been wrong had they taken them voluntarily.
Sixth, as Yetter-Chappell has argued, a maximally evil agent would be an evil consequentialist trying to do as much harm as possible, even if doing so involves not violating rights; so a maximally good agent would be the opposite.
Premise 12 says therefore, we should maximize only happiness. This follows from the previous premises.
4 Harsanyi’s Proof
Harsanyi’s argument is as follows.
Ethics should be impartial—it should be a realm of rational choice that would be undertaken if one was making decisions for a group, but was equally likely to be any member of the group. This seems to capture what we mean by ethics. If a person does what benefits themselves merely because they don’t care about others, that wouldn’t be an ethical view, for it wouldn’t be impartial.
So, when making ethical decisions one should act as they would if they had an equal chance of being any of the affected parties. Additionally, every member of the group should be VNM rational and the group as a whole should be VNM rational. This means that their preferences should satisfy the four axioms of rational decision theory (completeness, transitivity, continuity, and independence), which are accepted across the board (they're slightly technical, but basically universally agreed upon).
These combine to form a utility function, which represents the choice worthiness of states of affairs. For this utility function, it has to be the case that a one half chance of 2 utility is equally good to certainty of 1 utility. 2 utility is just defined as the amount of utility that’s sufficiently good for a 50% chance of it to be just as good as certainty of 1 utility.
So now as a rational decision maker you’re trying to make decisions for the group, knowing that you’re equally likely to be each member of the group. What decision making procedure should you use to satisfy the axioms? Harsanyi showed that only utilitarianism can satisfy the axioms.
Let's illustrate this with an example. Suppose you're deciding whether to take an action that gives 1 person 2 utility or 2 people 1 utility. The above axioms show that you should be indifferent between them. You're just as likely to be each of the two people, so from your perspective it's equivalent to a choice between a 1/2 chance of 2 utility and certainty of 1 utility. We saw before that those are equally valuable: a 1/2 chance of 2 utility is by definition equally good as certainty of 1 utility, since 2 utility is just the amount of utility for which a 1/2 chance of it is just as good as certainty of 1 utility. So we can't just go the Rawlsian route and try to privilege those who are worst off. That is bad math!! The probability theory is crystal clear.
Now let’s say that you’re deciding whether to kill one to save five, and assume that each of the 6 people will have 5 utility. Well, from the perspective of everyone, all of whom have to be impartial, the choice is obvious. A 5/6 chance of 5 utility is better than a 1/6 chance of 5 utility. It is better by a factor of five. These axioms combined with impartiality leave no room for rights, virtue, or anything else that’s not utility function based.
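A minimal sketch of the arithmetic behind these two examples (the two-person population in the first example is an illustrative assumption):

```python
# Expected utility from behind the veil of ignorance, for the two examples above.

def expected_utility(prospects):
    """prospects: list of (probability of occupying this position, utility)."""
    return sum(p * u for p, u in prospects)

# Example 1: give 1 person 2 utility vs. give 2 people 1 utility each,
# assuming (for illustration) a group of two equally likely positions.
one_gets_two = [(0.5, 2), (0.5, 0)]
both_get_one = [(0.5, 1), (0.5, 1)]
print(expected_utility(one_gets_two), expected_utility(both_get_one))  # 1.0 1.0

# Example 2: kill one to save five, six people each standing to enjoy 5 utility.
kill_one   = [(5 / 6, 5), (1 / 6, 0)]   # 5/6 chance you are among the survivors
do_nothing = [(1 / 6, 5), (5 / 6, 0)]   # only 1/6 chance you are the one spared
print(expected_utility(kill_one), expected_utility(do_nothing))  # ~4.17 vs ~0.83
```

The second option in Example 2 comes out worse by exactly a factor of five, matching the figure above.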
This argument shows that morality must be the same as universal egoism—it must represent what one would do if they lived everyone’s life and maximized the good things that were experienced throughout all of the lives. You cannot discount certain people, nor can you care about agent centered side constraints.
5 Wrong About Rights: Why Utilitarianism, Unlike Its Critics, Isn’t
Rights are both the main reason people would deny premise 9 of the earlier syllogism and the main objection to utilitarianism. Sadly, the doctrine of rights is total nonsense.
1 Everything that we think of as a right is reducible to happiness. For example, we think people have the right to life. Yet the right to life increases happiness. We think people have the right not to let other people enter their house, but we don't think they have the right not to let other people look at their house. The only difference between shooting bullets at people and shooting soundwaves at them (i.e., making noise) is that one causes a lot of harm and the other does not. Additionally, we generally think it would be a violation of rights to create huge amounts of pollution, such that a million people die, but not a violation of rights to light a candle that kills no one. The difference is just in the harm caused. If it came to maximize happiness for things that we currently don't think of as rights to be enshrined as rights, we would think that they should be recognized as rights. For example, we don't think it's a violation of rights to look at people, but if every time we were looked at we experienced horrific suffering, we would think it was a rights violation to look at people.
2 If we accept that rights are ethically significant, then there's some number of rights violations that could outweigh any amount of suffering. For example, suppose that there are aliens who will experience horrific torture, which gets slightly less unpleasant for every leg of a human that they grab, without the humans' knowledge or consent, such that if they grab the legs of 100 million humans the aliens will experience no torture. If rights are ethically significant, then the aliens grabbing the legs of the humans, in ways that harm no one, would be morally bad; the sheer number of rights violations would outweigh the torture. However, this doesn't seem plausible. It seems implausible that aliens should have to endure horrific torture so that we can preserve our sanctity in some indescribable way. If rights matter, a world with enough rights violations, where everyone is happy all the time, could be worse than a world where everyone is horrifically miserable all of the time but where there are no rights violations.
If my opponent argues for rights then I’d challenge him to give a way of deciding whether something is a right that is not based on hedonic considerations.
3 A reductionist account is not especially counterintuitive and does not rob us of our understanding of, or appreciation for, rights. It can be analogized to the principle of innocence until proven guilty. The principle of innocent until proven guilty is not literally true. A person's innocence until demonstration of guilt is a useful legal heuristic, yet a serial killer is guilty, even if their guilt has not been demonstrated.
4 An additional objection can be given to rights. We generally think that it matters more to not violate rights oneself than to prevent other rights violations; we intuitively think that we shouldn't kill one innocent person to prevent two murders. I shall give a counterexample to this. Suppose we have people in a circle, each with two guns that will each shoot the person next to them. Each person has the ability either to prevent two other people from being shot by another person's guns, or to prevent their own gun from shooting one person. If we take the view that one's foremost duty is to avoid violating rights, then each person would be obligated to prevent their own gun from shooting the one person. However, if everyone prevents their own gun from firing, rather than preventing the two shots from other people's guns, then everyone in the circle ends up being shot. If it's more important, however, to save as many lives as possible, then each person would prevent the two shots from other people's guns. World two would have no one shot, and world one would have everyone shot. World one seems clearly worse.
Similarly, if it is bad to violate rights, then one should try to prevent their own violations of rights at all costs. If that’s the case, then if a malicious doctor poisons someone's food, and then realizes the error of their ways, the doctor should try to prevent the person from eating the food, and having their rights violated, even if it’s at the expense of other people being poisoned in ways uncaused by the doctor. If the doctor has the ability to prevent one person from eating the food poisoned by them, or prevent five other people from consuming food poisoned by others, they should prevent the one person from eating the food poisoned by them. This seems deeply implausible.
5 We have decisive scientific reasons to distrust the existence of rights, which is an argument for utilitarianism generally. Greater reflection and less emotional hindrance make people much more utilitarian, as has been shown by research by Koenigs, Greene et al, and Fornasier. This evidentially supports the reliability of utilitarian judgements.
6 Rights run into a problem based on the aims of a benevolent third-party observer. Presumably a third-party observer should hope that you do what is right. However, a third-party observer, if given the choice between a world in which one person kills one other to prevent 5 indiscriminate murders and a world with the 5 indiscriminate murders, should obviously choose the world in which the one person does the murder to prevent 5. An indiscriminate murder is at least as bad as a murder done to try to prevent 5 murders. 5 indiscriminate murders are worse than one indiscriminate murder; therefore, by the transitive property, a world with one murder to prevent 5 should be judged to be better than 5 indiscriminate murders. If world A should be preferred by a benevolent impartial observer to world B, then it is right to bring about that state of the world. All of the moral objections to B would count against world A being better. If despite those objections world A is preferred, then it is better to bring about world A. Therefore, one should murder one person to prevent 5 murders. This seems to contradict the notion of rights.
7 We can imagine a case with a very large series of rings of moral people. The innermost circle has one person, the second innermost has five, the third innermost has twenty-five, and so on; each person corresponds to 5 people in the next circle out. There are a total of 100 circles. Each person is given two options.
1 Kill one person
2 Give the five people in the circle outside of you corresponding to you the same options you were just given.
The people in the hundredth circle will only be given the first option if the buck doesn't stop before reaching them.
The deontologist would have two options. First, they could stipulate that a moral person would choose option 2. However, if this is the case, then a cluster of perfectly moral people would bring about 1.5777218 x 10^69 murders (a figure checked in the sketch below), when the alternative actions could have resulted in only one murder. This seems like an extreme implication.
Secondly, they could stipulate that you should kill one person. However, the deontologist holds that you should not kill one person to prevent five people from being murdered. If this is true, then you certainly shouldn’t kill one person to give five perfectly moral people two options, one of which is killing one person. Giving perfectly moral beings more options that they don’t have to choose cannot make the situation morally worse. If you shouldn’t kill one person to prevent five murders you certainly shouldn’t kill one person to prevent five things that are judged to be at most as bad as murders by a perfectly moral being.
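The figure in the first horn can be checked directly:

```python
# Circle k contains 5**(k - 1) people, so if all 100 circles pass the buck,
# the 100th circle's 5**99 people must each kill one person.
murders = 5 ** 99
print(f"{murders:.7e}")   # ~1.5777218e+69, the figure cited above
```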
8 Huemer considers a case in which two people are being tortured, prisoners A and B. Mary can reduce the torture of A by increasing the torture of B, but by only half as much. She can do the same thing for B. If she does both, this clearly would be good—everyone would be better off. However, on the deontological account, both acts are wrong. Torturing one to prevent greater torture for another is morally wrong.
If it’s wrong to cause one unit of harm to prevent 2 units of harm to another, then an action which does this for two people, making everyone better off, would be morally wrong.
(More paradoxes for rights can be found here and here, but I can’t go into all of them here).
6 Some Cases Other Theories Can’t Account For
Many other theories are totally unable to grapple with the moral complexity of the world. Let's consider six cases
Case 1
Imagine you were deciding whether or not to take an action. This action would cause a person to endure immense suffering—far more suffering than would occur as the result of a random assault. This person literally cannot consent. This action probably would bring about more happiness than suffering, but it forces upon them immense suffering to which they don’t consent. In fact, you know that there’s a high chance that this action will result in a rights violation, if not many rights violations.
If you do not take the action, there is no chance that you will violate the person’s rights. In fact, absent this action, their rights can’t be violated at all. In fact, you know that the action will have a 100% chance of causing them to die.
Should you take the action? On most moral systems, the answer would seem to be obviously no. After all, you condemn someone to certain death, cause them immense suffering, and they don’t even consent. How is that justified?
Well, the action I was talking about was giving birth. After all, those who are born are certain to die at some point. They’re likely to have immense suffering (though probably more happiness). The suffering that you inflict upon someone by giving birth to them is far greater than the suffering that you inflict upon someone if you brutally beat them.
So utilitarianism seems to naturally—unlike other theories—provide an account of why giving birth is not morally abhorrent. This is another fact that supports it.
Case 2
Suppose one is deciding between two actions. Action 1 would have a 50% chance of increasing someone’s suffering by 10 units and action two would have a 100% chance of increasing their suffering by 4 units. It seems clear that one should take action 2. After all, the person is better off in expectation.
However, non utilitarian theories have trouble accounting for this. If there is a wrongness of violating rights that exists over and above the harm that was caused, then assuming that we say that the badness of violating rights is equivalent to 8 units of suffering, action 1 would be better (½ chance of 18 is less bad than certainty of 12).
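A minimal sketch of that arithmetic, using the stipulated 8-unit rights penalty:

```python
# Expected badness of each action, with and without the stipulated 8-unit
# penalty for violating a right whenever harm is inflicted.

RIGHTS_PENALTY = 8

def expected_badness(probability, harm, penalty=0):
    return probability * (harm + penalty)

# Plain harm: action 2 is better in expectation (4 < 5).
print(expected_badness(0.5, 10), expected_badness(1.0, 4))      # 5.0 4.0

# With the rights penalty, the ranking flips (9 < 12) -- the implausible result.
print(expected_badness(0.5, 10, RIGHTS_PENALTY),                # 9.0
      expected_badness(1.0, 4, RIGHTS_PENALTY))                 # 12.0
```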
Case 3
Suppose you stumble across a person who has just been wounded. They need to be rushed to the hospital if they are to survive. If they are rushed to the hospital, they will very likely survive. The person is currently unconscious and has not consented to being rushed to the hospital. Thus, on non utilitarian accounts, it’s difficult to provide an adequate account of why it’s morally permissible to rush a person to the hospital. They did not consent, and the rationale is purely about making them better off.
Case 4
An action will make everyone better off. Should you necessarily do it? The answer seems to be yes, yet other theories have trouble accounting for that if it violates side constraints.
Case 5
When the government taxes, is that objectionable theft? If not, why not? Consequentialism gives the only satisfactory account of political authority.
Case 6
Suppose one was making a decision of whether to press a button. Pressing the button would have a 50% chance of saving someone, a 50% chance of killing someone, and would certainly give them five dollars. Most moral systems, including deontology in particular, would hold that one should not press the button.
However, Mogensen and MacAskill argue that this situation is analogous to nearly everything that happens in one's daily life. Every time a person gets in a car, they affect the distribution of future people by changing very slightly the time at which lots of other people have sex. They also change traffic distributions, potentially reducing and potentially increasing the number of people who die in traffic accidents. Thus, every time a person gets in a car, there is a decent chance they'll cause an extra death, a high chance of changing the distribution of lots of future people, and a decent chance they'll prevent an extra death. Given that most such actions produce fairly minor benefits, it is quite analogous to the scenario described above about the button.
Given that any act which changes the traffic by even a few milliseconds will affect which of the sperm out of any ejaculation will fertilize an egg, each time you drive a car you causally change the future people that will exist. Your actions are thus causally responsible for every action that will be taken by the new people you cause to exist. The same is true if you ever have sex; you will change the identity of a future person.
Utilitarianism Wins Outright Part 33: Intellectual Property Rights
Don't steal this article's intellectual property! :)
Intellectual property rights offer another illuminating example of a case in which utilitarianism outshines its rivals at providing the philosophical basis for a widely accepted concept. Intellectual property rights are the types of property rights involved in copyright and patent law, wherein people have control over how their ideas are used. Utilitarianism provides a very natural explanation of why people would have rights to their ideas. Giving people control over how their ideas are used makes it more profitable to have new, important ideas, by making them harder to steal, which spurs important and sometimes life-saving innovation.
In fact, utilitarianism explains quite simply why it makes sense to have the type of intellectual property rights law that we do. There are two competing roles for intellectual property law. The first is making it profitable to innovate, creating incentives for people to innovate in the future. The second, opposing role is making the innovations accessible: if patents last too long, the price of the innovated goods will stay too high for too long, resulting in people not purchasing the patented good. This is the basic reason why monopolies are inefficient.
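To illustrate the trade-off in the crudest possible way, here is a toy welfare model; the functional forms and every number in it are invented purely for illustration, and nothing about it is meant to track real patent economics:

```python
# Toy model: longer patents strengthen the incentive to innovate (with
# diminishing returns) but extend the period of monopoly pricing (a cost that
# grows with time). All numbers are invented for illustration only.

def welfare(years):
    innovation_benefit = 100 * (1 - 0.9 ** years)  # diminishing returns to longer protection
    monopoly_cost = 2.5 * years                     # deadweight loss grows with duration
    return innovation_benefit - monopoly_cost

best_length = max(range(0, 61), key=welfare)
print(best_length, round(welfare(best_length), 2))  # an interior optimum (~14 years here)
```

The point is purely structural: on broadly utilitarian grounds we should expect some finite, intermediate duration to be best.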
Our current system, which seems at least potentially just, is that patents are held for a limited period of time before they expire. This is hard to reconcile with the notion of rights. Surely there's no fact of the matter about how long one deserves intellectual property for an invention--which happens to be 20 years. It seems like deontological accounts would have to be all or nothing. Either people deserve indefinite property rights--a conclusion which is deeply unintuitive--or they deserve no intellectual property rights, which is also deeply unintuitive.
Virtue ethics also seems out of place here. Is there really some robust, virtue based account of what length of time intellectual property rights should last? It’s not clear what the account would look like. Thus, intellectual property rights are another case that fits most neatly into a utilitarian conception.
When analyzed, it is hard to find a non-consequentialist justification for making it illegal for people to copy down the written statements of a person. There isn’t a clear non-legal sense in which one’s statements are their property. Yet utilitarianism explains why it makes sense to treat intellectual property as property.
There’s a reason why lots of libertarians have hang-ups about intellectual property rights. It just turns out that intellectual property rights are not very easy to justify, absent appealing to consequences. This makes them a great example of something consequentialism explains better than its rivals.
Utilitarianism Wins Outright Part 34: Should You Kill One So You Don't Kill Five?
A question for deontologists
Suppose you know that you’re a deeply morally fallible agent. While you don’t really want to kill a person now, you know, based on facts about your psychology, that if you don’t kill one person now, later in the year you’ll kill five people. In this case, should you kill the person now to prevent five future killings?
There are two options that can be given by the deontologist and neither of them is good. First, they can say that you shouldn't. However, this seems unintuitive; if it's one murder now or five later, of course the one murder now would be less bad. There is, after all, less murder done by you, and so fewer rights violated by you. The fact that one is now and the other is in the future doesn't seem morally salient.
Suppose if you kill one person now you’ll kill A, while if you wait, you’ll kill A,B,C,D, and E. In this case, A will be killed regardless, so it’s just a question of whether B,C,D, and E all get killed. There is literally no benefit to anyone from not killing A. Thus, this option is untenable.
The second option for the deontologist is saying that you should kill the person to prevent five of your future killings. However, this gets unintuitive results of its own. In this case, the deontologist holds that you shouldn’t kill one person to prevent 5 killings. It doesn’t seem fundamentally important whether the killings you prevent are carried out by you or by other agents.
If you should kill A to prevent yourself from killing A,B,C,D, and E, then you should kill one person to prevent someone else from killing A,B,C,D,E, and F. After all, the only difference between the scenarios for the potential victims is that in the scenario wherein you’re preventing your future murders, F still gets murdered by someone else, while F doesn’t in the other case. However, the deontologist doesn’t think you should kill one person to prevent someone else from killing A,B,C,D,E, and F.
Thus, the argument is as follows.
1 If you should kill one to prevent five killings done by yourself in the future, you should kill one to prevent six killings done by other people.
2 You should kill one to prevent five killings done by yourself in the future
Therefore, you should kill one to prevent six killings done by other people.
3 If deontology is true, you shouldn’t kill one to prevent six killings done by other people.
Therefore, deontology isn’t true.
This objection also plausibly works against virtue ethics, which holds that your primary aim should be being a good person, rather than bringing about good consequences. It seems like we have reason to also accept
4 If virtue ethics is true, you shouldn’t kill one to prevent six killings done by other people.
Therefore virtue ethics isn’t true
Now, given that the rival views are consequentialism, deontology, and virtue ethics, we have reason to accept
5 If virtue ethics and deontology are both not true, then consequentialism is true
Therefore, consequentialism is true. Just as I suspected.
Opening Statement For the Organ Harvesting Debate With Ben Burgis
Presenting the arguments
I recently debated Ben Burgis on the topic “the organ harvesting case gives us good reason to doubt utilitarianism.” I took the negative position. Here is my opening statement, which I typed out beforehand.
There are lots of good reasons to think the organ harvesting case doesn’t count against utilitarianism.
Part 1: General objections to rights
Here I’ll present a series of philosophical problems with the notion of rights.
1 Everything that we think of as a right is reducible to utility considerations. For example, we think people have the right to life, which obviously makes people's lives better. We think people have the right not to let other people enter their house, but we don't think they have the right not to let other people look at their house. The only difference between shooting bullets at people and shooting soundwaves at them (i.e., making noise) is that one causes a lot of harm and the other does not. Additionally, if it came to maximize hedonic value for things that we currently don't think of as rights to be enshrined as rights, we would think that they should be recognized as rights. For example, we don't think it's a violation of rights to look at people, but if every time we were looked at we experienced horrific suffering, we would think it was a rights violation to look at people.
2 If we accept that rights are ethically significant, then there's some number of rights violations that could outweigh any amount of suffering. For example, suppose that there are 100 trillion aliens who will experience horrific torture, which gets slightly less unpleasant for every leg of a human that they grab, without the humans' knowledge or consent, such that if they grab the legs of 100 million humans the aliens will experience no torture. If rights are significant, then the aliens grabbing the legs of the humans, in ways that harm no one, would be morally bad; the sheer number of rights violations would not only be bad, but would be the worst thing in the world. However, it doesn't seem plausible that the aliens should have to experience being burned alive when no humans even find out about what's happening, much less are harmed. If rights matter, a world with enough rights violations, where everyone is happy all the time, could be worse than a world where everyone is horrifically tortured all of the time but where there are no rights violations.
3 A reductionist account is not especially counterintuitive and does not rob our understanding of or appreciation for rights. It can be analogized to the principle of innocence until proven guilty. The principle of innocent until proven guilty is not literally true. A person's innocence until demonstration of guilt is a useful legal heuristic, yet a serial killer is guilty, even if their guilt has not been demonstrated.
4 We generally think that it matters more to not violate rights than it does to prevent other rights violations, so one shouldn’t kill one innocent person to prevent two murders. If that’s the case, then if a malicious doctor poisons someone's food, and then realizes the error of their ways, the doctor should try to prevent the person from eating the food, and having their rights violated, even if it’s at the expense of other people being poisoned in ways uncaused by the doctor. If the doctor has the ability to prevent one person from eating the food poisoned by them, or prevent five other people from consuming food poisoned by others, they should prevent the one person from eating the food poisoned by them, on this view. This seems deeply implausible. Similarly, this view entails that it’s more important for a person to eliminate one landmine that will kill a child set down by themself, rather than eliminating five landmines set down by other people—another unintuitive view.
5 We have lots of scientific evidence that judgments favoring rights are caused by emotion, while careful reasoning makes people more utilitarian. Paxton et al 2014 show that more careful reflection leads to being more utilitarian.
People with damaged VMPCs (a brain region responsible for generating emotions) were more utilitarian (Koenigs et al 2007), proving that emotion is responsible for non-utilitarian judgements. The largest study on the topic, by Patil et al (2021), finds that better and more careful reasoning results in more utilitarian judgements across a wide range of studies.
6 Rights run into a problem based on the aims of a benevolent third party observer. Presumably a third party observer should hope that you do what is right. However, a third party observer, if given the choice between one person killing one other to prevent 5 indiscriminate murders and 5 indiscriminate murders, should obviously choose the world in which the one person does the murder to prevent 5.
An indiscriminate murder is at least as bad as a murder done to try to prevent 5 murders. 5 indiscriminate murders are worse than one indiscriminate murder; therefore, by the transitive property, a world with one murder committed to prevent 5 should be judged better than a world with 5 indiscriminate murders. If world A should be preferred by a benevolent impartial observer to world B, then it is right to bring about world A. All of the moral objections to bringing about world A would already count against world A being better; if, despite those objections, world A is preferred, then it is better to bring about world A. Therefore, one should murder one person to prevent 5 murders. This seems to contradict the notion of rights.
7 We can imagine a case with a very large series of rings of moral people. The innermost circle has one person, the second innermost has five, the third innermost has twenty-five, and so on: each person corresponds to five people in the circle immediately outside their own. There are a total of 100 circles. Each person is given two options.
1 Kill one person
2 Give the five people corresponding to you in the circle outside yours the same two options you were just given.
The people in the hundredth circle will only be given the first option if the buck doesn’t stop before reaching them.
The deontologist would have two options. First, they could stipulate that a moral person would choose option 2. However, if this is the case, then a cluster of perfectly moral people would bring about 5^99 murders, when an alternative action could have resulted in only one murder, because they’d keep passing the buck until the 100th circle. This seems like an extreme implication.
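To spell out the arithmetic (a minimal sketch; the circle sizes and the 100-circle depth are just the numbers stipulated above):

```python
# Sketch of the rings case: circle n has 5**(n - 1) people, so circle 1 has 1
# person and circle 100 has 5**99. If everyone passes the buck, every member of
# circle 100 is left with only option 1 (kill one person); if the very first
# person kills instead, there is exactly one murder.
CIRCLES = 100

murders_if_buck_passed = 5 ** (CIRCLES - 1)   # one murder per person in circle 100
murders_if_first_person_kills = 1

print(f"if the buck is passed to the end: {murders_if_buck_passed:.3e} murders")
print(f"if the first person kills:        {murders_if_first_person_kills} murder")
```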
Secondly, they could stipulate that you should kill one person. However, the deontologist holds that you should not kill one person to prevent five people from being murdered. If this is true, then you certainly shouldn’t kill one person to give five perfectly moral people two options, one of which is killing one person. Giving perfectly moral beings more options that they don’t have to choose cannot make the situation morally worse. If you shouldn’t kill one person to prevent five murders you certainly shouldn’t kill one person to prevent five things that are judged to be at most as bad as murders by a perfectly moral being, who always chooses correctly.
8 Let’s start by assuming one holds the following view:
Deontological Bridge Principle: You shouldn’t push one person off a bridge to stop a trolley from killing five people.
This is obviously not morally different from
Deontological Switch Principle: You shouldn’t push a person off a bridge to cause them to fall on a button which would lift the five people to safety; their body would not itself stop the trolley.
In both cases you’re pushing a person off a bridge to save five. Whether their body stops the train or pushes a button to save other people is not morally relevant.
Suppose additionally that one is in the Switch scenario. They’re deciding what to do when a genie appears and gives them the following choice. He’ll push the person off the bridge onto the button, but then freeze the passage of time in the external world so that the decision maker has ten minutes to think about it. At the end of the ten minutes, they can either lift the one person who was originally on the bridge back up, or they can let the five people be lifted up.
It seems reasonable to accept the genie’s offer. If, at the end of the ten minutes, they decide that they shouldn’t push the person, then they can just lift the person back up, such that nothing actually changes in the external world. However, if they decide not to lift the person back up, then they’ve just killed one to save five. This action is functionally identical to pushing the person in Switch. Thus, accepting the genie’s offer is functionally identical to just giving them more time to deliberate.
It’s thus reasonable to suppose that they ought to accept the genie’s offer. However, at the end of the ten minutes they have two options. They can either lift up the one person whom they pushed before, to prevent that person from being run over, or they can do nothing and save five people. Obviously they should do nothing and save five people. But this is identical to the Switch case, which is morally the same as Bridge.
We can consider a parallel case with the trolley problem. Suppose one is in the trolley problem and a genie offers them the option for them to flip the switch and then have ten minutes to deliberate on whether or not to flip it back. It seems obvious they should take the genie’s offer.
Well, at the end of the ten minutes they’re in a situation where they can flip the switch back, in which case the train will kill five people instead of one, given that it’s already primed to hit the one. It seems obvious in this case that they shouldn’t flip the switch back. Thus, deontology has to hold that taking an action and then reversing it, such that nothing in the external world differs from the case where the action was never taken, is seriously morally wrong.
If flipping the switch is wrong, then it seems that flipping the switch to delay the decision ten minutes, but then not reversing the decision, is wrong. However, flipping the switch to delay the decision ten minutes and then not reversing the decision is not wrong. Therefore, flipping the switch is not wrong.
Maybe you hold that there’s some normative significance to flipping the switch and then flipping it back, making it so that you should refuse the genie’s offer. This runs into issues of its own. If it’s seriously morally wrong to flip the switch and then to flip it back, then flipping it an arbitrarily large number of times would be arbitrarily wrong. Thus, an indecisive person who froze time and then flipped the switch back and forth a googolplex times would have committed the single worst act in history by quite a wide margin. This seems deeply implausible.
Either way, deontology seems committed to the bizarre principle that taking an action and then undoing it can be very bad. This is quite unintuitive. If you undo an action, such that it had no effect on anything because it was cancelled out, that can’t be very morally wrong. Much like a piece of writing can’t be bad if one hits the undo button and replaces it with good writing, it seems like actions that are annulled can’t be morally bad.
It also runs afoul of another super intuitive principle, according to which if an act is bad, it’s good to undo that act. On deontological accounts, it can be bad to flip the switch, but also bad to unflip the switch. This is extremely counterintuitive.
9 Huemer (2009) gives another paradox for deontology, starting by laying out two principles (p. 2):
“Whether some behavior is morally permissible cannot depend upon whether that behavior constitutes a single action or more than one action.”
This is intuitive—how we classify the division between actions shouldn’t affect their moral significance.
Second (p.3) “If it is wrong to do A, and it is wrong to do B given that one does A, then it is wrong to do both A and B.”
Now Huemer considers a case in which two people are being tortured, prisoners A and B. Mary can reduce A’s torture by some amount by increasing B’s torture by half that amount. She can do the same thing in the other direction for B. If she does both, this would clearly be good—everyone would be better off. However, on the deontologist’s account, both acts are wrong: torturing one person to prevent greater torture for another is morally wrong.
If it’s wrong to cause one unit of harm to prevent 2 units of harm to another, then an action which does this for two people, making everyone better off, would be morally wrong. However, this clearly wouldn’t be morally wrong.
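To make the numbers concrete (a minimal sketch; the two-for-one trade ratio is from Huemer’s case as described, while the starting torture levels of 10 units each are just illustrative assumptions of mine):

```python
# Huemer's two-prisoner case: each transfer reduces one prisoner's torture by
# two units while adding one unit to the other. Each transfer, taken alone, is
# the kind of act deontology forbids (harming one person to reduce greater harm
# to another), yet doing both leaves both prisoners strictly better off.
torture_a, torture_b = 10, 10  # starting torture levels (illustrative assumption)

# Transfer 1: reduce A's torture by 2 at the cost of 1 extra unit for B.
torture_a, torture_b = torture_a - 2, torture_b + 1
# Transfer 2: reduce B's torture by 2 at the cost of 1 extra unit for A.
torture_a, torture_b = torture_a + 1, torture_b - 2

print(torture_a, torture_b)  # 9 9: both are better off than the original 10 and 10
```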
10 Suppose one is deciding whether to press a button. Pressing the button would have a 50% chance of saving someone, a 50% chance of killing someone, and would certainly give them five dollars. Most moral systems, deontology in particular, would hold that one should not press the button.
However, Mogensen and MacAskill (2021) argue that this situation is analogous to nearly everything that happens in one’s daily life. Every time a person gets in a car they affect the distribution of future people, by changing very slightly the time at which lots of other people have sex.
Given that any act which changes the traffic by even a few milliseconds will affect which sperm out of any given ejaculation fertilizes an egg, each time you drive a car you causally change the future people that will exist. Your actions are thus causally responsible for every action that will be taken by the new people you cause to exist, and no doubt some will violate rights in significant ways and others will have their rights violated in ways caused by you. Mogensen and MacAskill argue that consequentialism is the only way to account for why it’s not wrong to take most mundane, banal actions, which change the distribution of future people, thus causing (and preventing) vast numbers of rights violations over the course of your life.
11 The Pareto principle, which says that if something is good for some and bad for no one then it is good, is widely accepted. It’s hard to deny that something which makes people better off and harms literally no one is morally good. However, from the Pareto principle, we can derive that organ harvesting is morally the same as the trolley problem.
Suppose one is in a scenario that’s a mix of the trolley problem and the organ harvesting case. There’s a train that will hit five people. You can flip the switch to redirect the train to kill one person. However, you can also kill that person and harvest their organs, which would allow the 5 people to move out of the way. Those two actions seem equal, if we accept the Pareto principle; both of them result in all six people being equally well off. If the organ harvesting action created any extra utility for anyone, it would be a Pareto improvement over the trolley situation.
Premise 1 One should flip the switch in the trolley problem
Premise 2 Organ harvesting, in the scenario described above, plus giving a random child a candy bar, is a Pareto improvement over flipping the switch in the trolley problem
Premise 3 If action X is a pareto improvement over an action that should be taken, then action X should be taken
Therefore, organ harvesting plus giving a random child a candy bar is an action that should be taken
Part 2: Specific objections to the organ harvesting case
First, there’s a way to explain away our organ harvesting judgments sociologically. Rightly, as a society, we have a strong aversion to killing. However, our aversion to death generally is far weaker. If it were as strong, we would be rendered impotent, because people die constantly of natural causes.
Second, we have good reason to say no to the question of whether doctors should kill one to save five. A society in which doctors violate the Hippocratic oath and kill one person to save five regularly would be a far worse world. People would be terrified to go into doctor’s offices for fear of being murdered. While this thought experiment generally proposes that the doctor will certainly not be caught and the killing will occur only once, the revulsion to very similar and more easily imagined cases explains our revulsion to killing one to save 5.
Third, we can imagine several modifications of the case that make the conclusion less counterintuitive.
First, imagine that the six people in the hospital were family members whom you cared about equally. Surely we would intuitively want the doctor to bring about the death of one to save five. The only reason we have the opposite intuition in the case where family is not involved is that our revulsion to killing can override other considerations when we feel no connection to the anonymous, faceless strangers whose deaths are caused by the doctor’s adherence to the principle that they oughtn’t murder people.
A second objection to this counterexample comes from Savulescu (2013), who designs a scenario to avoid unreliable intuitions. In this scenario there’s a pandemic that affects every single person and renders people unconscious. One in six people who become unconscious will wake up; the other five-sixths won’t. However, if the one-sixth of people have their blood extracted and distributed, thus killing them, then the other five-sixths will wake up and live normal lives. It seems in this case that it’s obviously worth extracting the blood to save five-sixths of those affected, rather than only one-sixth.
Similarly, if we imagine that 90% of the world needed organs, and we could harvest one person's organs to save 9 others, it seems clear it would be better to wipe out 10% of people, rather than 90%.
A fourth objection is that, upon reflection, it becomes clear that the action of the doctor wouldn’t be wrong. After all, in this case, there are four more lives saved by the organ harvesting. It seems quite clear that the lives of four people are fundamentally more important than the doctor not sullying themself.
Fifth, we would expect the correct view to diverge from our intuitions in a wide range of cases. The persistence of moral disagreement, and the fact that throughout history we’ve gotten lots of things morally wrong, show that the correct view would sometimes diverge from our moral intuitions. Thus, finding some case where utilitarianism diverges from our intuitions is precisely zero evidence against utilitarianism, because we’d expect the correct view to be counterintuitive sometimes. However, when it’s counterintuitive, we’d expect careful reflection to make our intuitions come more in line with the correct moral view, which is the case, as I’ve argued here.
Sixth, if we use the veil of ignorance and imagine ourselves not knowing which of the six people we were, we’d prefer saving five at the cost of one, because it would give us a 5/6 rather than a 1/6 chance of survival.
Utilitarianism Is Not Like Libertarianism
Not in the slightest
Critics of utilitarianism are very fond of comparing libertarianism to utilitarianism. They claim that utilitarianism involves trotting out a plausible sounding moral principle, but then just biting the bullet on all of its crazy results. Instead, we should take its litany of crazy results to be a really good reason to reject it.
There are radical libertarians who think that one should never violate rights, whatever the benefits. Thus, these people would be opposed to stealing one penny from Jeff Bezos to prevent infinite torture. This view is just crazy!
When these people cobble together justifications for their deranged views, it’s important to keep in mind that, even if the principles sound intuitive—being about owning oneself and so on—once they start to have crazy implications, they no longer sound plausible at all. My acceptance of the libertarian moral axioms comes with a ceteris paribus clause—if it requires me sanctioning infinite torture instead of stealing a penny, then I have decisive reason to reject the premises.
Something very similar is true of natural law theory. Natural law theory may sound plausible at first, but when it entails the wrongness of contraception, homosexuality, and all lying, and that animals don’t matter, it no longer seems plausible at all.
However, utilitarianism is not like these. When a radical libertarian bites the bullet, there’s nothing more they can say. The same is true of the natural law theorists. Their only defense is that it falls out of the theory, and the theory seems correct.
However, when a utilitarian bites the bullet, there’s a lot they can say—other than just that it follows from the theory. As I’ve argued at great length throughout the utilitarianism wins outright series, the controversial conclusions of utilitarianism—from the repugnant conclusion, to the utility monster, to the organ harvesting case—all end up being impossible to deny, upon reflection. This is exactly what we’d expect of the correct moral theory: it may diverge from our intuitions somewhat, but when it does, we’d expect careful reflection to bring our intuitions more in line with it.
A counterexample only counts as a counterexample if we continue to find it unintuitive upon reflection. However, with the utilitarian counterexamples, there are always counterarguments showing that they fail as counterexamples.
Noam Chomsky says a lot of controversial things. I don’t, in this article, intend to take a broad stance on how correct Chomsky is. However, in the case of Chomsky, to argue that his theories are broadly wrong, one would have to do more than merely point out that he says unintuitive things; one would have to analyze the unintuitive things that he says and see whether they turn out to be correct. The same should be done with utilitarianism.
However, let’s consider some far less sophisticated leftists than Chomsky—ones whose sole argument for radical anti-capitalist views is that Marx said that capitalism is bad; in other words, it follows from the theory. If there were no robust defense of the controversial claims that were made, that would count strongly against their theories.
Utilitarianism is more like Chomsky and less like random foolish leftists. When it makes a controversial claim—like that you should harvest organs—it brings the receipts.
Thus, my disagreement with those who claim that utilitarianism is like libertarianism is factual, not methodological. I agree that a very good tip-off that your theory is wrong is that it keeps turning up crazy results. If your theory’s starting premises are less plausible than the counterexample to the theory, then we have decisive reason to reject your theory.
Thus, one good way to see that you’re going wrong in theorizing is if you keep having to say “look—I know this sounds wrong, but it’s what the theory says.” If instead, what you find yourself saying is, “look—I know this sounds wrong, but here are a dozen crazy things that follow from rejecting it,” you should be in the clear. All the better for utilitarians.
Responding to Ben Burgis, Responding to Me
Recap to the recap to our debate
Introduction
After my debate with Ben Burgis about organ harvesting, I published my opening statement. Ben wrote a response to my opening statement, which I’ll respond to in this article. The quotes from my original article are in italics.
A few preliminary notes. First, there may be some spelling errors—I wrote this from roughly 12-4 in the morning, and am not going to check the grammar of my more than 20,000 word article.
Second, big thanks to Ben for engaging me on this topic. I think the utilitarians have the much better side of the argument, so it’s nice to consider the arguments in depth.
Matthew starts with some general arguments against the concept of moral rights.
As I’ve been known to do!
Objection 1
Here’s the first one:
1 Everything that we think of as a right is reducible to utility considerations. For example, we think people have the right to life, which obviously makes people’s lives better. We think people have the right not to let other people enter their house, but we don’t think they have the right not to let other people look at their house. The only difference between shooting bullets at people and shooting soundwaves at them (i.e. making noise) is that one causes a lot of harm and the other does not. Additionally, if it began to maximize hedonic value for things that we currently don’t think of as rights to be enshrined as rights, we would think that they should be recognized as rights. For example, we don’t think it’s a violation of rights to look at people, but if every time we were looked at we experienced horrific suffering, we would think it was a rights violation to look at people.
It’s generally true about moral theories that partisans of one moral theory will try to show how their theory can capture what’s plausible about another. How successful any given attempt is, of course, has to be evaluated case-by-case. So for instance I claim that much of what’s plausible about utilitarianism can be (better) explained by Rawls’s theory of justice. (Consequences are given their due, but we’re still treating individuals as the unit of moral evaluation in a way we’re just not when we throw all harms and all benefits onto the scales as if they were all being experienced by one great big hive mind.) Matthew, meanwhile, claims that the appearance of (non-reducible) moral rights can be better explained by the moral importance of good and bad consequences.
As I’ve previously shown, consistent Rawlsians would be utilitarians—disproportionately caring about the badly off makes no sense. John Harsanyi>John Rawls.
On a more pressing note, Ben’s response misses the point. My claim wasn’t merely that consequentialism captures what we want about rights, it was that consequentialism is the only way to explain what rights we have in the first place. Consequentialism says that X should be enshrined as a right iff1 doing so has good consequences. Thus, this explains what rights are, and which things are rights. Other accounts of rights, I argue, are wildly implausible, lacking firm ontological foundation.
A straightforward moral rights intuition is that we have a right not to be shot that’s much more serious than our right not to have certain soundwaves impact our ears. It should be noted that everyone, on reflection, actually does believe that we have at least some rights against people producing soundwaves that will enter our ears without our consent — hence noise laws. Even if we limit ourselves to talking, if the talking in question is an ex-boyfriend who won’t stop following around his ex-girlfriend telling her about how much he loves her and how he won’t be able to live without her, how he might commit suicide if she doesn’t come back, etc., that’s very much something we all actually do think on reflection the ex-girlfriend has a right against. But Matthew’s general point stands.
The point stands all the more as a result of these examples. Making sound is a rights violation iff it causes enough harm to be worth calling one. The same is true of the ex-boyfriend case. The badness of making loud sound isn’t about the nature of the sound, it’s about the nature of the harm. If the sounds were half as loud, but they caused more harm because humans had more sensitive ears, emitting them would be a graver rights violation.
We all consider our rights against being shot much weightier than our rights against hearing noises we don’t want to hear. For one thing, we tend to think — not always but in a very broad range of normal situations — that the onus is on the person who objects to some noise to say “hey could you turn that down?” or “please leave me alone, I’m not interested in talking” while we don’t think the onus is usually on the person who’s dodging gunshots to say “please stop shooting at me.” And in the noise case there’s a broad range of cases where we think the person who doesn’t want to be exposed to some noise is being unreasonable and they’ll have to suck it up and the range of cases where we’d say something parallel about bullets is, at the very least, much narrower.
So — what’s the difference?
Matthew thinks the only difference is that the consequences of being shot are worse than the consequences of someone talking to you. He further thinks that if it’s the only difference, we have no reason to believe in (non-reducible) moral rights. Both of these inferences are, I think, far too quick, and my contention is that neither really holds up to scrutiny.
I don’t claim that that is the only difference. I merely claim that this is the best explanation of a wide range of rights that exist. Take another example—you have the right not to let other people enter your house, but you don’t have the right not to let people look at your house. Consequentialism naturally explains this. It additionally explains, quite naturally, why in the noise case the threshold at which banning noise seems reasonable is the point at which it starts to cause significant harm. It also explains why harm is the threshold for banworthy pollution, yet why it generally makes more sense to tax pollution rather than ban it; why having a kid is bad iff the kid is expected to live a terrible life; why it’s fine to own pets but not to torture them; the permissibility of wars; and political authority.
Now, depending on what kind of noise we’re talking about, the context in which you’re hearing it, etc., noises can cause all sorts of harms — irritation, certainly, but also lost sleep, headaches, or even hearing loss. But the effects of bullets entering your body are typically way worse! Fair enough. But is this the only difference between firing soundwaves and bullets without prior consent?
It’s really not. For example, one difference that’s relevant from a rights perspective is that a great many normal cases of Person A talking to Person B when Person B would rather they didn’t talk to them are cases where Person A holds a reasonable belief that there’s at least a non-negligible chance that Person B will welcome being talked to. (In fact, I suspect that the great majority of cases are like this.) Cases where Person A shoots bullets into Person B while holding the reasonable but mistaken belief that Person B would welcome being shot are…very rare.
Several points are worth making. First, utilitarianism also makes sense of this distinction. After all, given that rights are societal heuristics, if they turned out to be wildly unpredictable, such that one had no idea whether they were violating rights, that would be a disaster from a consequentialist perspective. Thus, consequentialism explains, not assumes, why this is a salient distinction. Second, A can only predict that B will enjoy being talked to if most people like B enjoy being talked to. However, if most people like being talked to, that means that the justifiability of talking to people relates to it fulfilling people’s desires—making them better off—which is nonetheless a consequentialist notion. Finally, it’s permissible to talk even in cases where one doesn’t think the other person will appreciate what they say. For example, if I tell my child not to have any more dessert, I may predict that they won’t enjoy it, but that action is permissible.
Another relevant difference is that it’s often difficult or even impossible to secure permission to talk to someone without, well, talking to them. Shooting people isn’t like that. You don’t have to shoot a couple of bullets at someone first to see if they like it. You can just ask, “Would you by any chance be amenable to me shooting you?” and then you’re talking not shooting.
This difference is practically relevant; however, it doesn’t reflect what is fundamentally salient about the distinction. Imagine, for a moment, that we lived in a society without a word for gun or bullet, so that the only way to ask whether someone wanted to be shot was to shoot them and ask if they wanted to repeat what just happened. In this world, shooting random people would still be impermissible.
A third relevant difference, especially if we’re talking about soundwaves in general and not just talking, is that we often feel that there are (at best) competing rights claims at play in soundwave situations in a way that typically isn’t true when people shoot each other. If John stays out until dawn drinking whiskey on Friday night and a few hours after he goes to sleep he’s woken up by the noise of his neighbor Jerry mowing his (Jerry’s) lawn, we tend to think that however little John might like it, he’ll just have to get over it if there’s nothing he can do on his end to block out the noise — because we think Jerry has a right to mow his own lawn. And notice that this seems correct even though the bad consequences for Jerry from his lawn not being mowed that day might well be far less than the bad consequences for John from being woken up so soon! For example, John might experience a pounding headache for hours and Jerry might simply be vaguely displeased about his grass not being completely even.
Two points are worth making. First, my case was specifically about talking, which one doesn’t have an unqualified right to do—particularly not in ways that violate rights. If talking were, much like shooting people, a rights violation, then doing it would be impermissible, whatever its utility. Second, frequently in society we don’t have total access to information, and rights are imperfect instruments. Thus, there will be cases where rights don’t rule out egregious harm (e.g. causing suicide), and others where they rule out harmless action (e.g. intellectual property rights, for property that I haven’t already purchased).
Thus, rights don’t always maximize well-being—rather, they are best explained as well-being-maximizing heuristics. If, however, we want to capture the morally salient features of the situation, it would seem that Jerry waking John is pretty immoral. If Jerry knew about the harm to John, and acted anyway, he’d be a total asshole2.
Additionally, if it began to maximize hedonic value for things that we currently don’t think of as rights to be enshrined as rights, we would think that they should be recognized as rights. For example, we don’t think it’s a violation of rights to look at people, but if every time we were looked at we experienced horrific suffering, we would think it was a rights violation to look at people.
The first sentence is just wrong. There are plenty of things that might maximize hedonic value that no one would normally think should be enshrined as rights. (Enshrining the right of people otherwise in danger of dying of kidney failure to a spare kidney would plausibly maximize hedonic value.) The second, though, is under-described but plausibly correct — because we think we having “horrific” suffering inflicted on us is exactly the sort of thing against which we have a right.
We’ll return to that point in a moment, but first note that we have rights against some kinds of harms but not against others. You have a right against being killed, for example, but you don’t have a right against having your heart broken. And degree of harm is often very relevant to which things you can plausibly be said to have rights against.
My first sentence was too broad—one could imagine it being optimific to give a utility monster the right to free food from the flesh of every human, yet that would nonetheless not be considered a right by most people. This is one of the few things that Ben got right, though not for the reason that he gave. The right to bodily autonomy entails that the government can’t take one’s organs, and that is a more important right, one that generally has better outcomes.
The fact that we have rights against being killed, but not against heartbreak, is best explained by utilitarianism. While heartbreak can be very painful—certainly worse than assault in some cases—a world in which breaking someone’s heart was outlawed would be very, very bad from the standpoint of utility. The right to break another’s heart is important for relationships—relationships would be dysfunctional if one could never leave. Thus, like nearly every other point discussed so far, the best account is a utilitarian one.
Even when we’re specifically talking about bodily autonomy, that comes in degrees. It doesn’t seem crazy, for instance, to say that laws against abortion are a far more profound violation of bodily autonomy (and hence far less likely to be justifiable by weighing competing values) than vaccine mandates. It might be reasonable to (a) refuse to let a mental patient leave the hospital because you judge them to be a threat to themselves and/or others but (b) have moral reservations (even if we’re assuming a time and a place where it’s entirely up to the doctor) about giving that same patient electroshock therapy against their will even if it’s absolutely true that they would benefit from it.
But the amount of bodily autonomy at stake doesn’t seem relevant to the strength of the right to bodily autonomy. This can be seen with the following example. Suppose that we had control over a massive limb that stretched into the fourth dimension, just sort of flapping around, which we could direct with our minds if we wanted to. If this limb were causing harm, then even if it made up 99.9999% of our body weight, it wouldn’t be a violation of rights to destroy it. This is because the right to bodily autonomy only matters if it makes people well off. The only reason to care about a right—apart from irrational rule worship—is if it makes people well off.
I don’t really have the intuition that there’s an asymmetry between the justification for beneficial electroshock therapy and for restraining someone. One cause of the asymmetric intuition may just be a negative reaction to electroshock therapy, given its horrific history of being used in gay conversion therapy. Additionally, even if treatment is beneficial, there’s a difference between something being beneficial to do and beneficial to compel. A patient may individually benefit from treatment, but giving the government the right to treat people without their permission would make people terrified to go in and genuinely seek help. There is also significant value in upholding the liberal value of allowing people to choose how their lives go, even if they choose poorly.
The difference between a pure utilitarian framework where you assume all we’re doing is weighing harms against benefits and a rights framework in which we (non-reducibly) have a right against being harmed in certain ways is nicely demonstrated by thinking about the debate started by Peter Singer about whether it’s wrong not to donate excess income to famine relief charities. Whatever position you take on that issue, we’re all going to agree — I suppose some particularly ferocious bullet-biter might disagree? — that it would definitely be wrong to fly to a famine-stricken country and shoot some people yourself. Even if we could be sure that the people you shot would have starved to death — even if, as is plausible if you’re a good enough shot, they would have suffered more if they’d starved instead of being shot — you still can’t do that. In fact, I suspect that most people who Singer has convinced that it’s wrong not to donate excess income to famine relief charities would still think an expert marksman — good at delivering headshots that will kill people quickly — flying to a famine-stricken country to shoot some famine victims would in fact be much much much worse than just not donating.
In response to the first part, I’ll just quote the section of my book in progress on this very topic.
Some, like Carson (1983), have argued that utilitarianism has the unacceptable consequence that we should kill people who have a negative quality of life. If their quality of life is negative, they overall detract from the hedonic sum, such that things would be better if they were killed. However, this conclusion strikes many as counterintuitive.
There are two crucial questions worth distinguishing. The first is whether one should kill unhappy people, and the second is whether the world is better when unhappy people die. Utilitarianism would, in nearly all cases, answer no to the first. One can never be particularly confident in one’s judgments about the hedonic value of another person’s life, and killing damages the soul and undermines desirable societal norms. Thus, the cases in which utilitarianism prescribes that people who are unhappy should be killed are not the cases likely to arise in the real world, which are what trigger our intuitions. They are cases in which there are no spillover effects, no one would ever find out about the killing, the killing won’t undermine one’s character, and you can know with absolute or near-absolute certainty that the person has a bad life which will remain bad for the entirety of its existence.
Thus the cases in which utilitarianism prescribes that sad people should be killed are ones in which the relevant question is far more like the second one posed above, namely, whether the world is better because of the deaths of some people, based purely on facts about the life of the person. This caveat is to distinguish the case from one in which a person is killed to prevent other bad things from happening, such as would occur from killing Hitler. If we imagine a scenario in which there are aliens who know with absolute certainty that a person will be miserable for the next year and then die, who can bring about their death in a way that will be believed to be an accident, producing zero negative effects on the aliens, and maximizing the sum total of well-being in the world, the utilitarian account begins to seem more intuitive.
Thus, the question primarily boils down to whether a person with more sadness than happiness is better off dead. If people are better off dead, then killing them makes them better off. The intuition against killing seems dependent on the notion that it makes the victim worse off. Common sense would seem to hold that people can be better off dead, at least in some cases. If a person is about to undergo unfathomable torture, it seems plausible that they’d be better off dead. So utilitarianism and common sense morality are in 100% agreement that some people are better off dead. The disagreement is merely about whether this applies to all sad people.
However, it seems all theories of well-being would have similar conclusions. Objective list theory would hold that a person is better off dead if the badness of their life is greater than their pursuit of goods on the objective list. Desire theory holds a person is better off dead if they have more things in their life that they don’t desire than things they do desire.
Thus, the argument can be formulated as follows.
Premise 1 If killing people makes them better off, then you should kill people, assuming that there will be no additional negative side effects. This principle has independent plausibility; a morality that isn’t concerned with making victims better off seems to be in error. Additionally, this premise follows from the intuitive Pareto principle, which states that something is worth causing if it’s better for some and worse for none.
Premise 2 Killing people who have a negative quality of life makes them better off. This seems almost true by definition and follows from all theories of well-being.
Therefore, you should kill people who have a negative quality of life, assuming that there will be no additional side effects.
Premise 3 If you should kill people who have a negative quality of life, assuming that there will be no additional side effects, then the hedonistic utilitarian judgments about killing people who have more sadness than happiness are plausible. The objective list and desire theories also hold that you should kill people under similar circumstances, relating to desire and objective-list fulfillment; therefore, this implication plagues all theories and is not a reason to reject hedonistic utilitarianism.
Therefore, the hedonistic utilitarian judgments about killing people who have more sadness than happiness are plausible.
One might object that a notion of rights would prevent this action. I’ve already provided a litany of arguments against rights. However, even if one believes in rights, it’s hard to make sense of a notion of rights that doesn’t apply in cases of making people better off. For example, it is not a violation of rights to give unconscious people surgery to save their life, even if they haven’t consented, by virtue of their being unconscious. Thus, a notion of rights is insufficient to salvage this objection. A right that will never make anyone better off doesn’t seem to be very plausible.
Additionally, I’ve previously argued that there is no deep distinction between creating a new person with a good life and increasing the happiness of existing people. If this is true and it’s bad to create miserable people, then it would seem to be permissible to kill miserable people, given the extended ceteris paribus clause that has been stipulated.
Several more arguments can be provided for this conclusion. Imagine that a person is going to have a bad dream. It seems reasonable to make them not have that dream, if one had the ability to make their sleep dreamless rather than containing a miserable dream. Similarly, if a person would be miserable for a day, it seems reasonable to make them not experience that day, as long as that would produce no undesirable consequences. However, there’s no reason this principle should only apply to limited time durations. If it’s moral to cause a person not to experience a day because it would contain more misery than joy, then it would also seem reasonable to make them not experience their entire life.
Additionally, it seems reasonable to kill people if they consent to being killed and live a terrible life. However, if they don’t consent because of error on their part, then it would be reasonable to fix the error, much like it would be reasonable to force a deluded child to get vaccinated. Killing them seems plausibly analogous in this case.
Finally, there are many features of the scenario that undermine the reliability of our intuitions. First, there’s status quo bias, given that one upsets the status quo by killing someone. It seems much more intuitive that a person is better off dead if that occurs naturally than being killed by a particular person, which shows the influence of status quo bias. Second, we rightly have a strong aversion to killing. Third, it’s very easy to imagine acts becoming societal practices when evaluating their morality. However, killing unhappy people would clearly be a bad societal practice. Fourth, there’s an intuitive connection between killing innocent people and viciousness, showing that character judgments may be behind the intuition. Fifth, the scenario is deeply unrealistic, involving total certainty about claims that we can’t really know in the real world, meaning our intuitions about the world are unlikely to be reliable. It also requires stipulating that a person will never be able to be helped for their misery. Sixth, this prescription is the type that has the potential to backfire, given that it would be bad if people acted on it in any realistic situation. Seventh, this principle seems somewhat related to the ethics of suicide, which people naturally have a strong aversion to.
Burgis’s view seems to rely on a distinction between doing and allowing harm. You can allow rights to be violated, as long as you are not the one doing the rights violating. However, such an account is untenable, as has been argued at length by Bennett, Kagan, and most notably me. To see my response to this, look under “Objection 1: The No Good Account Objection.”
The case of the marksman would obviously not be justified by utilitarianism unless we make wildly implausible stipulations. We’d have to assume that it’s guaranteed no one will find out about what they’re doing, that they know the people won’t survive and will live net-negative lives, that they have the ability to be a perfect marksman, and so on.
The reason we may find this unintuitive is likely that we imagine it is, in some way, bad for the victims. However, on both objective list theory and hedonism—which together represent the overwhelming majority view—this action would be good for them. The only other theory of the things that make people well off is desire theory, which is wildly implausible as I argue here and here.
If one doesn’t hold that the badness of killing relates to it being bad for the victim, then there’s a rather puzzling verdict. It begins to seem self-centered for one to hold that it would be good for a person if they were killed, but that nonetheless they shouldn’t be killed. The badness of killing does seem to be grounded in facts about the victim.
Additionally, if we hold that it’s good for the victims, and thus overall, if they die painlessly, then it seems as though we should hope for the victims to die. However, if we should hope for the deaths of the victims, then it seems that it would be good to kill the victims. After all, it would be very strange for the correct morality to hold that one should hope that something happens, but that one isn’t supposed to bring it about.
There is a reason that it makes sense to terminate the life of the terminally ill who are in immense pain. That reason is grounded in it being good for them. If a person is about to starve to death, they are functionally terminally ill—and in immense pain. It thus makes sense to terminate their life.
To consider this more impartially, imagine that we knew that light-years away some aliens were about to die. They were in immense pain. We could bring about their inevitable death sooner, leading to less pain, and no one would ever find out about it. It seems immensely clear in this case that doing so would be the right thing. This is especially true if it would make the aliens better off. However, it’s quite trivial that painlessly ending the lives of those whose remaining segment of life is bad makes them better off.
Another of Matthew’s examples is looking at a house vs. entering the house. We have a right against people entering our homes without our permission, but not against them looking at our homes. True! But why? Matthew thinks the difference is about harm but that doesn’t really seem to capture our intuitions about these cases — we don’t typically think people have a right against having their homes entered only when they’ll experience some sort of harm as a result. If Jim enjoys watching Jane sleep, for example, and he knows she’s a very heavy sleeper who won’t hear him slip in her window and pull up a chair by her bed to watch — and he leaves long before she wakes up — this is surely the kind of thing Jane has a very strong right against. Part of the difference between that and looking at her house is about property rights (the kind even socialists believe in, the right to personal as opposed to productive property!) but there’s part of it that’s not about that and we can draw out that distinction nicely by imagining that he’s watching her sleep through high-powered binoculars from just off her property. Jane may have a legal right against this, and she certainly has a moral right against it, because it’s an invasion of privacy — even if she experiences no harm whatsoever as a result.
Before moving on to (2) in Matthew’s list of general arguments against rights, one quick note about methodology that the Jane/Jim case brings out nicely. If two moral views both have the same result in some instance — for example, there are many cases in which we normally think people have some right where that can be explained either in utilitarian terms or in terms of non-reducible rights — a useful way of deciding between them is to consider cases (which might have to be hypothetical ones) where the frameworks would diverge. In act-utilitarian terms, it’s a little tricky to explain what’s wrong with Jim’s actions. There are moves the act-utilitarian could make here, but it’s a little tricky. In terms of bog-standard rights assumptions, though, the wrongness is straightforward.
In my book in progress, I also argue that utilitarianism gives the best account of privacy rights. What follows thus largely quotes the book.
One possible objection to utilitarianism is that it doesn't adequately account for the value of privacy. After all, utilitarianism holds that violations of privacy are bad if and only if they cause harm that negatively impacts people's mental states. If there were people spying on you all the time, gaining immense joy from it, and you never found out, utilitarianism holds that that state of affairs would be good overall. Many people find this a good reason to reject utilitarianism. I’m not convinced!!
Objection 1: Aliens
Suppose that there were a trillion aliens who experienced, each second, the sum total of all the suffering experienced during the Holocaust by all of its victims. They could violate every single person’s privacy up to one quadrillion times per second, resulting in the possibility of about 7.6 septillion privacy rights violations every second (wow!). Each time they violate privacy their suffering diminishes slightly, such that violating people’s privacy a trillion times reduces their hedonic state to a neutral level. Thus, if they violate people’s privacy 7.6 septillion times, they’ll be in a state of unfathomable bliss, experiencing more satisfaction per second than all humans ever have. The humans never find out about these privacy violations.
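For the curious, here is where a figure like 7.6 septillion plausibly comes from (a rough sketch; the quadrillion-per-second rate is stipulated above, while the world population of roughly 7.6 billion is my assumption about where the 7.6 comes from):

```python
# Rough scale check for the alien privacy scenario.
humans = 7.6e9                          # assumed world population (~7.6 billion)
violations_per_human_per_second = 1e15  # one quadrillion, as stipulated above

violations_per_second = humans * violations_per_human_per_second
print(f"{violations_per_second:.1e} violations per second")  # ~7.6e24, i.e. about 7.6 septillion
```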
If privacy rights really are intrinsically valuable independent of hedonic considerations, then, given that the aliens violate privacy 7.6 septillion times per second, they would be committing the single worst act in history. The badness of the Holocaust, slavery, and a global nuclear war killing everyone would pale in comparison to the badness of their action. However, not only does their action not seem like the worst action ever, it seems positively good. If their actions produce no negative mental states for anyone, it seems rather cruel to condemn them to undergo the total suffering of the Holocaust every single second.
One might have the intuition that the aliens are acting in a seriously morally wrong way. If so, perhaps there is a fundamental divergence of intuitions here. However, several additional things are worth reflecting on.
First, imagine changing the scenario so that humans were in the aliens’ position. In order to avoid enduring the total suffering of the Holocaust every second, we had to constantly spy on lots and lots of aliens. I wouldn’t want to endure the Holocaust every second, and it would seem quite morally appropriate to spy on lots of aliens without their knowledge to avoid such a grisly fate.
Second, even if one has the intuition that it would be morally wrong, it’s hard to imagine having the intuition that it would be the single worst act in history by an unfathomable number of orders of magnitude. The Holocaust, Jim Crow, slavery, and many other things seem clearly worse.
Third, imagine that the aliens, without the humans knowing that there was a connection between the two actions, offered the humans half of the utility that they gained from spying on them. If this happened, merely by distributing the gains, the humans would now be experiencing more joy than has been experienced so far in history. It’s hard to imagine that they’d be worse off. If both the humans and the aliens are better off, it’s hard to imagine how the action would be wrong, especially if they were unfathomably better off.
One might object that the amount of utility gained is worth a privacy violation, even though privacy violations are bad. This, however, misunderstands the scenario. Each privacy violation has virtually no impact on utility. The only reason the privacy violations have a significant effect is that they occur septillions of times per second. Each individual violation produces virtually no utility.
One might object that privacy violations have declining marginal disvalue. This, however, runs into a few issues.
1 This intuition is what we’d expect on utilitarianism. There is declining marginal harm in terms of utility caused by one more rights violation.
2 This doesn’t seem to track our intuitions very well if there are distinct types of privacy violations. For example, if one was spied on in the restroom by a creep who put a camera in, that doesn’t seem to undermine the harm of NSA surveillance. We can thus stipulate for the purpose of the thought experiment that each time rights are violated, it’s done in a new way, to avoid repetition (these aliens are very creative, finding billions of different ways to violate privacy every second!).
3 To avoid concerns about decreasing marginal value, we could suppose that the privacy violations did not repeat. Instead of the aliens spying on us, they spied on a much larger alien civilization, with a googolplex of aliens, each of the spying aliens only spying on any individual alien once, but spying on 7.6 septillion aliens per second. The larger aliens never find out about it, and in fact it would be metaphysically impossible for them to ever find out about it.
We could imagine a related scenario. Imagine if the air was conscious and was spying on us every second. It nevertheless seems like the air would not be acting particularly immorally, and certainly would not be committing the worst atrocity in human history.
Objection 2: Viciousness
Normative judgments are often intricately linked with character judgments. Thus, when thinking of violations of privacy rights, our judgments may be influenced by thinking about whether the person who violates privacy rights is a bad person, which they usually are in the real world. This viciousness account explains many of our real-world moral judgments about when privacy matters. Even if we’re not harmed, a person who is reckless and violates our privacy in a way that would bother us if we found out about it seems like a bad person and has certainly acted poorly. It is largely analogous to drunk driving. Drunk driving isn’t always harmful; however, one shouldn’t drive while drunk because it’s reckless, even if no accident actually happens.
If we consider real world scenarios, the viciousness account combined with a consequentialist account seems to explain our judgments. We don’t mind that parents change their children’s diapers or that people look at other people, often gaining important information about them, because in the real world such things usually have good consequences and don’t indicate viciousness, or anything else defective about one's character.
Additionally, the badness of privacy violations seems desire dependent. If one waives their privacy rights, we don’t generally think they’re worse off. It’s only when one doesn’t consent and expects to be harmed that privacy violations start to seem bad.
Objection 3: Heuristics
Our moral judgments are also largely explainable in terms of heuristics. In the real world privacy violations are often harmful, for example those done by government agencies or private people. Thus, it’s not surprising that we’d find privacy to be intrinsically valuable. If every time a person violates privacy in the real world it’s bad, we’d develop the judgment that it’s always bad, even in counterfactual scenarios in which it’s not harmful.
If every time a person pressed a button bad things happened, we might find it morally bad to press the button, even in scenarios in which pressing the button wouldn’t actually be harmful. The drunk driving case above is a prime example of this.
Objection 4: What if everyone did it?
Imagine we were deciding between the following worlds.
World 1: Everybody constantly (dozens of times per second) violates everybody else's privacy, and everyone has very high positive utility.
World 2: No one violates privacy, but everyone is miserable.
World 1 seems clearly better. However, if privacy violations were intrinsically very bad, then a world where everyone was miserable but no privacy was violated could be better than one in which people’s privacy rights were constantly violated but everyone was happy. This shows that our appreciation of privacy rights is merely instrumental: privacy rights don’t seem to matter in and of themselves.
All of the violations of privacy rights that we find objectionable involve scenarios in which the violations cause lots of harm. However, if we stipulate that lots of privacy violations produce a Pareto improvement from the standpoint of well-being, it begins to seem much more intuitive that people should violate each other's privacy rights.
Thus, it seems like reflection on what fundamentally matters reveals that privacy does not matter intrinsically: it only matters as a means to an end.
There Is No Adequate Principled Defense of Privacy
It’s unclear what makes privacy intrinsically valuable or how to maintain the intrinsic value of privacy in the absence of utilitarian considerations. Merriam-Webster defines privacy as “a: the quality or state of being apart from company or observation.” However, this seems clearly not to be intrinsically valuable. There’s nothing intrinsically immoral about observing people in public, for example. It also seems odd to say that inviting friends over undermines your privacy.
The next definition they give is “b : freedom from unauthorized intrusion.” However, this doesn’t seem intrinsically valuable either, depending on how privacy is defined. When a person observes another in public, their observation is not authorized. However, looking at people in public places is clearly not an objectionable violation of privacy.
(DeCew, 2018), following Prosser, characterizes invasions of privacy as
“1 Intrusion upon a person’s seclusion or solitude, or into his private affairs.
2 Public disclosure of embarrassing private facts about an individual.
3 Publicity placing one in a false light in the public eye.
4 Appropriation of one’s likeness for the advantage of another (Prosser 1960, 389).”
However, this is clearly not an adequate basis for what fundamentally matters about privacy. The first one isn’t clear: a definition of privacy can’t very well appeal to private affairs or to seclusion or solitude, since those concepts seem to be roughly synonyms of privacy. What determines whether something is a “private affair?”
The second feature is consistent with utilitarianism: public disclosure of embarrassing facts obviously causes hedonic harm. The third and fourth can also be explained by utilitarian considerations.
(Parent, 1983) attempts to provide a definition of privacy (p. 269):
“Privacy is the condition of not having undocumented personal knowledge about one possessed by others. A person's privacy is diminished exactly to the degree that others possess this kind of knowledge about him.”
Parent clarifies (pp. 269-271):
“A full explication of the personal knowledge definition requires that we clarify the concept of personal information. My suggestion is that it be understood to consist of facts about a person' which most individuals in a given society at a given time do not want widely known about themselves. They may not be concerned that a few close friends, relatives, or professional associates know these facts, but they would be very much concerned if the information passed beyond this limited circle. In contemporary America facts about a person's sexual preferences, drinking or drug habits, income, the state of his or her marriage and health belong to the class of personal information. Ten years from now some of these facts may be a part of everyday conversation; if so their disclosure would not diminish individual privacy. This account of personal information, which makes it a function of existing cultural norms and social practices, needs to be broadened a bit to accommodate a particular and unusual class of cases of the following sort. Most of us don't care if our height, say, is widely known. But there are a few persons who are extremely sensitive about their height (or weight or voice pitch).2 They might take extreme measures to ensure that other people not find it out. For such individuals height is a very personal matter. Were someone to find it out by ingenious snooping we should not hesitate to talk about an invasion of privacy. Let us, then, say that personal information consists of facts which most persons in a given society choose not to reveal about themselves (except to close friends, family, . . .) or of facts about which a particular individual is acutely sensitive and which he therefore does not choose to reveal about himself, even though most people don't care if these same facts are widely known about themselves. Here we can question the status of information belonging to the public record, that is, information to be found in newspapers, court proceedings, and other official documents open to public inspection. (We might discover, for example, that Jones and Smith were arrested many years ago for engaging in homosexual activities.) Should such information be excluded from the category of personal information? The answer is that it should not. There is, after all, nothing extraordinary about public documents containing some very personal information. I will hereafter refer to personal facts belonging to the public record as documented. My definition of privacy excludes knowledge of documented personal information. I do this for a simple reason. Suppose that A is browsing through some old newspapers and happens to see B's name in a story about child prodigies who unaccountably failed to succeed as adults. B had become an obsessive gambler and an alcoholic. Should we accuse A of invading B's privacy? No. An affirmative answer blurs the distinction between the public and the private. What belongs to the public domain cannot without glaring paradox be called private; consequently it should not be incorporated within our concept of privacy. But, someone might object, A might decide to turn the information about B's gambling and drinking problems over to a reporter who then publishes it in a popular news magazine. Isn't B's privacy diminished by this occurrence?3 No. I would certainly say that his reputation might well suffer from it. And I would also say that the publication is a form of gratuitous exploitation. 
But to challenge it as an invasion of privacy is not at all reasonable since the information revealed was publicly available and could have been found out by anyone, without resort to snooping or prying. In this crucial respect, the story about B no more diminished his privacy than would have disclosures about his property interests, say, or about any other facts concerning him that belonged to the public domain.”
This account isn’t very different from the utilitarian account. By having privacy relate generally to information that most people wouldn’t want revealed, it captures privacy as a heuristic that generally produces good outcomes, and by broadening the definition to cover information a particular individual is acutely sensitive about (e.g., the height of someone who is very sensitive about their height), it counts the discovery of such harmful information as an invasion of privacy. However, where this account does diverge from the utilitarian one, it is flawed.
Imagine the following case. A person walking in public says something rude to someone else. That other person records them and posts the video online. Billions of people subsequently find out about it and turn against that individual. It very much seems like their privacy has been violated, and this is because they were severely harmed by the disclosure. Or consider a case in which a magazine publishes private information about someone. By Parent’s definition, reposting that information to a billion people, even if it’s very embarrassing, wouldn’t be a violation of privacy, because the information is already on the public record.
Additionally, consider a case in which a person is convicted in a court of law of a heinous crime. That is on the public record. However, it would still be a privacy violation to broadcast to a billion people that the person was convicted of a particular crime.
Additionally, Parent defends the desirability of privacy primarily by appealing to consequentialist considerations, writing (pp. 276-277):
“Lest you now begin to wonder whether privacy has any value at all, let me quickly point to several very good reasons why people in societies like ours desire privacy as I have defined it. First of all, if others manage to obtain sensitive personal knowledge about us they will by that very fact acquire power over us. Their power could then be used to our disadvantage. The possibilities for exploitation become very real. The definite connection between harm and the invasion of privacy explains why we place a value on not having undocumented personal information about ourselves widely known.
“Second, as long as we live in a society where individuals are generally intolerant of life styles, habits, and ways of thinking that differ significantly from their own, and where human foibles tend to become the object of scorn and ridicule, our desire for privacy will continue unabated. No one wants to be laughed at and made to feel ashamed of himself. And we all have things about us which, if known, might very well trigger these kinds of unfeeling and wholly unwarranted responses.
“Third, we desire privacy out of a sincere conviction that there are certain facts about us which other people, particularly strangers and casual acquaintances, are not entitled to know. This conviction is constitutive of "the liberal ethic," a conviction centering on the basic thesis that individuals are not to be treated as mere property of the state but instead are to be respected as autonomous, independent beings with unique aims to fulfill. These aims, in turn, will perforce lead people down life's separate paths. Those of us educated under this liberal ideology feel that our lives are our own business (hence the importance of personal liberty) and that personal facts about our lives are for the most part ours alone to know. The suggestion that all personal facts should be made available for public inspection is contrary to this view. Thus, our desire for privacy is to a large extent a matter of principle.'8”
The first two arguments are clearly consequentialist. The third argument is less clearly consequentialist, but it can still be made by a consequentialist. Society is better off when strangers are barred from knowing deep, intimate information about people. The liberal ethic is plausibly optimific.
If the third point is non-consequentialist, then it is hard to make sense of it in combination with the definition of privacy. Why would the information that strangers are not entitled to know depend on whether its being known would cause harm, on whether most people wouldn’t want it known, or even on whether it’s currently on some obscure public record?
Objection 2
2 If we accept that rights are ethically significant, then there is some number of rights violations that could outweigh any amount of suffering. For example, suppose that there are 100 trillion aliens who will experience horrific torture, which gets slightly less unpleasant for every leg of a human that they grab, without the humans’ knowledge or consent, such that if they grab the legs of 100 million humans the aliens will experience no torture. If rights are significant, then the aliens grabbing the legs of the humans, in ways that harm no one, would be morally bad. The sheer number of rights violations would not only be bad, it would outweigh the torture and be the worst thing in the world. However, it doesn’t seem plausible that the aliens should have to experience being burned alive when no humans even find out about what’s happening, much less are harmed. If rights matter, a world with enough rights violations, where everyone is happy all the time, could be worse than a world where everyone is horrifically tortured all of the time but where there are no rights violations.
This one can be dispensed with much more easily. The conclusion just straightforwardly doesn’t follow from the premise. To see that, let’s strip the example down to a simpler and easier to follow version that (I think, see below) preserves the key point.
As Matthew himself reasonably pointed out to me in a different discussion, our intuitive grasp on situations gets hazier when we get up to truly absurdly large numbers, so let’s at least reduce both sides of the equation. 100 trillion is a million times more than 100 million. One human leg being non-consensually but harmlessly grabbed by an alien will mean a million aliens won’t experience the sensation of being burned alive. Matthew thinks the alien should grab away. I agree! In fact, it’s not clear that the human’s rights would be violated at all, considering that any remotely psychologically normal (or really even psychologically imaginable) human would retroactively consent to having their leg grabbed for unfathomably more trivial harm-prevention reasons. But even if we do assume that the one human is having his rights violated, that assumption just gets you to “any rights we might have against certain extremely trivial violations of personal space are non-absolute,” not “there are no non-reducible moral rights.”
In this case, Ben has totally missed the point of the case, hence his confused claim that this can “be dispensed with much more easily.” It is not that the 100 trillion aliens need to grab the legs of 100 million humans in total to avoid experiencing being burned alive; it’s that they need to grab the legs of 100 million humans each, and each time without the humans finding out about it. Thus, each marginal leg grab produces only a minuscule benefit, but the leg grabs collectively produce enough benefit to avert horrific torture.
Ben also ignores the second case I gave in this example. “If rights matter, a world with enough rights violations, where everyone is happy all the time could be worse than a world where everyone is horrifically tortured all of the time but where there are no rights violations.”
To see why not, think about a more familiar and less exciting example — pushing a large man off a footbridge to save five people from a trolley. Here, the harm is of the same type (death) and five times as much of it will happen if you don’t push him, but most people’s intuition about this case is that it would be wrong to push anyway. That strongly suggests that most of us think there are indeed moral rights that can’t be explained away as heuristics for utility calculations.
Contrast that to a trolley case structurally more like Matthew’s aliens-in-agony scenario, although at a vastly smaller scale. As always, five people are on a trolley track. As in a familiar variant, there’s a lever that can be pulled to divert the train onto a secondary track. But in this version (a) the second track is empty, so you aren’t killing anyone by pulling the lever, and (b) there happens to be someone standing in front of you with his hand idly resting on the lever. His eyes are closed, he’s listening to loud music on his AirPods, and he has no idea what’s going on. By the time you got his attention, the five people would be dead. So you grab his hand and yank it.
If we were just considering this last example, you could end up drawing utilitarian conclusions...but the example just before nicely demonstrates why that would be a mistake.
What I have said is an argument against the intuitive judgment about the organ harvesting case, and thus, in turn, an argument against the judgment that you shouldn’t push the fat man in the trolley case. Pointing to the judgment that you shouldn’t push the fat man, in response to an argument against that very conclusion, would be begging the question.
A final thought about Matthew’s point 2 before moving on to 3. Rereading some of his formulations quoted above (particularly the one about the number of rights violations involved in his original version of the example allegedly being something someone who believes in non-reducible rights would have to regard as the worst thing in the world), maybe my simplification of 100 trillion aliens not experiencing the sensation of burning alive vs. 100 million humans not having their legs grabbed down to a million aliens and one human missed something important in Matthew’s example. Maybe his idea goes something like this:
“Sure, grabbing one leg to save a million aliens from unspeakable torment might make sense given standard non-utilitarian assumptions about rights. But remember, in each individual instance of leg-grabbing in the original example, the effect of that individual act will be to reduce the aliens’ collective suffering by one one-hundred-millionth (the aliens would barely notice), so when we consider each one individually, the benefit would be too trivial to justify the rights violation.”
This is not what I meant. Once again, what I meant is that each alien needs to grab 100 million legs to avert their own torture.
If so, I’d say two things. First, just as Matthew is correct to point out that intuitions can be confused when we’re talking about very large numbers, it’s similarly hard to gauge very small fractions. In this case, I’m not sure there even is such a thing as a one one-hundred-millionth reduction of the sensation of being burned alive. I suspect that sensations don’t work like that. Perhaps in some way that’s totally inaccessible to human minds, it does work like that for aliens. At any rate, I don’t really know what “they’ll experience one one-hundred-millionth less suffering than the sensation of being burned alive” means, and frankly neither do you, so asking me to have a moral intuition one way or the other about whether it’s significant enough to justify grabbing someone’s leg without their knowledge or consent is deeply unlikely to shed much light on my overall network of moral intuitions.
This seems like a dramatic failure of imagination. We’re all perfectly comfortable with the notion of hurting less. In fact, things may hurt less in a way so slight that we don’t notice the difference. A reduction in torture of 1 part in 100 million would be an imperceptible reduction in suffering, such that the suffering would be brought to zero if the reduction were applied 100 million times.
However, even if one finds the aforementioned notion baffling, it can be replaced, for the purposes of the hypothetical, with a mere 1 in 100 million chance of eliminating the torture each time.
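To see that this substitution preserves the arithmetic of the case, here’s a minimal sketch (plain Python; normalizing one alien’s full torture to one unit of disvalue is just an assumption for illustration):

```python
# Numbers from the example: each alien needs 100 million leg grabs to avert its torture.
grabs_needed = 100_000_000

torture = 1.0  # normalize one alien's full torture to one unit of disvalue (an assumption)

# Reading 1: each grab shaves off an imperceptible 1/100,000,000th of the torture,
# so 100 million grabs bring that alien's suffering to zero.
benefit_per_grab_reduction = torture / grabs_needed

# Reading 2 (the substitution suggested above): each grab instead has a
# 1-in-100-million chance of eliminating that alien's torture outright.
expected_benefit_per_grab_chance = (1 / grabs_needed) * torture

assert benefit_per_grab_reduction == expected_benefit_per_grab_chance  # both 1e-08
print(benefit_per_grab_reduction)
```

Either way, each individual grab contributes the same tiny expected benefit, and the hundred million grabs together avert the torture.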
Second, even if, weighing rights against consequences, each individual leg-grabbing couldn’t be morally justified when considered in isolation, it just wouldn’t follow that all hundred million leg-grabbings were unjustified when considered in tandem. “This will be the impact of doing this a hundred million times, including this one” is morally relevant information.
Given that Ben has misunderstood my argument, this misses the point, and there is nothing here that I need to respond to.
Objection 3
3 A reductionist account is not especially counterintuitive and does not rob us of our understanding of or appreciation for rights. It can be analogized to the principle of innocent until proven guilty. That principle is not literally true: treating a person as innocent until their guilt is demonstrated is a useful legal heuristic, yet a serial killer is guilty even if their guilt has not been demonstrated.
It’s not counterintuitive until we start to think about the many examples (like the first of the two trolley cases above) where it has wildly counterintuitive consequences! The innocent-until-proven-guilty analogy, I think, starts to look less helpful the more we poke into it. One thing IUPG is absolutely not is a heuristic or anything like one. It’s not a useful rule of thumb; it’s an unbending legal rule that someone (who may or may not be actually innocent) who hasn’t been proven guilty has the legal status of an innocent person.
This is false; innocent until proven guilty, much like the concept of rights, is a useful heuristic in two senses. First, treating people as innocent until proven guilty generally produces good consequences. Second, in a purely descriptive sense, most people who have not been proven guilty are, in fact, innocent. Thus, it is a useful rule of thumb that is also enshrined as a legally inviolable right, and for good reason: making it a right that’s firmly locked in produces good outcomes.
While we’re talking about IUPG, by the way, it’s worth pausing to ask whether pure utilitarianism can make sense of why it should be the legal standard. Think about the classic justification for it — Blackstone’s Ratio (it’s better for ten guilty persons to go free than one innocent person to be imprisoned). That makes perfect sense if we think there’s something like a categorical moral prohibition on the state punishing the innocent that’s so important it can outweigh the benefits of saving the victims of those ten guilty people. But it’s at the very least not obvious that the utility calculus will work out that way.
Utilitarianism has a very good explanation of the morality of the criminal justice system. The horrific mistreatment in the criminal justice system, resulting in rape on a mass scale, for example, gives us very good reason to not risk incarcerating innocent people. Utilitarianism, unlike deontology, explains why it’s worth risking incarcerating some innocent people to incarcerate some guilty people.
Additionally, being in prison makes people more likely to reoffend. Prison causes vast amounts of suffering, which makes criminal justice reform urgently needed. To decide whether the innocent-until-proven-guilty standard should be eliminated, we’d need to compare the minimal deterrence factor to the vast harm from incarcerating innocent people. It is quite clear that at least some criminal justice reforms end up being beneficial, which is the reason EA organizations have been sponsoring criminal justice reform.
The claim that utilitarianism justifies being too tough on criminals is just a baffling one. It’s the retributivists, who think that people deserve to suffer for doing bad things whatever the consequences, who favor an aggressive criminal justice system. Utilitarians in the real world, unlike Ben’s assessment of what they should advocate, do in fact tend to be the most progressive segment of the population on criminal justice reform.
Everyone, utilitarian and non-utilitarian alike, agrees that a crucial role of the criminal justice system is to deter crime, keep dangerous people locked up, and make people better members of society. Non-utilitarian theories of criminal justice are more punitive because they add extra roles for the criminal justice system that utilitarianism has no need for.
Objection 4
4 We generally think that it matters more not to violate rights oneself than to prevent other rights violations, so one shouldn’t kill one innocent person to prevent two murders. If that’s the case, then if a malicious doctor poisons someone’s food and then realizes the error of their ways, the doctor should try to prevent the person from eating the food, and having their rights violated, even at the expense of other people being poisoned in ways not caused by the doctor. If the doctor has the ability to prevent one person from eating the food poisoned by them, or to prevent five other people from consuming food poisoned by others, they should, on this view, prevent the one person from eating the food they poisoned. This seems deeply implausible. Similarly, this view entails that it’s more important for a person to eliminate one landmine they set down themselves that will kill a child than to eliminate five landmines set down by other people, which is another unintuitive view.
No. None of this actually follows from belief in rights per se, or even from the view that it’s more important not to violate rights than to prevent rights violations (which itself is a substantive extra commitment on top of belief in rights). Here’s the trick: the attempt at drawing out a counterintuitive consequence relies on the rights-believer seeing “poisoning food and then not stopping it from being eaten” (or “setting a landmine and not eliminating it”) as a single action, but the intuition itself relies on thinking of them as two separate actions, so that the poisoning/landmine-setting is in the background of the decision, and now we’re thinking about a new decision about which poison/landmine to save whom from, and it seems arbitrary to save the victims of your own past self as opposed to someone else’s victims.
This response is totally confused. The relevant question is not whether it counts as one action or two actions; the relevant question is whether or not it’s a rights violation. Laying down a landmine is a rights violation iff you don’t destroy it before it harms anyone, and the same goes for poisoning someone’s food. Thus, if you prevent the five other people from eating the food others poisoned, rather than preventing your own victim from eating your food, you will have violated one person’s rights but prevented five other rights violations. However, if you prevent the one person from eating your food, then you’ll have violated no one’s rights but allowed five rights violations. If choosing between a world in which you violate one person’s rights and one in which five others each violate one person’s rights, the deontologist holds that the second is the one you should prefer.
But here’s the thing: Whichever you think is the right way to cut up what counts as the same action or a new one, you really do have to pick. If you consistently think of these as two separate actions, the rights-believer has no reason to believe the counterintuitive thing Matthew attributes to them. On this view, they’re not choosing between killing and letting die. They’ve committed attempted murder in the past but now they’re choosing who to let die and none of the options would constitute killing.
It doesn’t have to be attempted murder. Suppose I put a landmine down for some good reason, and I know with total certainty that I’ll be able to eliminate it in the future. This wouldn’t be attempted murder, because I predict I’ll eliminate the landmine.
On the other hand, if we somehow manage to truly feel this in our bones as one action (which I don’t know how to do, btw — it seems like two to me), I’m not so sure we’d have the intuition Matthew wants us to have. To see why not, think about a nearby question. Who would you judge more positively — someone who goes to a war zone, intentionally kills one child with a landmine (while simultaneously deciding to save four others from other people’s landmines) or someone who never travels to the war zone in the first place, spending the war engaged in normal peacetime activities, and thus neither commits nor foils a single war crime? “OK, but I saved more children than I killed” would not, I think, get you much moral approval from any ordinary human being.
Whether it’s divided into one or two actions seems morally irrelevant, as Huemer points out. In terms of character evaluations, I’d regard the second person as better. However, in terms of actions, the first would be better, for the reasons I described in the previous article. Given that I am arguing that there is no salient distinction between violating rights and permitting other violations of rights, merely appealing to the original judgment that this argument was meant to be a counterexample to does not advance the dialectic. The whole point of this argument is to disprove the notion that the second action should be judged to be better.
Objection 5
People with damaged VMPCs (the ventromedial prefrontal cortex, a brain region responsible for generating emotions) were more utilitarian (Koenigs et al 2007), proving that emotion is responsible for non-utilitarian judgements. The largest study on the topic (Patil et al 2021) finds that better and more careful reasoning results in more utilitarian judgements across a wide range of studies.
The studies, I’m sure, are accurately reported here, but the inference from them is as wrong as wrong could be. I won’t go into this in too much depth here because this was a major theme of my first book (Give Them An Argument) but basically:
All moral judgments without exception are rooted in moral feelings. Moral reasoning, like any other kind of reasoning, is always reasoning from some premises, which can be supplied by factual information, moral intuition (i.e. emotional feelings of approval or disapproval) or some combination of the two, but moral intuition is always in the mix any time you’re validly deriving moral conclusions. There’s just no other place for your most basic premises to come from, ever, and there couldn’t be. I don’t doubt that people whose initial emotional reactions (thinking about good and bad consequences) lead them to endorse moral principles, and who henceforth reason in very emotionless ways, end up sticking to utilitarianism more than people who open themselves to ordinary human moral intuitions about things like organ harvesting examples. For precisely similar reasons, I’d be pretty shocked if people with damaged VMPCs weren’t far more likely to be deontic libertarians than people with regular emotional reactions. (No clue if anyone’s done a study on that, but if you’re a researcher in relevant areas you can have the idea for free!)
Ben claims that the studies are accurately reported, and then he goes on to contradict the results of the studies! No, not all of our moral judgments are caused by emotions; some are caused by careful, prolonged reflection on what matters. We have significant evidence that when people carefully reflect on what matters, they become far more utilitarian. There’s a reason that people whose VMPCs are damaged, such that they are less emotional, become more utilitarian. What’s Ben’s account of that, if it’s all just emotions? For more on this, see this article.
Ben’s guess that people would be more likely to be deontic libertarians is just wrong: people become six times more likely to push the guy off the bridge after damage to their VMPC. The dual process theory, which says that being more careful and reflective makes people more utilitarian, has a mountain of evidence behind it, not just the VMPC finding, much of which I discuss in the article linked above.
Objection 6
6 Rights run into a problem based on the aims of a benevolent third party observer. Presumably a third party observer should hope that you do what is right. However, a third party observer, if given the choice between one person killing one other to prevent 5 indiscriminate murders and 5 indiscriminate murders, should obviously choose the world in which the one person does the murder to prevent 5.
There’s absolutely nothing obvious about that! Is a Benevolent Third-Party Observer benevolent because they want everyone to do the right thing, or benevolent because they want the best outcome? Unless you question-beggingly (in the context of an argument against rights) assume that the right thing is whatever leads to the best outcomes, those goals will be in tension, so if the BT-PO holds both we need to find out what principle they’re using to weigh the two goals or resolve conflicts before we can even begin to have the slightest idea what a BT-PO might say about a case where rights violations lead to good consequences.
Ben cut this off before I provided the argument for why they’d have this preference.
Matthew continues the point:
An indiscriminate murder is at least as bad as a murder done to try to prevent 5 murders. 5 indiscriminate murders are worse than one indiscriminate murder; therefore, by the transitive property, a world with one murder to prevent 5 should be judged to be better than 5 indiscriminate murders. If world A should be preferred by a benevolent impartial observer to world B, then it is right to bring about world A. All of the moral objections to the murder in world A would count against world A being better. If despite those objections world A is preferred, then it is better to bring about world A. Therefore, one should murder one person to prevent 5 murders. This seems to contradict the notion of rights.
To put what I said earlier slightly differently:
Unless you beg the question against the rights-believer by assuming these can’t come apart, you have to pick whether the BT-PO wants whatever makes the world better or whatever’s morally preferable (or perhaps goes back and forth between preferring these depending on some further consideration?). If the BT-PO’s consistent principle is to prefer whatever makes the world better, then bringing them up has zero possible argumentative weight against belief in a non-consequentialist notion of rights — that there can be such conflicts and that rights should at least sometimes win is what anyone who says there are non-consequentialist rights is saying. If the BT-PO’s consistent principle is to prefer that everyone does the right thing, on the other hand, then it’s not clear what the source of counter-intuitiveness for the rights-believer is supposed to be here. And that’s still true if the BT-PO’s principle is to apply some further consideration to navigate conflicts between rights and good consequences.
This misstates my argument. No premise in my argument assumes they’d always prefer the better thing. My claim is merely that they would prefer 1 killing committed to prevent 5 over 5 indiscriminate killings. This is because 1 indiscriminate killing is at least as bad as 1 killing done to prevent 5 killings, and 5 indiscriminate killings are worse than 1 indiscriminate killing; thus, by transitivity, 5 indiscriminate killings would be judged by them to be worse than 1 killing done to prevent 5.
There’s a primitive notion of a benevolent moral third party; imagine God as the paradigm case. The third-party observer should hope you do the right thing, and they should hope you kill one to prevent 5 killings; thus, killing one to prevent 5 killings is the right thing.
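To make the transitivity step fully explicit, here is one way to regiment the chain. This is just a sketch with labels I’m introducing here: K_p is a world with one killing committed to prevent five, K_1 a world with one indiscriminate killing, and K_5 a world with five indiscriminate killings.

```latex
% Sketch of the transitivity step; bad(x) orders worlds by how bad the
% benevolent observer should judge them, with the premises as stated above.
\begin{align*}
\text{(P1)}\quad & \mathrm{bad}(K_p) \le \mathrm{bad}(K_1) && \text{(a killing to prevent five is at most as bad as an indiscriminate killing)}\\
\text{(P2)}\quad & \mathrm{bad}(K_1) < \mathrm{bad}(K_5) && \text{(five indiscriminate killings are worse than one)}\\
\text{(C)}\quad  & \mathrm{bad}(K_p) < \mathrm{bad}(K_5) && \text{(by transitivity, so the observer should prefer } K_p\text{)}
\end{align*}
```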
Objection 7
7 We can imagine a case with a very large series of rings of moral people. The innermost circle has one person, the second innermost has five, third innermost has twenty-five, etc. Each person corresponds to 5 people in the outer circle. There are a total of 100 circles. Each person is given two options.
1 Kill one person
2 Give the five people in the circle outside of you corresponding to you the same options you were just given.
The people in the hundredth circle will only be given the first option if the buck doesn’t stop before reaching them.
The deontologist would have two options. First, they could stipulate that a moral person would choose option 2. However, if this is the case then a cluster of perfectly moral people would bring about 5⁹⁹ murders, when the alternative actions could have resulted in only one murder, because they’d keep passing the buck until the 100th circle. This seems like an extreme implication.
Secondly, they could stipulate that you should kill one person. However, the deontologist holds that you should not kill one person to prevent five people from being murdered. If this is true, then you certainly shouldn’t kill one person to give five perfectly moral people two options, one of which is killing one person. Giving perfectly moral beings more options that they don’t have to choose cannot make the situation morally worse. If you shouldn’t kill one person to prevent five murders you certainly shouldn’t kill one person to prevent five things that are judged to be at most as bad as murders by a perfectly moral being, who always chooses correctly.
While my instinct is to agree that whatever you can say about killing vs. letting die you can say about killing vs. not preventing killing, the first thing to note about this is that the “5⁹⁹ murders” once you get to the outermost circle aren’t actually murders at all, since by stipulation they’re involuntary. So (at least if we’re considering the morality of everyone in the first 99 circles refusing to murder) this reduces to a classic but extreme killing vs. letting die dilemma — it’s no different from stipulating that, say, the entire human race other than you and the large man on the bridge has been shrunken down to microscopic size by a supervillain who then put a container containing all however-many-billion people on the trolley track. Anti-utilitarian intuitions generally crumble in the face of sufficiently awe-inspiring numbers and that’s what Matthew is relying on here. There’s an interesting question here about whether to take that as an instance of the general problem of humans having a hard time fitting their heads around scenarios involving sufficiently large numbers or whether to take this as a straightforward intuition in favor of a sort of “moral state of exception” whereby an imperative to prevent genocide-level amounts of death overrides the moral principles that would apply in other cases. (Which of these two is correct? Here’s the good news: You don’t really need to decide because nothing remotely like this will ever come up and both answers are compatible with anti-utilitarian intuitions about smaller-scale cases.)
Ben has totally missed the case. As I say in my article, which I linked in my original blog post and which Ben quotes me linking:
If you’re currently thinking that “moderate deontology says you shouldn’t kill one to save five but should kill one to save 1.5777218 x 10^69,” read the argument more carefully. The argument shows that moderate deontology is internally inconsistent. If you think the argument is just question begging or that the deontologist should obviously accept option 1, as some deontologists have who have heard the argument before I explained it to them more carefully, read the argument again.
I explicitly preempt Ben’s confusion, and yet he falls into it anyway!
In case it wasn’t clear, I’ll include the clearer section of my book where I describe the case.
We can imagine a case with a very large series of rings of perfectly moral decision makers, who always make the correct decision. The innermost circle has one person, the second innermost has five, third innermost has twenty-five, etc. Each person corresponds to 5 people in the outer circle. There are a total of 100 circles. Each person is given two options.
1 Murder: Kill one person
2 Buck Pass: Give the five people in the circle outside of you corresponding to you the same options you were just given, if there are any. If not, this option is not available, and you must take option one. Thus, the people in the hundredth circle will only be given the first option if the buck doesn't stop before reaching them.
The deontologist would have two options. First, they could stipulate that a moral person would pass the buck. However, if this is the case, then a cluster of perfectly moral people would bring about 5^99 murders, when the alternative actions could have resulted in only one murder, because they’d keep passing the buck until the 100th circle. This seems like an extreme implication.
Secondly, they could stipulate that you should kill one person. However, the deontologist holds that you should not kill one person to prevent five people from being murdered. If this is true, then you certainly shouldn’t kill one person to give five perfectly moral people two options, one of which is killing one person. Giving perfectly moral beings more options that they don’t have to choose cannot make the situation morally worse. If you shouldn’t kill one person to prevent five murders you certainly shouldn’t kill one person to prevent five things that are judged to be at most as bad as murders by a perfectly moral being.
This follows from the following principle.
Optionality: Giving perfectly moral people extra options can’t make things worse.
If optionality is true, then killing one to prevent five killings will be at least as bad as killing one to prevent five perfectly moral people from having two options, one of which is killing one person.
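For concreteness, here’s a small sketch of the arithmetic behind the buck-passing horn (plain Python; the only inputs are the numbers given in the case):

```python
# Circle k (counting from 1) holds 5**(k - 1) perfectly moral decision makers.
# If everyone passes the buck, everyone in the 100th circle is forced to kill.

CIRCLES = 100
people_in_circle = [5 ** (k - 1) for k in range(1, CIRCLES + 1)]

killings_if_everyone_passes = people_in_circle[-1]  # the 100th circle: 5**99 forced killings
killings_if_first_person_kills = 1

print(f"{killings_if_everyone_passes:.7e}")  # roughly 1.5777218e+69, the figure quoted above
print(killings_if_first_person_kills)
```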
Next, Ben says
But as with the leg-grabbing aliens above, the apparent difference between this and a simple dilemma between killing one innocent and letting 5⁹⁹ innocents die is that, considered in isolation, standard anti-utilitarian moral intuitions would seem to recommend individual decisions that, in aggregate, would amount to permitting the deaths of 5⁹⁹ people.
It’s not merely that they would in aggregate create bad outcomes; it’s that the person in the first circle knows with total certainty that passing the buck would result in 5^99 people being killed. Thus, the moderate deontologist would seem to be unable to take the horn of the dilemma which says that you should take the buck-passing option. However, they also can’t say you should kill, because that violates optionality, a nearly self-evident principle.
But (as Larry Temkin emphasizes in his response to “money pump” arguments for transitivity) it’s irrational not to reason about a series of decisions with aggregate effects…in aggregate.
I’ve argued for transitivity here.
A reasonable principle is that the first person in the first ring should do whatever all of the saints in all the rings would agree on if they had a chance to talk it all through together to decide on a collective course of action. If we assume that the “moral state of exception” view is correct, they would presumably all want the person in the first ring to kill the five people in the second one.
But this violates optionality, as I explain, and Ben ignores.
(Just for fun, by the way, since that “everyone” would include the five victims, in this scenario it would be more like assisted suicide than murder.)
No—it involves murdering other people.
If it’s not correct, then I suppose they would all abstain and it would be the fault of whatever demon set this all up rather than any of his victims.
But then this conflicts with threshold deontology, because it holds that you shouldn’t kill one person even to prevent an evil demon from bringing about 5^99 killings.
As I mentioned in my first conversation with Matthew, I’m also very open to the possibility that this could just be a moral tragedy with no right answer — as an extremely convoluted scenario designed precisely to make moral principles that seem obviously correct in simpler cases difficult to apply, if anything’s an unanswerable moral tragedy, this strikes me as a good candidate on its face!
The fact that it’s a tragedy in no way implies that there’s no fact of the matter about what you should do. However, if there’s no fact of the matter in the case of the 100 rings, and there’s also no fact of the matter with 1000 rings, then, by transitivity, one would be indifferent between 5^99 deaths and 5^999 deaths. To illustrate:
Passing the buck when there are 100 circles = killing one
Killing one = Passing the buck when there are 1000 circles
Thus, by transitivity, passing the buck when there are 100 circles = passing the buck when there are 1000 circles. In one case, there are 5^99 deaths; in the other, there are 5^999 deaths.
Also, the notion that there’s no fact of the matter about whether one should kill one to save 5^99 people is ridiculous. You should obviously kill one to save the world.
But no matter which of these three answers you go with (do kill the five in the name of a moral state of exception, refuse to play the demon’s game, or just roll with “both answers are indefensibly wrong, it’s an unanswerable moral tragedy”) I have a hard time seeing how any of those three roads are supposed to lead me to abandoning normal rights claims. At absolute worst, normally applicable rights are overridden in this scenario. Even if that’s the case, that gives me no particular reason to think they’re overridden in, say, ordinary trolley or organ harvesting cases.
This may be because Ben didn’t adequately appreciate the argument.
Oh, and it’s worth saying a brief word about this:
“Giving perfectly moral beings more options that they don’t have to choose cannot make the situation morally worse.”
At this point I’ve had multiple conversations with Matthew where this has come up and I still have no idea why he thinks this is true, never mind obviously true. It vaguely sounds like the sort of thing that could turn out to be true, but the same can be said of plenty of (mutually inconsistent) moral principles. When you’re thinking about a principle this abstract, it’s easy to nod along, but the right methodology is to test it out by applying it to cases, like this one!
Here’s the argument (regimented more formally below):
1 Perfectly moral beings would only choose extra options if they are good options
-this is true by definition
2 If perfectly moral beings don’t choose extra options, those options don’t make things worse
Therefore, if extra options are not good, they don’t make things worse.
3 If extra options are good, then choosing them makes things better.
-this is just what a good choice is. Note, this doesn’t assume consequentialism, because I’m using “making things worse” in a broad sense that would allow the deontologist to claim that killing one to save five makes things worse.
4 Making things better doesn’t make things worse
Therefore, giving perfectly moral beings extra options doesn’t make things worse.
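One way to check that the argument is valid is to regiment it. This is just a sketch, with shorthand I’m introducing here rather than anything in the original: for an extra option o offered to a perfectly moral agent, C(o) means the agent chooses o, G(o) means o is a good option, B(o) means choosing o makes things better, and W(o) means offering o makes things worse.

```latex
% Premises (1)-(4) as stated above, followed by a case split on whether o is chosen.
\begin{align*}
\text{(1)}\quad & C(o) \rightarrow G(o) && \text{perfectly moral beings only choose good options}\\
\text{(2)}\quad & \neg C(o) \rightarrow \neg W(o) && \text{unchosen extra options don't make things worse}\\
\text{(3)}\quad & (G(o) \wedge C(o)) \rightarrow B(o) && \text{choosing a good option makes things better}\\
\text{(4)}\quad & B(o) \rightarrow \neg W(o) && \text{making things better doesn't make things worse}\\[4pt]
\text{Case 1:}\quad & C(o) \Rightarrow G(o) \text{ by (1)} \Rightarrow B(o) \text{ by (3)} \Rightarrow \neg W(o) \text{ by (4)}\\
\text{Case 2:}\quad & \neg C(o) \Rightarrow \neg W(o) \text{ by (2)}\\
\text{Conclusion:}\quad & \neg W(o) \text{ either way: the extra option cannot make things worse}
\end{align*}
```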
Objection 8
8 Let’s start by assuming one holds the following view:
Deontological Bridge Principle: This view states that you shouldn’t push one person off a bridge to stop a trolley from killing five people.
This is obviously not morally different from
Deontological Switch Principle: You shouldn’t push a person off a bridge to cause them to fall onto a button which would lift the five people to safety; their body would not be able to stop the trolley.
In both cases you’re pushing a person off a bridge to save five. Whether their body stops the train or pushes a button to save other people is not morally relevant.
Suppose additionally that one is in the Switch scenario. While they’re deciding what to do, a genie appears and gives them the following choice: he’ll push the person off the bridge onto the button, but then freeze the passage of time in the external world so that the decision maker has ten minutes to think it over. At the end of the ten minutes, they can either lift the one person who was originally on the bridge back up, or they can let the five people be lifted to safety.
It seems reasonable to accept the genie’s offer. If, at the end of ten minutes, they decide that they shouldn’t have pushed the person, then they can just lift the person back up, such that nothing actually changes in the external world. However, if they decide not to lift them back up, then they’ve just killed one to save five, which is functionally identical to pushing the person in Switch. Thus, accepting the genie’s offer is functionally identical to just giving them more time to deliberate.
It’s thus reasonable to suppose that they ought to accept the genie’s offer. However, at the end of the ten minutes they have two options: they can either lift up the one person they pushed before, preventing that person from being run over, or they can do nothing and save the five people. Obviously they should do nothing and save the five. But this is identical to the Switch case, which is morally the same as Bridge.
It looks to me like, once again, Matthew is trying to have it both ways here. Either the genie’s offer just delays the decision (which is what we need to assume for that breezy “it’s reasonable to accept the genie’s offer” to make sense) or it is a morally significant decision in itself. This in turn reduces to the same issue noted above in the poison and landmine cases — if you do something and then deliberate about whether to reverse the effect, does “doing it and then deciding not to reverse it” count as one big action or does it separate into two actions? The “reasonable to accept the genie’s offer” claim makes sense if (and only if) you accept “the one big action” analysis, but the “obviously they should do nothing” claim only makes sense given the “two distinct actions” view. If it’s two distinct actions, accepting the genie’s offer was wrong (in the way setting a landmine even if you might decide to change your mind and decide to save the child from later would be wrong). If it’s one big action, then Matthew’s “obviously” claim doesn’t get off the ground.
Whether it’s two distinct actions or one isn’t relevant, and Ben doesn’t clearly reject any part of the argument. The genie’s offer changes the status quo as a temporary placeholder, but the change can be reversed! However, after the ten minutes, the intuition is very strong that the new status quo should be maintained.
We can consider a parallel case with the trolley problem. Suppose one is in the trolley problem and a genie offers them the option for them to flip the switch and then have ten minutes to deliberate on whether or not to flip it back. It seems obvious they should take the genie’s offer.
Again: Only obvious given the “one big action” view.
Well, at the end of the ten minutes they’re in a situation where they can flip the switch back, in which case the train will kill five people instead of one, given that it’s already primed to hit the one person. It seems obvious in this case that they shouldn’t flip the switch back.
Thus, deontology has to hold that taking an action and then reversing that action, such that nothing in the external world is different from how it would have been had they never taken and then reversed the action, is seriously morally wrong.
Again: This claim about what it’s “obvious” that the rights-believer has to endorse is only obvious given the “two distinct actions” view.
This is an astounding claim, particularly the second one, though I have no idea how the one-big-action view makes the first intuitive. On the second one: after ten minutes, the switch has been flipped, so the train is headed toward the one person. The person can flip the switch back, in which case it will kill five. Obviously they shouldn’t flip it back. Really visualize the scenario and the answer becomes obvious.
If flipping the switch is wrong, then it seems that flipping the switch to delay the decision ten minutes, but then not reversing the decision, is wrong. However, flipping the switch to delay the decision ten minutes and then not reversing the decision is not wrong. Therefore, flipping the switch is not wrong.
Maybe you hold that there’s some normative significance to flipping the switch and then flipping it back, making it so that you should refuse the genie’s offer. This runs into issues of its own. If it’s seriously morally wrong to flip the switch and then to flip it back, then flipping it back and forth an arbitrarily large number of times would be arbitrarily wrong. Thus, an indecisive person who froze time and then flipped the switch back and forth a googolplex times would have committed the single worst act in history by quite a wide margin. This seems deeply implausible.
This part relies on an assumption about how wrongness aggregates between actions that, at least in my experience, most non-utilitarian moral philosophers will emphatically reject. In fact, my impression at least is that the intuition that wrongness doesn’t aggregate in this way plays a key role in why so many of the people who’ve thought most about utilitarianism reject it.
Now, it could be that the non-utilitarian moral philosophers are wrong to reject aggregation. But even if so, once utilitarianism has been rejected and rights have been affirmed, it’s just a further question whether the wrongness of (initially attempted then reversed) rights violations can accumulate in this way.
Wrongness must aggregate—if it’s wrong to do something once, then doing it more times is even worse. I argue that this conclusion is undeniable here.
Either way, deontology seems committed to the bizarre principle that taking an action and then undoing it can be very bad. This is quite unintuitive. If you undo an action, such that the action had no effect on anything because it was cancelled out, that can’t be very morally wrong. Much like writing can’t be bad if one hits the undo button and replaces it with good writing, it seems like actions that are annulled can’t be morally bad.
It’s worth just briefly registering that this is a pretty eccentric judgment. To adapt an old Judith Jarvis Thomson example, if I put poison in my wife’s coffee then felt an attack of remorse and dumped it out and replaced it with unpoisoned coffee before she drank it, my guess is that very few humans would disagree that my initial action was very wrong. Deep and abiding guilt would be morally appropriate despite my change of heart.
In this case, perhaps the initial action would be wrong, in that it would be reckless, dangerous, and indicative of vicious character. This is often what we mean by wrong, as I argue here. However, if we ask whether it was wrong in the deeper sense of its being better had it never been done, the answer is obviously no. The action, while reckless, did not end up harming anyone. The action may have been wrong, but it certainly wasn’t bad, if we stipulate that it didn’t cause the person any guilt or damage their character at all.
To put a little bow on this part, it’s worth pointing out that we could adapt this case very slightly and make it a straightforward moral luck example. Call the version of the case where I dump out the coffee and stealthily replace it with unpoisoned coffee “Coffee (Reversed)”. Now consider a second version, “Coffee (Unreversed)”, where I don’t have the attack of remorse in time because I’m distracted by the UPS delivery guy ringing the doorbell, and my wife is thus killed.
Intuitively, these cases are messy, but part of what makes the moral luck problem such a problem is that at least one of the significant intuitions at play in moral luck cases is that the difference between Coffee (Reversed) and Coffee (Unreversed) isn’t the kind of difference that should make any difference in your moral evaluation of me.
Obviously Coffee (Unreversed) is bad. In Coffee (Reversed), while you clearly acted wrongly, your action wasn’t bad overall: it didn’t harm anyone. Remember, my principle concerns badness rather than wrongness.
It also runs afoul of another super intuitive principle, according to which if an act is bad, it’s good to undo that act. On deontological accounts, it can be bad to flip the switch, but also bad to unflip the switch. This is extremely counterintuitive.
If we read the “super intuitive principle” as “if an act is bad, all else being equal it’s good to undo such an act,” then I can understand why Matthew finds it so intuitive. If we read it as “if an act is bad, then it’s always good to undo it” or, to put a finer point on it, “if an act is bad, then the morally best decision on balance is always to undo it,” I’m a whole lot less sure. In fact, given that last reading, Matthew himself doesn’t agree with it, given that he thinks that in the landmine and poison cases the morally best decision is to save the more numerous victims of other malefactors rather than to undo your own bad act.
Let me make the statement more precise, and it will be super obvious. If an act is bad, then all else equal it’s always good to undo such an act.
Objection 9
9 (Huemer, 2009) gives another paradox for deontology, which starts by laying out two principles (p. 2):
“Whether some behavior is morally permissible cannot depend upon whether that behavior constitutes a single action or more than one action.”
This is intuitive — how we classify the division between actions shouldn’t affect their moral significance.
We’ve already seen several times above that this principle is wrong, and we could just leave Huemer there, but there’s another point of interest coming up, so let’s take a look at the remainder of 9:
Ben’s previous claims have had no basis, so this isn’t a response to the argument. Reflect carefully on the principle: can the morality of an act really depend on whether we think of it as one action or two?
Second (p. 3): “If it is wrong to do A, and it is wrong to do B given that one does A, then it is wrong to do both A and B.”
Now Huemer considers a case in which two people, prisoners A and B, are being tortured. Mary can reduce the torture of A by increasing the torture of B, but only by half as much. She can do the same thing for B. If she does both, this clearly would be good: everyone would be better off. However, on the deontologist’s account, both acts are wrong, since torturing one person to prevent greater torture for another is morally wrong.
If it’s wrong to cause one unit of harm to prevent 2 units of harm to another, then an action which does this for two people, making everyone better off, would be morally wrong. However, this clearly wouldn’t be morally wrong.
The rights-believer — I keep translating Matthew’s references to “the deontologist” this way because these are all supposed to be general arguments against rights, and because you don’t have to be a pure deontologist to believe that considerations about rights are morally important — is only committed to all of this given the assumption we’ve already considered and rejected in the discussions of the leg-grabbing aliens and the circles of saints above, which is that there can’t be cases where individual actions can’t be justified by some set of moral principles when considered separately but can be when considered together. “The overall effect will be to reduce everyone’s harm” is morally relevant information and Temkin’s point about aggregate reasoning is a good one.
Good translation! “The overall effect will be to reduce everyone’s harm” seems to rely on the Pareto principle, which Ben will go on to reject. However, this is just explaining why it’s a tough pill to swallow: it requires rejecting the claim that making everyone better off is good. The deontologist has to reject at least one of the following:
A) That you shouldn’t violate rights to produce greater benefit
B) Benefiting all the people in the torture scenario is good.
C) “Whether some behavior is morally permissible cannot depend upon whether that behavior constitutes a single action or more than one action.”
D) If it’s wrong to cause one unit of harm to prevent 2 units of harm to another, then an action which does this for two people, making everyone better off, would be morally wrong (but this clearly wouldn’t be morally wrong).
Rejecting A would amount to rejecting rights, so the deontologist has to find another of the super plausible principles to reject.
Objection 10
10 Suppose one is deciding whether to press a button. Pressing the button would have a 50% chance of saving someone, a 50% chance of killing someone, and would certainly give the presser five dollars. Most moral systems, including deontology in particular, would hold that one should not press the button.
However, (Mogensen and MacAskill, 2021) argue that this situation is analogous to nearly everything that happens in one’s daily life. Every time a person gets in a car they affect the distribution of future people, by changing very slightly the time at which lots of other people have sex.
Given that any act which changes the traffic by even a few milliseconds will affect which of the sperm out of any ejaculation will fertilize an egg, each time you drive a car you causally change the future people that will exist. Your actions are thus causally responsible for every action that will be taken by the new people you cause to exist, and no doubt some will violate rights in significant ways and others will have their rights violated in ways caused by you. Mogensen and MacAskill argue that consequentialism is the only way to account for why it’s not wrong to take most mundane, banal actions, which change the distribution of future people, thus causing (and preventing) vast numbers of rights violations over the course of your life.
This is a fun one, but this seems like precisely the opposite of the right conclusion. This case, if we think about it a little harder, actually cuts pretty hard against utilitarianism (and consequentialism in general).
To see why, start by noticing that from a rights-based perspective — especially straight-up deontology! — pressing a button that will itself either save or kill someone (and give you $5 either way) is absolutely nothing like engaging in an ordinary action that might indirectly and unintentionally lead (along with many other factors) to someone coming into existence who will either kill someone or save somebody from being killed.
Ben says “might.” Well, it’s overwhelmingly likely. Being indirect isn’t morally salient—an arms dealer who sells arms, knowing they’ll be used to kill kids, would still be violating rights. Additionally, we can stipulate that the five dollar button works in mysterious, indirect ways. That wouldn’t seem to affect our moral judgment of the situation. Ben says that it’s unintentional. Well, it may not be intended, but it’s foreseen as a side effect now that this act has been pointed out. Pressing the button would be impermissible even if the person mostly intended to get the money and didn’t care about the side effects of their actions.
The whole point of deontology is to put the moral focus on the character of actions rather than their consequences. If deontologists are right, there’s a moral galaxy of difference between acting to violate someone’s rights and acting in a way that leads to someone else violating someone’s rights (or, if we’re going to be precisely accurate about the case here, since what we’re talking about is bringing someone into existence who will then decide to violate someone else’s rights, “leads to someone else violating someone’s rights” should really be “is a necessary but not a sufficient condition for someone else violating someone’s rights”).
We can just slightly modify the button case, and the moral situation is no different. If pressing the button had a 50% chance of causing a murder, a 50% chance of preventing a murder, and would certainly give you five dollars, it seems structurally analogous. If you know that your action will cause a murder, such that had you not taken the action there wouldn’t be a murder—or some other non-murder, yet still very bad crime—that’s clearly a rights violation on deontology. For more on this, you can read the MacAskill and Mogensen paper that was linked.
If, on the other hand, consequences are all that matter, it’s much harder to see a morally significant difference between unintentionally setting in motion a chain of events that ends with someone else making a decision to do X and just doing X! The button, in other words, is a good analogy for getting into your car if utilitarianism is right, but not if deontology is right.
This was refuted above.
Also note that if there’s a 50% chance that any given button-pushing will save someone and a 50% chance that it will kill someone, over the “sufficiently long run” appealed to by statisticians, it’ll all balance out and thus be utilitarian-ishly neutral — but there’s absolutely no guarantee that killings and savings in any given lifetime of metaphorical button-pushing will balance out! You might well happen to “kill” far more people than you “save.” If we assume that the character of your action is beside the point because consequences are all that matter, your spending a lifetime taking car trips and contributing to traffic patterns, etc., might well add up to some serious heavy-duty moral wrongness.
True, but it might also turn out to be very good. Utilitarianism correctly identifies the morally salient features of your action based on your state of mind—and the actual long-run effects wash out in expectation. Ben’s objection seems to be that utilitarianism problematically holds that driving your car might be really bad. Well, here’s the thing: it might be. It won’t be really wrong, because wrongness relates to the available information. But if your driving caused baby Hitler to be born, then your driving really was bad, such that it would have been better if you’d never driven. This is the judgment of utilitarianism, and it’s not unintuitive.
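To make the statistical point here concrete—this is just a back-of-the-envelope sketch with invented numbers, not anything drawn from Ben’s post or from Mogensen and MacAskill—here’s a tiny simulation of the random-walk behavior: the expected net effect of a lifetime of 50/50 “button presses” is zero, but the typical deviation grows like the square root of the number of presses, so any particular lifetime can easily end up well on one side or the other.

```python
import random

# Hypothetical toy model: each "press" (car trip) independently causes one death
# (+1) or prevents one (-1), each with probability 1/2. The expected net effect is
# zero, but a single lifetime drifts like a random walk (~sqrt(N) typical deviation).

def lifetime_net_effect(presses: int) -> int:
    """Net deaths caused minus deaths prevented over one simulated lifetime."""
    return sum(random.choice([1, -1]) for _ in range(presses))

presses = 10_000       # invented number of trips per lifetime
lifetimes = 1_000      # number of simulated lifetimes
results = [lifetime_net_effect(presses) for _ in range(lifetimes)]

mean = sum(results) / lifetimes
spread = (sum((r - mean) ** 2 for r in results) / lifetimes) ** 0.5

print(f"average net effect across lifetimes: {mean:+.1f} (expectation: 0)")
print(f"typical deviation within a lifetime: {spread:.0f} (about sqrt({presses}) = {presses**0.5:.0f})")
```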
Objection 11
11 The Pareto principle, which says that if something is good for some and bad for no one then it is good, is widely accepted.
It’s widely accepted by economists for deciding what counts as better utility. It’s not widely accepted among non-utilitarian moral philosophers as a standard for what constitutes a morally good action, for obvious reasons — it assumes (once read as a principle about how to morally evaluate actions) that only the consequences of actions are morally relevant!
This is disastrously and almost scandalously false! One can both hold that making everyone better off is a good thing and that people have rights that shouldn’t be violated. It’s very intuitive that doing something which only makes people better off is good. I have no polling data on the Pareto principle, but in my experience, most people agree with it. Even if one ultimately rejects it, that’s a cost of a theory, given its prima facie plausibility.
It’s hard to deny that something which makes people better off and harms literally no one is morally good. Yet from the Pareto principle, we can derive that organ harvesting is morally on a par with flipping the switch in the trolley problem.
This is a pet peeve and a little bit off-topic, but it drives me crazy. The trolley “problem” as originally formulated by Thomson (who coined the phrase “the trolley problem”) was precisely that, if we’re just looking at outcomes, pushing the large man (or, and this was actually Thomson’s preferred example for dramatizing the problem, harvesting a healthy patient’s organs to save five people who need transplants) is indistinguishable from pulling the lever…and yet the vast majority of people who share the intuition that lever-pulling is morally legitimate don’t have a parallel intuition about those other cases. The “problem” was supposed to be how to reconcile those two seemingly incompatible intuitive reactions. Anyway, let’s keep going.
Sorry!
Suppose one is in a scenario that’s a mix of the trolley problem and the organ harvesting case. There’s a train that will hit five people. You can flip the switch to redirect the train to kill one person. However, you can also kill the person and harvest their organs, which would cause the 5 people to be able to move out of the way. Those two actions seem equal, if we accept the Pareto principle. Both of them result in all six of the people being equally well off. If the organ harvesting action created any extra utility for anyone, it would be a Pareto improvement over the trolley situation.
This nicely demonstrates exactly why “the situation created if you do X is a Pareto improvement over the situation created if you do Y” doesn’t entail “doing X is no worse morally than doing Y” without hardcore consequentialist assumptions about how morality works. While it should be noted that many non-utilitarian philosophers bite the bullet on the first version of the trolley case and conclude (on the basis of their far stronger intuitive reaction to Thomson’s other cases) that pulling the lever in the first case is wrong, there are ways of consistently avoiding giving up either of the initial intuitions. (Whether any of these ways are fully convincing is, of course, super-duper controversial.) For example, one of the solutions to the Trolley Problem that Thomson herself briefly floats in one of her several papers about it is a Kantian one — that sending a train to a track where it unfortunately will kill the person there doesn’t involve reducing them to the status of a mere means to your end in the way that actually using their body weight to block the trolley (or flip the switch in Switch) or harvesting their organs does. To see the distinction drawn in this solution (which is roughly Doctrine of Double Effect-ish), notice that if you turned out to be wrong in your assumption that the workman on the second track wouldn’t be able to get out of the way and he did in fact manage to scamper off the track before the trolley would have squashed him, that wouldn’t mess up your plan for saving the five — whereas if the large man survived the fall and rolled off the track, that would mess up your plan, because your plan involved using him as a mere means rather than setting something in motion which would have the foreseen but not intended side effect of killing him.
Look, obviously pointing out that my conclusions are consequentialist is not any sort of response to my argument. My point is that, if we try to reason by looking at which principles are the most plausible, then to hold that the switch version of the trolley problem is different from the organ harvesting case, one must deny the super intuitive Pareto principle.
Now, maybe you find that convincing and maybe you don’t. But it doesn’t seem obviously wrong to me — and if it’s at all plausible, the fact that murdering the workman on the second track and harvesting his organs would be a pareto improvement from diverting the train to the second track (thus causing his death) wouldn’t be sufficient to settle the question of whether the organ harvesting was wrong in a way that diverting the train wasn’t.
Here’s the conclusion of 11:
Premise 1: One should flip the switch in the trolley problem.
Premise 2: Organ harvesting, in the scenario described above, plus giving a random child a candy bar is a Pareto improvement over flipping the switch in the trolley problem.
Premise 3: If action X is a Pareto improvement over an action that should be taken, then action X should be taken.
Therefore, organ harvesting plus giving a random child a candy bar is an action that should be taken.
This is a very noisy version of what could be in one sentence:
“If consequences are all that matter, saving the five through organ harvesting is no worse than saving them through pulling the lever, and doing the latter plus doing things that cause other good consequences is better.”
No—not if consequences are all that matter, but if the Pareto principle, something widely accepted and deeply plausible upon prolonged reflection, is true!
But here’s the thing — that has no argumentative force whatsoever against deontologists and other non-utilitarians, since critics of utilitarianism are generally split between (a) people who think even pulling the lever is wrong, and (b) people who think pulling the lever might be defensible but Thomson’s other examples that are equivalent to pulling the lever in terms of consequences are still definitely wrong. It’s hard to see how a partisan of either position would or should be moved by this argument (which remember was in a list of arguments against any sort of belief in rights understood as real rights and not heuristics for utility calculations).
People who think that the Pareto principle is plausible and that one should flip the switch should be swayed. That is plausibly most people, as most think one should flip the switch.
Now we turn to the objections to the organ harvesting case, even assuming we accept rights.
Finally, Matthew’s opening statement ends with a few more specific responses to the organ harvesting counterexample to utilitarianism.
Objection 1
First, there’s a way to explain our organ harvesting judgments away sociologically. Rightly, as a society, we have a strong aversion to killing. However, our aversion to death in general is far weaker. If it were as strong, we would be rendered impotent, because people die constantly of natural causes.
This is the sort of point that might bother a hardcore moral realist who believed that (some of) our moral intuitions are somehow caused by an externally existing moral reality, and some are caused by other things and should thus be disregarded. But I just find that view of meta-ethics deeply implausible — I won’t run through all this here, but I’ll just say that above and beyond the usual ontological simplicity concerns about the idea of a separate moral realm external to our moral intuitions, I have epistemic and semantic concerns about this picture. How exactly are our intuitions making contact with this realm? What plausible semantic story could we tell about how our moral terms came to refer to elements of this underlying moral reality?
This shouldn’t just trouble moral realists. If the reason we’re opposed to organ harvesting is because of our brain overgeneralizing based on other patterns, such that if we really, carefully reflected, we’d revise the judgment, then that’s everyone’s problem, including the anti-realist.
On the point about moral realism, anti-realism is wildly implausible, as I argue here. For more on this, see “On What Matters” and “Ethical Intuitionism.” The semantic account and the epistemological account would both be similar to how we came to know about and talk about other abstract realms, such as math and sets. We can reason about such things, and they can explain particular moral judgments that we make, which feature in our moral language.
The sort of view I’m attracted to instead says, basically, that the project of moral reasoning is precisely to hammer our moral intuitions (or as many of them as possible) into a coherent picture so we can act on them. Where our moral intuitions come from is an interesting question but not really a morally relevant one. What we’re trying to figure out is which goals we care about, not the empirical backstory of how we came to care about them.
This view is, as expressed, crazy. Suppose that the only reason you had the view that taxes should increase is because you were hypnotized by a criminal. That would give you good reason to revise such a view. If our moral judgments don’t reflect what we’d value, upon reflection, that gives us a really good reason to revise them.
Objection 2
Second, we have good reason to say no to the question of whether doctors should kill one to save five. A society in which doctors violate the Hippocratic oath and kill one person to save five regularly would be a far worse world. People would be terrified to go into doctor’s offices for fear of being murdered. While this thought experiment generally proposes that the doctor will certainly not be caught and the killing will occur only once, the revulsion to very similar and more easily imagined cases explains our revulsion to killing one to save 5.
Not sure what the Hippocratic oath is supposed to have to do with anything — presumably, in a world with routine organ-harvesting, doctors would just take a different oath in the first place! But the point about going to the doctor’s office for checkups is a good one. To test whether that’s the source of our revulsion, we should consider other kinds of organ-harvesting scenarios. For example, we could just make everyone register for random selection for organ harvesting the way we make boys reaching adulthood register for the Selective Service. There would be an orderly process to randomly pick winners, and the only doctors who had anything to do with it would be doctors employed by the state for this purpose — they would have nothing to do with the GPs you saw when you went in for a checkup, so we wouldn’t have the bad consequences of preventable diseases not being prevented. We’d still have a fair amount of fear, of course, but (especially if this system actually made organ failure vanishingly rare) I don’t know that it’s obvious a priori that the level of fear generated would outweigh the good consequences of wiping out death from organ failure in the utility calculus.
I’m pretty sure that it would cause overall harm, particularly because, in the real world, a large percentage of transplanted organs are rejected and one person doesn’t have enough usable organs to save multiple lives. This was argued in much greater detail here.
A further point on this:
The claim that our reaction to extremely distant hypothetical scenarios where organ harvesting was routinely and widely known about somehow explains our reaction to far more grounded hypothetical scenarios where it was a one-off done in secret is…odd. What’s the epistemic story here? What’s the reason for believing that when people think they’re having an immediate intuitive reaction to the latter they’re….subconsciously running through the far more fanciful hypothetical that they’ve somehow mixed up with it and thus forming a confused judgment about the former? I guess I just don’t buy this move at all.
I’ll just quote my explanation that I wrote here.
Let’s begin with an example—the organ harvesting case. A doctor can kill a patient and harvest their organs to save five people. Should they? Our intuitions generally say no.
What’s going on in our brains—what’s the reason we oppose this? Well, we know that social factors and evolution dramatically shape our moral intuitions. So, if there’s some social factor that would result in strong pressure to hold to the view that the doctor shouldn’t kill the person, it’s very obvious that this would affect our intuitions. Are there?
Well, of course. A society in which people went around killing other people for the greater good would be a much worse society. We have good rules to place strong prohibitions on murder, even for the allegedly greater good.
Additionally, it is a practical necessity that we accept, as a society, some doing/allowing distinction. Given that doing the maximally good thing all the time would be far too demanding, as a society we treat there as being some fundamental distinction between doing and allowing. Society would collapse if we treated murder as being only a little bit bad. Thus, it’s super important that we treat murder as very bad. But given that we can’t treat failing to do something unfathomably demanding as horrendous—equivalent to murder—we have to treat there as being some distinction between doing and allowing.
After this distinction is in place, our intuitions about organ harvesting are very obviously explainable. If killing is treated as unfathomably evil, while not saving isn’t, then killing to save will be seen as horrendous.
To see this, imagine if things were the other way. Imagine we were living in a world in which every person will kill one person per day, in an alternative multiverse segment, unless they fast during that day. Additionally, imagine that, in this world, each person saves dozens of people per day in an alternative multiverse segment, unless they take drastic action. In this world, it seems clear that failing to save would be seen as much worse than killing, given that saving is easy while not killing is very difficult. Additionally, imagine that these people saw those whom they were saving, and they felt empathy for them. Thus, not saving someone would provoke similar internal emotional reactions in that world as killing does in ours.
So what do we learn from this? Well, to state it maximally bluntly and concisely, many of our non-utilitarian intuitions are the results of social norms that we design to have good consequences, which we then take to be significant independently of their good consequences. These distinctions are never derivable from plausible first principles, never have clear delineations, and always result in ridiculous reductios. They are mere epiphenomena—an unnecessary byproduct of correct moral reasoning. We correctly see that society needs to enshrine rights as a legal concept, and then incorrectly feel an attachment to them as an intrinsic feature of morality.
When we’re taught moral norms as a child, we’re instructed with rigid norms like “don’t take other people’s things.” We try to reach reflective equilibrium with those intuitions, carefully reflecting until they form coherent networks of moral beliefs. Then, later in life, we take them as the moral truth, rather than derivative heuristics.
Objection 3
Third, we can imagine several modifications of the case that make the conclusion less counterintuitive.
First, imagine that the six people in the hospital were family members whom you cared about equally. Surely we would intuitively want the doctor to bring about the death of one to save five. The only reason we have the opposite intuition in the case where family is not involved is that our revulsion to killing can override other considerations when we feel no connection to the anonymous, faceless strangers whose deaths are caused by the doctor’s adherence to the principle that they oughtn’t murder people.
This one really floored me in the debate. I guess I could be wrong but my assumption would be that no more than one or two out of any one hundred million human beings — not one or two million out of a hundred million, but literally one or two — would be more friendly to murdering a member of their own family to carve them up for their organs than doing the same to a complete stranger.
I’d be curious to hear from people in the chat—just intuitively, what would you hope for in this case? For me, I’d definitely prefer for more of my family members to be saved, rather than fewer. The question didn’t ask about murdering a member of one’s family—it asked whether you’d hope that a doctor would murder one of your family members so that the other five don’t die.
A second objection to this counterexample comes from Savulescu (2013), who designs a scenario to avoid unreliable intuitions. In this scenario there’s a pandemic that affects every single person and makes people become unconscious. One in six people who become unconscious will wake up — the other 5/6ths won’t wake up. However, if the one sixth of people have their blood extracted and distributed, thus killing them, then the five will wake up and live a normal life. It seems in this case that it’s obviously worth extracting the blood to save 5/6ths of those affected, rather than only 1/6ths of those affected.
Similarly, if we imagine that 90% of the world needed organs, and we could harvest one person’s organs to save 9 others, it seems clear it would be better to wipe out 10% of people, rather than 90%.
This is just “moral state of emergency” stuff. All the comments about those intuitions made above apply here.
It’s not a moral state of emergency—in each case, the ratio of rights violated to rights protected is only one to five. To see this case more clearly, I’ll quote Savulescu (partly because I didn’t have time in the original debate).
In Transplant, a doctor contemplates killing one innocent person and harvesting his/her organs to save 5 people with organ failure. This is John Harris’ survival lottery.
But this is a dirty example. Transplant imports many intuitions. For example, that doctors should not kill their patients, that those with organ failure are old while the healthy donor is young, that those with organ failure are somehow responsible for their illness, that this will lead to a slippery slope of more widespread killings, that this will induce widespread terror at the prospect of being chosen, etc, etc
A better version of Transplant is Epidemic.
Epidemic. Imagine an uncontrollable epidemic afflicts humanity. It is highly contagious and eventually every single human will be affected. It causes people to fall unconscious. Five out of six people never recover and die within days. One in six people mounts an effective immune response. They recover over several days and lead normal lives. Doctors can test people on the second day, while still unconscious, and determine whether they have mounted an effective antibody response or whether they are destined to die. There is no treatment. Except one. Doctors can extract all the blood from the one in six people who do mount an effective antibody response on day 2, while they are still unconscious, and extract the antibodies. There will be enough antibodies to save 5 of those who don’t mount responses, though the extraction procedure will kill the donor. The 5 will go on to lead a normal life and the antibody protection will cover them for life.
If you were a person in Epidemic, which policy would you vote for? The first policy, Inaction, is one in which nothing is done. One in six of the world’s population survives. The second policy is Extraction, which kills one but saves five others. There is no way to predict who will be an antibody producer. You don’t know if you will be one of the one in six who can mount an immune reaction or one of the five in six who don’t manage to mount an immune response and would die without the antibody serum.
Put simply, you don’t know whether you will be one who could survive or one who would die without treatment. All you know for certain is that you will catch the disease and fall unconscious. You may recover or you may die while unconscious. Inaction gives you a 1 in 6 chance of being a survivor. Extraction gives you a five in 6 chance.
It is easy for consequentialists. Extraction saves 5 times as many lives and should be adopted. But which would you choose, behind the Rawlsian Veil of Ignorance, not knowing whether you would be immunocompetent or immunodeficient?
I would choose Extraction. I would definitely become unconscious, like others, and then there would be a 5 in 6 chance of waking up to a normal life. This policy could also be endorsed on Kantian contractualist grounds. Not only would rational self-interest behind a Veil of Ignorance endorse it, but it could be willed as a universal law.
Consequentialism and contractualism converge. I believe other moral theories would endorse Extraction.
Since Extraction in Epidemic is the hardest moral case of killing one to save 5, if it is permissible (indeed morally obligatory), then all cases of killing one innocent to save five others are permissible, at least on consequentialist and contractualist grounds.
There is no moral distinction between killing and letting die, despite many people having intuitions to the contrary.
Objection 4
A fourth objection is that, upon reflection, it becomes clear that the action of the doctor wouldn’t be wrong. After all, in this case, there are four more lives saved by the organ harvesting. It seems quite clear that the lives of four people are fundamentally more important than the doctor not sullying themself.
That’s not a further objection. That’s just banging the table and insisting that the only moral principles that are relevant are consequentialist ones — which is, of course, precisely the issue in dispute. Also worth pausing here to note the relevant higher-order evidence. As far as I know, utilitarianism is a distinct minority position among professional philosophers who have ethics as their primary academic specialization (i.e. the people who are most likely to have done extensive reflection on this!).
No—I was explaining how, as a consequentialist, when I consider the morally salient features of the situation—the things that are actually important, that actually matter to people—consequentialism seems to capture what’s fundamentally most important. The notion that we have most reason to kill the person and harvest their organs is not an implausible one.
Objection 5
Fifth, we would expect the correct view to diverge from our intuitions in a wide range of cases: the persistence of moral disagreement and the fact that throughout history we’ve gotten lots of things morally wrong show that the correct view would sometimes diverge from our moral intuitions. Thus, finding some case where utilitarianism diverges from our intuitions is precisely zero evidence against it, because we’d expect the correct view to be counterintuitive sometimes. However, when it’s counterintuitive, we’d expect careful reflection to bring our intuitions more into line with the correct moral view, which is the case, as I’ve argued here.
The comments on meta-ethics above are relevant here. I’ll just add three things here. First, moral judgments and moral intuitions aren’t the same thing. An intuition is an immediate non-inferential judgment. Other kinds of judgments are indirectly and partially based on moral intuitions as well as morally relevant factual information and so on. One big problem with appealing to people having moral judgments in the past that seem obviously crazy to us now as evidence that moral intuitions can steer us wrong is that we have way more access to what moral judgments people made in the past than how much those judgments were informed by immediate intuitions that differed from ours (like, they would have had different feelings of immediate approval and disapproval about particular cases) and how much they were informed by, for example, wacky factual assumptions (e.g. “God approves of slavery and He knows what’s right more than I do” or “women are intellectually inferior to men and allowing them to determine their own destiny would likely lead to disaster”).
There are lots of cases of factual errors in the past. But also, as Singer notes, throughout much of human history, people have harbored fairly egregious views about the moral insignificance of other people. Additionally, even if we were to think that we were special in not having any substantial moral errors, unlike previous societies, the presence of disagreement would mean that many of us are wrong. As I pointed out in the actual debate, even an anti-realist is likely to recognize that they are not infallible, and that if they reflected more they could change their moral views in better ways. Perhaps not objectively better ways, but if knowing more would make you care more about animals, for example, then it seems you should care about animals.
Second, the persistence of moral disagreement could just be evidence of not everyone having identical deep moral intuitions or it could be evidence that some people are better than others at bringing their moral intuitions into reflective equilibrium or (most likely!) some of each without being evidence that some (but not other) intuitions are failing to make contact with the underlying moral reality.
Ideally, we want to hold the kinds of moral views that people won’t look back on in 100 years the way we look back on slavery. However, as I’ve argued at great length, upon reflection we do converge—specifically on utilitarianism.
Third, even if there is an underlying moral reality, moral intuitions are (however this works!) presumably our only means of investigating it. If you believe that, I don’t see how you can possibly say that the counterintuitive consequences of utilitarianism are “zero” evidence against utilitarianism. They’re some evidence. They could perhaps (“on reflection”) be outweighed by other intuitions. Whether that’s the case is…well….what the last ten thousand words have been about!
I explain this in greater detail here. But the basic idea is that, while intuitions are the way we gather evidence for our moral theory, one counterintuitive result isn’t any evidence, because we’d expect the correct moral theory to sometimes be unintuitive, given the fallibility of our moral intuitions. I also pointed this out in the actual debate with Ben.
Objection 6
Sixth, if we use the veil of ignorance, and imagine ourselves not knowing which of the six people we were, we’d prefer saving five at the cost of one, because it would give us a 5/6, rather than a 1/6, chance of survival.
If this is correct, it shows that to achieve the sort of neutrality that the veil of ignorance is supposed to give us, agents in the original position had better be ignorant of how likely they are to be the victim or beneficiary of any contemplated harm.
This is absurd! The reason the veil of ignorance is good is that it allows us to be rational and impartial—we don’t know who we are. Morality is just what we’d do if we were totally rational and impartial, and that would be to act as utilitarians. There’s no justification for depriving us of the extra information.
Notice that without that layer of ignorance, the standard descriptions of the thought experiment aren’t actually true. “You don’t know whether you’ll be male or female, black or white, born into a rich family or a poor family, if there’s slavery you won’t know whether you’re a slave or a master,” etc. Some of these may be true but some of them won’t. Say you’re considering enslaving 1% of the population to serve the needs of the other 99%. If you’re behind the veil of ignorance but you know that, if you form the belief that you won’t be a slave — and you’re right — does that not count as knowledge? You had accurate information from which you formed an inference that you could be 99% sure was correct! On at least a bunch of boringly normal analyses of what makes true belief knowledge, the person who (correctly) concludes from behind the veil of ignorance that they won’t be a slave, and thus endorses slavery out of self-interest, does know they won’t be a slave. That very much defeats the point.
If there were only one rich person, it wouldn’t make sense to structure society as if one were just as likely to be rich as to be poor.
A final thought before leaving the veil of ignorance:
What if we came up with some impossibly contrived scenario whereby harvesting one little kid’s organs (instead of giving him a candy bar) would somehow save the lives of one hundred billion trillion people? As I’ve already indicated, I’m not entirely sure what I make of “moral state of exception” intuitions, but if you do take that idea seriously, here’s a way of cashing it out:
Rawlsianism is a theory of justice — although one that wisely separates justice from interpersonal morality, confining itself to the question of what just basic institutions would look like rather than going into the very different moral sphere of how a person should live their individual life. Plausibly, though, a virtuous person confronted with the choice between upholding and undermining the rules of a just social order should almost always uphold. Perhaps, though, in really unfathomably extreme scenarios a virtuous person would prioritize utility over justice. Again: I’m not entirely sure if that’s right and the good news is that no one will ever have any particular reason to have to figure it out since (unlike more grounded cases of conflicts between justice and utility) it’s just never ever going to come up.
But the entire attractiveness of the veil of ignorance is that it allows us to accurately reason about what we should do. If it’s just a theory of justice, but then you say that there’s some other morality, that wildly complicates the moral theory, sacrificing parsimony. On top of that, it becomes impossible to divvy people up into social groups. If you think you’re equally likely to be rich or poor, black or white, is the same true of being in China vs Iceland? There’s no non-arbitrary way of doing this.
…and that’s a wrap! I’m obviously deeply unpersuaded that any of these arguments actually give anyone much reason to reconsider the deep moral horror nearly everyone has when thinking about this consequence of utilitarianism, but there’s certainly enough here to keep it interesting.
Thanks Ben! You kept it interesting too!
One More Objection to Rights Just as a Treat
This is an excerpt from my book.
(Chappell, July 31, 2021) has given a decisive paradox for deontology. The scenario starts with two obvious assumptions: “(1) Wrong acts are morally dispreferable to their permissible alternatives. If an agent can bring about either W1 or W2, and it would be wrong for them to bring about W1 (but not W2), then they should prefer W2 over W1.
(2) Bystanders should similarly prefer, of a generic moral violation, that it not be performed. As Setiya (2018, p. 97) put it, "In general, when you should not cause harm to one in a way that will benefit others, you should not want others to do so either.”
He then adds
“(3) Five Killings > One Killing to Prevent Five.” Here ‘>’ just means preferable from the standpoint of a third-party benevolent observer. So this just means that, on the deontologist’s view, a third party should prefer five killings to one killing that prevents five. He gives the following definitions:
"Five Killings: Protagonist does nothing, so the five other murders proceed as expected.
One Killing to Prevent Five: Protagonist kills one as a means, thereby preventing the five other murders.”
This is very unintuitive. However, he has an additional argument for why it’s wrong.
He introduces claim 4
“(4) One Killing to Prevent Five >> Six Killings (Failed Prevention).^[Here I use the '>>' symbol to mean is vastly preferable to. Consider how strongly you should prefer one less (generic) murder to occur in the world. I will use 'vast' to indicate preferences that are even stronger than that.]”
Six Killings is defined as “Six Killings: Instead of attempting to save the five, Protagonist decides to murder his victim for the sheer hell of it, just like the other five murderers.” So this is very intuitive: six murders are clearly worse, by more than one murder’s worth, than one murder which prevents five murders.
He then introduces claim five as the following. “(5) Six Killings (Failed Prevention) >= Six Killings.”
Six killings failed prevention is defined as “Six Killings (Failed Prevention): As above, Protagonist kills one as a means, but in this case fails to achieve his end of preventing the five other murders. So all six victims are killed.”
This is obvious enough. Killing one indiscriminately after five other people commit murder isn’t any better than killing one to try to save five but ultimately being unsuccessful.
Claim six is “(6) It is not the case that Five Killings >> Six Killings.” For one state of affairs to be >> another, the difference between them has to be worth more than one extra murder. However, in this case, the difference is precisely one extra murder. This claim is thus trivially true.
He then concludes:
“Recall, from (3)–(5) and transitivity, we have already established that deontologists are committed to:
(7) Five Killings > One Killing to Prevent Five >> Six Killings (Failed Prevention) >= Six Killings.
Clearly, (6) and (7) are inconsistent. By transitivity, the magnitude of preferability between any two adjacent links of the chain must be strictly weaker than the preferability of the first item over the last. But the first and last items of the chain are Five Killings and Six, which differ but moderately in their moral undesirability. The basic problem for deontologists is that there just isn't enough moral room between Five Killings and Six to accommodate the moral gulf that ought to lie between One Killing to Prevent Five and Six Killings (Failed Prevention). As a result, they are unable to accommodate our moral datum (4), that Six Killings (Failed Prevention) is vastly dispreferable to One Killing to Prevent Five.”
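To see the arithmetic behind this with some concrete numbers (these are my own, purely illustrative): suppose each generic murder adds one unit of badness, so Five Killings scores 5 and Six Killings scores 6—a gap of exactly one murder, which, by (6), is not a “vast” gap. But (3) puts One Killing to Prevent Five somewhere above 5 in badness, (4) requires Six Killings (Failed Prevention) to be worse than that by more than one murder’s worth—so above 6—and (5) says Six Killings is at least as bad as that. Six Killings would then have to score above 6 while actually scoring 6. The “vast” difference demanded by (4) simply can’t fit inside the one-murder gap permitted by (6).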
This argument poses a straightforward paradox for the deontologist, one that is very hard to refute.
Concluding remarks
Well, that was a lot of fun! Thanks for the response Ben, and thanks to everyone for reading through this. If you got to the end, you have my respect.
First Rebuttal For My Debate With Arjun Panickssery
Part 2
INTRODUCTION
Arjun has written his opening statement in our debate about utilitarianism (my opening statement can be found here).
As far as I know, all attempts to find this have proven flawed, but I think that weak deontology is less flawed, and more importantly that in light of moral uncertainty people should make choices that take into account the possibility that most reasonable moral theories could be correct.
I’d agree that given moral uncertainty, we shouldn’t act as strict utilitarians. However, this fact does nothing to show that utilitarianism is incorrect. This debate is about what one in fact has most reason to do — be it the utilitarian act in all situations or something else — and discussion of how we should practically reason given moral uncertainty (which is much like factual uncertainty) has nothing to say about which theory is actually correct.
Arjun next claims that part of utilitarianism involves believing
Hedonism or preferentism: The only intrinsic good is pleasure (for the “hedonistic utilitarian”) or desire-satisfaction (for the “preference utilitarian”).
Now, while I am a hedonist, this is a bad definition. One can, like Chappell, be a utilitarian objective list theorist.
Consequentialism suffers from well-known thought experiments in which intuitions leads toward decision-making based on factors other than consequences. These are well known so I won’t list them: I mentioned some in Just Say No to Utilitarianism and you can find some more in the first section of Michael Huemer’s blog post Why I Am Not a Utilitarian.
I wrote a ten-part blog series responding to Huemer’s Why I Am Not a Utilitarian—see parts 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10.
My opening statement provided an extensive defense of hedonism, so nothing more is worth saying on that subject. I’ll return to the Just Say No to Utilitarianism article in a moment.
PARTIALITY FAILS WHEN WE REALLY CONSIDER ITS IMPLICATIONS
The weakest of the three tenets is impartiality. It conflicts with the strong intuition that people have particular obligations to specific people as a result of their relationships to them. Parents have an obligation to consider the interests of their children over the interests of strangers, all else equal.
There are several objections to this point.
First, a strong form of partiality is clearly evolutionarily debunkable—there’s an obvious evolutionary reason for us to favor our close kin over strangers. However, as the moral circle expands, we should come to care more about others.
Second, partiality is collectively self-defeating. After all, if we all do what’s best for our families at the expense of others, then—given that everyone is part of a family—every person doing what’s best for their own family will be bad for families as a whole. This is the basic logic behind the prisoner’s dilemma — and it’s argued in much greater detail by Parfit.
Third, we can rig up scenarios where this ends up being arbitrarily bad. Suppose that you and I both have families that will experience 100,000,000 units of suffering. However, we each have 50,000,000 opportunities to either decrease our own family’s suffering by one unit or decrease the other’s family’s suffering by two units. If we both do what’s best overall, rather than just what’s best for our own family, neither family will suffer at all, while if we act on the partial maxim, the result is unfathomably morally bad.
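Spelling out the arithmetic: if we each spend all 50,000,000 opportunities on the other’s family, each family’s suffering falls by 2 × 50,000,000 = 100,000,000 units—that is, to zero. If we each act partially instead, each family’s suffering falls by only 1 × 50,000,000 = 50,000,000 units, leaving 50,000,000 units apiece. Both families do strictly better under the impartial policy.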
Fourth, it seems very clear that when we think in terms of what is truly important, our family members are not intrinsically more important than others, meaning that our special obligations to them are only practical. As Chappell says
As a general rule, when other theorists posit compelling agent-relative reasons (e.g. to care about one’s own children), I don’t want to deny them. I’d rather expand upon those reasons, and make them agent-neutral. You think your kid matters immensely? Me too! In fact, I think your kid matters so much that others (besides you) have similarly strong reasons to want your kid’s life to go well. And, yes, you have similarly strong reasons to want other kids’ lives to go well, too.
Fifth, there’s an obvious reason why, even if others are just as important as those close to us, we’d neglect their interests. We generally don’t spend very much time thinking about strangers who are far away—just as they don’t spend much time thinking about us—and so we don’t really think about our obligations to them. As Chappell says
We tend not to notice those latter reasons so much. So it might seem incredible to claim that you have equally strong reasons to want the best for other kids. (They sure don’t feel as important to us.) But reasons only get a grip on us insofar as we attend to them, and we tend not to think much about strangers—and even if we tried, we don’t know much about them, so their interests are apt to lack vivacity.
The better you get to know someone, the more you tend to (i) care about them, and (ii) appreciate the reasons to wish them well. Moreover, the reasons to wish them well don’t seem contingent on you or your relationship to them—what you discover is instead that there are intrinsic features of the other person that makes them awesome and worth caring about. Those reasons predate your awareness of them. So the best explanation of our initial indifference to strangers is not that there’s truly no (or little) reason to care about them (until, perhaps, we finally get to know them). Rather, the better explanation is simply that we don’t see the reasons (sufficiently clearly), and so can’t be emotionally gripped or moved by them, until we get to know the person better. But the reasons truly were there all along.
Sixth, utilitarianism can provide a good reason for us to have special obligations. There’s a good practical reason for us to care more about our families: we have much greater ability to influence our family and close friends than others, and creating tight-knit relationships that involve adopting special obligations, like marriage or friendship, is clearly a utilitarian good. This, combined with the previous arguments, is the best account of special obligations—they make sense on practical grounds, but when they don’t, they don’t seem valuable.
Seventh, it seems that for something to count as morality, it must be impartial. Egoism is not a legitimate candidate for morality, because it doesn’t consider what we have most impartial reason to do overall—it just considers what’s best for us.
ADDRESSING ARJUN’S COUNTEREXAMPLES
Arjun’s counterexamples come from Caplan, and he described them in this article.
Grandma: Grandma is a kindly soul who has saved up tens of thousands of dollars in cash over the years. One fine day you see her stashing it away under her mattress, and come to think that with just a little nudge you could cause her to fall and most probably die. You could then take her money, which others don’t know about, and redistribute it to those more worthy, saving many lives in the process. No one will ever know. Left to her own devices, Grandma would probably live a few more years, and her money would be discovered by her unworthy heirs who would blow it on fancy cars and vacations. Liberated from primitive deontic impulses by a recent college philosophy course, you silently say your goodbyes and prepare to send Grandma into the beyond.
I think one should not kill Grandma in this case — after all, things go best when you’re the type of person who wouldn’t kill your grandmother to donate her money to save lives. The virtues of a good utilitarian would rule this out. As Chappell notes
While utilitarianism as a theory is fundamentally impartial, it does not recommend that we attempt to naively implement impartiality in our own lives and decision-making if this would prove counterproductive in practice. This allows plenty of scope for utilitarians to accommodate various kinds of partiality on practical grounds.
But we can change the case slightly and ask whether, were a vicious person with no other obligations to do this and then give away half of the money, that would be good. My answer to that would be yes, relative to doing nothing. After all, it would save a bunch of lives.
Chappell additionally notes
Finally, it is worth flagging that the history of partiality includes many examples of group discrimination, such as discrimination based on race, sex, or religion, that we now recognize as morally unacceptable. While this certainly does not prove that all forms of partiality are similarly problematic, it should at least give us pause, as we must consider the possibility that some of our presently-favored forms of partiality (or discrimination on the basis of perceived similarity or closeness) could ultimately prove indefensible.
Our intuitions about this case are caused by a few things. First, contemplation of how bad it is to die. However, in this case, we’re saving a bunch of lives, so the badness of death cuts both ways. If, rather than killing grandma to save a bunch of other people, our grandma was the one who was saved—or someone else close to us was saved—at the expense of someone else’s grandmother, our intuitions about the case would flip. It’s only when you compare a real, flesh-and-blood person to nameless, faceless, far away strangers that things seem unintuitive.
Given that this would save the most lives, it seems clear that a perfectly moral third party observer would hope that grandma would die at that moment. After all, that would bring about a better world. But if we have most reason to hope for something to happen, it seems we also would have reason to bring it about.
Additionally, for this case to be justified by utilitarianism, we’d have to have near total certainty — perhaps a declaration from God — that we wouldn’t be found out. Thus, biting this bullet doesn’t require accepting any unintuitive, real-world results.
Additionally, grandma would choose to be killed from behind the veil of ignorance. If she didn’t know whether she was the person near death or one of the people dying of malaria who would be saved, she would consent to this. Given that making her totally rational and impartial would make her favor the action, that gives good reason to favor the action.
Additionally, comments I made about organ harvesting in this article clearly apply here—I’ll quote them in full.
What’s going on in our brains—what’s the reason we oppose this? Well, we know that social factors and evolution dramatically shape our moral intuitions. So, if there’s some social factor that would result in strong pressure to hold to the view that the doctor shouldn’t kill the person, it’s very obvious that this would affect our intuitions. Are there?
Well, of course. A society in which people went around killing other people for the greater good would be a much worse society. We have good rules to place strong prohibitions on murder, even for the allegedly greater good.
Additionally, it is a practical necessity that we accept, as a society, some doing/allowing distinction. Given that doing the maximally good thing all the time would be far too demanding, as a society we treat there as being some fundamental distinction between doing and allowing. Society would collapse if we treated murder as being only a little bit bad. Thus, it’s super important that we treat murder as very bad. But given that we can’t treat failing to do something unfathomably demanding as horrendous—equivalent to murder—we have to treat there as being some distinction between doing and allowing.
After this distinction is in place, our intuitions about organ harvesting are very obviously explainable. If killing is treated as unfathomably evil, while not saving isn’t, then killing to save will be seen as horrendous.
To see this, imagine if things were the other way. Imagine we were living in a world in which every person will kill one person per day, in an alternative multiverse segment, unless they fast during that day. Additionally, imagine that, in this world, each person saves dozens of people per day in an alternative multiverse segment, unless they take drastic action. In this world, it seems clear that failing to save would be seen as much worse than killing, given that saving is easy while not killing is very difficult. Additionally, imagine that these people saw those whom they were saving, and they felt empathy for them. Thus, not saving someone would provoke similar internal emotional reactions in that world as killing does in ours.
So what do we learn from this? Well, to state it maximally bluntly and concisely, many of our non-utilitarian intuitions are the results of social norms that we design to have good consequences, which we then take to be significant independently of their good consequences. These distinctions are never derivable from plausible first principles, never have clear delineations, and always result in ridiculous reductios. They are mere epiphenomena—an unnecessary byproduct of correct moral reasoning. We correctly see that society needs to enshrine rights as a legal concept, and then incorrectly feel an attachment to them as an intrinsic feature of morality.
When we’re taught moral norms as a child, we’re instructed with rigid norms like “don’t take other people’s things.” We try to reach reflective equilibrium with those intuitions, carefully reflecting until they form coherent networks of moral beliefs. Then, later in life, we take them as the moral truth, rather than derivative heuristics.
Consider the situation in greater depth and the utilitarian conclusion becomes more clear. The death of your grandmother is a tragedy, but the death of dozens of others is a far greater tragedy. You’re just choosing between those states of affairs.
To quote my earlier article again.
Similarly, as Yetter-Chappell points out, there’s lots of status quo bias. A major reason why … it seems wrong to push the guy off the bridge in the trolley problem is because that would deviate from the status quo. If we favor the status quo, then it’s no surprise that utilitarianism would go against our intuitions about favoring the status quo. Our aversion to loss also explains why we want to keep things similar to how they are currently.
If we accept that the lives of lots of people are more important than the life of one grandmother, then the aversion to killing one’s grandmother to save many lives must be grounded in one of
A) Special obligations. However, utilitarianism provides a great account of this—it explains why we have special obligations to save our family members.
B) Some conception of rights—this was refuted in my first article.
C) Status quo bias—for it explicitly advocates against shifting from a worse world that happens to be the status quo to a better world.
To quote my article for a third time,
Similarly, people care much more about preventing harms from foes with faces than foes without faces. So in the organ harvesting case, for example, when the foes have a face (namely, you) it’s entirely clear why the one murder begins to seem worse than the five deaths.
Now let’s modify the scenario to imagine that it involved killing your grandmother to save five of your other grandparents. Well, this seems clearly worth it, as I argue in my debate opening statement. The death of a grandmother is very terrible, but the death of five grandparents is worse. This is especially true because all of your grandparents would rationally be in favor of it, assuming they didn’t know who would be killed.
Thus, the scenario is as follows. Five of your grandparents will die and one will be fine, unless you kill the one who would be fine, in which case the five are saved. However, you don’t know which one will be fine and which ones will die. This case seems structurally similar to the case of killing one grandparent to save five others — it just deprives you, from the start, of knowledge of which grandparent would have been fine. However, impartially considered, saving five other people is as important as saving five of your grandparents; therefore, by transitivity, if you should kill your grandparent in this scenario, you should also do so in the original scenario.
Now, let’s modify the scenario again. You have three choices:
A) Kill two random people out of the five people that will be saved to save all five
B) Kill your grandmother to save five
C) Don’t save any of the 5.
A would clearly be better than C — it is, after all, an ex ante Pareto improvement: every single person in the group would clearly hope that you’d choose A over C. However, B plausibly seems better than A — you kill fewer people, and the person you do kill is old and near death. Thus, by transitivity, B is better than C, which entails the utilitarian judgment about this case.
Finally, in this scenario, it would clearly be better to convince the grandmother to donate voluntarily and save the lives. Thus, for the counterexample to work, you also need a guarantee that you won’t be able to convince her to donate.
Arjun’s second counterexample is as follows.
Child: Your son earns a good living as a doctor but is careless with some of his finances. You sometimes help him out by organizing his receipts and invoices. One day you have the opportunity to divert $1,000 from his funds to a charity where the money will do more good; neither he nor anyone else will ever notice the difference, besides the beneficiaries. You decide to steal your child’s money and promote the overall good.
Now, you plausibly have good pragmatic reasons to avoid stealing from your child. Relationships with covert theft tend not to work out very well. However, the act considered in isolation would clearly be good.
In this case, seriously assess the salient features. On the one hand, you have the ability to save about a quarter of a life. On the other hand, it would cost someone else 1,000 dollars that they don’t need. Clearly, a person’s life is much more important than 1,000 dollars.
All the things I said about the previous situation apply here — it’s plausibly rooted in status quo bias, relies on an erroneous notion of rights, it would be hoped for by a perfectly moral third party observer, and so on.
Let’s consider modifying the scenario to be about stealing 9,000 dollars — after all, there are plausibly zero people who think it would be bad to steal 1,000 but good to steal 9,000. Well, in this case the question is whether you should steal money from your child that won’t adversely affect them in order to save the lives of two people. The answer is obviously yes! Stealing to save several lives is a deal worth taking.
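To make the arithmetic the scenario trades on explicit, here is a minimal sketch. The cost-per-life figure is a hypothetical round number chosen only to match the “one fourth of a person” and “two lives” figures used above; it is not a claim about any particular charity.

```python
# Minimal sketch of the arithmetic the stealing scenario trades on. The
# cost-per-life figure is a hypothetical round number chosen to match the
# fractions used in the text, not a claim about any particular charity.
ASSUMED_COST_PER_LIFE = 4_000  # dollars, hypothetical

def lives_saved(donation_dollars: float) -> float:
    """Statistical lives saved by a donation, under the assumed cost per life."""
    return donation_dollars / ASSUMED_COST_PER_LIFE

print(lives_saved(1_000))  # 0.25  -- roughly the "one fourth of a person" above
print(lives_saved(9_000))  # 2.25  -- roughly the two lives in the modified case
```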
The only reason this scenario provokes such horrifying indifference to the fate of several lives is that the way in which the lives are saved is indirect, society would be dysfunctional if people saw donating to effective charities as worth stealing for, and the person harmed is someone to whom we have good reason to have special obligations. Yet when we really reflect on the scenario — on what really matters about it — it’s comparing trifles to human lives. It’s thus a very easy decision.
Moreover, for the reasons described above, it’s exactly the type of intuition that we’d expect to be unreliable. When we compare people that are close to us to far away, nameless faceless people, it’s totally unsurprising that we’d care more about the people close to us. This scenario is like Ted Cruz proposing the reductio to utilitarianism that it cares equally about Americans and Iraqis — the fact that one theory privileges the very near isn’t a good thing.
I’M NOT EMOTIONAL; YOU ARE
Next, Arjun says
I’m not sure what angle Matthew will take, so I’ll only anticipate two counterarguments in advance. First, there’s a response rejecting the intuitions in the specific cases on the grounds that they’re based on an emotional reaction that isn’t morally relevant. Utilitarianism also rests on intuitions and a case just as strong could be made that those intuitions have an emotional motivation.
Utilitarianism rests on intuitions too, yet its intuitions are much more reliable. To quote an earlier article I wrote:
One 2012 study finds that asking people to think more makes them more utilitarian. Conversely, when they have less time to think, they become less utilitarian. If reasoning led to utilitarianism, this is what we’d expect: more time to reason would make people proportionately more utilitarian.
A 2021 study, compiling the largest available dataset, concluded across 8 different studies that greater reasoning ability is correlated with being more utilitarian. The dorsolateral prefrontal cortex’s length correlates with general reasoning ability. Its length also correlates with being more utilitarian. Coincidence? I think not1.
Yet another study finds that being under greater cognitive pressure makes people less utilitarian. This is exactly what we’d predict. Much like being under cognitive strain makes people less likely to solve math problems correctly, it also makes them less likely to answer moral questions correctly, where answering correctly means answering in the utilitarian way.
Yet the data doesn’t stop there. A 2014 study found a few interesting things. It looked at patients with damaged VMPCs—a brain region (the ventromedial prefrontal cortex) responsible for lots of emotional judgments. It concluded that they were far more utilitarian than the general population. This is exactly what we’d predict if utilitarianism were caused by good reasoning and careful reflection, and alternative theories were caused by emotions. Inducing positive emotions in people also makes them more utilitarian—which is what we’d expect if negative emotions were driving people not to accept utilitarian results.
THIS APPROACH TO ETHICS IS BAD
As one might be able to deduce from this sub-header, I’m not a fan of this approach to ethics that involves rejecting plausible principles that explain a lot of moral data based on a few apparent counterexamples. I argue this point in this article:
The fact that there are lots of cases where utilitarianism diverges from our intuitions is not surprising on the hypothesis that utilitarianism were correct. This is for two reasons.
There are enormous numbers of possible moral scenarios. Thus, even if the correct moral view corresponds to our intuitions in 99.99% of cases, it still wouldn’t be too hard to find a bunch of cases in which the correct view doesn’t correspond to our intuitions.
Our moral intuitions are often wrong. They’re frequently affected by unreliable emotional processes. Additionally, we know from history that most people have had moral views we currently regard as horrendous.
Because of these two factors, our moral intuitions are likely to diverge from the correct morality in lots of cases. The probability that the correct morality would always agree with our intuitions is vanishingly small. Thus, given that this is what we’d expect of the correct moral view, the fact that utilitarianism diverges from our moral intuitions frequently isn’t evidence against utilitarianism. To see if they give any evidence against utilitarianism, let’s consider some features of the correct moral view, that we’d expect to see.
The correct view would likely be able to be proved from lots of independent plausible axioms. This is true of utilitarianism.
We’d expect the correct view to make moral predictions far ahead of its time—like for example discerning the permissibility of homosexuality in the 1700s.
While our intuitions would diverge from the correct view in a lot of cases, we’d expect careful reflection about those cases to reveal that the judgments given by the correct moral theory to be hard to resist without serious theoretical cost. We’ve seen this over and over again with utilitarianism, with the repugnant conclusion , torture vs dust specks , headaches vs human lives, the utility monster, judgments about the far future, organ harvesting cases and other cases involving rights, and cases involving egalitarianism. This is very good evidence for utilitarianism. We’d expect incorrect theories to diverge from our intuitions, but we wouldn’t expect careful reflection to lead to the discovery of compelling arguments for accepting the judgments of the incorrect theory. Thus, we’d expect the correct theory to be able to marshal a variety of considerations favoring their judgments, rather than just biting the bullet. That’s exactly what we see when it comes to utilitarianism.
We’d expect the correct theory to do better in terms of theoretical virtues, which is exactly what we find.
We’d expect the correct theory to be consistent across cases, while other theories have to make post hoc changes to the theory to escape problematic implications—which is exactly what we see.
There are also some things we’d expect to be true of the cases where the correct moral view diverges from our intuitions. Given that in those cases our intuitions would be making mistakes, we’d expect there to be some features of those cases which make our intuitions likely to be wrong. There are several of those in the case of utilitarianism’s divergence from our intuitions.
A) Our judgments are often deeply affected by emotional bias1
B) Our judgments about the morality of an act often overlap with other morally laden features of a situation. For example, in the organ harvesting case, it’s very plausible that lots of our judgment relates to the intuition that the doctor is vicious—this undermines the reliability of our judgment of the act.
C) Anti-utilitarian judgments get lots of weird results and frequently run into paradoxes. This is more evidence that they’re just rationalizations of unreflective seemings, rather than robust reflective judgments.
D) Lots of the cases where our intuitions lead us astray involve cases in which a moral heuristic has an exception. For example, in the organ harvesting case, the heuristic “Don’t kill people,” has a rare exception. Our intuitions formed by reflecting on the general rule against murder will thus be likely to be unreliable.
(I elaborate more on this point in the conclusion of the linked article, but I’m excluding it for word count reasons).
TWO MORE CHALLENGES
While we’re here, I’ll present two explanatory challenges for non-utilitarian accounts
1
Suppose one is deciding between two actions. Action 1 would have a 50% chance of increasing someone’s suffering by 10 units and action 2 would have a 100% chance of increasing their suffering by 4 units. It seems clear that one should take action 2. After all, the person is better off in expectation.
However, non-utilitarian theories have trouble accounting for this. If there is a wrongness of violating rights that exists over and above the harm caused, then, assuming the badness of violating rights is equivalent to 8 units of suffering, action 1 would come out better (a ½ chance of 18 units of badness, or 9 in expectation, is less bad than a certain 12).
The non utilitarian may object that the badness of the act depends on how much harm is done. They might say that the first action is a more serious rights violation. Suppose the formula they give is that the badness of a rights violation = twice the amount of suffering caused by that rights violation.
This leaves them open to a few issues. First, it assigns no badness to rights violations that cause no suffering; thus, this account can’t hold that harmless rights violations are bad. Second, it doesn’t sit well with the idea of rights. Rights violations seem to add badness to the act done, independent of the suffering caused.
Maybe the deontologist can work out a complex arithmetic to avoid this issue. However, this is an issue that is easy to solve for utilitarians, yet which requires complexity for deontologists and others who champion rights.
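Here is a minimal sketch of that expected-badness arithmetic, using the numbers stipulated above (a flat penalty of 8 units per rights violation, and the alternative proportional penalty of twice the suffering caused):

```python
# Minimal sketch of the expected-badness arithmetic above, using the numbers
# stipulated in the text: a flat rights-violation penalty of 8 units, and the
# alternative proportional penalty of twice the suffering caused.

def expected_badness(prob: float, suffering: float, penalty: float) -> float:
    """Chance the act harms the person times (suffering plus rights-violation penalty)."""
    return prob * (suffering + penalty)

# Flat penalty of 8 units per rights violation
action_1 = expected_badness(0.5, 10, 8)   # 0.5 * 18 = 9
action_2 = expected_badness(1.0, 4, 8)    # 1.0 * 12 = 12
print(action_1 < action_2)  # True: the flat-penalty view rates action 1 as less bad

# Proportional penalty: the violation's badness equals twice the suffering caused
action_1p = expected_badness(0.5, 10, 2 * 10)  # 0.5 * 30 = 15
action_2p = expected_badness(1.0, 4, 2 * 4)    # 1.0 * 12 = 12
print(action_1p < action_2p)  # False: the proportional view rates action 2 as less bad
```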
2
Consequentialism provides the only adequate account of how we should treat children. Several actions done to children are widely regarded as justifiable, yet would not be if done to adults:
Compelling them to do minimal forced labor (chores).
Compelling them to spend hours a day at school, even if they vehemently dissent and would like to not be at school.
Forcing them to learn things like multiplication, even if they don’t want to.
Forcing them to go to bed at whatever time their parents think will make things go best, rather than when they want to.
Not allowing them to leave their house, however much they protest.
Disciplining them in ways that cause them to cry, for example putting them on time-out.
Controlling the food they eat, who they spend time with, what they do, and where they are at all times.
However, lots of other actions are justifiable when done to adults, yet not to children:
Having sex with them if they verbally consent.
Not feeding them (i.e., one shouldn’t be arrested for failing to feed a nearby homeless person, but one should be for failing to feed one’s children. Not feeding one’s children is morally worse than beating them, while the same is not true of unrelated adults).
Employing them in damaging manual labor.
Consequentialism provides the best account of these obligations. Each of these obligations makes things go best, which is why they apply. Non consequentialist accounts have trouble with these cases.
One might object that children can’t consent to many of these things, which makes up the difference. However, consent fails to provide an explanation. It would be strange to say, for example, that the reason you can prohibit someone from leaving your house is because they don’t consent to leaving your house. Children are frequently forced to do things without consent, like learn multiplication, go to school, and even not put their fingers in electrical sockets. Thus, any satisfactory account has to explain why their inability to consent only bars them from consenting to some of those things.
CONCLUSION
Well, that’s all for now folks. That’s my response to Arjun. His criticisms of utilitarianism are the standard (unsuccessful) objections. See you in the next one!
Second Rebuttal For My Debate With Arjun Panickssery
Part 3
INTRODUCTION
Arjun has written his rebuttal to my opening statement. A significant majority of the points I raised in my opening statement were unaddressed, and the ones that were addressed were not adequately refuted.
THEORETICAL VIRTUES
The first point I raised was about theoretical virtues — utilitarianism is simpler, more parsimonious, more clear, and so on. This favors utilitarianism considerably, for theoretical virtues are crucial to evaluating a theory.
RETURN TO HISTORY
Two other features favor utilitarianism based on the historical record.
1 Utilitarians, when they diverge from common sense, tend to be right — often hundreds of years ahead of their time. Bentham, for example, supported legalizing homosexuality in the 1700s. We’d expect the correct moral theory to get things right far ahead of time, and that’s exactly what we observe.
2 All examples of moral atrocities throughout history have contradicted utilitarianism because they’ve involved systematic exclusion from the moral circle — something which utilitarianism rules out.
To both of these points, we saw no response. As is typical of the critics of utilitarianism, there was no refutation of the many features of the cumulative case for utilitarianism.
THE SYLLOGISM WHICH IF TRUE PROVES UTILITARIANISM
This is the syllogism in question.
These premises, if true, prove utilitarianism.
1 A rational egoist is defined as someone who does only what produces the most good for themselves
2 A rational egoist would do only what produces the most happiness for themselves
3 Therefore only happiness is good (for selves who are rational egoists)
4 The types of things that are good for selves who are rational egoists are also good for selves who are not rational egoists unless they have unique benefits that only apply to rational egoists
5 happiness does not have unique benefits that only apply to rational egoists
6 Therefore only happiness is good for selves who are or are not rational egoists
7 All selves either are or are not rational egoists
8 Therefore, only happiness is good for selves
9 Something is good, if and only if it is good for selves
10 Therefore only happiness is good
11 We should maximize good
12 Therefore, we should maximize only happiness
Arjun asks for clarification on whether my argument is this:
Call a “rational egoist” someone who does only what maximizes his self-interest.
Hedonism is correct.
So happiness is the only kind of good.
So a rational egoist only does what maximizes his happiness.
If something is in the self-interest of rational egoists but not good for people in general, then it must have unique benefits that only apply to rational egoists.
A person’s happiness is in his self-interest if he is a rational egoist, but it doesn’t have unique benefits to him only if he is a rational egoist.
So a person’s happiness must be good for him.
In order for something to be good in general, it has to be in the self-interest of some people.
Because hedonism is true, this is the same as saying that for something to be good in general, it must make some people happy.
Only the total happiness is like this.
So the total happiness is the only good.
We should act in a way that maximizes the good.
So we should act in a way that maximizes the total happiness, which is utilitarianism.
This seems similar to my argument but is in some ways different. Given that my argument is valid, Arjun should explain which premises he rejects.
I guess that “happiness” when used separately from “happiness for oneself” means “the total happiness,” because the argument might be trivially circular otherwise? I’m not really sure what’s going on here and it’s possible I’ve changed the argument in trying to rewrite it.
This relates to this conjunction of premises.
8 Therefore, only happiness is good for selves
9 Something is good, if and only if it is good for selves
10 Therefore only happiness is good
11 We should maximize good
12 Therefore, we should maximize only happiness
These establish that only happiness is good and that we should maximize good. Saying happiness is good means it’s good overall, not merely for selves.
Even if we accept that hedonism is correct, which I wouldn’t grant, claim (8) and (12) aren’t plausible unless you already accept the conclusion that utilitarianism is correct.
I already provided an argument for this
It seems hard to imagine something being good, but being good for literally no one. If things can be good while being good for no one, there would be several difficult entailments that one would have to accept, such as that there could be a better world than this one despite everyone being worse off.
To advance the claim in greater detail, here are two specific implications
1 A universe with no life could have moral value, given that things can be good or bad while being good or bad for no one. Someone who wants to deny this implication could claim that good things must relate to people in some way despite not being directly good for anyone, yet this would be ad hoc.
2 If something could be bad while being bad for no one, then a world containing galaxies full of people experiencing horrific suffering, for no one’s benefit, could be better than a world where everyone is happy and prosperous but which contains vast quantities of things that are bad for no one yet bad nonetheless. For example, suppose we take the violation of rights to be bad even when it’s bad for no one. A world where everyone violated everyone else’s rights unfathomably many times, in ways that harm literally no one, and where everyone prospers, could then, given the sheer number of violations, be morally worse than a world in which everyone endures the most horrific forms of agony imaginable.
There are things that are good that aren’t in the direct self-interest of any particular person, and you have reasons to act other than maximizing the good, like to meet your obligations.
No argument was given for this conclusion. Start with the claim that things can be good while being good for no one: Arjun gave no examples of such things, so the response will depend on which specific examples he has in mind. The claim that we have special obligations was addressed in the previous article. A few more objections:
1 This would make it distinct from other domains of practical reasoning like the epistemic domain, where we should believe what we have the most reason to believe, but there are no absolute epistemic obligations.
2 As I argue here, utilitarianism can explain all other normative concepts in terms of value. However, introducing the concepts of obligation — which are not reducible to other normative concepts — complicates the fundamental normative concept.
3 This argument is a syllogism
1 Things are good iff they are important
2 Obligations cannot be grounded in things that are not important
3 Therefore, obligations must be grounded in things that are good
4 If obligations are grounded in things that are good, then things’ status as good explains the obligations
5 Therefore, things’ status as good explains the obligations
6 Non-consequentialism denies that things’ status as good explains the obligations
7 Therefore, non-consequentialism is false.
Moving on to hedonism, Arjun says he rejects it, but he hasn’t addressed the dozens of arguments I’ve given for it. Thus, hedonism is on solid footing.
HARSANYI’S PROOF
I’ll just quote what I said about Harsanyi’s argument in the opening statement.
Harsanyi’s argument is as follows.
Ethics should be impartial—it should be a realm of rational choice that would be undertaken if one was making decisions for a group, but was equally likely to be any member of the group. This seems to capture what we mean by ethics. If a person does what benefits themselves merely because they don’t care about others, that wouldn’t be an ethical view, for it wouldn’t be impartial.
So, when making ethical decisions one should act as they would if they had an equal chance of being any of the affected parties. Additionally, every member of the group should be VNM rational and the group as a whole should be VNM rational. This means that their preferences should satisfy the four standard axioms of rational decision theory, which are accepted across the board (they’re slightly technical but basically universally agreed upon): completeness, transitivity, continuity, and independence.
These combine to form a utility function, which represents the choice worthiness of states of affairs. For this utility function, it has to be the case that a one half chance of 2 utility is equally good to certainty of 1 utility. 2 utility is just defined as the amount of utility that’s sufficiently good for a 50% chance of it to be just as good as certainty of 1 utility.
So now as a rational decision maker you’re trying to make decisions for the group, knowing that you’re equally likely to be each member of the group. What decision making procedure should you use to satisfy the axioms? Harsanyi showed that only utilitarianism can satisfy the axioms.
Let’s illustrate this with an example. Suppose you’re deciding whether to take an action that gives 1 person 2 utility or 2 people 1 utility. The above axioms show that you should be indifferent between them. You’re just as likely to be each of the two people, so from your perspective it’s equivalent to a choice between a 1/2 chance of 2 utility and certainty of 1 utility. We saw before that those are equally valuable, a 1/2 chance of 2 utility is by definition equally good to certainty of 1 utility. 2 utility is just the amount of utility for which a 1/2 chance of it will be just as good as certainty of 1 utility. So we can’t just go the Rawlsian route and try to privilege those who are worst off. That is bad math!! The probability theory is crystal clear.
Now let’s say that you’re deciding whether to kill one to save five, and assume that each of the 6 people will have 5 utility. Well, from the perspective of everyone, all of whom have to be impartial, the choice is obvious. A 5/6 chance of 5 utility is better than a 1/6 chance of 5 utility. It is better by a factor of five. These axioms combined with impartiality leave no room for rights, virtue, or anything else that’s not utility function based.
This argument shows that morality must be the same as universal egoism—it must represent what one would do if they lived everyone’s life and maximized the good things that were experienced throughout all of the lives. You cannot discount certain people, nor can you care about agent centered side constraints.
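To make the equal-chance reasoning in the quoted examples concrete, here is a minimal numeric sketch. The utility numbers are the ones stipulated above; the code simply averages utilities across the people one is equally likely to be.

```python
# Minimal numeric sketch of the equal-chance reasoning in the quoted argument.
# The utility numbers are the ones stipulated in the examples above.

def expected_utility(outcomes: dict) -> float:
    """Expected utility for someone equally likely to be each person in `outcomes`."""
    return sum(outcomes.values()) / len(outcomes)

# Example 1: give one person 2 utility, or give each of two people 1 utility.
one_gets_two = {"person_1": 2, "person_2": 0}
each_gets_one = {"person_1": 1, "person_2": 1}
print(expected_utility(one_gets_two), expected_utility(each_gets_one))  # 1.0 1.0 -> indifferent

# Example 2: kill one to save five, where each person left alive ends up with 5 utility.
kill_one = {f"person_{i}": 5 for i in range(1, 6)} | {"person_6": 0}    # the five live
do_nothing = {f"person_{i}": 0 for i in range(1, 6)} | {"person_6": 5}  # only the one lives
print(expected_utility(kill_one), expected_utility(do_nothing))  # ~4.17 vs ~0.83
```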
RIGHTS
I gave a series of objections to rights, most of which were unaddressed.
Arjun responded to my first objection to rights as follows.
Everything that we think of as a right is reducible to happiness. For example, we think people have the right to life. Yet the right to life increases happiness. We think people have the right not to let other people enter their house, but we don’t think they have the right not to let other people look at their house. The only difference between shooting bullets at people, and shooting soundwaves (ie making noise) is one causes a lot of harm, and the other one does not.
Emphasis mine. This principle isn’t universally true. Rights violations aren’t just a kind of harm, since you can harm someone without violating his rights and violate someone’s rights without harming him. For example, it could make me unhappy that someone exists, but his existence doesn’t violate any of my rights. Someone could also force me to improve my diet, which wouldn’t harm me but would be a violation of my rights.
I didn’t claim that rights violations were always harmful. Rather, my claim was that the basis for rights is rooted in utilitarian considerations. All of the things we think of as rights generally make things go best — and when they don’t, we don’t think that they’re rights. The examples of entering versus looking at houses and of sounds illustrate this.
The diet counterexample fails. Third parties forcing diets on people wouldn’t make their lives better.
For example, we don’t think it’s a violation of rights to look at people, but if every time we were looked at we experienced horrific suffering, we would think it was a rights violation to look at people.
I don’t endorse a position where you should impose horrific suffering to avoid violating any rights, just that respecting rights gives you some good reason for action irrespective of the consequences of your action.
This does not address my argument. I was explaining that looking at people would be a rights violation if it caused harm, because rights violations are rooted in consequentialist considerations. Arjun’s response is a red herring.
My second objection to rights was as follows.
2 If we accept that rights are ethically significant, then there’s a number of rights violations that could outweigh any amount of suffering. For example, suppose that there are aliens who will experience horrific torture that gets slightly less unpleasant for every leg of a human that they grab, without the humans’ knowledge or consent, such that if they grab the legs of 100 million humans the aliens will experience no torture. If rights are metaethically significant, then the aliens grabbing the legs of the humans, in ways that harm no one, would be morally bad—the sheer number of rights violations would outweigh the aliens’ torture. However, this doesn’t seem plausible. It seems implausible that aliens should have to endure horrific torture so that we can preserve our sanctity in an indescribable way. If rights matter, a world with enough rights violations, where everyone is happy all the time, could be worse than a world where everyone is horrifically miserable all of the time but where there are no rights violations.
Arjun replies
I don’t think this follows. It’s possible that there are different kinds of rights and different kinds of suffering and that some of these are incommensurable
But if any rights violations are worse than a tiny amount of suffering, then a sufficiently vast number of rights violations would be worse than a comparatively enormous amount of suffering. This is, however, implausible; a world where everyone’s rights are intact but everyone is horrifically miserable would be worse than one in which people’s rights were constantly violated but everyone was super well off.
I next said
If my opponent argues for rights then I’d challenge him to give a way of deciding whether something is a right that is not based on hedonic considerations.
Arjun replied
I would start with intuitions from particular cases.
But if the particular cases don’t coalesce into a coherent account, then it’s an immensely complex theory, which has to posit as a brute fact the vast coincidence that all the rights make things go best and has to sacrifice a nearly infinite amount of simplicity. When theories are both complicated and have to posit nigh miraculous coincidences, that’s when you know something has gone awry. As Bradley notes in his book
“This insistence on simplicity is far from universally shared among philosophers, who sometimes insist that the truth of the matter about ethics must be complicated. To these philosophers, I can only say that complicated views always go wrong somewhere; where exactly they go wrong is often concealed by the complexity. The more complex the view, the more work it takes to draw out the unwelcome consequences—but they are always there.”
I presented four more objections to rights that were not addressed — I won’t rehash them for this reason. They show that accepting rights requires biting enormous bullets and leads to paradox.
However, I did present one objection that Arjun replied to
Torture Transfer: Mary works at a prison where prisoners are being unjustly tortured. She finds two prisoners, A and B, each strapped to a device that inflicts pain on them by passing an electric current through their bodies. Mary cannot stop the torture completely; however, there are two dials, each connected to both of the machines, used to control the electric current and hence the level of pain inflicted on the prisoners. Oddly enough, the first dial functions like this: if it is turned up, prisoner A’s electric current will be increased, but this will cause prisoner B’s current to be reduced by twice as much. The second dial has the opposite effect: if turned up, it will increase B’s torture level while lowering A’s torture level by twice as much. Knowing all this, Mary turns the first dial, immediately followed by the second, bringing about a net reduction in both prisoners’ suffering.
Arjun admits
In general, weak deontology suffers from the problem that two actions A and B can both be wrong independently but acceptable if done at once or in quick succession as part of a combined action, which is unintuitive. The best argument for utilitarianism is the flaws in all other moral theories, but utilitarianism’s flaws are more severe and it’s more likely that there is a superior undiscovered moral theory than that utilitarianism is correct.
This is a general problem for rights, which are the most frequent counterargument against utilitarianism. Thus, non-consequentialism seems to be toast.
OTHER EXPLANATORY DEFICITS
In section 6, I present 6 cases that other theories can’t address. As evidence of this fact, Arjun addressed none of them.
Case 1
Imagine you were deciding whether or not to take an action. This action would cause a person to endure immense suffering—far more suffering than would occur as the result of a random assault. This person literally cannot consent. This action probably would bring about more happiness than suffering, but it forces upon them immense suffering to which they don’t consent. In fact, you know that there’s a high chance that this action will result in a rights violation, if not many rights violations.
If you do not take the action, there is no chance that you will violate the person’s rights. In fact, absent this action, their rights can’t be violated at all. In fact, you know that the action will have a 100% chance of causing them to die.
Should you take the action? On most moral systems, the answer would seem to be obviously no. After all, you condemn someone to certain death, cause them immense suffering, and they don’t even consent. How is that justified?
Well, the action I was talking about was giving birth. After all, those who are born are certain to die at some point. They’re likely to have immense suffering (though probably more happiness). The suffering that you inflict upon someone by giving birth to them is far greater than the suffering that you inflict upon someone if you brutally beat them.
So utilitarianism seems to naturally—unlike other theories—provide an account of why giving birth is not morally abhorrent. This is another fact that supports it.
Case 2
Suppose one is deciding between two actions. Action 1 would have a 50% chance of increasing someone’s suffering by 10 units and action 2 would have a 100% chance of increasing their suffering by 4 units. It seems clear that one should take action 2. After all, the person is better off in expectation.
However, non-utilitarian theories have trouble accounting for this. If there is a wrongness of violating rights that exists over and above the harm caused, then, assuming the badness of violating rights is equivalent to 8 units of suffering, action 1 would come out better (a ½ chance of 18 units of badness, or 9 in expectation, is less bad than a certain 12).
Case 3
Suppose you stumble across a person who has just been wounded. They need to be rushed to the hospital if they are to survive. If they are rushed to the hospital, they will very likely survive. The person is currently unconscious and has not consented to being rushed to the hospital. Thus, on non-utilitarian accounts, it’s difficult to explain why it’s morally permissible to rush the person to the hospital. They did not consent, and the rationale is purely about making them better off.
Case 4
An action will make everyone better off. Should you necessarily do it? The answer seems to be yes, yet other theories have trouble accounting for that if it violates side constraints.
Case 5
When the government taxes, is that objectionable theft? If not, why not? Consequentialism gives the only satisfactory account of political authority.
Case 6
Suppose one was making a decision of whether to press a button. Pressing the button would have a 50% chance of saving someone, a 50% chance of killing someone, and would certainly give them five dollars. Most moral systems, including deontology in particular, would hold that one should not press the button.
However, Mogensen and MacAskill argue that this situation is analogous to nearly everything that happens in one’s daily life. Every time a person gets in a car they affect the distribution of future people by changing very slightly the time at which lots of other people have sex. They also change traffic distributions, potentially reducing and potentially increasing the number of people who die in traffic accidents. Thus, every time a person gets in a car, there is a decent chance they’ll cause an extra death, a high chance of changing the distribution of lots of future people, and a decent chance they’ll prevent an extra death. Given that most such actions produce fairly minor benefits, it is quite analogous to the scenario described above about the button.
Given that any act which changes the traffic by even a few milliseconds will affect which of the sperm out of any ejaculation will fertilize an egg, each time you drive a car you causally change the future people that will exist. Your actions are thus causally responsible for every action that will be taken by the new people you cause to exist. The same is true if you ever have sex; you will change the identity of a future person.
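For the button case, the expected-value bookkeeping is simple; here is a minimal sketch using the stipulated 50/50 probabilities and five-dollar payoff. The dollar value placed on a statistical life is a hypothetical placeholder, included only to show that it cancels out of the comparison.

```python
# Minimal sketch of the expected-value bookkeeping in the button case, using the
# probabilities and payoff stipulated above. The value placed on a statistical
# life is a hypothetical placeholder; it drops out because the 50/50 chances cancel.
P_SAVE, P_KILL = 0.5, 0.5
CERTAIN_BENEFIT = 5            # dollars
VALUE_OF_LIFE = 10_000_000     # dollars, hypothetical placeholder

expected_lives_changed = P_SAVE * 1 + P_KILL * (-1)                        # 0.0
expected_value = expected_lives_changed * VALUE_OF_LIFE + CERTAIN_BENEFIT  # 5.0

print(expected_lives_changed)  # 0.0 -- the chance of saving cancels the chance of killing
print(expected_value)          # 5.0 -- only the small certain benefit remains in expectation
```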
ARJUN’S OBJECTIONS TO MY REBUTTAL
Arjun also posted some responses to my rebuttal to his opening statement. He didn’t address most of my responses, so I won’t repeat them.
Arjun raised a point about moral uncertainty meaning we shouldn’t act purely as utilitarians. (My rebuttal is in italics, Arjun’s response isn’t).
I’d agree that given moral uncertainty, we shouldn’t act as strict utilitarians. However, this fact does nothing to show utilitarianism is not correct. This debate is about what one in fact has most reason to do — be it the utilitarian act in all situations or some other — so pointing out what it’s reasonable to do given moral uncertainty (which is much like factual uncertainty) does nothing to show that utilitarianism is not correct. Discussion of how we should practically reason given uncertainty has nothing to say about which theory is actually correct.
Emphasis mine. It’s unclear to me whether Matthew intends this debate to be about what one has the most reason to do or about which moral theory is most likely to be correct. I don’t think these are the same question, since you could have the most reason to take an action that contradicts the moral theory that you find most likely. For example, even if you’d give 3 to 1 odds against moral realism being true, you should still act as if it’s true, since if it’s false then it doesn’t matter what you do anyway.
Perhaps I was ambiguous — a reason only counts as a genuine reason if it is relevant on the true moral theory. If deontology is false, and rights only give us reasons to do things assuming deontology, then rights don’t give us genuine reasons. Thus, these are the same question.
a strong form of impartiality . . . collectively self defeating. After all, if we all do what’s best for our families at the expense of others, given that everyone is part of a family, every person doing what’s best for their own family will be bad for families as a whole. . .
It’s not clear to me what’s meant by “strong” here. But more importantly, the hypothetical posed is meaningfully different from the question of what to do in an individual case.
Scenario A: A parent faced with a choice between saving his own child and saving the children of two distant strangers has a good reason—his obligation to act in the interests of his own children—to take the first option.
Scenario B: Suppose he were faced instead with the knowledge that 100 randomly selected people (including himself, potentially) would be presented the dilemma above. Given the opportunity to force them all to choose one option or the other, he should force them all to choose to save the strangers. This follows the same principle since it’s in the best interest of his own child as well as all children.
These are two distinct scenarios. Choosing your own child in Scenario A doesn’t somehow force other people to choose the wrong option in Scenario B, and there’s no contradiction in following a general principle that leads you to be partial to your own children in both cases.
This is why it’s collectively self-defeating. If we all privilege our families — as the prisoner’s dilemma shows — we’ll all be worse off. Thus, the rule one should endorse from the standpoint of what one wants to maximize differs from the decision one actually makes in the individual case. It’s the same basic idea as Huemer’s paradox of weak deontology — just applied to a different context.
I’m not sure exactly what’s meant by “intrinsic importance,” but from my reading, the idea that decisions should be made based on “intrinsic importance” assumes impartiality, so this is circular. Your particular agent-relative obligations give you good reasons other than anyone’s “intrinsic importance.”
The concept of being important, or really mattering, seems like a relatively simple concept. If morality isn’t grounded in what really matters — and if truly being important is an alien concept to our conception of morality — what useful thing are we even doing? Morality isn’t significant if it’s not about what really matters.
The other objections to partiality were not addressed, but I won’t repeat them here.
CONCLUSION
Given the multitude of unaddressed arguments, the ball is largely in Arjun’s court at this point. I look forward to seeing how he addresses the cumulative case for utilitarianism and the many arguments for it. So far, this has been a useful and fruitful exchange.
To be continued.
As always, there’s a simple objection to arguments for normative theories that presume moral realism: there are no good arguments for moral realism. All versions of moral realism are trivial, false, or unintelligible. You say:
“On the point about moral realism, anti-realism is wildly implausible, as I argue here.”
I think you failed to show that moral antirealism is even a little bit implausible. It only seems implausible if you endorse moral realism, but of course to an antirealist that would seem implausible. I have yet to see any claims about moral antirealism being “implausible” that don’t presuppose moral realism.
I wrote a few responses to that original post, though Substack doesn’t seem to have a good format for extended replies. I have yet to create a blog of my own where we can have a back and forth on the matter, but I’d like to continue that discussion.
You also recommend Parfit’s On What Matters. Do you know if OWM has any good arguments for moral realism? The only bits I recall from looking at the sections on moral realism consisted of Parfit more or less claiming that antirealists often just lacked the proper concepts. This struck me as speculative psychologizing, rather than anything amounting to a reasonable objection to antirealists.
Have you read Ethical Intuitionism? I haven’t read the book, but I’d be interested in discussing any arguments in the book for why antirealism is implausible.
You say: “The semantic account and the epistemological account would both be similar to how we came to know about and talk about other abstract realms, such as math and sets.”
What is the claim here, exactly? I don’t think ordinary mathematical language presumes any kind of “abstract realm” of mathematical objects or facts. Are you claiming that it does? I’m also not sure anyone has come to know about any substantive kind of mathematical realism. Even if they did, it’s not clear why I should think moral realism was anything like mathematical realism. Moral realism strikes me as far more likely to be a parochial human invention than math.
You say, “We can reason about such things, and they can explain particular moral judgments that we make, which feature in our moral language.”
I can reason about what music I am going to listen to and what food I am going to eat. Yet I am not a gastronomic or music realist. The ability to reason about normative and evaluative domains is not a good indication that those domains are comprised of stance-independent normative facts.
As far as explaining the moral judgments that “we” make, this is an empirical claim. If moral philosophers want to explain moral judgments, they’d need to provide a good account of what a moral judgment is, how their metaethical accounts can explain them, and so on, in such a way that they engage with the empirical evidence on the matter. I rarely see moral philosophers do this. They instead seem to rely largely on toy sentences and armchair analysis. I don’t think these are the proper methods for making discoveries about what nonphilosophers are doing when they engage in moral judgment. Again: descriptive questions about the moral judgments of nonphilosophers are best resolved by appeal to empirical evidence, not armchair speculation.