The Case Against the Case Against Consequentialism Reconsidered
Responding to a book criticizing consequentialism
Introduction
Nikil Mukerji has written a book called “The Case Against Consequentialism Reconsidered.” I don’t think its criticisms succeed — thus, I’d advocate reconsidering the case in “The Case Against Consequentialism Reconsidered.” Here, I’ll explain why, by presenting the things I think the book gets wrong.
Worth noting: this is not a comprehensive response to the book. I agreed with much of what was said — especially about methodology — and I spend a lot of time addressing points that the book doesn’t spend much time on. The reason for this is that, as primarily a Benthamite, I’m more interested in addressing the objections to hedonistic utilitarianism than to consequentialism broadly. A lot of the book is spent explaining why, for example, the considerations presented in section 10 of this article are, if successful, decisive reasons to reject all forms of consequentialism. I tend to agree with this — however, I do not think that the considerations are decisive.
1 Intuitive fit
This said, we can introduce an approximate criterion of justification for moral doctrines. Rawls says that we may provisionally think of them as the “attempt to describe our moral capacity” (Rawls 1971/1999, 41) and the high-level and low-level moral intuitions that issue from it. This suggests that their acceptability is determined, at least in part, by how well it fits the moral claims which we intuitively endorse. Elsewhere, I called this criterion “intuitive fit” (Mukerji 2013c, 299).
(p.21)
The basic idea expressed here is that we want the theory to fit with our intuitions as much as possible. But this is wrong. As I explain here, it would be deeply suspicious if we knew that the correct moral theory fully accorded with our intuitions — because we know that our intuitions are often wrong. Thus, we shouldn’t want a theory that perfectly accords with our intuitions.
Instead, we want a theory that accords with our intuitions a lot of the time — roughly as much as we’d expect it to on the hypothesis that our intuitions are mostly right but far from infallible. If you knew that some method got you the right answer 90% of the time, and there were two possible answer sheets — one which held that you got the right answer 90% of the time and the other which held you got the right answer 100% of the time — that would count in favor of believing the 90% one, even though the percentage is lower.
This is just doing the Bayesian math wrong. If some method is unreliable sometimes, then you don’t want to maximally accord with the method. This is true even if the method is mostly right!
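To make the point concrete, here’s a toy Bayesian calculation (a sketch; the 100 independent intuitions and the 90% reliability figure are illustrative assumptions, not anything from the book):

```python
from math import comb

p = 0.9    # assumed reliability of each individual intuition (illustrative)
n = 100    # number of independent intuitions (illustrative)

# If a theory really is the true one, the chance that all n fallible
# intuitions happen to agree with it is vanishingly small...
perfect_match = p ** n

# ...whereas agreeing with about 90 of the 100 is just what we'd expect.
ninety_match = comb(n, 90) * p ** 90 * (1 - p) ** 10

print(f"P(matches 100/100 | theory true): {perfect_match:.2e}")  # ~2.66e-05
print(f"P(matches 90/100  | theory true): {ninety_match:.2e}")   # ~1.32e-01
```

On these assumptions, a theory that matches about 90% of our intuitions fits the evidence far better than one that matches all of them.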
2 Higher level vs lower level intuitions
Consequentialists often think that we should champion higher-level intuitions over lower-level intuitions. Thus, we should trust the intuition which says that we should do what makes things go best more than we should trust our intuition about the organ harvesting case. One reason to think this that’s mentioned by Mukerji is that our case-specific intuitions are distorted by lots of salient emotional biases.
Mukerji gives a couple of responses, which he judges unsuccessful, before saying
empirical findings can be acknowledged, but the link between these findings and the conclusion drawn by proponents of TD can be disputed.
(p.33)
Next, he says
This leaves us with the third strategy, viz. to argue that low-level intuitions are not entirely discredited by empirical findings. How can this be done? Consider, first, framing effects. Does the fact that our low-level intuitions are subject to framing effects show that we should dismiss them tout court and trust only high-level intuitions? There are two reasons, I believe, why this would be a hasty conclusion to draw.
(p.34)
I don’t think it would mean we should just totally throw out the intuitions. But it is a reason to place more trust in the higher level intuitions.
First of all, it has not been shown (nor do we have much reason to suspect) that all case-based intuitions are susceptible to these effects. Rather, it has been reported by some researchers that certain experiments could not demonstrate the existence of framing effects. Petrinovich and O’Neill (1996), e.g., failed to detect wording-related framing effects in some cases. To be sure, this does not demonstrate that there were no framing effects. However, it does give us reason to doubt the sweeping conclusion that all our low-level intuitions are susceptible to these effects.
But we still have lots of evidence showing that the framing effects are significant in lots of cases.
Secondly, even if all low-level intuitions were, in fact, affected by framing effects, this would not mean that we have to dismiss them tout court. Presumably, framing effects arise from the fact that, e.g., different wordings or different contexts draw our attention to particular features of it. This, in turn, may lead to a well-known problem, to wit, that we neglect other features which may be of equal importance (cf. Brink 1984, 117). If we know that we tend to have such “blind spots,” as Bazerman and Tenbrunsel (2011) and Sorensen (1998, 273) call them, we can, it seems, discipline ourselves. We can try to focus on all relevant aspects of a moral problem and carefully consider our intuitive verdicts.
But we have good evidence that that doesn’t work. Humans aren’t good at avoiding their own bias. There are various psychological effects that explain this — making people aware of their biases doesn’t actually reduce their bias.
After this, Mukerji declares
For it seems that the arguments presented by proponents of TD, who attack the credibility of low-level intuitions, are rather shaky
(p.37)
But this isn’t true at all. Consider the following few arguments for trusting higher level intuitions.
Most of our higher-order intuitions seem to be correct (e.g. intuitions about transitivity, and the intuition that the fact that some act would have good outcomes counts in favor of it).
Higher-order intuitions have implications across vast numbers of lower-order cases. Thus, if they were wrong, we’d expect that to show up — we’d expect them to be disastrously wrong about lots of cases. If they instead do at least minimally well, we should keep them around.
As I explain here, when there’s a conflict between our higher and lower-order intuitions, we should trust the higher-order ones. This is because, as a result of higher-order intuitions applying to lots of cases and our intuitions being fallible, we should expect our lower-order intuitions to conflict with correct higher-order intuitions. Thus, the fact that they do conflict isn’t evidence against the higher-order intuitions — it’s exactly what we should expect.
This is especially true if various higher-order intuitions converge. It would be very surprising if many converging higher-order intuitions were all wrong. But as I’ve argued previously, there are lots of higher-order intuitions supporting both consequentialism and utilitarianism.
It seems that supporters of BU could say something about high-level moral principles which is similar to what David Hume said about abstract ideas (cf. Hume 1888/1960, 25–33). As an empiricist, Hume believed that all ideas are derived from prior sense impressions. Since every impression is an impression of a concrete object, all ideas, he thought, had to be concrete as well. Thus, Hume reasoned, when we appear to think abstractly, we actually have a concrete idea in mind which we then allow to relate to other objects that are sufficiently similar in its qualities. Something like this may be going on when we think of an abstract principle and form an intuition about it. It may be that we do not consider it in its abstractness, but imagine concrete cases to which it applies and then say “yes” or “no” to it depending on whether its implications in these cases seem intuitively acceptable. It may be, that is, that whenever we think we have a high-level intuition about a principle, we have, in fact, one or more (muddled) low-level intuitions about the anticipated implications of the principle in particular cases that we happen to think up. In support of this thesis, one could, e.g., cite Amartya Sen, who said something remarkable in the context of social choice theory. Social choice theory offers an axiomatic take on moral problems. Most of what happens in it happens on a rather abstract plain, where theorists give much attention to the credibility of the axioms which are more or less formalized versions of high-level moral principles.
When we form higher-order intuitions, we’re coming up with generalizations across lower-order cases. There’s no single intuition that convinces me that it’s unwise to act contrary to one’s interests, all else equal. Additionally, none of the arguments that I’ve just proffered is addressed by this point, because they all rely on higher-order principles being generalizations across cases.
3 Scanlon’s Case
Many philosophers have pointed out that CU is “supremely unconcerned with (…) interpersonal distribution.” (Sen 1973/1997, 16) This, one may say, is because it only looks at the sum of happiness that individuals share between them and not at how this sum is shared out. This aspect of the doctrine can lead to counterintuitive implications in a number of cases. Let us look at some of the most important difficulties for CU which arise in this context. First up, let us look at a case that is introduced by Tim Scanlon.
Jones’s Case Suppose that Jones has suffered an accident in the transmitter room of a television station. Electrical equipment has fallen on his arm, and we cannot rescue him without turning off the transmitter for fifteen minutes. A World Cup match is in progress, watched by many people, and it will not be over for an hour. Jones’s injury will not get any worse if we wait, but his hand has been mashed and he is receiving extremely painful electrical shocks. Should we rescue him now or wait until the match is over? (Scanlon 1998, 235)
How does CU answer Scanlon’s question? Presumably, since so many people are watching and enjoying the game, the sum of their individual pleasures outweighs the intense pain that is felt by Jones. Therefore, CU would probably demand that we do not rescue Jones, which, intuitively, is the wrong conclusion to draw. The mere fact that this would produce more good in the aggregate is not a sufficient reason, it seems, to let Jones suffer this much. If we did that, we would not take into account the fact that a single person has to bear the entire costs of our choice. We would not, as John Rawls has famously put it, “take seriously the distinction between persons.” (Rawls 1971/1999, 24)
(p.151)
I think there are lots of good arguments for biting the bullet here in the Scanlon case.
Argument 1
In this case, we can apply a similar method to the one applied to previous cases. Suppose that we compare Jones’ situation to two people experiencing painful electric shocks that are only 90% as painful as Jones’ shocks. Surely it would be better to prevent the shocks to the two people. Now compare each of those two shocks to two more shocks, which are 60% as painful as Jones’ original one. Surely the 4 shocks are worse than the two. We can keep doing this process until the situation is reduced to a large number of barely painful shocks. Surely a large number of people enjoying football can outweigh the badness of a large number of barely painful shocks. A similar point has been made by (Norcross, 2002).
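Here’s the arithmetic of that process in miniature (a sketch; I assume each step doubles the number of people shocked while cutting each shock’s pain to 90% of the previous step’s, which is illustrative rather than the exact ratios above):

```python
people, pain_per_shock = 1, 100.0   # start: one person, one very painful shock

for step in range(30):
    people *= 2                 # twice as many people...
    pain_per_shock *= 0.9       # ...each shocked 90% as badly

total_pain = people * pain_per_shock
print(people, round(pain_per_shock, 2), round(total_pain))
# 1073741824 people, ~4.24 pain each, ~4.55 billion total pain units:
# each individual shock is now minor, yet total pain grew 1.8x per step.
```

Each step looks like a clear improvement judgment, but the aggregate badness keeps rising, which is why the endpoint of barely painful shocks can be outweighed by the viewers’ enjoyment.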
Argument 2
Additionally, as Norcross points out, we regularly make similar trade-offs. When we raise the speed limit, we accept that some number of people will die in order to increase the speed at which people can reach their destinations.
A Specific Defense of Transitivity In This Context
Scanlon could reject transitivity. However, there are extremely strong arguments for transitivity, which were already covered in the section on the utility monster. We can show how both supporting arguments apply specifically to this case.
1= Jones being painfully shocked.
2= Two people being shocked slightly less
3= Four people being shocked slightly less
100 = a lot of people experiencing very minor shocks
101 = The football game being shut off for a large number of people.
101>100>99…>3>2>1 (read “X > Y” throughout this argument as “X is worse than Y”)
1>101 according to the transitivity deniers in this case.
All the worlds are isolated from each other.
101 + 100 > 100 + 99 > 99 + 98 > 98 + 97… > 3 + 2 > 2 + 1
G1 contains 101+100+99+98…+3 +2
G2 contains 100+99+98… +3 +2 +1
G1 > G2
We know this is true because each member of the series is worse than the parallel member of the other series: the first member of G1 (101) is worse than the first member of G2 (100), the second member of G1 is worse than the second member of G2, and so on.
However, if we change the order of the worlds, we get a contradiction.
G1 = 101 + 100 + 99… + 3 + 2
G2 = 1 + 100 + 99 + 98… + 3 + 2
The first member of G2 (world 1) is worse than the first member of G1 (world 101), by the transitivity denier’s own verdict that 1 > 101. However, all of the other members are equal. Thus, G2 is worse than G1. However, as we saw earlier, G1 is worse than G2. It cannot be the case that both G1 is worse than G2 and G2 is worse than G1. Thus, we have decisive reason to reject this view.
We can also see the money pump reasoning applied here. If one starts out in state of affairs 101, they’d be willing to pay a bit to move to state of affairs 100. They’d pay a bit to move from there to state of affairs 99, then to 98, 97…until state of affairs 1. However, because they judge state of affairs 1 to be worse than state of affairs 101, they’d pay a bit to move back to 101. They’ve thus paid a cost to get right back to where they started. This can continue infinitely, resulting in an infinite amount of disvalue for no benefit.
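A minimal simulation of that pump (the per-move fee and the three laps are arbitrary illustrative numbers; the ordering is the worse-than chain from above):

```python
# The agent prefers 100 to 101, 99 to 100, ..., 1 to 2 (each move up the
# chain), yet also judges 1 worse than 101 -- the transitivity denier's view.
fee = 0.01          # hypothetical small payment per trade
total_paid = 0.0

for lap in range(3):
    for move in range(100):   # 101 -> 100 -> ... -> 1, paying at each step
        total_paid += fee
    total_paid += fee         # 1 -> 101: the denier pays to get back

print(f"Three laps later: same world, {total_paid:.2f} paid for nothing")
```

Every individual trade looks rational by the agent’s own lights, yet the cycle leaves them strictly worse off, and nothing stops it repeating forever.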
Argument 3
An additional objection can be given to Scanlon’s view. Suppose the game was being watched by 500 million people. Additionally, suppose that the only way to alleviate Jones’ torture was to give each of the 500 million people a 1 in 250 million chance of being tortured. That would clearly be a bad action: in expectation, it would result in two people being tortured, rather than the current state of affairs, which has only one person being tortured.
However, if choosing between giving each of the people a 1 in 250 million chance of being tortured or certainty of watching the game, it would be reasonable to give them the 1 in 250 million chance of being tortured. This risk of torture is well below the risk of death one endures every time they drive a car. Sports game watchers would rationally choose to endure a risk far lower than that endured every time they enter a car in order to watch a sports game--yet those risks conjunctively are worse than Jones’ torture.
Thus by transitivity, we get the conclusion that Jones should continue to be tortured. Jones being tortured is less bad than a 1 in two-hundred-fifty-million risk of torture for 500 million people, which is less bad than a sports game being shut off for 500 million people.
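The expected-harm arithmetic here is straightforward (numbers from the case above):

```python
viewers = 500_000_000
p_torture = 1 / 250_000_000        # risk imposed on each viewer

expected_tortures = viewers * p_torture
print(expected_tortures)           # 2.0 -- twice as many tortures, in
                                   # expectation, as leaving Jones be
```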
Debunking the intuition
Scanlon’s intuition is debunkable. As we’ve already seen (Huemer, 2008, 909-910) humans are very bad at compounding small harms and benefits. Thus, our judgments based on such aggregation of small effects will be very unreliable.
Second, also shown by Huemer (p.908-909), people are very bad at reasoning about large numbers. It’s hard to intuitively grasp how big a crowd of millions is. They’re particularly unreliable when reasoning about the successive addition of small benefits.
Third, our intuitions are largely affected by judgments about viciousness. When we think of people losing out on small benefits from sports games, or broadcasting delays, it’s very easy to think of it as an inconvenience--one which is an inevitable part of life. This judgment does not evoke the same types of viciousness intuitions as torture cases do. Given that our intuitions about the right act are largely influenced by character judgments, judging the character of the torturer as vicious undermines the quality of the intuition.
Onto the separateness of persons charge
I’ll just quote my previous remarks on the subject
Another objection provided to utilitarianism is that it fails to respect the separateness of persons. This objection seems to be fundamentally confused. Utilitarianism obviously recognizes that persons are separate. The question is merely whether that separateness has moral significance. Two things being different doesn’t mean that we can’t make comparisons between them. It is rational to endure five minutes of pain now to prevent thirty minutes of pain later. This does not fail to respect the “separateness of moments.”
We can make tradeoffs regarding separate things. My happiness is separate from the happiness of others, but we can obviously make tradeoffs. It would be wrong to save my life instead of the lives of hundreds, despite our separateness. If we take the view that interpersonal tradeoffs in happiness are prima facie wrong, that runs afoul of Pareto optimality. Suppose we can take an action that gives person A five units of happiness and costs person B 2 units of happiness. If we take seriously the “separateness of persons,” we would perhaps not take that action. However, suppose additionally that we can give person B 5 units of happiness at the cost of 2 units of happiness for person A. This act would similarly be wrong. Yet taking both actions together makes everyone better off, so this view condemns a course of action that makes everyone better off.
The believer in the separateness of persons has to provide a reason to assign normative significance to the separateness of persons in a way that undermines utilitarianism. Impartiality demands that we regard everyone as equal. This does not require that we lose sight of the fact that people are, in fact, different.
4 The repugnant conclusion
Another problem for CU can arise when we compare and evaluate the moral quality of states of affairs which differ in population sizes. In such cases, CU may give rise to a “repugnant conclusion”, as Parfit (1986, 381) famously suggested. The problem, in brief, is this. As we just noted, CU looks only at the sum total of happiness and not at how this total is distributed. Therefore, it does not distinguish a case in which one person has, say, 10 units of happiness from a case in which two persons have 5 units. From the perspective of CU, 100 persons might as well have 0.1 units of happiness (cf. Mukerji 2013c, 305). Morally speaking, it is all the same. The total in each case is, after all, identical. This has an unwelcome consequence. It is that CU may imply that the preferable state of affairs is one in which a 100 billion people “have lives that are barely worth living.” (Parfit 1986, 388) The sum of happiness may, after all, be maximal in that state.
Here, somewhat disappointingly, Mukerji ignores all of the literature about the repugnant conclusion. It turns out that there are lots of independent arguments for why we should accept the repugnant conclusion. There’s a reason that upwards of 20 philosophers — many of whom were not utilitarians — signed a letter saying the following
We agree on the following:
1. The fact that an approach to population ethics (an axiology or a social ordering) entails the Repugnant Conclusion is not sufficient to conclude that the approach is inadequate. Equivalently, avoiding the Repugnant Conclusion is not a necessary condition for a minimally adequate candidate axiology, social ordering, or approach to population ethics.
2. The fact that the Repugnant Conclusion is implied by many plausible principles of axiology and social welfare is not a reason to doubt the existence or coherence of ethics and value theory (although we do not rule out that there may be other reasons for moral skepticism).
3. Further properties of axiologies or social orderings – beyond their avoidance of the Repugnant Conclusion – are important, should be given importance and may prove decisive.
It would have been worth at least discussing why many people — even intuitionists like Huemer — accept the RC. Hint: it’s not because they’re utilitarians. It turns out that the RC is very hard to coherently deny.
Argument 1
We can take a similar approach to the one taken in the previous section. Suppose that we have one person who is extremely happy all of the time, and that they live 1000 years. Surely, it would be better to make 100 people with great lives who live 999 years than one person who lives 1000 years. We can now repeat the process: 100,000 people living 998 years would surely be better than 100 living 999 blissful years. Once we get down to one day of life for some ungodly large number of people (10^100, for example), we can go down to hours and minutes. In order to deny the repugnant conclusion, one would have to argue for something even more counterintuitive, namely that there’s some firm cutoff. Suppose that we say the firm cutoff is at 1 hour of enjoyment. The cutoff’s defenders would have to say that infinite people having 59 minutes of enjoyment matters far less morally than one person having a 1 in a billion chance of having an hour of enjoyment. One might reply that the types of enjoyment are incomparable and different. However, as we’ve seen with the torture argument, different types of pleasures are comparable. It is clear that the most blissful physical sensations matter more than learning one relatively trivial thing that barely improves quality of life. The harms of never being able to experience any positive physical sensations outweigh those of having one interesting book stolen that would have otherwise been read and enjoyed.
Argument 2
Another argument can be made for the conclusion. Most of us would agree that one very happy person existing would be worse than 7 billion barely happy people existing. If we just compare those states of the universe, iterated 1 trillion times, we conclude that 7x10^21 people with barely happy lives matter more morally than 1 trillion people with great lives. To deny this, one would have to claim that there is some moral significance to the total number of people in existence, such that the moral picture changes when the comparison is iterated a trillion times. Yet this seems extremely counterintuitive. Suppose we were to discover that there were large numbers of happy aliens that we can’t interact with. It would be strange for that to change our conclusions about population ethics. The morality of bringing about new people with varying levels of hedonic value should not be contingent on causally inert aliens. This suggests that our anti-repugnant-conclusion intuitions fall prey to biases.
Argument 3
(Huemer, 2008 p.902) has given another argument for the repugnant conclusion. Suppose we accept
“The Benign Addition Principle: If worlds x and y are so related that x would be the result of increasing the well-being of everyone in y by some amount and adding some new people with worthwhile lives, then x is better than y with respect to utility.
“Non-anti-egalitarianism: If x and y have the same population, but x has a higher average utility, a higher total utility, and a more equal distribution of utility than y, then x is better than y with respect to utility.
“Transitivity: If x is better than y with respect to utility and y is better than z with respect to utility, then x is better than z with respect to utility”
we must accept the repugnant conclusion. Huemer goes on to explain why these necessitate RC, writing (p. 902-903)
“To see how these principles necessitate the Repugnant Conclusion, consider three possible worlds (figure 1):
“World A: One million very happy people (welfare level 100).
“World A+: The same one million people, slightly happier (welfare level 101), plus 99 million new people with lives barely worth living (welfare level 1).
“World Z: The same 100 million people as in A+, but all with lives slightly better than the worse-off group in A+ (welfare level 3).
“A+ is better than A by the Benign Addition Principle, since A+ could be produced by adding one unit to the utility of everyone in A and adding some more lives that are (slightly) worthwhile. Z is better than A+ by Non-anti-egalitarianism, since Z could be produced by equalising the welfare levels of everyone in A+ and then adding one unit to everyone's utility. Therefore, by Transitivity, Z is better than A. Analogous arguments can be constructed in which world Z has arbitrarily small advantages in total utility; as long as Z has even slightly greater total utility than A, we can construct an appropriate version of A+ that can be used to show that Z is better than A. This suggests that we should embrace not only (RC), but the logically stronger Total Utility Principle: For any possible worlds x and y, x is better than y with respect to utility if and only if the total utility of x is greater than the total utility of y.”
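A quick numerical check of the three worlds (populations and welfare levels taken from the quoted passage):

```python
def total_and_average(groups):
    """groups: list of (population, welfare level) pairs."""
    pop = sum(n for n, _ in groups)
    tot = sum(n * w for n, w in groups)
    return tot, tot / pop

A      = [(1_000_000, 100)]
A_plus = [(1_000_000, 101), (99_000_000, 1)]
Z      = [(100_000_000, 3)]

for name, world in [("A", A), ("A+", A_plus), ("Z", Z)]:
    tot, avg = total_and_average(world)
    print(f"{name:>2}: total={tot:>11,} average={avg}")
# A : total=100,000,000 average=100.0
# A+: total=200,000,000 average=2.0
# Z : total=300,000,000 average=3.0
```

Z beats A+ on total, average, and equality (Non-anti-egalitarianism), and A+ beats A by Benign Addition, so Transitivity ranks Z above A.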
The benign addition principle is quite intuitive to start with. Making new people with good lives while also making current people better off seems obviously good. Yet it can be supported through the following argument.
Premise 1: If bringing people into existence with good lives is not morally bad, the benign addition principle is true.
Premise 2: Bringing people with good lives into existence is not morally bad.
Therefore, the benign addition principle is true.
Premise 1 is clearly true. Given that making current people better off is good, for it to be bad to make current people better off and bring new people with good lives into existence, it would have to be the case that bringing new people into existence with good lives is morally bad.
Premise 2 is also seemingly robustly supported. If it’s morally bad to bring people with good lives into existence, then bringing enough people with good lives into existence could be worse than any atrocity. On this account, bringing into existence 10^100^100^100 people who enjoy the sensation of eating cake before disappearing would be morally worse than the Holocaust. This is deeply implausible.
Additionally, this can be supported with another argument. Surely bringing into existence people with sufficiently good lives would be morally good (this will be defended more in the next part). Creating an infinitely happy person is morally good. Thus, for it to be morally bad to bring a person into existence with low positive utility, there must be some threshold at which people have positive qualities of life, but their existence becomes bad rather than good.
However, the existence of a threshold is implausible. If we set the threshold at 2 utility, then an infinite number of people with 1.9999999 utility would be infinitely bad, but an infinite number with 2.1 utility would be very good. However, small changes in the quality of people’s lives that don’t make their lives go from not worth living to worth living don’t seem as morally significant as this account portrays.
Additionally, this account sums up the total value of people’s lives. However, this opens up its own implausible implications. For example, suppose that this day of my life is good, but not good enough to be above the threshold. On this account, whether it would be good for me to exist and live this day of my life depends wholly on whether I have lived on other days. Thus, whether living today is good for me is dependent on past events that exert no causal influence on the present. This is not plausible.
Huemer additionally supports this premise with the modal Pareto principle (p.903), which says that if one world is preferable to another relative to the self-interest of everyone involved in either world, then it is the better world. This principle is very plausible, and it entails the benign addition principle, because making everyone better off and giving more people good lives is better for everyone involved.
The second premise of Huemer’s argument, Non-anti-egalitarianism, is also quite hard to deny. It could only be false if increasing equality of utility, total utility, or average utility could be bad, ceteris paribus. And yet this is wildly implausible. Obviously utility is prima facie good, both average and total. Virtually no one holds that equality of utility is intrinsically bad. Denying the principle would also, strangely, make it morally better to benefit the well off rather than the badly off.
One who rejects this might do so on the basis of thinking that there are some experiences so worthwhile that no quantity of lesser pleasures could ever match them. They might think that, for example, no number of lizards enjoying the sun at the beach could ever be as good as the experience of enjoying a great philosophy book.
However, this rests on a confusion. Utility just relates to those things that make people better off, so this would be disputing how much utility things provide, rather than disputing the principle itself. This distinction also can’t withstand scrutiny.
We no doubt would assume that one second of enjoying a great philosophy book would produce less utility than enjoying ten great meals. Thus, we recognize that trivial pleasures can bring more utility than the deepest pleasures, in sufficiently large quantities. Thus, enough moments of enjoying great meals for a particular person can benefit them more than merely enjoying a great philosophy book once.
However, it seems clear that enough moments of lizards basking in the sun can outweigh the desirability of eating lots of good meals. Each moment of enjoying the meal is no doubt comparable to some amount of time spent by a human basking in the sun, which is comparable to some amount of time spent by slightly less sentient creatures spending time in the sun, and so on. If we accept that, for example, 20 minutes of reading books = 10 hours of eating delicious food = 10 hours of basking in the sun for humans = 30 hours of basking in the sun for Homo erectus = 150 hours of basking in the sun for lizards, it shows that these things are roughly comparable.
The third premise, transitivity, will be defended later.
Argument 4
(Arrhenius, 2000) argues that the repugnant conclusion must be accepted if we accept certain other very reasonable axioms. Let’s look at the axioms.
(p. 257) A) “The Dominance Principle: If population A contains the same number of people as population B, and every person in A has higher welfare than any person in B, then A is better than B”
This is obviously correct. If we have two worlds and everyone is better off in world A than world B then A is better than B.
(p. 257) “B) The Addition Principle: If it is bad to add a number of people, all with welfare lower than the original people, then it is at least as bad to add a greater number of people, all with even lower welfare than the original people.”
This is also obvious. If it’s bad to bring a person with negative 4 utility into existence then it’s more bad to bring ten people with negative 8 utility into existence. Also duh.
C) (p. 253) The Non-Anti-Egalitarianism Principle: A population with perfect equality is better than a population with the same number of people, inequality, and lower average (and thus lower total) welfare.
This is also obvious. A population of everyone with 10 utility is better than a population with more inequality and average utility of 5.
D) ( p. 259) The Minimal Non-Extreme Priority Principle: There is a number n such that an addition of n people with very high welfare and a single person with slightly negative welfare is at least as good as an addition of the same number of people but with very low positive welfare.
E) The rejection of the sadistic conclusion. When adding people without affecting the original people's welfare, it can’t be better to add people with negative welfare rather than positive welfare.
These axioms, in conjunction, require that we accept the repugnant conclusion.
Argument 5
(Budolfson and Spears, 2018) show that all welfarist axiologies in the population ethics literature imply that, for some welfare levels of the existing population, it can be better to create lots of people with lives barely worth living (10^40, for example) rather than a much smaller, though still large, number of people with great lives (10 billion, for example). I won’t go through all of the axiologies for which they prove this, but it can easily be seen with average utilitarianism.
Suppose there are currently 10^40 people with utility of -5. We can either create 10^40 people with lives barely worth living (utility of 5), or create 10 billion people with utility of 10,000. The first action would be better because it would result in an average utility of zero, while the second would result in an average utility of about -5.
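The averagist arithmetic, spelled out (numbers from the example above):

```python
def average(groups):
    pop = sum(n for n, _ in groups)
    return sum(n * u for n, u in groups) / pop

existing = (10**40, -5)

barely_worth_living = average([existing, (10**40, 5)])
great_lives         = average([existing, (10**10, 10_000)])

print(barely_worth_living)  # 0.0
print(great_lives)          # ~-5.0: ten billion great lives barely move
                            # the average against 10^40 existing people
```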
Argument 6
The following premises in conjunction require we accept the repugnant conclusion
1 A person existing with a life barely worth living is good
2 For any number N and event M, N instances of event M are N times as good as M, if the instances of M don't exert a causal impact on each other
3 Infinite people with lives barely worth living don't exert a causal impact on each other
4 If something is infinity times better than something good, it is infinitely good
Therefore, infinite people with lives barely worth living is infinitely good
5 10 billion people with awesome lives isn't infinitely good
6 Things that are infinitely good are better than things that are not infinitely good
Therefore, infinite people with lives barely worth living is better than 10 billion people with awesome lives
1 is accepted by everyone except those who accept threshold views, according to which lives can be worth living but still bad from the standpoint of population axiology. Such views are deeply implausible.
First, they require positing some firm threshold at which lives go from good to bad from the standpoint of population axiology. Wherever this threshold is set, it has the strange implication that no number of people with welfare levels slightly below the threshold can be as good as any number of people with welfare levels above it. Such a result is deeply implausible; how could minimal changes in one’s life, after which the life remains worth living, be enough to make it more worthwhile than any number of lives at the previous welfare level?
Second, in their simplest form, they imply a counterintuitive conclusion. Given that, on such views, barely worthwhile lives are actively bad to add, they imply that a world full of horrifically miserable lives could be better than a world with some large number of people with great lives and an even vaster number of lives barely worth living.
Argument 7
Suppose we reject the repugnant conclusion. We’d no doubt also reject the following conclusion.
Low probability repugnant conclusion: A 1/100,000,000 chance of creating some vast number of people with lives barely worth living (say googolplex) is better than a 1/100,000,000 chance of creating 10 billion people with worthwhile lives.
Define low probability world A as the world where there’s a 1/100,000,000 chance of 10 billion people having awesome lives.
Define low probability world Z as the world where there’s a 1/100,000,000 chance of some vast number of people (say Graham’s number) living barely worthwhile lives
Define high probability World Z* as the world where some vast number of people (say 10^40) but still far fewer people than the number in low probability world Z, have a 100% chance of living lives that are barely worth living.
1 If we should not accept the RC, then Low probability world Z < Low probability world A
2 High probability world Z* > Low probability world A
3 Low probability world Z > High probability world Z*
Therefore, by transitivity, low probability world Z > Low probability world A
Therefore, we should accept the RC
1 is very intuitive -- if 10 billion people with great lives is better than some large number of people with barely worthwhile lives (say 10^40), then it seems obvious that a 1 in 100 million chance of 10 billion people with great lives would be better than a 1 in 100 million chance of some large number (say 10^40) with barely worthwhile lives. This also follows from
Favorable gambles: If A>B, then a 1 in 100 million chance of A > a 1 in 100 million chance of B.
2 is very plausible. It seems clear that the repugnant conclusion world where 10^40 people live barely worthwhile lives is better than almost certainly having nothing -- having only a 1 in 100 million chance of 10 billion worthwhile lives. As we shall see, however, the 100 million number is irrelevant -- it could be anything. Thus, the person who denies this premise doesn’t get out of the argument; they’d have to similarly accept the following
Define Even lower probability world A as a 1 in 100^100^100^100^100^100^100 chance of 10 billion people living worthwhile lives.
They’d similarly have to accept that Even lower probability world A > High probability world Z*.
3 follows from a very plausible principle.
Expansion: For any probability P of any number N of people living barely worthwhile lives, there’s a better state of affairs in which there’s a probability of .999999999999 × P of 100,000,000,000 × N people living barely worthwhile lives.
However, if we accept this, then a 100% chance of some vast population will be less good than a probability of .999999999999 of a far vaster population, which will be less good than a .999999999999^2 chance of an even vaster population… which will be less good than a 1 in 100 million chance of a very vast population. Thus, this principle entails that Low probability world Z > High probability world Z*
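Some arithmetic on the Expansion chain (a sketch; the per-step ratios are the ones in the principle above):

```python
import math

p_step   = 0.999999999999     # probability multiplier per application
pop_step = 100_000_000_000    # population multiplier per application (10^11)

# Applications needed to drive a sure thing down to a 1-in-100-million chance:
steps = math.log(1e-8) / math.log(p_step)
print(f"{steps:.2e} applications")        # ~1.84e+13

# Yet each application multiplies the expected number of lives by ~10^11:
print(p_step * pop_step)                  # ~99,999,999,999.9 -- far above 1
```

Each step barely dents the probability while multiplying the population enormously, which is why the chain runs all the way from High probability world Z* down to Low probability world Z.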
Now that we’ve defended the three premises, transitivity implies Low probability world Z > Low probability world A. But by 1, this means we should accept the RC.
The very repugnant conclusion
Budolfson and Spears also show that the same holds for the very repugnant conclusion. The very repugnant conclusion states that if given the choice between either creating a large number of people (say 10 billion) with great lives or creating some far greater number of people with lives barely worth living, combined with creating 10 billion terrible lives, one should take the second option. Creating lots of terrible lives combined with enormous numbers of barely worthwhile lives can be better than creating billions of excellent lives.
I’ll merely provide the demonstration for average utilitarianism. On average utilitarianism if there are currently 10^30 people with utility of -5, then it would be better to create 10^40 people with utility of 1 and 1 billion people with utility of -100,000 than to create 10 billion people with utility of 1,000,000. Average utility would be higher if the first option is taken than if the second option is taken.
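Again, the averagist arithmetic (numbers from the example above):

```python
def average(groups):
    pop = sum(n for n, _ in groups)
    return sum(n * u for n, u in groups) / pop

existing = (10**30, -5)

# 10^40 barely-worth-living lives plus a billion terrible lives...
very_repugnant = average([existing, (10**40, 1), (10**9, -100_000)])
# ...versus 10 billion excellent lives.
excellent      = average([existing, (10**10, 1_000_000)])

print(very_repugnant)  # ~1.0
print(excellent)       # ~-5.0, so averagism prefers the first option
```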
Budolfson and Spears provide another argument for accepting the very repugnant conclusion. They show that one will have to accept the results of the very repugnant conclusion if they accept three things
1 (p.11) Transitivity
2 (p.11) “Convergence in signs (informal statement). If enough identical lives, at a utility level u, are added to any base population, eventually (possibly in a very large population) the result is a combined population that is overall just as good as some perfectly equal population of the same size as the combined population, in which every person has a utility of the same sign as u,”
3 (p.8) “Extended egalitarian dominance. If population A is perfectly equal-in-welfare and is of greater size than population B, and every person in A has higher positive welfare than every person in B, then A is better than B.”
Another argument can be made for accepting the very repugnant conclusion. If we accept
Transitivity.
The diminishment principle: For any number of people with positive utility P, there can be a better state of the world with some greater number of people experiencing some amount of utility less than P (e.g. 50 people with utility of 100 is less good than 100,000 people with utility of 99).
The reverse diminishment principle: For any number of people with negative utility N, there can be a worse world with some greater number of people experiencing negative utility smaller in magnitude than N (e.g. 10 people with utility of -100 is less bad than 30 with utility of -90).
Some number of lives with lots of positive utility are worth creating, even at the cost of creating some number of lives with slight negative utility.
The diminishment principle combined with transitivity shows that for any large number of people with very high positive utility, there can be a better state of affairs with a much larger number of people whose utility is sufficiently low that their lives are barely worth living. For example, 10 billion people with utility of 100,000 is less good than 10^40 people with utility of 1.
The reverse diminishment principle shows that for any number of people with negative utility, there is a worse world where there are more people with less severe negative utility. By transitivity, a world with 10 people with utility of -10,000 would be less bad than 15 people with utility of -9000, which is less bad than 30 people with utility of -8000…which is less bad than some vast number of people with utility of -1.
Thus, some number of people with high positive utility is good enough to outweigh the harms of some number of people with slightly negative utility. That slightly negative population is worse than some much smaller number of people with very negative utility, meaning that, by transitivity, some number of people with high positive utility is enough to outweigh some number of people with very negative utility. However, some much larger number of people with lives barely worth living produces a better state of affairs than the very large population of people with excellent lives, which means that, by transitivity, creating some number of people with lives barely worth living is sufficiently good to offset the creation of large numbers of people with extremely negative utility.
Biases and debunkings
This demonstrates that our anti repugnant conclusion intuitions fall prey to a series of biases (Huemer, 2008).
1 We are biased, having won the existence jackpot. A non-existing person who could have lived a marginally worthwhile life would perhaps have a different view.
2 We have a bias towards roughly similar numbers of people to the numbers who exist today. This is shown by how different our intuitions are when we consider a world with just one person with a great life, rather than a vast number with lives barely worth living.
3 Humans are bad at conceptualizing large numbers. It’s really hard to conceptualize the difference between 1 million years and 10 billion years of good life, even though 10 billion years is 10,000 times longer.
4 Humans are bad at compounding small numbers. It’s hard intuitively to see how lots of very small things could add up to being as good as a very good thing.
5 Fairness
A further, related problem about CU is that it ignores the fairness of a distribution of well-being (cf., e.g., Rawls 1958, Sen 1973/1997, 16). Consider a situation in which I have 10 units of happiness and you have 10 units as well (and neither of us is particularly deserving or undeserving of their happiness). On CU, this situation is just as good as a situation in which, ceteris paribus, I have 20 units and you have nothing (cf. Mukerji 2013c, 305). Hence, on CU, an agent who faces a choice between bringing about the one distribution or the other acts rightly no matter what she does. This is because, according to CU, both options maximize the good. Intuitively, though, the distribution in which I am very happy, while you are very unhappy, is inequitable. It appears to be worse than the distribution in which we both have 10 units of happiness.
(p.152)
Here, once again, the breadth of arguments for the conclusion is not explored. There are very good reasons to not assign intrinsic value to fairness.
Non-egalitarianism can be defended in a few ways. The first supporting argument (Huemer 2003) can be paraphrased (and modified slightly) in the following way.
Consider two worlds. In world 1, one person has 100 units of utility for 50 years and then 50 units of utility for the following 50 years; a second person has 50 units of utility for the first 50 years, but 100 units of utility for the next 50 years. In world 2, both people have 75 units of utility for all of their lives. These two worlds are clearly equally good: everyone has the same total amount of utility. Morally, in world 1, the first 50 years are just as good as the last 50 years — in both, one person has 100 units of utility and the other has 50. Thus the value of world 1 equals twice the value of the first 50 years of world 1. World 1 is just as good as world 2, so the first 50 years of world 1 are half as good as world 2. The first 50 years of world 2 are likewise half as good as the total value of world 2. Thus the first half of world 1, with greater inequality of utility but the same total utility, is just as good as the first half of world 2, with perfect equality and the same total utility. This proves that the distribution of utility doesn’t matter. This argument is decisive and is defended at great length by Huemer.
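The arithmetic behind that argument (per-year utilities as described above):

```python
# World 1: A gets 100/year for years 1-50, then 50/year; B gets the mirror image.
# World 2: both A and B get 75/year for all 100 years.
w1_total = 50 * (100 + 50) + 50 * (50 + 100)   # first half + second half
w2_total = 100 * (75 + 75)

print(w1_total, w2_total)   # 15000 15000 -- identical totals

# World 1's two halves are mirror images, so each carries half of world 1's
# value; world 2's halves are identical, so each is half of world 2's. Equal
# wholes, hence equal halves -- yet world 1's first half is unequal
# (100 vs 50) while world 2's is equal (75 vs 75).
```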
Another argument can be deployed for non-egalitarianism based on the difficulty of finding a viable method for valuing equality. If the value of one’s utility depends on equality, this runs into the spirits objection: it implies that if there were many causally inert spirits living awful lives, this would affect the relative value of giving people hedonic value. If there were one non-spirit person alive, this would imply that the value of granting them a desirable experience was diminished by the existence of spirits that they could not affect. This is not plausible; causally inert entities have no relevance to the value of desirable mental states.
This also runs into the Pareto objection: to the extent that inequality is bad by itself, a world with 1 million people with utility of six could plausibly be better than a world with 999,999 people with a utility of six and one person with a utility of 28, given the vast inequality.
Rawls’ formulation, which claims that we should care only about what happens to the worst off, doesn’t work. If we are only supposed to do what benefits the worst off, then we should neglect everyone’s interests except those of the most horrific victims of the worst forms of torture imaginable. This would imply that we should bring to zero the quality of life of all people who live pretty good lives if doing so would marginally improve the quality of life of the worst off human.
Rawls’ defense of this rule doesn’t work either. As (Harsanyi 1975) showed, we would be utilitarian from behind the veil of ignorance. This is because the level of utility referred to as 2 utility is, by definition, the amount of utility which is just as good as a 50% chance of 4 utility. Thus, from behind the veil of ignorance we would necessarily value a ½ chance of 4 utility as equal to 2 utility, and always prefer it to certainty of 1.999 utility.
Rawls attempts to avoid this problem by supposing that we don’t know how many people are part of each class. However, it is not clear why we would add this assumption. The point of the veil is to make us impartial, but provide us with all other relevant information. To the extent that we are not provided information about how many people are part of each social class that is because Rawls is trying to stack the deck in favor of his principle. Simple mathematics dictates that we don’t do that.
An additional objection can be given to the egalitarian view. On this view, if an action makes people who are well off much better off and people who are not well off slightly better off, this action is in some sense bad. Because it increases inequality, this action does something bad (though it might be outweighed by other good things). However, this is implausible. Making everyone better off by differing amounts is not partially bad.
We have several reasons to distrust our egalitarian intuitions.
First, egalitarianism relates to politics, and politics makes us irrational. Kahan et al (2017) showed, in a fascinating study, that greater knowledge of math made people less likely to solve math problems correctly when the correct answer cut against their politics. Politics has such a corrosive effect on our rationality that it can turn greater knowledge of math into a liability.
Second, equality is instrumentally valuable according to utilitarianism; money given to poor people produces greater utility, given the existence of declining marginal utility. As (Baron, 1993) argues, it is easy to explain our support for equality as a utilitarian heuristic. Heuristics often make our moral judgments unreliable (Sunstein 2005). It is not surprising that we would care about something that is instrumentally valuable and that should often be pursued, given that its pursuit is a good heuristic. We have similar reactions in similar cases.
Third, given the difficulty of calculating utility we might have our judgment clouded by our inability to precisely quantify utility.
Fourth, given that equality is very often valuable (an egalitarian distribution of cookies, money, or homes produces greater utility than an inegalitarian one), our judgment may be clouded by our comparison of utility to other things. Most things have declining marginal utility. Utility, however, does not.
Fifth, we may have irrational risk aversion that leads us to prefer a more equal distribution.
Sixth, we may be subject to anchoring bias, with the egalitarian starting point as the anchor.
Several more arguments can be provided against egalitarianism. First is the iteration objection. According to this objection, if we found out that half of all people had had a dream giving them unfathomable amounts of happiness that they had no memory of, their happiness would become subsequently less important. Given that egalitarianism says that the importance of further increases in happiness depends on how much happiness one has had previously, to the extent that one had more happiness previously — even in a dream one can’t remember — one’s happiness would subsequently become less important.
The egalitarian could object that the only thing that matters is happiness that one remembers. However, this runs into a problem. Presumably, what matters to an egalitarian is the total happiness that one has experienced rather than average happiness. It would be strange to say that a person dying of cancer with months to live is less entitled to happiness than a person with greater average happiness but less lifetime total happiness. However, if this is true, then increasing the happiness of one’s dream self is dramatically more important than increasing one’s happiness while awake. To the extent that they’ll forget their dream self, their dream self is a very badly off entity, very deserving of happiness. It would be similarly strange to prioritize helping dementia patients with no memory of most of their lives based on how well off they were during the periods of their life which they can no longer recall.
A second argument can be called the torture argument. Suppose that a person has been brutally tortured in ways more brutal than any other human such that they are the worst off human in history by orders of magnitude. From an egalitarian perspective, their happiness would be dramatically more important than that of others given how poorly off they are. If this is true, then if we set their suffering to be great enough, it would be justified for them to torture others for fun.
A third argument can be called the non-prioritization objection. Surely any view which says that the happiness of badly off people matters infinitely more than the happiness of well off people is false; if it were true, it would imply that sufficiently well off people could be brutally tortured to make badly off people only marginally better off. Thus, the egalitarian merely draws the line at a lower level of happiness in terms of how much happiness for a badly off person outweighs improving the happiness of a well-off person. If this is true, non-egalitarianism ceases to have counterintuitive implications. The intuitive appeal of “bringing one person with utility one to zero utility in order to bring another person with utility 1 to 2 being morally wrong” dissipates when alternative theories endorse “bringing one person with utility one to zero utility in order to bring another person with utility 1 to 5 (it could be more than 5; 5 is just an example) being morally wrong.” At that point utilitarians and egalitarians are merely haggling over the degree of the tradeoff.
A fourth argument comes from (Huemer, 2012). He starts by saying (p.483)
“I start from three premises, roughly as follows: (1) that if possible world x is better than world y for every individual who exists in either world, then x is better than y; (2) that if x has a higher average utility, a higher total utility, and no more inequality than y, then x is better than y; (3) that better than is transitive. From these premises, it follows that equality lacks intrinsic value, and that benefits given to the worse-off contribute no more to the world’s value than equal-sized benefits given to the better-off.”
He bears this out with the following scenarios.
A) Consider a world of 1 million people with 101 utility. Compare it to one with 2 million people with 50 utility. World 1 is better if we accept that having a higher average utility, a higher total utility, and no more inequality than world 2 makes it better. This follows from every plausible moral view. A world twice as populated with less than half as much welfare per person would be worse.
B) Compare world 1 to world 3. World 3 has 1 million people with 102 utility and 1 million people with 1 utility. World 3 is better given that it’s better for every single person: the original 1 million people all have higher utility in world 3, and there are an extra million people with overall positive lives. World 3 is better for all individuals who exist in either world, so it’s better overall.
C) This means that w(3)>w(1)>w(2). By transitivity, w(3)>w(2). Thus, 1 million people with 102 utility and 1 million people with 1 utility is an overall better state of the world than 2 million people with 50 utility.
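Checking the three worlds’ numbers (populations and utilities from the scenarios above):

```python
def total_and_average(groups):
    pop = sum(n for n, _ in groups)
    tot = sum(n * u for n, u in groups)
    return tot, tot / pop

w1 = [(1_000_000, 101)]
w2 = [(2_000_000, 50)]
w3 = [(1_000_000, 102), (1_000_000, 1)]

for name, w in [("w1", w1), ("w2", w2), ("w3", w3)]:
    print(name, total_and_average(w))
# w1 (101000000, 101.0) -- beats w2 on total AND average, with no more inequality
# w2 (100000000, 50.0)
# w3 (103000000, 51.5)  -- better than w1 for everyone who exists in either world
```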
6 Demandingness
The next problem echoes a point we made earlier. As we have learnt, CU is extremely demanding. According to CU, a moral agent may, e.g., be morally required to give away both her kidneys if, by doing that, she can save two people. We have already discussed one way for the consequentialist to solve this problem. She can reject Maximization and adopt a sufficiently lenient version of Satisficing. Many consequentialists, however, are reluctant to give up Maximization and may look for alternative ways to deal with this problem (e.g. Scheffler 1985; Portmore 2007 and 2011, Roberts 2002). There may be hope for them. As McElwee (2011) points out, the crass demands of CU have not only been connected to its maximizing nature, but also to its impartiality. We have already talked about this aspect of CU on page 105. As we pointed out, the impartiality of CU is a feature that we can attribute to the ideas of Universalism and Equal Treatment. Maybe it is possible, then, to tinker with these components in order to find alternative routes for circumventing the demandingness problem? If there is, this may give consequentialists a further motivation to embrace those alternatives.
(p.152)
Again, somewhat disappointingly, in the spray and pray of rapid-fire objections to hedonic utilitarianism, Mukerji ignores nearly all of the published literature on the subject defending the initially counterintuitive utilitarian conclusion.
First, utilitarianism is intended as a theory of right action, not as a theory of moral character. Virtually no humans always do the utility-maximizing thing--it would require too great a psychological cost to do so. Thus, it makes sense to have the standard for being a good person be well short of perfection. However, it is far less counterintuitive to suppose that it would be good to sacrifice oneself to save two others than it is to suppose that one is a bad person unless they sacrifice themselves to save two others. In fact, it seems that any plausible moral principle would say that it would be praiseworthy to sacrifice oneself to save two others. If a person sacrificed their life to protect the leg of another person, that act would be bad, even if noble, because they sacrificed a greater good for a lesser good. However, it’s intuitive that the act of sacrificing oneself to save two others is a good act.
The most effective charities can save a life for only a few thousand dollars. If we find it noble to sacrifice one's life to save two others, we should surely find it noble to sacrifice a few thousand dollars to save one other. The fact that there are many others who can be saved, and that utilitarianism therefore prescribes donating most of one's money, doesn't count against the basic calculus that a person's life is worth more than a few thousand dollars.
Second, while it may seem counterintuitive that one should donate most of their money to help others, this revulsion goes away when we consider it from the perspective of the victims. From the perspective of a person dying of malaria, it would seem absurd that a well-off Westerner shouldn't give up a few thousand dollars to prevent their literal death. It is only because we don't see the beneficiaries that the demand seems excessive. Having a child die of malaria so that another person doesn't have to donate is, if anything, the incredibly demanding outcome.
(Sobel, 2007) rightly points out that allegedly non-demanding moralities still demand a great deal of some people: they simply place their demands on those who would have been helped by the allegedly too-demanding action. If morality doesn't demand that a person give up their kidney to save the life of another, then it demands that the other person die so the first doesn't have to give up a kidney. Sobel argues that there is no satisfactory distinction between the demands placed on the victims of ill fortune and the demands consequentialism places on the well-off.
If a privileged, wealthy aristocracy objected to a moral theory on the grounds that it asked them to donate a small share of their luxury to prevent many children from dying, we wouldn't take that to be a very good objection to the theory. Yet the objection to utilitarianism is almost exactly the same--minus the wealthy aristocracy part. Why in the world would we expect the correct moral theory to demand so little of us, when giving up a vacation could prevent a child from dying? Perhaps if we consulted those whose deaths were averted by the foregone vacation or nicer car, utilitarianism would no longer seem so demanding.
Third, we have no a priori reason to expect ethics not to be demanding. The demandingness intuition dissipates once we recognize our tremendous opportunity to do good: the demandingness of ethics should scale with our ability to improve the world. Ethics should demand a lot from Superman, for example, because he has a tremendous ability to do good.
Fourth, the drowning child analogy from (Singer, 1972) can be employed against the demandingness objection. If we came across a drowning child while wearing a two-thousand-dollar suit, it wouldn't be too demanding to suggest we ruin the suit to save the child. Singer argues that failing to donate to prevent a child from dying is analogous.
One could object that the child's being far away matters. However, distance is not morally relevant. If one could save either five people 100 miles away or ten people 100,000 miles away, one should surely save the ten. When a child is abducted and taken away, the moral badness of the situation doesn't scale with how far away they get.
A variety of other objections can be raised to the drowning child analogy, many of which were addressed by Singer.
Fifth, demandingness is required to respect cross-world Pareto optimality. Consider two possible worlds: world one offers immense opportunity to help people, while world two offers very little, such that in world two even utilitarianism demands virtually nothing. If ordinary morality is equally lax across both worlds, then the demanding morality would be, across worlds, both better for you and better for others.
Utilitarianism is not demanding because of some inherent reason to be a saint. Rather, utilitarianism is demanding at this place, time, and social location because we have immense opportunities to make the world a better place. When the evidence changes, so should the demandingness of our morality--particularly if we want to respect cross-world Pareto optimality.
Sixth, Kagan (1989) provides the most thorough treatment of the subject to date, and argues persuasively that there is no philosophically satisfying defense of the claim that morality is not demanding. Similar accounts can be found in (Pogge, 2005), (Chappell, 2009), and many others.
Kagan rightly notes that ordinary morality is very demanding in its prohibitions: it claims we are morally required not to kill other people, even for great personal gain. However, Kagan argues that the distinctions between doing and allowing, and between intending and foreseeing, cannot be drawn successfully, meaning that there is no coherent account of why morality is demanding about what we may not do but not about what we must do.
Seventh, a non-demanding morality can be collectively, even infinitely, undesirable. Currently, for affluent people, increasing the welfare of a person on the other side of the world by N costs far less than N/2; but we can stipulate that it costs exactly N/2, and the implication still goes through. Suppose two people can each endure a cost of N/2 to benefit the other, far-away person by N. If both do this, both are better off, each netting N/2 per exchange. Iterate the process without bound, and a demanding morality that requires taking these options leaves both parties unboundedly better off, while a non-demanding morality that permits declining them forgoes all of that value. A sketch of the bookkeeping follows.
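Here is a minimal sketch of that bookkeeping in Python. The payoffs (pay 0.5 utility to give the other person 1 utility) are the stipulations above; the cutoff of 1,000 rounds is my own arbitrary stand-in for "unboundedly many opportunities."

```python
# Two agents repeatedly face the option of paying 0.5 utility to give
# the other agent 1 utility. A demanding morality requires taking it;
# a non-demanding morality permits declining.
ROUNDS = 1_000  # arbitrary stand-in for unboundedly many opportunities

def final_utilities(both_help: bool) -> tuple[float, float]:
    a = b = 0.0
    for _ in range(ROUNDS):
        if both_help:
            a += -0.5 + 1.0  # pays 0.5, receives 1 from the other agent
            b += -0.5 + 1.0
    return a, b

print(final_utilities(both_help=True))   # (500.0, 500.0)
print(final_utilities(both_help=False))  # (0.0, 0.0)
# Each round of mutual aid nets each agent +0.5, so as ROUNDS grows
# without bound, the demanding morality leaves both agents unboundedly
# better off than the non-demanding one.
```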
Eighth, our intuitions about such cases are debunkable. (Braddock, 2013) argues that our beliefs about demandingness were primarily formed by unreliable social pressures. Thus, the reason we think morality can't be overly demanding is the norms of our society, rather than truth-tracking reasons. Additionally, (Ballantyne and Thurow, 2013) argue that partiality, bias, and emotion all undermine the reliability of our intuitions. We have strong partial and biased reasons to oppose a demanding morality, as well as a strong emotional aversion to it, making the anti-demandingness intuition a prime target for debunking.
Further evidence for Braddock's social pressure thesis comes from the fact that our intuitions about demandingness are hard to explain except in light of social pressures. Several strange features of our intuitive obligations are best explained by this theory.
It's generally recognized that people have an obligation to pay taxes, despite the fact that paying taxes produces far less well-being than saving the lives of people in other countries would. There are obvious social pressures that encourage paying taxes.
As (Kagan, 1989) points out, morality does often demand that we save others, such as our own children or children we find drowning in a shallow pond. This is just what the social-pressure account predicts: socialization makes us care about people in our own society who are right in front of us, rather than far-away people, and especially about our own children.
Braddock notes (p.175-176) “These processes include but are not limited to the internalization of social norms through familiar socialization practices, sanction practices, conformist pressures, modeling processes, and so on. What we think is too demanding is largely influenced by what people around us think is too demanding, much like, as a general matter, what we are likely to believe and do is influenced by what people around us believe and do. And even if those around us have not expressed these intuitions or ever explicitly entertained them before their minds, nonetheless from our earliest days, the content of our demandingness intuitions is plausibly influenced by which norms of beneficence people adopt and which attitudes they express about sharing and giving.”
Ninth, as (Braddock, 2013) notes, this problem applies to nearly all plausible theories, not merely consequentialism. Braddock writes (p.169):
“The targets get branded as being “too demanding,” “unreasonably demanding,” “infeasible,” “unrealistic,” “unlivable,” “impracticable,” or “utopian.” The idea is not just that the targeted views are demanding—every plausible moral view is at least somewhat demanding in that it imposes some moral obligations upon us—but rather that they are excessively demanding. Usual suspects include: act consequentialism, rule consequentialism, Kantian ethics, virtue ethics, Scanlonian contractualism, the Golden Rule, Peter Singer’s famous strong principle of beneficence, commonsense principles of beneficence, egalitarian and cosmopolitan principles of distributive justice, socioeconomic rights claims, and so on.”
(Ashford, 2003) points out that the demandingness problem plausibly applies to Scanlon's contractualism, meaning that it is a problem even for moral views designed specifically to avoid the demandingness of utilitarianism.
There's a puzzling asymmetry in how the demandingness objection is applied to utilitarianism and to Christianity. Christianity seems a prime example of a demanding theory; it does, after all, imply that we're all sinners. Yet the demandingness objection is never, to my knowledge, raised against Christianity. Given this, it's not clear why demandingness is thought to be a problem uniquely for utilitarianism.
7 Special obligations
Another problem for CU is the fact that it is apparently incompatible with the idea of special obligations. Intuitively, one ought to give preferential consideration to the interests of some persons as against others, including not only oneself but also other persons with whom one has special relationships, such as, for example, the members of one’s own family or friendship circle or local community or nation or various other restricted social groups. (Gewirth 1988, 283) This obligation is special insofar as it is rooted in a special relationship – one that we do not share with people at large. It is easy to see the intuitive force of this idea. Imagine, e.g., the following scenario. A friend of yours is in danger and needs your help. She is in a burning building, say, trapped together with other people. In this situation, should you attach the same weight to her well-being as to everyone else’s? It seems that you should not. If you could save only one person in that building, you ought, most certainly, to save your friend. To be sure, in doing that you would be partial. You would abandon the principle of Equal Treatment. However, it seems that this would be sanctioned – nay, required – by morality. CU, however, disagrees. It demands that you ask whose life has the greatest value from an impartial point of view where everyone counts the same. To be sure, the classic utilitarian calculation, too, might yield the conclusion that you ought to save your friend. But this would be a matter of coincidence. It is quite likely that CU will demand that you save someone else, as it is oblivious to the idea of special obligations.
I'll just quote my response to the claim that impartiality undermines special obligations — for more on this, see here.
One could additionally object that impartiality erodes our special obligations to friends and family. This is, however, false. A perfectly rational third-party observer would plausibly want people to have commitments that would cause them to act immorally in certain situations. The world is better when we care about our friends and family so much that we would be willing to save them over two strangers. Given that people die constantly, if one were as distraught about harms to strangers as about harms to their loved ones, they would either have to spend all of their time mourning the deaths of strangers or have a callous disregard for the deaths of their loved ones. Neither of these is desirable, and, importantly, neither would be approved of by a rational impartial observer.
(Parfit, 1984) described cases of blameless wrongdoing: things go best when people adopt particular commitments--ones which entail that they act wrongly in some cases. Cases involving special obligations are prime examples of this phenomenon. The point can be analogized to inculcating in oneself a disgust reaction toward actions that are usually bad. Suppose, for example, that one hypnotizes oneself to feel disgusted by drunk driving. This is plausibly desirable, even though it entails acting wrongly in the rare cases where one must drive a friend to the hospital while drunk to save their life, or where an omniscient being guarantees that one will not crash or get pulled over.
Additionally, taking special obligations to be part of what fundamentally matters is collectively self-defeating, as (Parfit, 1984) has shown. Imagine Jim can prevent his wife Sue from experiencing two units of suffering or prevent a stranger named Lila from experiencing five units. Suppose, additionally, that Lila's husband Mark can either prevent Lila from experiencing two units of suffering or Sue from experiencing five. Both wives would be better off if each man helped the stranger rather than his own wife. This shows that special obligations are collectively self-defeating.
We can raise the stakes. Suppose that both Lila and Sue start off with 100,000 units of suffering--an amount hundreds of times worse than an ordinary torture--and that Jim and Mark each face the choice above 25,000 times. If each helps the stranger's wife, no one ends up suffering; if each helps his own wife, both women are left enduring more suffering than a brutal torture involves. That should not be the result of acting morally. (The arithmetic is sketched below.)
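A quick sketch of that arithmetic in Python, using only the numbers stipulated above:

```python
# Sue and Lila each start with 100,000 units of suffering. Jim and Mark
# each face the choice 25,000 times: relieve 2 units for their own wife
# or 5 units for the other man's wife.
START, CHOICES = 100_000, 25_000

def remaining_suffering(help_stranger: bool) -> int:
    relief_per_choice = 5 if help_stranger else 2
    # By symmetry, each woman receives CHOICES acts of the same size,
    # so each woman's remaining suffering is the same.
    return max(0, START - CHOICES * relief_per_choice)

print(remaining_suffering(help_stranger=True))   # 0
print(remaining_suffering(help_stranger=False))  # 50000
# If each man favors his own wife, each woman is left with 50,000 units
# of suffering; if each helps the stranger's wife, no one suffers.
```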
Clearly, people do not gain extra objective value from the particular relationships they stand in. Lila cannot be both more and less objectively important than Sue in virtue of two different men's special obligations. Thus, if Mark has extra reason to aid Lila rather than Sue, he is not responding to the reasons provided by the things that are ultimately important. Morality should be about bringing about that which is ultimately important.
8 Desert
A further factor that appears to be relevant in this context is historical injustice. If, in the past, a person deserved a given level of well-being but got less than that, it seems as though, from the moral point of view, benefits to her are comparatively more valuable than benefits to other people. On CU, though, we cannot take into account desert or historical injustice. We have to consider everyone’s well-being impartially and sum it up. There is no room for anything else in CU.
(p.153)
Unfortunately, the concept of desert runs into thorny ethical dilemmas — ones that have been well trodden in the literature, yet are not addressed here.
1 If it is good to punish bad people, then we should trade off a certain amount of pleasure against the punishment of bad people. To fix a metric, imagine we say that a year of punishment for a bad person is good enough to offset an amount of suffering equivalent to one punch in the face. If this is true, then googolplex bad people being punished for a year each, combined with benevolent people suffering as much as a googol Holocausts, would be better than a world where everyone, including the unvirtuous, is relatively happy. Given the absurdity of this, we have reason to reject the view.
The retributivist may reply that the value of punishment declines, such that punishing one bad person is worth a punch in the face, but enough repeated punches outweigh any amount of punishment of bad people. However, this is implausible, given the mere addition paradox. It seems clear that one torture can be offset by several slightly less unpleasant tortures, each of which can be offset by several even less unpleasant tortures. This process can continue until we get a large number of “tortures” each equivalent in pain to a punch in the face, which are collectively worse than the original torture. If the number of bad people punished is large enough, it could thus outweigh the badness of horrifically torturing galaxies full of people. (The chain is sketched below.)
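Here is a minimal sketch of such a chain in Python. The specific numbers (a torture worth 1,000 punches, three replacements per step, each 5% milder) are my own illustrative assumptions, not figures from the text.

```python
# Start with one torture 1,000 times as bad as a punch. At each step,
# trade every instance for three instances that are each 5% milder --
# plausibly a worsening at every step, since 3 * 0.95 > 1.
badness, count = 1000.0, 1   # badness is measured in punches
while badness > 1.0:         # stop once each instance is punch-level
    badness *= 0.95          # each instance gets 5% milder...
    count *= 3               # ...but there are three times as many
total = badness * count
print(f"instances={count:.3g}, total badness={total:.3g} punches")
# instances=2.6e+64, total badness=2.5e+64 punches -- the punch-level
# harms at the end of the chain collectively dwarf the single torture
# (1,000 punches) we started with.
```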
They could bite the bullet; however, the resulting view is so counterintuitive that we have decisive reasons to reject it. There are other issues with retributivism.
2 (Kraaijeveld, 2020) has argued for an evolutionary debunking of retributivism. It's extremely plausible that we have evolutionary reasons to want to deter people from doing bad things. It's therefore unsurprising that we feel angry at bad people and want to harm them, quite apart from whether anyone truly deserves to suffer.
3 There's an open question of how exactly we determine whom to punish. Do we punish people for doing bad things? If so, should we punish politicians who do horrific things as a result of bad ideas? Would an idealistic communist leader who brings his country into peril be worthy of harm? If punishment is instead based on motives, should we punish egoists, who only do what makes them happy, even when they help other people for selfish reasons? If we only punish those with both bad acts and bad motives, would we fail to punish Nazis who truly believed they were acting for the greater good? Additionally, should we punish people who think eating meat is immoral but eat it anyway? If so, we'd punish a large percentage of people.
4 Our immorality is largely a byproduct of chance. Many serial killers would likely not have become serial killers had they been raised in different families, and many violent criminals would not have become violent had there not been lead in the water. Is it truly just to punish people for things outside their control that are causally responsible for their crimes? As history has shown, had we been in Nazi Germany, we would likely have been Nazis; in scenarios like the Stanford prison experiment, we would do horrible things. Most people would do horrible things in certain circumstances.
(Pereboom, 1995) argues that there is no way to maintain the concept of desert if we assume determinism. If we are predetermined to perform bad acts, it is hard to hold people ultimately morally responsible for them. Pereboom gives a series of thought experiments showing this, including (p.24): “Case 3: Mr. Green is an ordinary human being, except that he was determined by the rigorous training practices of his home and community to be a rational egoist. His training took place at too early an age for him to have had the ability to prevent or alter the practices that determined his character. Mr. Green is thereby caused to undertake the reasons-responsive process and to possess the organization of first and second-order desires that result in his killing Ms. Peacock.”
Additional cases yield similar conclusions. Suppose one pressed a button that caused another person to have the desire to kill, and that person then killed. It is hard to see how they are truly morally responsible, even though they were acting on their desires. If our desires are determined, we cannot be morally responsible for acting on them.
Even if one doesn't accept determinism, it still seems hard to ground responsibility in external life events. It is hard to accept that, if lead in my water supply causes me to be violent, whether I deserve to suffer depends on whether there happened to be lead in my water.
If we accept the following premises:
(1) Jim was very violent.
(2) Jim wouldn't have been very violent had there not been lead in his water.
(3) If Jim's violence occurs if and only if there is lead in his water, then the lead in his water was the cause of his violence.
(4) Jim deserves to suffer if he was very violent.
(5) If A causes B and B causes C, then A causes C.
then we'd have to accept:
(6) The lead in Jim's water was the cause of his deserving to suffer.
5 Suppose that a person turns evil for thirty seconds every day. Would they deserve to suffer, but only during those thirty seconds? If they killed someone during those thirty seconds, would they deserve to suffer even outside them? What if Ted Bundy had a change of heart--would he still not deserve cookies? Suppose a person takes ADHD medication which incidentally makes them more moral. Should they try to eat cookies while on the medication, because during that time they're more moral? What if a person took a drug that temporarily made them a psychopath? Should they avoid tasty food during that time, so as to redistribute happiness toward the periods when they're more moral? Either horn of the dilemma poses problems.
If the retributivist says that one's desert waxes and wanes over time based on how moral one is, this has several counterintuitive implications.
A It would say that people's hedonic value doesn't matter during their dreams, to the extent that they do immoral things in them. At one point I had a very strange dream in which I killed many people (it made no logical sense, as dreams often do--something about a button that caused people to cease existing). In that dream, I clearly had no moral qualms about anything. (This doesn't reflect anything about me broadly; just as dreams make nonsense ideas seem sensible, this one made moral constraints not even factor into my deliberation.) Yet it seems that my hedonic value in that dream is no less morally relevant than my hedonic value in any other dream.
B It would say that people should redistribute the rewards they'll receive to the times when they expect to be maximally moral. For example, if one is planning to take an ethics class and thinks they'll be more moral afterward, this view says they should save their cookies for after the class is done, even if they'd enjoy them less.
C It would also say that a person who takes a drug that turns them, for half an hour, into a psychopath who wants to kill people deserves to suffer during that time (even if they're locked up, so that there's no risk they kill anyone).
D It would say that the hedonic value of babies and small children matters proportionately less, because they’re less moral.
E It faces the problem of deciding the duration over which one's character is judged. It is hard to believe that a person who is currently eating a snack and not thinking about harming anyone currently has an immoral character. If desert is instead based on whether one's current character would lead to immoral acts in particular situations, this runs into the major problem that everyone would act immorally in an infinite number of circumstances, given the infinite number of metaphysically possible worlds, some of which would no doubt lead them to act immorally.
However, if they take the other horn of the dilemma, and argue that one’s desert remains unchanged over time, this runs into problems of its own.
A It would say that Ted Bundy would become no more deserving of happiness if he reformed completely.
B It would say that if Ted Bundy and Mother Teresa switched morals, such that Bundy became compassionate and Teresa became interested in killing lots of people, Teresa would still be more worthy of benefit than the reformed Bundy.
C It becomes impossible to measure how moral someone is. If one starts out moral but becomes immoral, this metric gives us no way of determining what they deserve.
D In the case described above, whether Jim deserves to suffer now would depend on what his moral character will be like in the future, which depends on whether there will be lead in his water, which depends on which neighborhood he moves to. Thus, whether Jim deserves to suffer now depends on which neighborhood he will move to in the future.
Perhaps you think that what matters is someone's average moral character. In that case, Bundy would be no more worthy of happiness after reforming than before. Additionally, this view runs into a problem with mostly benevolent beings who live very long lives. Suppose that a person is a horrendous serial killer for 1,000 years, but after that becomes perfectly moral. Given that not even Mother Teresa is perfectly moral, on this account such a person would eventually be more worthy of a cookie than Mother Teresa, even while they're going around murdering people. (The arithmetic is sketched below.)
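A quick sketch of the averaging arithmetic in Python. The moral "scores" here (0.0 during the murder years, a perfect 1.0 afterward, a constant 0.95 for Teresa) are my own illustrative assumptions:

```python
# A long-lived killer: 1,000 years at moral score 0.0, then perfectly
# moral (1.0) for the rest of an extremely long life. Mother Teresa:
# a constant but imperfect 0.95.
KILLER_BAD_YEARS = 1_000
TERESA_AVERAGE = 0.95

def killer_average(total_years: int) -> float:
    good_years = total_years - KILLER_BAD_YEARS
    return (KILLER_BAD_YEARS * 0.0 + good_years * 1.0) / total_years

for years in (5_000, 20_000, 50_000):
    print(years, round(killer_average(years), 3))
# 5000 0.8, 20000 0.95, 50000 0.98
# Past 20,000 years of life, the killer's lifetime average exceeds
# Teresa's, so on the averaging view he counts as more deserving of a
# cookie than she does -- even during the millennium he spends murdering.
```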
This objection becomes especially pressing when combined with the fourth objection. If I am either determined to act wrongly in a particular case, or act wrongly with some probability grounded in chancy physics, it is hard to see how I become less deserving by doing something I was already determined to do.
Utilitarianism, by contrast, gives a perfectly adequate account of why punishing bad people is justified: punishing bad people deters bad things. We have developed an emotional attraction to punishing bad people, so punishment feels good to us, as if we're giving wrongdoers what they deserve.
9 Aggregation
Before we consider some options that allow consequentialists to avoid these complications, let me address a further problem for CU that moral philosophers commonly ascribe to its aggregative nature. CU seems to make very high informational demands. What does this mean? Above we imagined two states of affairs. In the first, we both have 10 units of happiness. In the second, I have 20 units and you have 0 units. In saying this, I supposed that my happiness and your happiness can be measured and compared on a common scale. I supposed, that is, that they are commensurable. This, of course, is a problematic assumption. However, due to its conception of the good, CU relies on it. After all, how else could we make sense of a happiness sum? To calculate a happiness sum, it is obviously necessary to compare the happiness of any two persons with one another (cf., e.g., Hirose 2011, 72). As some believe, this is impossible since happiness is a subjective mental state which is epistemically accessible only to the individual who has it. Economists have been aware of this problem for a long time. Stanley Jevons, e.g., already asserted back in 1871 that he sees no way “to compare the amount of feeling in one mind with that in another” since the “susceptibility of one mind may, for what we know, be a thousand times greater than that of another.” And he concluded from this that “[e]very mind is thus inscrutable to every other mind, and no common denominator of feeling is possible.” (Jevons 1871, 21) Many economists would subsequently come to share some version of Jevons’s view (cf., e.g., Robbins 1932/2007 and 1938; Samuelson 1947). In philosophy, too, the view that interpersonal comparisons of utility are problematic has gathered adherents, particularly amongst philosophers with inclinations towards logical positivism. As Brad Hooker points out, the charge that CU relies on unrealistic assumptions about the possibility of interpersonal comparisons has become one of the most common objections to the doctrine (cf. Hooker 1990, 68). Nevertheless, we will disregard this problem of interpersonal comparison and make the (charitable) assumption that consequentialists can solve it.
I think this problem is very easy to solve — I’ve argued as much here.
10 Case 0
Case 0: Jones is standing on a footbridge over a railway, as a runaway trolley carrying ten people is hurtling down the tracks. On every plausible theory of well-being, the lives of the ten, it shall be assumed, are as valuable to them as Jones’s life is to him. Jones can tell that, if the trolley is not stopped, it will hit a massive rock at the end of the tracks. It is obvious that the impact will most certainly kill all ten people. If Jones does not do anything, this is what will happen. Jones has, however, two options that will avert this worst possible case. He can jump down onto the tracks. In that case, he would die. The trolley would run into him and squash him. But it would also come to a halt before it collides with the rock, thus saving the ten. Jones’s second option is to throw a sandbag that is lying on the footbridge onto the tracks. This would not stop the trolley. But it would slow it down, such that on impact only the three people sitting at the very front of the trolley would be killed. Jones knows all of his options for acting and all of their respective consequences, and no other facts are morally relevant in this case.
(p.182)
Mukerji goes on to argue that the self-sacrifice option is certainly praiseworthy, but not obligatory. Now, I’m a naturalist about the concept of obligation — I don’t think that there are precise or robust facts about what we’re obligated to do. There are just reasons of different strengths.
Given that being perfectly moral is far too demanding, it makes sense to have another term describing what one must do to avoid being seriously immoral. We want a term that means, roughly: ‘look, if you did stuff like this all the time, you’d be a really crappy person.’
Conceived of this way, it makes sense to regard the self-sacrifice as supererogatory: if you don’t sacrifice yourself to save a few others, you’re not thereby a very bad person.
Thus, the objection to this view would have to be that it denies precise facts of the matter about supererogation and obligation. But denying such facts is pretty plausible — for more on this, see Chappell’s work. It would be a very odd property of reasons if they generated precise facts about obligations; this isn’t true of prudential reasons or epistemic reasons, so why would it be true of moral reasons?
Finally, I think the point I raised in section 1 is important here: one counterexample isn’t strong evidence, because we’d expect the correct moral view to seem wrong sometimes. An occasional counterintuitive verdict is exactly what we’d predict even if consequentialism were true.
Conclusion
Overall, I enjoyed Mukerji’s book and thought it raised some interesting objections. I do not, however, think it is an adequate refutation of consequentialism. There were many thoughtful arguments, but they shouldn’t prove very consequential in our assessment of consequentialism.
(Though, in fairness, the passage in question was just a short section of the book summarizing the motivation, so Mukerji can’t really be blamed for this.)