Introduction
My friend Emerson Green is a moral particularist. He doesn’t think that there are true, universal moral principles. But his particularism is very close to utilitarianism. He basically thinks utilitarianism is true except in a few exceptional cases, like those that involve torturing people to get rid of 100^100 dust specks — in those cases, it’s false.
Utilitarianism.net also has a section presenting several general near-utilitarian alternatives. While it may be tempting to adopt a view that’s very close to utilitarianism, this is, I believe, unwise. As Shelly Kagan points out, the moderate is often in trouble — for they inherit the objections from both sides.
Here, I shall argue that we have decisive reasons to reject near-utilitarian alternatives. We should be all or nothing when it comes to utilitarianism.
Part 1: A Positive Case
Utilitarianism, unlike many theories, is not just an attempt to explain our intuitions. There are powerful, robust justifications for utilitarianism. I’ve detailed many of them, but utilitarianism.net has a more concise presentation of these arguments. To lay out a positive case very briefly: utilitarianism is simpler than other theories; it follows from rational action behind the veil of ignorance, from correct application of the golden rule, and from various other plausible axioms; and it describes how you’d act if you lived everyone’s life and then acted rationally and self-interestedly. There are very good arguments for why each of these captures the essence of morality — ones I’ve detailed in the past.
Thus, we have a convergence of powerful arguments for utilitarianism. For any non-utilitarian view to be true, all of these plausible arguments would have to be wrong. If we think they’re only sufficiently independent to count as three independent arguments, and that each has a 60% chance of being right, then the odds that they’d all be wrong are only (1 − 0.6)³ = 6.4%. And these are pretty conservative odds.
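The arithmetic here is easy to check. A minimal sketch of the calculation, using the figures assumed above (three fully independent arguments, 60% credence in each):

```python
# Probability that every argument for utilitarianism fails, given the
# assumed figures: three fully independent arguments, each with a 60%
# chance of being sound.
p_sound = 0.6                      # credence that any one argument is sound
n_independent = 3                  # arguments treated as fully independent
p_all_wrong = (1 - p_sound) ** n_independent
print(f"P(all arguments fail) = {p_all_wrong:.1%}")  # 6.4%
```

If the arguments are less independent or individually weaker, this number grows, but it stays small unless all three credences drop substantially.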
Thus, we have a strong, proactive reason to be a utilitarian. It’s not just about making sense of our intuitions — there are very plausible judgments that, if true, require us to be utilitarians. Richard Yetter Chappell recently tweeted:

But Richard accepts some form of desert-adjusted consequentialism: he thinks that only innocent interests count. Thus, the full force of the everyone-matters-equally intuition has to be weighed against the only-innocent-interests-count intuition. This is why Richard calls himself a near-utilitarian rather than a full-on utilitarian.
Now, Richard may object — as he has done elsewhere — that the plausible intuition is not that everyone matters equally but some closely related intuition; perhaps that all innocent people matter equally. But everyone mattering equally seems the more plausible principle — it has more prima facie plausibility than ‘everyone except bad people matters equally.’ More importantly, as we’ll see, the aforementioned theoretical motivation for being a utilitarian cannot apply to this view — if you experienced everyone’s life, you’d maximize everyone’s interests, not just innocent interests.
Part 2: The Flawed Motivation for Near Utilitarian Alternatives
Generally, the motivation for adopting a near-utilitarian conclusion will be that utilitarianism has some counterintuitive results. Thus, people who hold this view will accept
A) Utilitarianism has some unintuitive conclusions.
They then think
B) The best explanation of utilitarianism being unintuitive sometimes is that utilitarianism is not a correct description of morality across the board.
However, we have reason to reject B. As I explain in my article A Bayesian Analysis Of When Utilitarianism Diverges From Our Intuitions:
There are enormous numbers of possible moral scenarios. Thus, even if the correct moral view corresponds to our intuitions in 99.99% of cases, it still wouldn’t be too hard to find a bunch of cases in which the correct view doesn’t correspond to our intuitions.
Our moral intuitions are often wrong. They’re frequently affected by unreliable emotional processes. Additionally, we know from history that most people have had moral views we currently regard as horrendous.
Because of these two factors, our moral intuitions are likely to diverge from the correct morality in lots of cases. The probability that the correct morality would always agree with our intuitions is vanishingly small. Thus, given that this is what we’d expect of the correct moral view, the fact that utilitarianism frequently diverges from our moral intuitions isn’t evidence against utilitarianism. To see whether these divergences give any evidence against utilitarianism, let’s consider some features we’d expect the correct moral view to have.
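A toy calculation makes the point vivid (the 99.99% match rate comes from the passage above; the scenario count is an arbitrary stand-in of my own):

```python
# Even a moral view that matches our intuitions in 99.99% of cases will
# diverge from them many times once enough scenarios are surveyed.
match_rate = 0.9999                # assumed intuition-match rate of the correct view
num_scenarios = 1_000_000          # hypothetical number of possible moral scenarios
expected_divergences = num_scenarios * (1 - match_rate)
print(f"Expected counterintuitive cases: {expected_divergences:.0f}")  # 100
```

So finding a stock of counterintuitive cases is exactly what we should expect of the correct view, whatever it is.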
In later sections, we’ll apply this to more specific theories to undermine the motivation.
Part 3: Emerson’s Particularism
I think Emerson’s particularism is the simplest object of this critique. Emerson adopts a view that’s something like the following.
C) Utilitarianism is mostly correct. However, utilitarianism is not correct in scenarios where it tells you to kill someone and harvest their organs, torture someone to keep a sports game going, torture someone to prevent 100^100 people from getting dust-specks in their eyes, or feed people to the utility monster.
Emerson accepts C because he accepts B. However, as I’ve argued, B is false. But, being good Bayesians, let’s see what we’d expect if utilitarianism were correct versus if Emerson’s preferred particularist near-utilitarian alternative were correct. If Emerson’s view were correct, we’d expect our intuitions about the cases he describes to be correct, and if they were correct, we’d expect careful analysis to turn up compelling arguments for them. The harder it is to hold on to our initial non-utilitarian judgment, the more evidence we have for utilitarianism.
As I’ve argued, all the intuitions that Emerson appeals to are wrong. I’ll leave links to them: here’s the organ harvesting one, here’s one and here’s a different one about the utility monster, here’s one about the sports game one, and here’s one about torture vs dust specks.
Thus, the case against Emerson’s particularism is strong. Utilitarianism does better in terms of theoretical virtues like simplicity and not being ad hoc. It’s better justified: there are various ways of deriving it from first principles, all of which Emerson must reject, while his theory can’t be derived from first principles at all. And the alleged counterexamples that Emerson cites aren’t evidence for his view, both because we’d expect utilitarianism to sometimes be counterintuitive and because careful reflection ends up vindicating the utilitarian judgments.
Part 4: Counting Only Innocent Interests
Yetter Chappell holds that only innocent interests should count. He writes:
As the ongoing pandemic obviously causes immense harms, there are correspondingly immense benefits to vaccinating people sooner. Our actual policies have failed at this in a number of ways (from failing to encourage experimental vaccination, to gratuitous delays in approving successful vaccines even after the trial data were received). Now some countries are suspending use of the AZ vaccine due to (poorly-grounded) fears about rare side-effects, seemingly oblivious to the fact that there's a much more serious (and high-probability) "side-effect" to non-vaccination, namely, COVID-19. This all seems bad enough, on straightforwardly utilitarian grounds. But I now want to argue that it's even worse than that: even if these delays did some good, by reassuring the vaccine-fearful, they would still be wrong.
To see this most vividly, focus on some particular individual -- call her Sophia -- who dies from Covid as a result of being deprived of early access to a vaccine that she strongly (and reasonably) wished to take. (I take it to be obvious that there will be many such individuals as a matter of fact.) Her government's obstructionism is then causally responsible for her death: had they not blocked her access to the vaccine, she would have survived. Moreover, it's entirely foreseeable that people will die as a result of such policies, so it further seems that the government is morally responsible for her death. They have, in effect, indirectly killed her (and others), by blocking her (and others') access to life-saving vaccines.
Now suppose that someone seeks to defend the obstructionist policy by arguing that it helps to reassure fearful members of society that the vaccines have been scrupulously investigated and are safe for them to (eventually) use. It strikes me as empirically implausible that this benefit to public acceptance of vaccines would be sufficiently great to outweigh the harms of a slower vaccine rollout. But suppose I'm wrong about that. Suppose, for sake of argument, that delays really would save more lives by winning over more borderline anti-vaxxers. We can still ask: is that worth it? Could you justify that to Sophie?
It would be one thing if we had to explain to Sophie that we couldn't save her without endangering a greater number of innocent people. I'd be on board with that. But that isn't the situation here. Anti-vaxxers aren't "innocent" in the relevant sense, as they're freely choosing to reject the protection that's available (or would be available if not for their unreasonable attitudes). Anti-vaxxers who die of Covid as a result of their own anti-vax attitudes are responsible for this outcome: they ultimately harmed themselves by freely rejecting the available protection. And as a general moral principle, we should not harm innocent people (like Sophie) merely in order to convince benighted fools not to harm themselves.
To further illustrate the principle, suppose that anti-vaxxers constituting 10% of the population became even more hardcore, and threatened to kill themselves en masse unless the government immediately and permanently outlawed all Covid vaccines. Should we appease them, and let the pandemic continue since it wouldn't do anywhere near as much harm as this mob was threatening to self-inflict? Surely not. Even if harms to innocent victims are smaller in magnitude than the threatened self-inflicted harms, the harms to innocent victims matter more. The correct response to the anti-vaxxers is: "Don't be stupid. But if you insist on being stupid, that's your responsibility, not ours."
I agree with Richard on the practical effects of appeasing anti-vaxxers. But I don’t think that this principle ends up being plausible.
First, as section 2 argues, there isn’t a good motivation for this. We’d expect utilitarianism to sometimes be counterintuitive, yadda yadda.
Second, as section 1 argues, there is a plausible justification for utilitarianism from first principles — there is not the same type of justification for Richard’s near-utilitarian alternative.
Third, I think that the principle that “we should not harm innocent people (like Sophie) merely in order to convince benighted fools not to harm themselves” is clearly false. Imagine that everyone except Sophie had been brainwashed and would kill themselves if Sophie took the vaccine. Sophie’s taking the vaccine would seem very immoral if she were just one person and her doing so would predictably kill everyone else on earth. Thus, a more plausible principle — one I think Richard would endorse, though he doesn’t clarify whether he’d discount anti-vax interests somewhat or entirely — is that innocent interests matter more, but non-innocent interests still matter.
Fourth, this principle violates the Pareto principle and can lead to some obviously undesirable outcomes. Suppose that you can either increase a person’s well-being by 20 from a non-innocent source or by 10 from an innocent source. They’d be better off if you increased it by 20, but on this principle, you should increase it by 10. To apply this to the vaccine case: on this account, you should not take the vaccine if doing so would somehow dramatically increase the well-being of anti-vaxxers by enough to offset the decreased risk to yourself, yet you should take the vaccine even if you knew it would lead to an anti-vaxxer killing themselves.
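The Pareto worry can be made concrete with a toy model. The zero-weight scoring rule and the numbers here are my illustrative assumptions, not Richard’s stated view (he might prefer partial discounting, which softens but doesn’t eliminate the problem):

```python
# Toy model: score each option by discounting benefits flowing from
# non-innocent sources to zero, then compare with actual well-being.
def discounted_score(benefit, source_innocent):
    # Fully ignore benefits from non-innocent sources (assumed rule).
    return benefit if source_innocent else 0

options = [
    (20, False),   # +20 well-being, from a non-innocent source
    (10, True),    # +10 well-being, from an innocent source
]

best_for_person = max(options, key=lambda o: o[0])
chosen_by_view = max(options, key=lambda o: discounted_score(*o))

print(best_for_person)   # (20, False): the person is better off with this option
print(chosen_by_view)    # (10, True): the discounting view still picks this one
```

The view recommends the option that leaves the very person it is protecting worse off — a straightforward Pareto violation.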
To illustrate this further, suppose there are two vaccines that are very worth taking, vaccine A and vaccine B. Half of society wants to take vaccine A and will kill themselves if vaccine B is given out, and half of society wants to take vaccine B and will kill themselves if vaccine A is given out. On this account, each side should ignore the non-innocent interests of the other, and thus both vaccines should be distributed, even though this will result in everyone dying. This is not plausible.
Part 5: Nature
The unimpeachably great utilitarianism.net has an article about near-utilitarian alternatives. One near-utilitarian alternative holds that nature is intrinsically valuable. This is not defensible. To quote an earlier article on the subject:
If nature is intrinsically valuable, then an infinite amount of non-sentient nature would be infinitely valuable. One has to either accept this verdict or accept that, at some point, extra nature loses its intrinsic value in virtue of all the existing nature. Neither of these responses will do.
The second response has truly bizarre implications. Why should the evaluation of a particular piece of nature hinge on the existence of other, causally distinct, far-away nature that doesn’t interact with it? If this were true, then if two people in a cave were deciding the importance of conserving the nature in the cave, their judgment would hinge on how much nature there is outside of the cave--nature that they’ll never interact with. This is deeply implausible--how much extra nature devoid of sentient beings there is can’t be the deciding factor in whether or not to preserve a particular piece of nature.
The first response has objectionable implications. If it is truly the case that nature has intrinsic value, then a sufficiently large chunk of nature would have enough intrinsic value to outweigh any bad thing in human history. A lifeless piece of nature the size of the galaxy could have enough value to more than offset every single person being horrifically tortured to death.
If nature has intrinsic value, then if one had to pick one of the following options:
1. Destroy an infinitely large chunk of nature that has no sentient life and will never have sentient life.
2. Put a billion people in Auschwitz-style death camps.
they would have to choose the second option. After all, infinite disvalue will be greater than the immense though finite disvalue of death camps. Yet this verdict is absurd! A billion people being put in death camps would be far worse than any amount of destruction of nature.
We have additional reasons to discount this intuition. For one, nature is instrumentally valuable. Given that nature in the real world has instrumental value, it’s hard, for the purposes of thought experiments, to separate out its instrumental value from its all-things-considered value.
Additionally, as Chappell and Meisner argue, the utilitarian has a reasonable reply in such cases. Destroying nature does tend to be evidence for viciousness of character. Given the close connection between our judgment of the wrongness of acts and our judgments of the character of those who partake in those acts, character judgments about the hypothetical people who would destroy nature can undermine our intuition about those cases.
This view also faces the difficult problem of defining what nature is and why it’s good. It does not seem intuitive that, for example, a distant star system with no life has intrinsic value. The only instances of nature that seem to possess intrinsic value are those with a close connection to the happiness of conscious creatures.
Under what conditions is nature intrinsically valuable? If everything in nature were painted blue, that would be an unnatural process. However, if everything were painted blue by paint that had no negative effect on anything, that doesn’t seem like it would undermine the value of nature.
Thus, the view that nature is intrinsically valuable rests on two things--a confusion of instrumental value with intrinsic value, and the false assumption that things must have intrinsic value merely in virtue of looking nice to us. In order for it to be a coherent, complete view, its proponents must give a working definition of nature--or at least describe which of its features ought to be preserved in virtue of their intrinsic value. I suspect they will be unable to, for our intuitions about nature are too vague, ill-defined, ambiguous, and shallow to withstand rigorous codification.
To see that nature being beautiful is behind much of our belief that nature is intrinsically valuable, imagine a hideous pile of slime, filth, and vomit. Additionally, imagine that this was natural and couldn’t have any life. It seems hard to imagine that this would have intrinsic value. Our intuitions about the intrinsic value of nature are largely dependent on whether the plot of nature looks nice.
Conclusion
This article is pretty long — I might write a part 2 to address more near-utilitarian theories. But I think the basic point is pretty clear. Near-utilitarian alternatives tend to generate implausible verdicts. On top of this, we should reject them in general unless given good reason to accept them. The main reason to be a utilitarian is the plausible arguments for it — arguments that, if sound, establish utilitarianism, not a near-utilitarian alternative. Thus, the near-utilitarian is in the unfortunate position of having to reject the good reasons to be a utilitarian — the arguments from first principles — on the basis of limited explanatory power.