Michael Huemer Should Be a Utilitarian
He convinced me of the primary thing he argues for: phenomenal conservatism. This is my attempt to return the favor.
Michael Huemer is one of my ten favorite philosophers. He has a delightful blog called Fake Noûs, wherein the only fake news he spreads is in his discussions of utilitarianism, toward which he is generally unsympathetic. His writing is always direct, clear, and usually persuasive. And he has made contributions to a truly enormous range of topics in metaphysics, ethics, and much more. This article of his is one of my favorites: an easy go-to whenever anyone claims that vegans are preachy or moralistic.
Huemer is best thought of as a philosopher of common sense. He appeals to obvious principles, or at least obvious-sounding ones, and then uses them to draw controversial conclusions. For example, in his excellent case against meat, Dialogues on Ethical Vegetarianism, Huemer argues that eating meat is severely immoral. This is a conclusion that I think is clearly right, though it diverges from common sense: most people think there's nothing wrong with eating meat, and the common man sees vegans as ridiculous, histrionic extremists. In arguing for it, Huemer appeals to principles deeply embedded within common sense, for example, the principle that it's wrong to inflict vast amounts of suffering for trivial reasons. He then shows that eating meat does inflict vast amounts of suffering for trivial reasons, so it's wrong. Common sense itself delivers this revisionary verdict.
Huemer has discussed how to revise the beliefs of common sense in his excellent paper Revisionary Intuitionism. As Huemer says, “intuitionists may, and probably should, adopt revisionary ethical views, rejecting a wide range of commonly accepted, pre-philosophical moral beliefs.” He argues, quite persuasively, that it often makes sense to revise our starting intuitions, because those intuitions, while the best source of evidence we have in the moral domain, are quite imperfect and frequently in tension with one another. He proposes the following non-exhaustive criteria for deciding whether to abandon an intuition:
1. Seek a substantial body of ethical intuitions that fit together well, rather than placing great weight on any single intuition.
2. Eschew intuitions that are not widely shared, that are specific to one’s own culture, or that entail positive evaluations of the practices of one’s own society and negative evaluations of the practices of other societies.
3. Distrust intuitions that favor specific forms of behavior that would tend to promote reproductive fitness, particularly if these intuitions fail to cohere with other intuitions.
4. Distrust intuitions that differentially favor oneself, that is, that specially benefit or positively evaluate oneself, as opposed to others.
He presents these criteria after surveying various sources of moral error: self-interest, cultural indoctrination, evolutionary pressures, and more. He also points out that our intuitions often pick out morally non-salient features, which gives us further reason for revision.
The lessons of the skeptics regarding the unreliability of certain kinds of intuitions are rarely heeded, and were almost never heeded prior to the twentieth century. To this extent, revisionary intuitionism represents a new approach to normative ethics.
He then uses this to motivate a greater focus on abstract intuitions, like the intuition that one should maximize good outcomes, over more concrete intuitions, like the intuition that one should save drowning children. Huemer argues the following:
Intuitions of different levels of generality differ in their susceptibility to various kinds of error. Concrete and mid-level intuitions are particularly susceptible to the kinds of biases discussed in section III. One reason for this is that we typically have stronger emotions about concrete cases and mid-level generalizations than about very abstract principles. Compare the emotional impact of the statement, “Killing deformed human infants is acceptable” to that of the statement, “A being has a right to x only if that being is capable of desiring x.” The latter, abstract principle is much less susceptible to emotionally-based bias. In addition, concrete intuitions are more likely to be influenced by biological programming, because the biases with which evolution is most likely to have endowed us are biases favoring relatively specific forms of behavior that would have promoted our ancestors’ inclusive fitness. Biological evolution is unlikely to have endowed us with biases towards embracing very abstract principles, since our biological ancestors probably engaged in little abstract reasoning. For instance, attitudes towards incest, human offspring, and social hierarchies are more likely to be influenced by biology than are intuitions about principles of additivity in axiology. Finally, culturally generated biases are more likely to affect specific and mid-level judgments than highly general ethical judgments, because our culture has a complex set of relatively specific rules—rules governing who is allowed to marry whom, how one should greet a stranger, how one should interact with one’s boss, and so on. What rules, if any, our society accepts on the most abstract level is extremely unclear—does our culture endorse the categorical imperative? What general criterion of rights does it endorse? The obscurity of the answers to these questions prevents cultural conditioning from directly determining our intuitions about the categorical imperative or the general criterion of rights. In contrast, it is perfectly clear how our culture might bias judgments about, for example, the acceptability of polygamy.
However, Huemer worries that more abstract intuitions fall prey to the threat of overgeneralization: we find a plausible-sounding principle, can't think of a counterexample, and so assume it's true. He gives the example of a simple model of counterfactual causation that seems like a decent generalization but turns out to be false.
He then proposes a further class of intuitions—one that he regards as the most reliable.
All three types of intuitions, then, have their own problems. This does not mean that no intuitions can be relied upon. What it means is that we must consider more than an intuition’s level of generality. As indicated in section IV, we must consider an intuition’s content to determine whether it is a plausible candidate for being a product of one of the common types of biases discussed in section III. In addition to this, however, there is a particular species of abstract ethical intuitions that seems to me to be unusually trustworthy. These are what I call formal intuitions—intuitions that impose formal constraints on ethical theories, though they do not themselves positively or negatively evaluate anything. The following are examples of such formal ethical intuitions:
1. If A is better than B and B is better than C, then A is better than C.
2. If A and B are qualitatively identical in non-evaluative respects, then A and B are morally indistinguishable.
3. If it is permissible to do A, and it is permissible to do B given that one does A, then it is permissible to do both A and B.
4. If it is wrong to do A, and it is wrong to do B, then it is wrong to do both A and B.
5. If two states of affairs, A and B, are so related that B can be produced by adding something valuable to A, without creating anything bad, lowering the value of anything in A, or removing anything of value from A, then B is better than A.
6. The ethical status (whether permissible, wrong, obligatory, etc.) of choosing (A and B) over (A and C) is the same as that of choosing B over C, given the knowledge that A exists/occurs.
These kinds of intuitions are particularly plausible candidates for being products of rational reflection. They are not plausibly regarded as products of emotional bias, cultural or biological programming, or self-interested bias. What of the threat of overgeneralization? It seems to me that these principles are not the result merely of considering some typical kinds of cases and then evaluating just those cases. Rather, we seem to be able to see why each of these things must be true in general; these principles seem to be required by the nature of the “better than” relation, the nature of permissibility, the nature of ethical evaluation, etc. Accordingly, if someone were to describe a proposed counterexample to one of these principles, our reaction would not be, as with the Preemption Case discussed above, to simply give up the principle in question without protest; rather, our reaction would probably be to call the case a “paradox.” For example, many find Stuart Rachels’ proposed counterexamples to the transitivity of “better than” paradoxical; but no one finds the Preemption Case paradoxical. This manifests the fact that our acceptance of the transitivity principle derives from an apparent insight into the nature of the “better than” relation as such, rather than merely a survey of typical cases. It seems to me, then, that formal ethical intuitions should be given special weight in moral reasoning. These formal intuitions are not sufficient to generate any substantive ethical system. Nevertheless, they rule out some otherwise attractive (combinations of) ethical views and are for that reason useful in resolving some ethical disputes.
So, the best types of intuitions do two things. First, they generalize over and explain lots of cases. Second, one grasps their internal logic: the reason we believe them is not just that they explain a lot of cases, but that, through reason, one can see that they must be true.
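To see just how formal these constraints are, it may help to render a couple of them in standard notation. This is my rendering, not Huemer's; read $\succ$ as "is better than" and $W(X)$ as "it is wrong to do X":

```latex
% Transitivity of "better than" (Huemer's principle 1):
\[ (A \succ B) \land (B \succ C) \rightarrow (A \succ C) \]

% Agglomeration of wrongness (Huemer's principle 4), reading W(X) as
% "it is wrong to do X":
\[ W(A) \land W(B) \rightarrow W(A \wedge B) \]
```

Note that neither schema evaluates anything on its own; each only constrains how evaluations must hang together, which is exactly why emotional and cultural biases have so little to grip onto.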
But I think that when we take this seriously, it ends up dramatically favoring utilitarianism. Here are several deep, fundamental moral principles that entail consequentialism (and from consequentialism, it's just a narrow foray across the river to utilitarianism):
Perfectly moral beings shouldn’t hope you do the wrong thing.
If X is wrong and Y is wrong conditional on X, then X and Y are wrong.
If you do the wrong thing and you can undo it before it’s affected anyone, you should.
All of these seem really obvious. Indeed, they are exactly the kinds of formal, rational intuitions Huemer says to trust most. And yet each of them, occasionally combined with a few other premises, none of which is very controversial, is sufficient to entail some rough form of consequentialism. And these are far from the only such principles. We have tons of principles, the very types Huemer should defer to, that all point in the same direction: towards consequentialism.
Maybe there's a way to be a non-consequentialist while accepting some of these, but each of them is sufficient to single-handedly refute the concept of rights, even the richer concept (not technically rights) that Huemer endorses.
So, on one side, we have all sorts of very plausible, fundamental, rational principles, the most trustworthy types of principles. And on the other side we have a bunch of cases where utilitarianism appears to get the wrong result, such as organ harvesting.
Let's take the organ harvesting example specifically. Huemer gives this example in the article where he discusses his revisionary intuitionism paper (coincidentally, I started writing this article before it came out).
Basically, the utilitarian would say:
a. Some intuitions are much more reliable than others, esp. the sort of intuitions mentioned in sec. 4 above.
b. All of these most reliable intuitions cohere with utilitarianism. (Notice how that’s true of all the ones I listed. And the list could be extended.)
c. The leading alternative ethical theories all clash with one or more of these highly reliable intuitions.
d. The leading alternative ethical theories also directly fall under suspicion from the skeptical arguments of sec. 2 above, in a way that utilitarian intuitions do not. I.e., there are explanations of how those alternatives would be produced by cultural biases, evolution, etc., which do not also apply to utilitarianism.
That’s the most reasonable case for utilitarianism. (It’s much better than just saying “intuitions are bad” and biting the bullet on every objection, as some utilitarians do.) On some days, I feel sympathetic to that (before I remember things like the organ harvesting doctor).
My first-ever articles were spent replying to Huemer's linked article, so I won't rehash the entire dispute here. I'll just make two points.
First, as I've argued before, we should expect the correct moral view to be unintuitive in lots of cases. If our intuitions are accurate 95% of the time, then we should expect them to produce the wrong result in 5% of cases. Thus, given that we should expect the correct moral view to seem wrong quite a bit, the fact that utilitarianism seems wrong quite a bit shouldn't be evidence against utilitarianism. If something is expected on a theory, it's not evidence against that theory.
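The underlying point is just Bayesian: evidence counts against a theory only insofar as it is less likely given that theory than given its rivals. Here is a minimal sketch, with my notation and purely illustrative numbers:

```latex
% Posterior odds = prior odds * likelihood ratio (Bayes' theorem, odds form):
\[ \frac{P(T \mid E)}{P(\neg T \mid E)}
   = \frac{P(T)}{P(\neg T)} \cdot \frac{P(E \mid T)}{P(E \mid \neg T)} \]

% Let E be "the theory delivers counterintuitive verdicts in roughly 5% of
% cases." If intuitions err about 5% of the time, P(E | T) is high even when
% T is the correct theory, so the likelihood ratio is near 1 and E barely
% moves the posterior odds.
```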
Now, an intuitionist might get very worried here. If the fact that a theory sounds unintuitive isn't evidence against it, then how can we ever disprove a theory?
Well, there are lots of ways. First, we shouldn't expect the right theory to often be wildly out of accord with the most reliable intuitions. There aren't many intuitions of that kind, ones we reach through deep reflection and that just seem right. Thus, we should expect the correct moral theory not to run afoul of the deep dictates of reason we arrive at, the ones that Huemer trusts most. Fortunately, literally all of these intuitions cohere perfectly with utilitarianism. This is very good evidence in its favor.
The next thing we should expect is for the right theory to be supported by lots of persuasive arguments. It would be surprising if a wrong theory had lots of different proofs from obvious premises, but this wouldn't be surprising at all for the right theory.
The final thing we should expect is that careful reflection on the cases will move people in the utilitarian direction. After all, if utilitarianism were correct, then you really should harvest people's organs. If that's right, then we should expect careful reflection on the organ harvesting case to make people much more sympathetic to harvesting organs.
This is exactly what we see. There are a lot of cases where careful reflection moves people closer to utilitarianism: the repugnant conclusion, headaches versus human lives, the non-value of egalitarianism. There are, to the best of my knowledge, no cases of things going the other way, no cases where careful reflection, or the best arguments that ought to move us relative to our intuitive starting points, moves people away from utilitarianism.
Huemer is an ethical vegetarian. He thinks that factory farming is the worst thing in the world. His argument for this is that it's wrong to cause enormous amounts of pain and suffering for trivial benefits. I happen to agree. But this results in lots of unintuitive conclusions. I'll just list several:
You should kill someone if doing so would have a one in ten million chance of causing everyone to eschew meat.
Deaths might not be very bad and anti-natalism might be right—all because people eat lots of meat.
Most people cause more harm than the average child rapist.
All of these sound weird. But we should expect any sweeping ethical claim to produce lots of results that sound weird. The fact that this one does, then, isn't evidence against the ethical theory.
I think transitivity is an even better example; Huemer accepts transitivity. For those who don't know, transitivity is the idea that if A is better than B and B is better than C, then A is better than C. This is really obvious. But it's come under fire from a lot of people, e.g., Temkin. The criticism is that it produces lots of unintuitive results. For example, you can use transitivity, combined with very plausible principles, to show that:
We should accept the repugnant conclusion.
Some number of dust specks are worse than a torture.
It’s better to be a slightly happy oyster for infinite years than to be the happiest person ever for 10^100 years.
And there are a lot more examples. But I don't think these are good reasons to reject transitivity, because they are exactly what we'd expect if transitivity were true: we'd expect it to say things that often sound wrong, because it's a sweeping principle and our intuitions are often wrong. This is especially so given that transitivity is supported by very strong arguments and also just seems self-evident. Utilitarianism is similar, but it is far more sweeping than transitivity, so we should expect it to appear wrong in a lot of cases even if it were right.
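Schematically, these spectrum-style results all share one structure. Here is a rough reconstruction (mine, not Temkin's own formulation), using the dust specks case:

```latex
% Read X \succ Y as "X is better than Y". Let B_1 be one person tortured,
% and let B_{i+1} be ten times as many people suffering slightly milder
% pain than in B_i. At each step, the milder-but-more-widespread pain
% plausibly seems worse, so:
\[ B_1 \succ B_2, \quad B_2 \succ B_3, \quad \dots, \quad B_{n-1} \succ B_n \]

% Transitivity then forces the endpoint comparison:
\[ B_1 \succ B_n \]
% i.e., one torture is better than enough dust-speck-level annoyances;
% equivalently, some number of dust specks is worse than a torture. Each
% link sounds right, and so does transitivity; only the endpoint sounds odd.
```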
Now, let's address the organ harvesting objection specifically, just to show that our intuitions about specific anti-utilitarian cases are unreliable. The earlier considerations should already put a lot of pressure on the anti-harvesting intuitions about rights and such. But it's also the case that when we carefully reflect on the organ harvesting case, it stops seeming at all obvious that it's wrong to harvest the person's organs.
I’ll just present three brief arguments for this. The first involves quoting an earlier article of mine.
The Pareto principle, which says that if something is good for some and bad for no one then it is good, is widely accepted. It's hard to deny that something which makes people better off and harms literally no one is morally good. However, from the Pareto principle, we can derive that organ harvesting is morally the same as the trolley problem.
Suppose one is in a scenario that's a mix of the trolley problem and the organ harvesting case. There's a train that will hit five people. You can flip the switch to redirect the train to kill one person. However, you can also kill the person and harvest their organs, which would allow the 5 people to move out of the way. Those two actions seem equal, if we accept the Pareto principle: both of them result in all six of the people being equally well off. If the organ harvesting action created any extra utility for anyone, it would be a Pareto improvement over the trolley situation.
Premise 1: One should flip the switch in the trolley problem.
Premise 2: Organ harvesting, in the scenario described above, plus giving a random child a candy bar, is a Pareto improvement over flipping the switch in the trolley problem.
Premise 3: If action X is a Pareto improvement over an action that should be taken, then action X should be taken.
Therefore, organ harvesting plus giving a random child a candy bar is an action that should be taken.
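For concreteness, the Pareto principle doing the work here can be stated as follows (the formalization is mine), where $u_i(X)$ is how well off person $i$ is in outcome $X$:

```latex
% Pareto improvement: everyone at least as well off in B as in A, and
% someone strictly better off. Here u_i(X) is person i's welfare in X:
\[ \Big( \forall i :\; u_i(B) \ge u_i(A) \Big) \land
   \Big( \exists i :\; u_i(B) > u_i(A) \Big) \rightarrow B \succ A \]

% In the mixed case: switching and harvesting leave all six people equally
% well off, so harvesting-plus-candy-bar makes everyone at least as well
% off as switching does, and makes one person (the child) strictly better off.
```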
Second, I’ll quote Richard.
There’s a general recipe that underlies the organ-harvesting case and similar standard “counterexamples” to utilitarianism:
(1) Imagine an action that violates important (utility-promoting) laws or norms, and—in real-world circumstances—would be disastrous in expectation.
(2) Add some fine print stipulating that, contrary to all expectation, the act is somehow guaranteed to turn out for the best.
(3) Note that our moral intuitions rebel against the usually-disastrous act. Checkmate, utilitarians!
This is a bad way to do moral philosophy. Our intuitions about real-world situations may draw upon implicit knowledge about what those situations are like, and you can’t necessarily expect our intuitions to update sufficiently based on fine print stipulating that actually it’s an alien situation nothing like what it seems. If you want a neat, tidy, philosophical thought experiment where all else is held equal—unlike in real life—then it’s best to avoid using real-life settings. Rather than asking us to assess an alien situation masquerading as a real-life one, it would be much clearer to just honestly describe the alien situation as such. Literally.
Consider Martian Harvest:
Martians, it turns out, are rational, intelligent gelatinous blobs whose sensory organs are incapable of detecting humans. They generally live happy and peaceful lives, except for practicing ritual sacrifice. Upon occasion, a roving mob will impound six of their fellow citizens and treat them as follows. Five have a portion of their “vital gel” removed and ritually burned. As a result, they die within a day, unless the sixth chooses to sacrifice himself—redistributing the entirety of his vital gel to the other five. Whoever is alive at the end of the day is released back into Martian society, and lives happily ever after.
You find yourself within the impounding facility (with a note on your smartwatch indicating that you will be automatically teleported back to Earth within an hour). Five devitalized Martians are wallowing in medical pods, while the sixth is studiously ignoring the ‘sacrifice’ button. You could press the button, and no-one would ever be the wiser. If you do, the Martian medical machinery will (quickly and painlessly) kill the sixth, and use his vital gel to save the other five.
Should you press the button?
This is a much more philosophically respectable and honest thought experiment. Believers in deontic constraints will naturally conclude that you should not press the button, as that would kill one unconsenting (Martian) person as a means to saving five others. Still, in this situation where it’s clear that all else truly is equal, I find it intuitively obvious that one ought to press the button, and expect that many others will agree. It’s not any sort of costly “bullet” to bite.
That is to say, Martian Harvest is not a “counterexample” to utilitarianism, but simply a useful test case for diagnosing whether you’re intuitively drawn to the utilitarian account of what fundamentally matters.
Now, given that Martian Harvest more transparently describes the structural situation that the familiar Transplant scenario aspires to model, and yet our intuitions rebel far more against killing in the Transplant case, it seems safe to conclude that extraneous elements of the real-world setting are distorting our intuitions about the latter. (And understandably enough: as even utilitarians will insist, real-world doctors really shouldn’t murder their patients! This is not a verdict that differentiates utilitarians from non-utilitarians, except in the fevered imaginations of their most rabid critics.)
That is, Transplant is, philosophically speaking, an objectively worse test case for differentiating moral theories. It’s an alien case masquerading as a real-life one, which builds in gratuitous confounds and causes completely unnecessary confusion. That’s bad philosophy, and anyone who appeals to Transplant as a “counterexample” to utilitarianism is making a straightforward philosophical mistake. Encourage them to consider a transparently alien case like Martian Harvest instead.
Finally, I’ll quote Julian Savulescu.
The most straightforward case of impermissible killing, and the one in which she and many others have a clear intuition, is Transplant.
In Transplant, a doctor contemplates killing one innocent person and harvesting his/her organs to save 5 people with organ failure. This is John Harris’ survival lottery.
But this is a dirty example. Transplant imports many intuitions. For example, that doctors should not kill their patients, that those with organ failure are old while the healthy donor is young, that those with organ failure are somehow responsible for their illness, that this will lead to a slippery slope of more widespread killings, that this will induce widespread terror at the prospect of being chosen, etc., etc.
A better version of Transplant is Epidemic.
Epidemic. Imagine an uncontrollable epidemic afflicts humanity. It is highly contagious and eventually every single human will be affected. It causes people to fall unconscious. Five out of six people never recover and die within days. One in six people mounts an effective immune response. They recover over several days and lead normal lives. Doctors can test people on the second day, while still unconscious, and determine whether they have mounted an effective antibody response or whether they are destined to die. There is no treatment. Except one. Doctors can extract all the blood from the one in six people who do mount an effective antibody response on day 2, while they are still unconscious, and extract the antibodies. There will be enough antibodies to save 5 of those who don’t mount responses, though the extraction procedure will kill the donor. The 5 will go on to lead a normal life and the antibody protection will cover them for life.
If you were a person in Epidemic, which policy would you vote for? The first policy, Inaction, is one in which nothing is done. One in six of the world’s population survives. The second policy is Extraction, which kills one but saves five others. There is no way to predict who will be an antibody producer. You don’t know if you will be among the one in six who can mount an immune reaction or among the five in six who don’t manage to mount an immune response and would die without the antibody serum.
Put simply, you don’t know whether you will be one who could survive or one who would die without treatment. All you know for certain is that you will catch the disease and fall unconscious. You may recover or you may die while unconscious. Inaction gives you a 1 in 6 chance of being a survivor. Extraction gives you a 5 in 6 chance.
It is easy for consequentialists. Extraction saves 5 times as many lives and should be adopted. But which would you choose, behind the Rawlsian Veil of Ignorance, not knowing whether you would be immunocompetent or immunodeficient?
I would choose Extraction. I would definitely become unconscious, like others, and then there would be a 5 in 6 chance of waking up to a normal life. This policy could also be endorsed on Kantian contractualist grounds. Not only would rational self-interest behind a Veil of Ignorance endorse it, but it could be willed as a universal law.
Consequentialism and contractualism converge. I believe other moral theories would endorse Extraction.
Since Extraction in Epidemic is the hardest moral case of killing one to save 5, if it is permissible (indeed morally obligatory), then all cases of killing one innocent to save five others are permissible, at least on consequentialist and contractualist grounds.
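The expected-value arithmetic behind Savulescu's comparison, spelled out (these figures simply restate his setup):

```latex
% Under Inaction, only the natural antibody producers survive:
\[ P(\text{survive} \mid \text{Inaction}) = \tfrac{1}{6} \]

% Under Extraction, each producer is sacrificed to save five non-producers:
\[ P(\text{survive} \mid \text{Extraction}) = \tfrac{5}{6} \]

% Per six people: one expected survivor under Inaction, five under Extraction.
```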
When we consider these cases, both Julian's and Richard's, it seems very obvious that a significant chunk of our intuitions about the organ harvesting case comes merely from overgeneralizing (a well-known bias) from the principle that it's bad when doctors kill their patients. When we make the cases cleaner, without changing any of the underlying principles, our intuitions stop being clear at all.
So join your colleague Norcross, Michael. Abandon your outrageous, paradoxical deontological ways, and take up the mantle of truth, justice, virtue, and goodness. You can even keep believing objective list theory.