Contra Caplan on Conscience
"I think that the real nail in the coffin for this defense of the claim that insects don’t matter, though, involves polarized sunglasses and the many worlds interpretation of quantum physics."
Introduction
I’ll preface this by saying that I’m wearing glasses while writing this: that this is indicative of great virtue will become apparent later.
I just read Scott’s convincing reply to Caplan’s views about mental illness, and it reminded me that, a while ago, I’d been meaning to write a post explaining why Caplan was wrong about something. Well, I already did that, but there was a different post I was going to write that I just sort of forgot about.
In the Huemer–Caplan debate about whether it’s okay to eat meat, Huemer’s argument is relatively simple: It’s wrong to inflict vast amounts of pain and suffering on others for the sake of slight personal benefit. Caplan disagrees, claiming that, just like one-sentence moral theories such as utilitarianism and Kantianism, the principle “don’t torture others for slight pleasure” sounds good at first but runs into obvious counterexamples. I disagree, obviously.
What does Caplan think are the obvious counterexamples to the principle that you need a really good reason to torture others? Bugs! It’s just obvious to Caplan that it’s okay to painfully kill bugs for slight benefit—meaning that it must sometimes be fine to torture others for slight benefit. Not only is it obvious, he claims; he has good reason to believe it, based on the fact that even morally conscientious ethical vegans like Huemer kill bugs often—by driving, for example—and if it were really wrong, they wouldn’t do it. He calls this the argument from conscience—if inflicting tons of suffering on bugs were really wrong, then it would prick the conscience of the good, morally conscientious people who profess the principles entailing its wrongness. He relatedly has the argument from hypocrisy, summarized as:
1. Lots of people say that X is wrong.
2. But these people almost always do X.
2.1. An individual’s sincere moral beliefs have some effect on the individual’s behavior. All else equal, the probability of doing X increases as the moral evaluation of X rises from morally unthinkable to morally impermissible to morally permissible to morally good to morally right to morally obligatory.
2.2. People are pretty selfish, but they’re rarely self-conscious villains. So an extreme disparity between behavior and moral beliefs usually indicates an absolutely low level of confidence in the moral beliefs. Consider people who claim that we are morally obliged to give all our surplus wealth to the poor. If, on average, they give just 3% of their surplus wealth to the poor, this is a strong sign that few sincerely find their official position convincing.
3. Therefore, even the opponents of X don’t really believe X is wrong.
3.1. If even the opponents of X have little confidence that X is wrong, X probably isn’t wrong. Why? For starters, because humans have strong “myside bias.” If even the faithful can’t talk themselves and each other into believing that X is wrong, X probably isn’t wrong.
4. So X probably isn’t really wrong.
I usually like Caplan; I think he’s a smart guy with interesting contributions. In fact, in one of my debate cases, I cited a piece of evidence which I summarized as the claim that “Caplan is always right”; I was citing Caplan for another claim, so establishing that he’s usually right was relevant. But when it comes to animals, he just veers off the rails, such that I think these arguments are not just wrong but indicative of unusually poor thinking on Caplan’s part. I don’t think that any of what he says is relevant to whether it’s okay to eat animals—even if you adopt the principle that it’s sometimes okay to inflict lots of torture on others for small benefit, if they’re sufficiently stupid, the bar for sufficient stupidity should be far below where the animals we eat are. Otherwise, you have to think it’s okay to torture actually existing humans, terminally ill babies, and dogs. But I’ve already defended that view.
Here, I’ll explain why none of these arguments should move anyone one iota. Even if it were sometimes okay to inflict lots of suffering on others for small benefits, it wouldn’t follow that one ever should inflict lots of suffering on others for the sake of small benefits. Let’s tackle Caplan’s arguments one by one.
The argument that it’s okay to inflict vast amounts of suffering on bugs because that’s just intuitive is not at all convincing
Really reflect on the intuition that it’s okay to destroy some bugs in your house. Do you really have the intuition that this is so even if it inflicts vast amounts of suffering for the sake of small benefits? The idea that how smart a being is determines how bad its pain is seems utterly bizarre—my ability (or lack thereof) to do calculus, solve sudokus, and write articles has nothing to do with the badness of my pain. On Caplan’s view, if there were galaxies full of beings being tortured that experienced pain 100,000 times as vividly as you or I, that wouldn’t be bad at all, as long as they were sufficiently dumb. This is a crazy view.
The reason I think that it’s okay to swat a fly is that I don’t think it does inflict vast amounts of suffering for the sake of small benefits. The impact fly-swatting has on insect welfare is deeply uncertain, especially given that adult flies pump out oodles of babies who live mostly miserable lives, so it’s not at all clear whether squishing them is good or bad. Even if it is bad, flies are barely conscious, so it’s only a bit bad. But if it turned out that some action like driving to the store involved inflicting enormous amounts of suffering on others for the sake of small benefits, then of course it would be wrong to do.
Given that bugs are barely conscious, if they’re conscious at all (and they’re probably not), an action that produces trivial benefits has to harm a lot of bugs in order to be severely wrong. But it’s not at all obvious that harming enormous numbers of bugs for the sake of small benefits is okay, if they really can suffer. For example, suppose you found out that playing some video game would horrifically maim 450 conscious bugs. Does anyone really have the intuition that doing so is obviously okay—if it would cause them enormous suffering and bring you only a bit of pleasure? Inferring that it’s fine to inflict vast amounts of suffering on others for the sake of small benefits, on the strength of an intuition that it’s okay to maim hundreds of bugs for small benefits, is quite absurd.
Now, Caplan might object by saying that if this is true it would mean you can’t drive—after all, driving kills tons of bugs. I think driving plausibly has a positive impact on insect welfare, but Caplan might object that whether driving is okay shouldn’t hinge on the impact on insect welfare. If we found out that insects were very conscious and harmed by driving, most people would not think that it’s wrong to drive.
But I think that this is a super unreliable intuition. We know from history that most people are willing to tolerate horrifying atrocities if they’re carried out by their society. If driving really were wrong, most people would think it’s fine because it’s thoroughly normalized. In addition, when the victims are small creatures that we can’t really empathize with, with fully alien internal lives, even if they actually mattered, we wouldn’t think they did.
I think most people have the abstract ethical intuition that you shouldn’t inflict unimaginable torture on insects for the sake of minor benefits. They just generally don’t think that driving does that. If you knew that every time you drove your car, you consigned 100,000 insects to the microwave, you probably wouldn’t drive, or you’d at least think there’s some decent reason not to unless you really need to. The reason the conclusion is surprising is not that the moral claim is counterintuitive but that the claim about the world is counterintuitive. When the world is weird, obvious moral claims will produce weird results.
If we discovered that every time a person goes to the bathroom it kills 500 people in a faraway galaxy, most people would not stop going to the bathroom (doing so would, in effect, be killing themselves), and it would seem weird to say that going to the bathroom is wrong. But that’s only because the world is weird, not because the moral claim is weird.
I think that the real nail in the coffin for this defense of the claim that insects don’t matter, though, involves polarized sunglasses and the many worlds interpretation of quantum physics.
Eric Schwitzgebel has an interesting article. He ends the argument by declaring “I feel so bad about making this argument that I just donated to Oxfam.” But putting aside Schwitzgebel’s misgivings about the argument, so extreme that he thinks he needs to buy indulgences to wash away his sins, what is this mysterious argument? The basic idea is that utilitarians should stop talking so much about charity and start going to the beach.
Why is this? Well, according to some versions of quantum physics (notably the many-worlds interpretation), each time a quantum mechanical event occurs, the universe splits—a new copy appears. Each photon that hits a polarized lens and either passes through or is absorbed constitutes such an event. This means that when someone goes to the beach, a huge number of universes are created. And when I say a huge number, we’re talking actually mind-boggling numbers: Schwitzgebel suggests that about 10^18 photons pass through one’s sunglasses every second, and each creates a new universe.
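As a rough sanity check on that figure (using back-of-the-envelope values I’m assuming here, not numbers from Schwitzgebel’s article), the order of magnitude comes out about right for someone standing in direct sunlight:

```latex
% Back-of-the-envelope estimate with assumed values (not from Schwitzgebel):
% direct sunlight delivers roughly 1000 W/m^2, a visible photon carries about
% 2.5 eV (~ 4 x 10^-19 J), and two sunglass lenses total roughly 30 cm^2.
\[
\frac{1000\ \mathrm{W/m^2}}{4\times 10^{-19}\ \mathrm{J/photon}}
\approx 2.5\times 10^{21}\ \frac{\text{photons}}{\mathrm{m^2\,s}},
\qquad
2.5\times 10^{21}\ \frac{\text{photons}}{\mathrm{m^2\,s}} \times 3\times 10^{-3}\ \mathrm{m^2}
\approx 8\times 10^{18}\ \frac{\text{photons}}{\mathrm{s}}.
\]
```

That lands around 10^18 to 10^19 photons per second hitting the lenses, which is in the same ballpark as the figure above (a polarizer only lets roughly half of them through, but that doesn’t change the order of magnitude).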
For what it’s worth, I don’t think this argument works. I don’t think that utilitarians have especially significant reasons to wear sunglasses. But the reason why it doesn’t work is non-obvious—on the face of it, if a mainstream view of quantum physics is right, going to the beach generates more utility than doubling everyone’s happiness would have if that view of quantum physics were wrong.
Every remotely plausible view about population ethics will say that wearing these sunglasses is good, assuming the world has positive utility. It is good to generate entire worlds full of value—full of thriving, happy relationships. If you think new universes are bad, though, just invert the calculation, and the result is that it’s extremely wrong to go to the beach, which is just as weird.
But you could make the same argument as Caplan makes here against the idea that it’s important to create happy people. We know that Caplan wouldn’t accept this argument—he’s an avowed pro-natalist. But one could say something like the following:
Lots of people claim that they care a great deal about creating happiness. But none of them have seriously investigated whether it’s possible to easily create 10^18 worlds every second, generating mind-boggling amounts of happiness. The fact that they’re hypocrites and that they don’t feel any great guilt about failing to go to the beach means that these people aren’t actually very serious about the goodness of creating happy people. They like virtue signaling and talking about philosophy, but none of these people are actually looking for effective ways to create lots of universes. They say creating happy people is good, but they’re not even trying to create 10^18 universes per second, which means they don’t actually take it that seriously.
The correct response to this has several parts.
First, people who think creating happy people is good shouldn’t actually think that going to the beach is of deep cosmic importance. This response is similarly open to the person who thinks you shouldn’t cause lots of unnecessary suffering but thinks that it’s okay to drive your car.
Second, even if this were right, it would not be an objection to the view. Yes, it seems weird that going to the beach is more important than saving hundreds of lives. But the reason it seems weird is that we generally think that going to the beach is not super valuable—if it turns out that every time you go to the beach you generate more joy than has ever existed before in the history of our world, then that changes the calculation. When the world is weird, our moral verdicts should sound weird—plausible moral judgments will have weird implications for our world if our world happens to have bizarre features, like driving killing thousands of sentient beings in painful ways or going to the beach creating oodles of good universes. But if you’re convinced by this argument against the claim that beaches disprove the goodness of creating happy beings, then you should also be convinced by the argument that bugs don’t disprove the wrongness of inflicting lots of pain and suffering on others.
Third, the basic principle is more plausible than the counterexamples. That creating lots of valuable worlds is worthwhile is more plausible than the judgment that we have no especially strong reason to go to the beach if that creates lots of valuable worlds.
Fourth, we have some reason to distrust our intuitions about going to the beach. There’s social desirability bias at play here—no one thinks that it’s socially desirable to jettison charity in favor of the beach, even if that does more good. There’s also the absurdity heuristic—the reason most people wouldn’t be convinced that people have strong obligations to go to the beach with glasses on is that it sounds weird, and most people are conformists. But it sounds just as weird to say that you shouldn’t drive because it might kill a few bugs.
And with polarized sunglasses in hand, we can stamp out the last vestiges of Caplan’s argument.
The argument from hypocrisy is equally unconvincing
Caplan’s argument from hypocrisy claims that if the espousers of some view don’t do what they would do if they took the view seriously, that means they don’t actually believe the view, and if not even its proponents believe the view, you shouldn’t either. Notably, I don’t think that Caplan’s argument from hypocrisy applies to a lot of people—Dustin Crummett, for example, runs an insect welfare organization, and lots of EAs take insect welfare seriously. Huemer has written a book about the wrongness of eating meat, and he doesn’t eat meat, which is what one would expect him to do if he thought eating meat was wrong. I donate to effective animal charities, write articles where I encourage people not to eat meat, and abstain from meat myself.
But perhaps there are things that Huemer and I could be doing better. Perhaps seriously investigating whether driving is worth it would be a better use of our time. I don’t think this is plausible, but let’s grant that we’re all filthy hypocrites for the sake of argument.
Even if we are hypocrites, the claim that we don’t really believe that eating meat is wrong is very implausible. I write articles saying that factory farming is the worst thing ever—I annoy the people around me talking about the horror that is factory farming. I don’t eat animals and I express horror at the fact that others do. What’s the story here? Am I just pretending? That doesn’t seem at all plausible.
And what’s the deal with Huemer? He’s written a book about the horror of factory farming, claiming that it’s the worst thing ever. He’s said that it’s the most obvious issue ever, perhaps after the truth of moral realism. Is Huemer just pretending?
I think Caplan here would probably give an analogy to revealed preferences. You might think you prefer A to B, but if you always pick B over A, then that shows that’s not your true preference. You think you prefer A to B, but you’re wrong.
But I don’t think that revealed preferences are a good way of gauging moral beliefs. It’s really hard to spend all of one’s time trying to make the world a better place. So generally people will only do some of the things they think are worth doing, ignoring the others. If some view implies that you should do A, B, and C, but you only do A, that doesn’t mean the view is false, it just means that you have limited time and mental energy to invest in investigating the weird implications of your views.
Now, perhaps there are ways I could be helping animals a lot more. Perhaps if I were sufficiently moral, I’d do those things. But if this is the claim, then the argument from hypocrisy is extremely underwhelming. If the argument is “you claim that X is severely wrong, and you do a lot to stand up against X, but you could be doing a bit more to stand up against X, which means you don’t actually think X is bad,” then it’s extremely unconvincing. We have no reason to think that if people are convinced of a moral claim, they’ll be maximally efficient instruments in the pursuit of their morality; everything we know about human nature says the opposite. So this gives us no reason to think that it’s sometimes okay to cause others to suffer a lot for the sake of small benefits. This is especially true if the things that Caplan thinks we’d do if we actually took Huemer’s principle seriously are really weird things that require never driving anywhere and withdrawing from industrial life altogether. An argument of the form “if you really believed that, you’d be a hermit, but you’re not, so you don’t believe the things that you write lots of books and articles about, and that you send your friends lots of angry text messages about” is very unconvincing.
Revealed preferences show what people care about, but they don’t reveal people’s moral beliefs. There are lots of people who think eating meat is wrong but don’t do anything about it. Given this, the mere persistence of hypocrisy does not show that people lack some particular moral belief—it just means they don’t follow their morality in all circumstances.
And finally, polarized sunglasses are really the nail in the coffin. Lots of people, Caplan included, think that it’s important to create happy people. But none of them have investigated whether one should spend all their time smashing their polarized sunglasses against the nearest light source. But I don’t think this means that Caplan doesn’t take pro-natalism seriously—it just means that either
He thinks that the basic argument for polarized sunglasses shouldn’t even appeal to pro-natalists;
He doesn’t take his morality that seriously when it gives him weird, radically life-altering verdicts;
He’s a bit of a hypocrite;
Or he mostly ignores these weird implications, as a result of lots of biases and heuristics that are at play.
But the same responses are available to the person who disagrees with the hypocrisy argument for eating meat.
The conscience argument goes the way of the labor theory of value, the modal ontological argument, and Caplan’s other two arguments
Caplan’s argument from conscience is something like the following.
1. The main argument for veganism implies that it’s wrong to inflict lots of pain and suffering on others for small benefits.
2. But this implies that it’s wrong to drive and squish bugs.
3. But there are conscientious vegans who wouldn’t squish bugs if it were really wrong.
4. But those conscientious vegans do squish bugs.
5. So squishing bugs isn’t really wrong.
6. So the main argument for veganism is wrong.
Now, for reasons I’ve already explained, I think 2 is false. Most insects live horrible lives, so preventing them from having a million new offspring is good, or at least of deeply uncertain expected value.
But the real problem is with 3. I don’t think that ethical conscientiousness applies across the board. Lots of people who are ethically heroic in one domain are amoral in another. There are very plausible explanations of why vegans wouldn’t take insects’ interests seriously even if their ethical principles implied that they should. Insects are small, die constantly, are killed constantly by even ordinary action, etc. For the basic reasons explained before, even conscientious people would be unlikely to shut down their lives to speak for the bees.
And like the argument from hypocrisy, I think the argument from conscience is disproved by polarized sunglasses. We can construct an equally convincing, false, parallel argument:
1. Pro-natalism implies that you should bring people with good lives into existence.
2. But this implies that one has a strong moral obligation to go to the beach while wearing polarized sunglasses.
3. But there are conscientious pro-natalists who would go to the beach with polarized sunglasses if it were really obligatory.
4. But they don’t go to the beach with polarized sunglasses.
5. So pro-natalism isn’t right.
I think that this argument is just as (un)convincing. Most people don’t follow their moral reasoning down weird rabbit holes, but this doesn’t say anything about their arguments or about whether they believe them. In a conflict between people’s ethical reasoning and their actions, you should believe their reasoning over their actions—most people are hypocrites, and you’d expect people to be biased in favor of their actions, so if abstract reasoning is enough to overcome their strong desire to defend themselves, you should take their reasoning seriously, not dismiss it on grounds of alleged hypocrisy. And when, in this case, the abstract reasoning just involves appealing to the world’s most self-evident principle—that you shouldn’t cause others to suffer immensely for small benefits—it should be taken seriously. If we took Caplan’s reasoning seriously, we’d never conclude that widely shared moral practices are wrong, for the principles that we appeal to in doing so often imply things that we don’t accept.
If you liked this post, please share it and give it a like. This increases the number of people who read it and makes me happy!
Correction: When quoting Caplan’s argument from conscience, I very stupidly quoted the version that, he says, “seems laughable,” instead of the “fleshed-out version.” This was a stupid error—I think I just copied and pasted the first one by accident and forgot to check. Sorry!