The Moral Risk Argument Against Meat Eating
Eating meat might be like killing lots of people, even if the animals are treated well
I think meat eating is almost certainly wrong. But worryingly, it might be much worse than even I think.
Lots of people argue that eating meat is about as bad as killing and eating a random human—or hiring an assassin to kill a random severely cognitively impaired human so you could eat them. These people’s arguments are not crazy. Many animals have roughly the cognitive capacities of a baby or a severely cognitively impaired person, so if species itself isn’t morally relevant, eating them is probably roughly as bad.
Now, I’m a weird utilitarian, so I don’t really think animals have rights. I think that meat eating is only wrong because it causes lots and lots of suffering for comparatively minor benefit. But I’m not certain of that view—a view rejected by lots of smart people. For this reason, I think there’s at least a 1% chance that the animals that we eat have rights at roughly the level that mentally disabled people do (maybe 10% chance that anyone has rights, and then conditional on that, I’d give around a 60% chance to animals having rights comparable to certain cognitively disabled humans).
The average meat eater eats about 7,000 animals over a lifetime, 2,500 of which are land animals. Let’s be very generous to the meat eater and ignore the fish—as people always seem to—and just look at the harms to land animals. We’ve assumed very conservatively that there’s a 1% chance that an animal has morally significant rights roughly as weighty as those of a severely disabled person. That means eating meat over the course of your life gives you about a 1% chance of doing something as bad as killing and eating 2,500 severely cognitively disabled people (note: when you eat an animal, it causes about one extra animal death).
A helpful way to see how bad something is—potentially subject to some rare exceptions—is to multiply the probability that you’re doing something bad by how bad it would be. For instance, if there’s a 50% chance that you kill two people, that’s as bad as killing one person. If we apply this here, we get that meat eating, just for moral risk reasons, is about as bad as killing 25 people—even if we ignore fish (not that we should).
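The arithmetic above can be sketched in a few lines. This is a minimal illustration, not part of the original post: the figures (2,500 land animals, 1% credence) come from the argument, and the function name is my own.

```python
# Expected-badness arithmetic from the moral risk argument.
# Each animal death counts as a full killing with probability p_rights.

def expected_killings(animals_eaten: int, p_rights: float) -> float:
    """Expected number of rights-bearing beings killed."""
    return animals_eaten * p_rights

# 2,500 land animals over a lifetime, 1% credence they have rights:
print(expected_killings(2500, 0.01))      # 25.0 -- as bad as killing 25 people

# Even at a minuscule 1/2500 credence, the expected toll is one person:
print(expected_killings(2500, 1 / 2500))  # ~1.0
```

The same function covers both the 1% case in the paragraph above and the 1/2500 case in the next one; the whole argument is just this one multiplication at different credences.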
Even if you’re really, really confident that a mentally disabled human has more rights than a non-human animal, and think there’s only a 1/2500 chance that animals have the same sorts of rights, then eating meat over the course of your life is about as bad as killing one person. Surely if, over the course of the average meat-eater’s life, they would accidentally kill and eat a person (human flesh looks like chicken, you know), then eating meat would be wrong! So even on these extremely generous assumptions, which wholly ignore the suffering aspect, it’s wrong to eat animals.
This doesn’t rely on any controversial notion of expected value reasoning. If doing something over the course of your life creates a non-trivial chance that you’re doing something ~150 times worse than what Jeffrey Dahmer did—such that on average it makes you worse than Jeffrey Dahmer—then you obviously shouldn’t do it.
The only way around this is to be almost totally certain that animals don’t have the same rights as mentally enfeebled humans. But it’s hard to see what could justify this. Those who argue for animals rights don’t make crazy arguments. They note that humans have rights and there are many animals who possess similar capacities to certain humans. If rights have to do with what capacities a person has—surely not a crazy notion—then as long as we think that it would be wrong to eat even the cognitively impaired, we must think the same about animals.
What is it that gives rights to humans and not animals? It can’t be intelligence, for some animals are more intelligent than some humans. It can’t be species, because species doesn’t seem morally relevant (if you transferred a human brain into a cow’s body, it wouldn’t then be okay to kill and eat it—also, something extrinsic like species doesn’t seem relevant to a being’s intrinsic moral worth). There really isn’t a plausible candidate for such a trait. Even if one makes sense to you, you shouldn’t be certain that your trait is right. To get around the risk argument, you must be confident at above 99% odds that animals don’t have rights. But it’s hard to imagine that you could be that confident, especially when the position is rejected by many smart and competent philosophers, and there’s little agreement in this area.
Remember, if these people are right, you are much worse than Jeffrey Dahmer. You are much worse than Ted Bundy (less blameworthy, but doing a lot more killing). You are worse even than the most prolific serial killers in American history. You have to be extremely confident, at well above 99.9% odds, to justify not taking such concerns seriously. You should not risk doing something much worse than the things done by the worst serial killers in history for the sake of the taste of steak.
This is not a case of buying Pascal’s wager. It’s not about taking seriously a vanishingly low probability. It’s about taking seriously the risk, supported by powerful philosophical argument, that you might be killing lots of people—doing something every year as bad as a school shooting. This is especially worth taking seriously given that the bias toward discounting low risks, combined with self-interested bias, will likely lead you to underestimate the strength of the case.
If playing with a bomb had a risk of detonating and killing everyone in your neighborhood, you shouldn’t do it. If eating meat has a moral risk of killing lots of beings that it’s super wrong to kill, you shouldn’t do it either!
And the flesh you so fancifully fry
Is not succulent, tasty or kind
It's death for no reason
And death for no reason is MURDER
—The Smiths “Meat is Murder”
I am always quite skeptical of these sorts of moral risk arguments, as they seem to have completely untenable consequences. Surely you are not 100% certain that a strong form of deontology is correct, one where duties have lexical priority over utility. Well, in that case you should just adopt that view in your practical reasoning, since any non-zero chance of such a view automatically trumps any considerations of utility. You say that this isn't about taking seriously vanishingly low probabilities, but I just don't see why we shouldn't. What reason is there for taking seriously pretty low probabilities, but not vanishingly low ones? What is the cutoff point beyond which we should no longer take things seriously? If we take the view you propose, then not acting according to the lexical-priority deontological view just seems like choosing to be irrational and ignoring that view without good grounds for doing so.
I am not completely sure what model to use instead, but I tend towards something like this: We reason in two steps. First we reason on the normative-ethical level, and figure out which view is most plausible. When we have decided on a view, we go to the practical-ethical level and figure out what the chosen theory tells us to do.
To use a less extreme example: Suppose you are in the footbridge variation of the trolley problem. You have an 80% credence in utilitarianism and a 20% credence in some sort of deontology on which pushing the man is seriously wrong (worse than letting 150 people die, or something). It looks like you should still push the man, given your commitments, since you are quite certain of utilitarianism, even if it would be very wrong conditional on deontology. But on your view, you should almost certainly not push the man. This seems to hold for almost all ethical dilemmas: you should act as a deontologist, even if you are quite certain that utilitarianism is true.
If we instead take my view, we first figure out our normative-ethical credences, which are 80%/20%. From here, we figure out what each view tells us to do. Let's just say that given utilitarianism, you are 90% sure you should push, and 10% you shouldn't. And on deontology, you are 80% sure you shouldn't and 20% you should (these are just made up figures). We then calculate your actual credence that you should push like this:
P(Utilitarianism)*P(Push|Utilitarianism)+P(Deontology)*P(Push|Deontology)
Given the numbers I made up, this works out to: 0.8*0.9 + 0.2*0.2 = 0.76. So you should be 76% sure you should push. This seems much more reasonable, I think.
Sorry, but you still wouldn't be worse than a serial killer, because those serial killers also ate meat. Say you eat x animals' worth of meat per year and kill 0 humans. Jeffrey Dahmer also ate roughly x animals' worth of meat per year and killed maybe 2 humans per year. x + 2 > x for all values of x, and I don't think most serial killers have abnormally low meat consumption.