Rebutting Every Objection To Giving To The Shrimp Welfare Project
The argument from marginal crayfish
I recently wrote an article making the case for giving to shrimp welfare. Lots of people were convinced to give (enough to help around half a billion shrimp avoid a painful death), but because the idea sounds silly, many people on Twitter, Hacker News, and various other places made fun of it, usually with very lame objections, mostly just pointing and sputtering. Here, I’ll explain why all the objections are wrong.
(Note: if any of the people making fun of shrimp welfare want to have a debate about it, shoot me a DM.)
Before I get to this, let me clarify: this is not satire. While shrimp welfare sounds weird, there’s nothing implausible about the notion that spending a dollar making sure that tens of thousands of conscious beings don’t experience a slow and agonizing death is good. If shrimp looked cute and cuddly, like puppies, no one would find this weird.
The basic argument I gave in the post was relatively simple:
1. If a dollar given to an organization has a sizeable probability of averting an enormous amount of suffering and averts an enormous amount of expected suffering, it’s very good to donate to the organization.
2. A dollar given to the shrimp welfare project has a sizeable probability of averting an enormous amount of suffering and averts an enormous amount of expected suffering.
3. So it’s very good to give to the shrimp welfare project.
The second premise is very plausible. A dollar given to the shrimp welfare project makes painless about 1,500 shrimp deaths per year—probably totaling around 15,000 per dollar. It looks like the marginal dollar is even better, probably preventing around 20,000 shrimp from painfully dying. The most detailed report on the intensity of shrimp pain, from Rethink Priorities, concluded that, on average, shrimp suffer about 19% as intensely as we do, and as I’ve argued recently at considerable length, that’s probably an underestimate. This means that the average dollar given to the shrimp welfare project averts about as much agony as making painless ~2,850 human deaths, and the marginal dollar probably averts as much agony as making painless ~3,800 human deaths.
If a dollar made it so that almost 4,000 people were spared an excruciating death by slow suffocation, that would avert an extreme amount of suffering. But that’s the marginal estimate of how much agony a dollar given to the SWP averts. Even if you think shrimp agony only matters 1% as much as human agony, a dollar is still about as good as making painless roughly 200 human deaths (20,000 shrimp × 1%). So even by absurdly conservative estimates, it prevents extreme amounts of suffering.
The main objection anyone gave to premise 2 was that the Rethink Priorities report is too handwavy and that it’s hard to know whether shrimp feel pain at all. As I’ve argued recently, we should think it’s very likely that shrimp feel pain, and quite likely that they feel intense pain. But even if you’re not sure whether they feel pain or how much pain they feel, a low probability that they feel intense pain still makes giving to the shrimp welfare project extremely high expected value. If you think there’s a 20% chance that they feel intense pain and that the 19% estimate is too high by a factor of 10, a dollar given to the shrimp welfare project still averts as much agony as giving painless deaths to 76 humans.
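To make the arithmetic above explicit, here’s a minimal back-of-the-envelope sketch in Python. The helper function and its parameter names are mine, invented for illustration; all the constants are the post’s own estimates, not independent data:

```python
# Back-of-envelope expected-value arithmetic for a dollar given to the
# Shrimp Welfare Project, using the figures quoted in the post.

SHRIMP_PER_DOLLAR_AVG = 15_000       # average painful deaths made painless per dollar
SHRIMP_PER_DOLLAR_MARGINAL = 20_000  # estimate for the marginal dollar
PAIN_INTENSITY = 0.19                # RP estimate: shrimp suffer ~19% as intensely as humans

def human_death_equivalents(shrimp_per_dollar, intensity, p_intense_pain=1.0):
    """Expected painful human deaths averted per dollar, given an intensity
    weight and a probability that shrimp feel intense pain at all."""
    return shrimp_per_dollar * intensity * p_intense_pain

# Headline numbers from the post:
avg = human_death_equivalents(SHRIMP_PER_DOLLAR_AVG, PAIN_INTENSITY)            # ~2,850
marginal = human_death_equivalents(SHRIMP_PER_DOLLAR_MARGINAL, PAIN_INTENSITY)  # ~3,800

# Heavily discounted case: only a 20% chance shrimp feel intense pain,
# and the 19% intensity figure too high by a factor of 10.
skeptical = human_death_equivalents(SHRIMP_PER_DOLLAR_MARGINAL,
                                    PAIN_INTENSITY / 10,
                                    p_intense_pain=0.2)                         # ~76
```

The point of writing it out this way is that you can plug in your own discounts: even stacking two aggressive ones, the skeptical case still comes out at dozens of human-death-equivalents per dollar.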
Saying “we don’t know how much good this does, and it’s hard to be precise, therefore we should ignore it” is deeply illogical. The fact that we don’t know precisely how much good something does doesn’t mean we shouldn’t try to quantify it. It’s more rational to rely on rough estimates than to ignore all estimates and then make fun of people who use estimates to justify funding things that sound weird.

People also objected by suggesting that lots of small pains don’t add up to extreme agony. But I already addressed that in the post: first, I’m doubtful of the ethical claim, and second, even if lots of tiny bads don’t add up to one extreme bad, shrimp painfully dying is most likely above the threshold of mattering significantly. It’s at least likely enough to be above the threshold that preventing it has very high expected value. If a shrimp painfully dying is on average 19% as painful as a human painfully dying, then preventing it is a very good bet.
The main objections have been to premise 1, which says that it’s good to spend a dollar if it has a sizeable chance of averting huge amounts of pain and suffering and averts a large amount of expected pain and suffering. The main counterargument people gave has been simply reiterating, over and over again, that they don’t care about shrimp.
Here’s how I see this. Imagine someone was savagely beating their dog to the point of near death because they don’t consider their dog’s interests. You argue they should stop doing this; dogs are capable of pain and suffering, so it’s hard to see what justifies mistreating them so egregiously. It would be wrong to hurt a human with dog-like cognitive capacities, so it should also be wrong to hurt a dog. “You don’t understand,” they reply, “I don’t care about dogs at all. I would set them on fire by the millions if it brought me slight happiness.”
Merely reiterating that you have some ethical judgment is not, in fact, a defense of the ethical judgment. If someone gives an argument against some prejudice, simply repeating that you have the prejudice is not a response. In response to an argument against racism, it wouldn’t do for a racist to simply repeat “no, you don’t understand, I’m really racist—I have extreme prejudice on the basis of race.”
In my article I argued:
When you consider the insane scale of effectiveness, giving to the SWP is not that counterintuitive. If there were 20,000 shrimp about to be suffocated in front of you, and you could make their deaths painless with the dollar in your pocket, that would seem a good use of a dollar.
Intuitively, it seems that extreme suffering is bad. When we reflect on what makes it bad, the answer seems to be: what it feels like. If you became much less intelligent or found out you were a different species, that wouldn’t make your pain any less bad. But if what makes pain bad is how it feels, and shrimp can feel pain, then shrimp suffering matters. Ozy has a good piece about this, reflecting on their experience of “10/10 pain—pain so intense that you can’t care about anything other than relieving the pain”:
It was probably the worst experience of my life.
And let me tell you: I wasn’t at that moment particularly capable of understanding the future. I had little ability to reflect on my own thoughts and feelings. I certainly wasn’t capable of much abstract reasoning. My score on an IQ test would probably be quite low. My experience wasn’t at all complex. I wasn’t capable of experiencing the pleasure of poetry, or the depth and richness of a years-old friendship, or the elegance of philosophy.
I just hurt.
So I think about what it’s like to be a chicken who grows so fast that his legs are broken for his entire life, or who is placed in a macerator and ground to death, or who is drowned alive in 130 degrees Fahrenheit water. I think about how it compares to being a human who has these experiences. And I’m not sure my theoretical capacity for abstract reasoning affects the experience at all.
When I think what it’s like to be a tortured chicken versus a tortured human—
Well. I think the experience is the same.
There’s a long history of humans excluding others that matter from their moral circle because they don’t empathize with them. Thus, if you find yourself saying “I don’t care about group X in the slightest,” the historical track record isn’t kind to your position. Not caring about shrimp is very plausibly explained by bias—shrimp look weird and we don’t naturally empathize with them, so it’s not surprising that we don’t value their interests.
It’s very unclear what about shrimp is supposed to make their suffering not bad. This is traditionally known as the argument from marginal cases (or, as it were, from marginal crayfish); for the criteria that are supposed to make animals’ interests irrelevant, if we discovered humans having those traits, we would still think their interests mattered.
People proposed a few things that supposedly make shrimp pain irrelevant. The first was that they were very different from us. Yet surely if we came across intelligent aliens of the sort that occur in fiction that were very different from us, their extreme suffering would be very bad, and it would be wrong to egregiously harm them for slight benefit. Whether something is similar to us seems morally irrelevant. It wouldn’t be justified for aliens very different from us to hurt us just because we’re different.
Second, people proposed that shrimp are very unintelligent. But if there were mentally disabled people who were as cognitively enfeebled as shrimp, we wouldn’t think their suffering was no big deal. How smart you are doesn’t seem to affect the badness of your pain; when you have a really bad headache or are recovering from surgery, the badness of that has nothing to do with how good you are at calculus and everything to do with how it feels.
Third, people proposed that what matters is that shrimp aren’t our species. But surely species is morally irrelevant. If we discovered that some people (say, Laotians) had the same capacities as us but were aliens and thus not our species, their pain wouldn’t stop being bad.
Fourth, people proposed that what matters morally is being part of a smart species. But if we discovered that the most mentally disabled people were aliens from a different species, their pain obviously wouldn’t stop being bad. How bad one’s pain is depends on facts about them, not about other members of their species (if it turned out that humans were mostly about as unintelligent as cows, but the smartest ones had been placed on Earth, the pain of mentally disabled humans wouldn’t stop being a big deal). The reason the pain of mentally disabled people is bad has to do with what it’s like for them to suffer, not with other members of their species—if an alien came across mentally disabled people or babies, it wouldn’t need to know how smart other people are to decide whether it would be bad to hurt them.
Even if you’re not sure that pain is bad because of how it feels, rather than something about our species, as long as there’s even a decent probability that it’s bad because of how it feels, the shrimp welfare project ends up being a good bet.
The last objection, and potentially the most serious, is that money given to the shrimp welfare project is very valuable but less good than other charities. Doing a detailed cost-benefit analysis comparing the SWP to other animal charities is above my pay grade, though I mostly agree with Vasco’s analysis. So I’ll just explain why I think the shrimp welfare project is better than longtermist organizations. Longtermist organizations are those that try to make the future go better—the argument for prioritizing them is that the future could contain so many people that the expected value of longtermist interventions probably swamps other things.
Imagine that you could spend a dollar either on a longtermist organization or on making 4,000 people’s deaths painless. Intuitively, giving the people painless deaths seems better—a thousand dollars would prevent nearly 4 million painful human deaths. At some point, short-term interventions become so effective that they’re worthwhile, given that:

1. It just intuitively seems like they are. There seems to be something obviously wrong about giving 1,000 dollars to a longtermist org rather than giving painless deaths to 4 million people.
2. Preventing tons of terrible things has lots of desirable long-term ramifications. Perhaps the shrimp welfare project will prevent shrimp farming from spreading to the stars and torturing quadrillions of shrimp.
3. In the future, there might be many simulations run of the past. If this is right, then a past in which lots of shrimp farming was going on will cause unfathomable amounts of suffering that scales with the size of the future. Similar longtermist swamping considerations apply here. There are lots of other speculative ways that preventing bad things like painful shrimp deaths can have far more benefits than one would expect.
4. One should have some decent normative uncertainty. For this reason, “prevent lots of terrible things from happening” is generally a good bet, given that the shrimp welfare project is thousands of times more neglected than longtermism.
Over time I’ve come to think it’s less obvious that the future is good in expectation. We might spread wild animal suffering across the universe and inflict unfathomable suffering on huge numbers of digital minds. I’d still bet it’s good in expectation, but it makes it less of a clear slam dunk.
For this reason, until convinced otherwise, I’m giving to the shrimp.
Every year, around 25 trillion wild shrimp are killed: https://rethinkpriorities.org/publications/shrimp-the-animals-most-commonly-used-and-killed-for-food-production.
Please check out https://www.abolitionistapproach.com/about/the-six-principles-of-the-abolitionist-approach-to-animal-rights/.
The welfarist approach to animals is not good. It will promote even more animals being killed, because it makes people feel that animal agriculture is morally justified. We should focus on animals' right not to be treated as a commodity.
Your writing in the SWP sequence has convinced me to roughly double my SWP allocation to around 1/6 of my annual donations. I have two problems, though.
One is direct. The numbers look *really perverse* if you run them in the other direction. To wit: it's morally good to make one human's death agonizing if you can thereby stop a small trashbag of shrimp from being dumped onto ice. Even if it's two or three humans, it's probably still net good. I find that really hard to swallow! Even if "a small trashbag" is replaced with "a shipping container," boy would I not feel justified murdering someone to stop them from brutalizing all those shrimp. You have to get up to lots and lots of shrimp before I can even entertain this. (A similar objection applies to other measures of animal-suffering moral worth, but those numbers feel closer to intuitively workable.)
The second is: in general, if invertebrates have nontrivial moral worth, and if you aren't allergic to population ethics, then it is incredibly morally impactful to increase or decrease the number of invertebrates that will live. Good if invertebrate lives are mostly good, bad if they're mostly bad.
If that's so---if the planet is crawling with a quintillion bundles of mute agony, then this world is a horrid carnival, a concentrated universe of *pain*, the worst thing in the galaxy, a torture chamber, and the kindest thing you could do would be: destroy it. The best thing we've done so far as humans is decimate the biosphere, and the best thing we could do---before we jet off to another planet, or before we wink off ourselves---would be to blow the whole thing up, to prohibit life from miserably teeming ever again.
And I don't buy this! I think the existence of the world is good. I think the existence of the natural world is good. I think blowing up the planet---even if we learned "yep, no more progress, humans go extinct soon but the biosphere rolls on in its way until the sun goes boom, just life as normal for another billion years"---I still think blowing up the planet is, to put it lightly, supervillain shit. But under the view "invertebrate life matters and is mostly bad," it would be the only really big good thing ever done.
This is something I wonder about with Brian Tomasik in particular. For him, is nature a horrid mistake? Must it be abolished? And if we'll never have the power to David-Pearce-out everything that lives, should we pull the plug?
This is implausible to me. I can *slightly* more believe the view "oh, bug lives are good, so we should step back as humans to let more bugs live"---but only slightly. And it seems to me that any view saying "bugs (I got tired of typing 'invertebrates,' so pretend they're synonyms) are significant moral patients, plus population ethics is not forbidden" goes one way or the other. (Forbidding population ethics still makes things weird.)
What's the solution? Is there none? Is the secret slogan of Utilitarianism, "abolish nature by any means necessary"?
I don't think so! Even if there's a real "nature bad" view among the tescreal-y (yes, I know, I know) parts of the blog world, the significant attention given to factory farming (rather than the way more tractable "pave over the Amazon") makes me think that isn't the mainstream "most important thing."
Mostly I'm confused. What do you believe? What do y'all believe? Have I erred in these inferences?