>They respond to anesthetic, make tradeoffs between pain and reward, get addicted to drugs, display anxiety, and so on.
So do bacteria. Bacteria that absorb anesthetic will have some characteristic reaction to it due to interactions between the bacteria's and the anesthetic's chemical properties. They will move away from stimuli that attempt to breach their membrane, which is similar to getting stabbed. They will display addiction-like behavior toward substances they can beneficially absorb. Anxiety is a bit more awkward to picture, but you can imagine a bacterium oscillating back and forth between two sets of stimuli under a microscope, characteristic of indecision. With these loosely construed analogies we can prove that anything feels pain, because we're not actually discovering the building blocks of pain; we're wantonly extending manifest-level pain-behavior metaphors to weird circumstances and taking it for granted that the building blocks of pain are present there, without doing the work of actually figuring out whether they are.
I think this is a very fair point to push on, but it's worth saying that one very respectable philosophical theory of what pain *is* is an internal state that is caused by bodily damage and causes the superficial behaviours stereotypically associated with pain. This is just the original version of functionalism from the 50s and 60s as applied to pain, and in my view that original superficial version of functionalism has never really been refuted. And it fits better than the "must match in detail what the human brain does when we're in pain" definition with the (to me) obvious fact that there could be intelligent aliens, with very different internal cognitive information-processing architectures from humans, who nonetheless still felt pain.
Probably the superficial functionalist definition of pain is still not quite right, because to count as pain a state must be "conscious", and so in addition the internal state must have the right properties to count as conscious. But then, while it's not clear that the shrimp internal states that cause pain behaviour meet those criteria, it's not clear that bacterial states do either. So whilst requiring consciousness weakens the case for shrimp pain, it also weakens your objection that if shrimp meet the pain criteria, so will bacteria.
>If I was given the choice between preventing a human from experiencing some painful experience or preventing some number of shrimp from experiencing that painful experience, I’d be indifferent at about six shrimp, assuming it would leave no lasting trauma. If the experience would leave the humans with lasting trauma, then it would be some bigger number—it’s probably worse, for instance, to sexually abuse one human than even hundreds of shrimp.
I’m sorry, but this is absolutely insane, to the point where unfortunately I am starting to question the rest of your logic and seriousness. If you think that the suffering of 1 human is equal to that of 6 shrimp, you have simply lost the plot. As annoying and silly as some of Lyman Stone’s arguments are, this is way worse than anything he’s said. You do yourself and your cause a disservice by making this claim, because it will alienate 99% of reasonable people who would otherwise be sympathetic to arguments for taking animal suffering more seriously.
The trauma caveat does a lot of work. But I really would prefer to save 6 shrimp from being flicked by a rubber band pretty hard than save a human from the same.
Bentham, if I accepted the worldview implied by this piece, I'd have to grant that the man who rapes my sister yet donates a dollar to the shrimp welfare project is morally superior to me, who has not donated a dollar to the shrimp welfare project.
Presumably you care about getting people to donate to the shrimp welfare project. It might be to your rhetorical advantage not to explore some of the most insane implications of the argument, because they make it impossible not to reject the argument out of hand, even if it makes perfect sense. In fact, it may be morally wrong of you to write this article with less-than-maximally-effective rhetoric, considering all the shrimp suffering that would have been averted by the people who would have been convinced by a piece that didn't establish a moral exchange rate between shrimp suffering and rape.
You'd also have to grant that the man who rapes your sister but donates $3000 to a malaria net charity is morally superior to you, all else being equal, under the worldview propagated by Effective Altruism/Peter Singer. But those views are clearly very influential and attractive to lots of people. So I'm not sure that the existence of this kind of conclusion is likely to drive people away.
Well, this kind of conclusion isn't usually mentioned, is it? I'm sure you could quite effectively drive people away from EA by pointing out the ways its ethics can seem to redeem rapists on the basis of small charitable expenditures. I'm arguing that it's a rhetorical blunder to even mention this, because of how impossible it is for a person to believe, how contrary to all of their intuitions it runs, how infuriating it is.
I mean, that's just a problem with any moral philosophy in which rape can be offset by a sufficient magnitude of good deeds. The solution is not to deny that good deeds are actually good, but to stop caring about whether someone is "morally superior" and to care instead about actual results and hard numbers.
I'm not denying any of that, but mentioning the comparative moral weight of rape is supremely bad politics. If Bentham and the EAs are trying to lead people down the moral path, they shouldn't inflame their emotions by claiming that the things people care supremely about (sexual assault) matter less than the things they couldn't care less about (shrimp welfare), even if there are no flaws in the reasoning that leads there.
I'm a flawed and emotional human being who can't help but think of moral agents as "better" and "worse" than each other, and as soon as I think of the sister-rapist placidly smiling astride his mosquito net & shrimp stunner empire my whole head just shuts down. Other people will react the same way.
Does anyone actually think like that? It never crossed my mind that one could hear "one can save lives with only a few thousand dollars' worth of mosquito nets" and think of it as a method of redeeming a rapist, nor have I encountered anything similar as a criticism of EA.
No, I agree! No one thinks like this naturally! But Bentham points this out directly in the post, he draws a moral equivalence between shrimp suffering and the suffering involved in sexual assault. And I think that was a bad idea, because it makes people think like this.
What do you mean by “morally superior” here? It’s not a concept that matters too much to most consequentialists. But if it does matter, it’s usually about what we can expect from the person going forward, especially once they become more aware of the consequences of their actions.
The person who hasn’t thought about the consequences of their actions may cause more net harm, but can be expected to improve. But the person who has considered the consequences of their actions but goes ahead with rape anyway can’t be expected to improve, no matter how much donation they’re doing on the side.
I think you’re mistaking the total amount of good someone does for the way we should judge their character. Someone who is very well aware of the harms they are causing is worth judging more harshly than someone who inadvertently causes lots of harm. The person who rapes and donates seems like someone who is considering the harms they do very carefully and choosing to cause them nonetheless. The ordinary person who eats shrimp once in a while just isn’t considering what they’re doing. It’s the difference between someone who intentionally steps on your toe while giving you a birthday present, and the person who accidentally hits you while riding their bike without having got you a birthday present. Most of us would judge the first as being worse even though the second clearly was more of a net negative.
But I've read this piece and the arguments for shrimp welfare and found no fault with them, and yet at the time had not donated to the shrimp welfare project. Most of my friends are in a similar position w/r/t veganism. That's awareness of harm done -- harm done at a distance, to shrimp and animals, but the whole premise of Singer-style utilitarianism is to disregard distance and species in moral calculations. This sort of consequentialism gives one no moral basis to hold us superior to rapists, because more harm is done to the animals we knowingly pay for the deaths of than is done to the rape victims.
The rapist is much more likely to knowingly cause harm *to humans*, but is that a relevant consideration in a utilitarian context?
I have no formal argument against Bentham's conclusions on shrimp and animal ethics, but this project is fundamentally a rhetorical one, and doing differential analysis on harms from sexual assault and harms from shrimp consumption is a poor rhetorical strategy even for an audience of philosophy wonks and utilitarians, i.e., the people who can be convinced by argument to donate to charities.
I generally don’t think it’s a particularly worthwhile project to try to figure out which of two people is better or worse, or to identify people as good or bad, except insofar as this is rhetorically effective at getting people to do better things. I’m attempting to diagnose why we tend to think better of one person than another, but I suspect it involves lots of hard-to-reason-out but easy-to-intuitively-grasp issues about how much people likely feel the force of particular arguments and yet resist them. Someone like you, who has read a lot of these arguments and can’t find flaws in them and yet isn’t motivated to act on them, is not doing something unusually bad, unlike someone who commits a single forceful rape.
I think we agree, actually. My only claim is that Bentham's digression on the moral weight of rape was a poor idea, rhetorically, in this piece about shrimp welfare.
I think that’s right. And I think I was actually replying to someone else in the thread who disagreed.
I also think it’s substantively wrong to be confident about a number like 19% as the fraction of human suffering intensity that shrimp suffering amounts to. And, more importantly, it's rhetorically ineffective.
Curious how you can say we likely live in a morally weird world (agreed) and still identify as a woke conservative. If so many of our institutions are so morally faulty, it seems that our default disposition should be radicalism across the board, or at least radical skepticism.
This presupposes that radicalism consistently makes those institutions better. As the Trump administration has demonstrated, it does not.
>in some cases humans have lost significant numbers of neurons without their experiences becoming much less intense.
It triggers me when you make statements like these that are so imprecise and unmeasurable. What exactly are the units of "experience intensity" supposed to be, and which neuroscientists have studied them and produced conclusive results?
Of course the only experience anybody can “measure” is yours. Anything else is extrapolation from a sample with N=1. There is a pretense of “science” here that is surprisingly anti-philosophical.
https://forum.effectivealtruism.org/posts/3nLDxEhJwqBEtgwJc/arthropod-non-sentience
>Of course the only experience anybody can “measure” is yours.
I deny this, and I reject the idea that metaphysics tells us anything meaningful about the world more broadly. My problem is that the methodology here is somewhere between nonexistent and incoherent, so there's no way that e.g. the Rethink Priorities report approximates pain-feeliness percentages. It's like trying to explain what a program is doing by measuring absurd proxies like the number of lines it contains or how much the CPU exhaust heat warms up the room when the program runs. Absent something like functional decomposition, there won't be anything interesting to learn.
There is no way to infer from external observation whether I, a bat, or even you yourself are conscious. The true experts in consciousness agree, and the most advanced theory to date is axiomatic: integrated information theory.
You postulate, deduce, and compare reported human experience with recorded neurological measures, etc.
Rethink Priorities, on the other hand, is based on checklists.
I completely disagree: their brains are very simple neural networks, and their degree of consciousness is likely to be in the same range as that of electronic devices.
https://forum.effectivealtruism.org/posts/3nLDxEhJwqBEtgwJc/arthropod-non-sentience
All arguments based on behavioral similarity prove only that we all come from evolution: "we are neural networks trained by natural selection. We avoid destruction and pursue reproduction, and we are both effective and desperate in both goals. The (Darwinian) reinforcement learning process that has led to our behavior implies strong rewards and penalties, and since we are products of the same process (animal-kingdom evolution), external similarity is inevitable. But to turn the penalty in the utility function of a neural network into pain, you need the neural network to produce a conscious self. Pain is penalty to a conscious self. Philosophers know that philosophical zombies are conceivable, and external similarity is far from enough to guarantee noumenal equivalence."
Now, regarding how much information is integrated, superadditivity implies that the amount of resources devoted to shrimp should be proportional not to their number but (at most!) to their brain mass:
"As a rule, measures of information integration are superadditive (that is, the complexity of two neural networks that connect to each other is far bigger than the sum of the original networks), so neuron-count ratios (shrimp = 0.01% of human) are likely to underestimate differences in consciousness. The ethical consequence of superadditivity is that, ceteris paribus, a given pool of resources should be allocated in proportion not to the number of subjects but (at most!) to the number of neurons."
>My view is that a shrimp’s life is not very valuable—they’re short and don’t have much that’s good. Likely they have negative lives, so I’d probably save one human over infinite shrimp because it’s likely good when shrimp die. Though, of course, I’d give it a lot of thought before making the decision. But the shrimp welfare project doesn’t save shrimp from death but instead from agony.
That's a really strong point and it cleared up a lot of my former objections to your SWP arguments.
I'm realizing that when people talk about "comparing suffering" between animals and humans my brain immediately jumps to saving a (implicitly largely positive) human life vs. slightly reducing the suffering in an already-awful animal existence -- e.g. the cute girl about to get rammed by a trolley, as opposed to the teeming mass of shrimp on the other lane of trolley tracks. But a better comparison is probably (a) giving a megadose of morphine to a near-comatose elderly person in the final few hours of their life, after they've already suffered for years from a horribly painful disease, or (b) stunning some large number of shrimp before they're frozen. Thinking about it in that light makes the size of the delta in suffering feel more comparable between the (one) human and the (very many) shrimp.
My only concern (and maybe you've addressed this elsewhere) is the question of enabling: does focusing on averting farmed shrimp suffering make it harder to ultimately end (or vastly reduce) factory farming of shrimp generally? To pick a human analogy...suppose an EA time-travels to 1785 Alabama. Should they work with the slavery industry to, for example, use less-painful whips on their slaves? Would this kind of effort make the end of slavery more likely or less likely?
That isn't a gotcha question; I'm genuinely unsure. I think leftists have a good point about "propping up structures of oppression"; on the other hand, I can also see a narrative where the net effect is to raise people's consciousness about the importance of the suffering of others, which would be long-term beneficial.
You may want to look into the work of Gary Francione, who advocates in favor of abolishing (not regulating) animal exploitation. He has spent much of his career arguing that welfare reforms are ineffective and counterproductive. I haven't heard him discuss the Shrimp Welfare Project, but I'm certain he'd be against it. https://en.wikipedia.org/wiki/Gary_L._Francione
The 19% number ("that shrimp suffer about 19% as intensely as humans") comes up a lot. The Rethink Priorities (RP) study estimates across 9 welfare models, giving equal weight to each, and one model in particular greatly skews the numbers.
First, 19% is the mean for shrimp, while the median is 2.9%. The authors seem to weakly prefer the median, but it should be acknowledged that the choice between mean and median introduces a difference in moral weight greater than 6x. No other species in the study has such a large percentage point difference between median and mean, and only silkworms have a larger ratio between mean and median.
I think your response would be that, with 440 billion shrimp killed, a 6x difference doesn't change much. But if you'd be indifferent between one human's suffering and six shrimp's because of the 19% number (1/0.19 ≈ 5.3), using the median instead suggests indifference at about 35 shrimp (1/0.029 ≈ 34.5), in the absence of lasting trauma. This might help you persuade some who recoil at the idea that 6 shrimp are worth one human.
But really, we should care why there's such a big difference between median and mean, and we can see in RP's linked spreadsheet that this mostly comes from one model that weights shrimp experience as *always more intense* than human experience. That's the "undiluted experience" model, which gives a median shrimp:human ratio of 2.246—that is, you should prefer the suffering of two humans to that of one shrimp. This is such an odd result and skews the mean so much that I think using the 19% number requires a positive argument for why the undiluted experience model should be included at all.
This should also weaken your overall confidence in these results—they clearly are not robust to small changes in assumptions or methodology, so they should be considered to have a high chance of being wrong. In particular, it's probably worth first deciding which of the 9 models you think are reasonable, then using only those rather than averaging over all of them.
And on a totally different note, the shrimp estimate is only sort of half about shrimp. Shrimp were not included in the sentience study, so RP uses the crab data for shrimp instead. I get that you've got to use the data you have, but that does introduce another layer of uncertainty into the whole equation.
(RP also reports results for just 8 models, removing the neuron count model. The numbers are slightly different, but everything I've said above applies to the 8-model mix as well.)
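For what it's worth, the mean/median gap described above is easy to reproduce with a toy calculation. The nine per-model ratios below are hypothetical placeholders (RP's actual outputs are distributions, not point values); only the 0.19 mean, 0.029 median, and the outlier-above-1 figures come from the comment above.

```python
# Toy illustration: one outlier model dominates the mean but barely
# moves the median. The nine values are made up, not RP's outputs.
import statistics

model_ratios = [0.01, 0.02, 0.025, 0.029, 0.03, 0.04, 0.05, 0.06, 2.2]

print(statistics.mean(model_ratios))    # ~0.27, dragged up by the outlier
print(statistics.median(model_ratios))  # 0.03, insensitive to the outlier

# Indifference-point arithmetic from the reported summary statistics:
print(round(1 / 0.19))   # 5  -> "indifferent at about six shrimp"
print(round(1 / 0.029))  # 34 -> roughly the 35-shrimp figure above
```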
Not that I disagree that shrimp welfare is a worthy cause, but still: if shrimp welfare were equivalent per capita to 19% of human welfare, shouldn't this mean that welfare states should aggressively shrink their social safety nets and spend that money on shrimp welfare instead?
No, because 19% is an average across nine models of what counts as animal welfare, including the "undiluted experience" model that severely penalizes cognition to get the result that shrimp experience is more intense than human experience. Taken seriously, the undiluted experience model would suggest welfare states should aggressively shrink every program that spends money on humans in favor of whatever conscious beings have even less cognition than shrimp.
What makes a shrimp sentient and a plant non sentient?
Shrimp have brains
I don't think a brain is required for an animal to elicit a pain response.
Certainly there's a chance.
There’s a chance that rocks feel pain.
Rocks aren't alive.
Yes, but so what? There’s still a *chance* as he said. Panpsychism could be true.
> If you were acting behind the veil of ignorance, equally likely to be born into the world as any conscious being, you’d regard shrimp suffering as very significant.
Imagine you’re in Hilbert’s hotel, and there are an infinite number of beds. Every room whose number is a multiple of 1,000,000,000,000 has a bed fit for humans. Every other room has a bed made for shrimp. Now, there are infinitely many of both shrimp beds and human beds, so you would expect that your chances of getting either one are 50/50.
However! You also know that God deeply cares about you, and obviously wants you to have a human-sized bed. You would therefore feel confident going into the hotel that you’ll get a bed made for humans.
So there’s no reason to believe that an all-powerful benevolent God would do something silly like make you equally likely to end up in every room, which would make your odds of getting a very uncomfortable shrimp-sized bed far too high.
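As a side note on the setup: counting over any finite prefix of rooms gives a human-bed fraction of about one in a trillion, so the 50/50 figure comes specifically from comparing the cardinalities of the two infinite sets rather than their densities. A few lines of Python make the finite version explicit:

```python
# Finite sketch of the Hilbert's-hotel setup above: among the first N
# rooms, what fraction have human beds (room numbers divisible by 1e12)?
SPACING = 10**12

for n in (10**12, 10**13, 10**15):
    human_beds = n // SPACING
    print(f"first {n:.0e} rooms: human-bed fraction = {human_beds / n:.0e}")
# Every prefix gives ~1e-12, even though both bed types are countably
# infinite; the 50/50 intuition compares cardinalities, not densities.
```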
What I find interesting is that if God is genuinely all-good and compassionate, as Bentham claims, then why exactly would he create a system of evolution in which creatures did not evolve to be vegan, or were not simply made that way from the very beginning? AFAIK, not even vegetarian Indian castes have evolved to be vegan, at least not historically.
6:1 is an insane ratio for shrimp:human suffering. You would seriously rather torture one human being than *seven* shrimp? That’s completely absurd. I do not believe that you would actually make that decision if you had a choice.
You see a guy tied to a chair, about to have his eyes torn out by ISIS or something, and then in the next room there’s seven shrimp about to get their eyestalks ablated—you’re choosing to save the shrimp? Get out of here. If you’d said six million or something I’d be like “wow this guy takes utilitarianism very seriously.” But six is just unserious.
Given your beliefs that our treatment of shrimp/animals is a moral atrocity several times worse than the Holocaust, that zombies/psychophysical disharmony are possible, and that God exists, it really starts to look like He missed a trick by not suppressing animal pain qualia or otherwise taking steps to improve animal welfare without massively intervening in the causal order.
The world would be immeasurably improved, our moral intuitions would be much more accurate, and we'd still get to enjoy delicious shrimp cocktails and chicken wings. It wouldn't even create the problems that messing with human perceptions does. Everybody wins!
>On Christianity, the near-exclusive determinant of how good a world is is how many people are saved.
I do not want to get in the middle of this particular p*ing match, but this is incorrect. The exclusive determinant of how good a world is, by the standard of Christianity, is the judgment of God.
Presuming to know the mind of God is considered a sin. Christians are expected to do their best to follow the will of God, but to do so with humility and the understanding that people have an imperfect understanding of His will.
> we should expect people to be wrong about subject
I think this should say "about _this_ subject".
Really really enjoyed this article. Great work, Matthew!
A few minor disagreements, though (although it is almost all very well argued and persuasive).
1) "If I was given the choice between preventing a human from experiencing some painful experience or preventing some number of shrimp from experiencing that painful experience, I’d be indifferent at about six shrimp, assuming it would leave no lasting trauma."
I think you have to recognize the radicalness of this claim. The issue I think some people take with this (I don't, but I don't see it as obvious either) is that meta-intuitions (pain is bad) can only fight against object-level intuitions (we shouldn't donate to poor people because of the meat-eater problem) to some degree. At some point, we should either do moral curve fitting (see my blog post: https://irrationalitycommunity.substack.com/p/issues-in-meta-normative-ethics) or reject our meta-principles. This is true, I think, even if we know that meta-principles are less subject to bias (cognitive biases, EDAs, etc.).
When I make the case for shrimp welfare (which I often do), I would not (at least not initially) put all my conversational marbles into the Moral Weights Project, at the very least for optics/pragmatic reasons, if not also for genuine philosophical concerns (I see both!).
2) "Humans evolved from beings very much like shrimp. If we assume each generation has a non-infinite exchange rate with the one before it, then this entails that the exchange rate between human welfare and shrimp welfare is not infinite.”
There definitely are cases in evolution where we would have an infinite exchange rate -- say, the step from non-consciousness to consciousness (which must happen somewhere unless you are a panpsychist or something like that). If one assigns a high enough probability to shrimp not being conscious - which I agree seems implausible based on the data - then we would have an infinite exchange rate. I think you probably buy that this happens at some point in evolution but are unsure at which point.
Overall, though, this is an excellent post (though I didn’t think Lyman’s case was compelling to begin with).
I’d also be interested in seeing you write an article making the case for (or against) digital-minds s-risk being the most important cause area, as I think there are strong arguments in a similar vein to this one (albeit with lower probability and much higher stakes). Similarly for astronomical waste.
There's nothing in this piece that disputes that shrimp welfare is strange. Indeed it's more or less impossible to get humans to care about it because of the strangeness. If we cared indiscriminately about the suffering of conscious beings, we should care that shrimp are being tortured en masse.
But we don't, and we won't.
You're conflating two senses of the word strange. It's strange in that most people would consider it abnormal. But it's not strange in that it follows logically from the premises. (Or so BB wants to say.)
No, he thinks it's deeply strange and that people should care about it in spite of its being deeply strange. He freely admits it's deeply strange, there's a large section of the piece devoted to the argument of why we should not reject deeply strange conclusions about morality. Also, as he states, if he wanted engagement he would write about politics, not shrimp.
Why? You can sort the newsletter's posts by most popular. If you do, you see the first shrimp welfare post (which couldn't possibly have been made with intent to grift, because it was the first) and then numerous political posts. The newsletters on this platform that deal with the antics of the Trump administration are vastly more popular than the wonkish philosophical ones. Arguing at length about shrimp welfare is simply not the way to grift for audience engagement.
Even if it were a ploy to boost online engagement, that wouldn't imply that it's not a worthy cause. Bentham meticulously and painstakingly makes the case for why shrimp have moral value. What is your objection to his argument? Just asserting that it's strange or ridiculous is not an argument.
Not that I disagree that shrimp welfare is a worthy cause, but still: if shrimp welfare were equivalent per capita to 19% of human welfare, shouldn't this mean that welfare states should aggressively shrink their social safety nets and spend that money on shrimp welfare instead?
You wouldn’t need very much money from the welfare state to nearly eliminate inhumane shrimp slaughter.
Yeah probably
ok, phew, the calibre of your response makes me much more confident in the correctness of my views about shrimp welfare