79 Comments

$30 per month to help a shrimp? In THIS economy? I'd have to have the brain of a shrimp NOT to take that offer!

https://www.farmkind.giving/confirmation?id=44cfdc9e-8f37-48d4-b2cc-e3008d25fb15

1 paid subscription, please


Done!


"Verily I say unto you, Inasmuch as ye have done it unto one of the least of these my brethren, ye have done it unto me."

"Not shrimp though. That's cheating."


I think Vasco Grilo’s cost-effectiveness analysis makes some pretty controversial assumptions. Most notably, he assumes excruciating pain is 10,000x as intense as disabling pain. This seems plausibly 2-3 OOMs off to me, and if you use a lower factor, chicken welfare campaigns win out on the cost-effectiveness calculation. E.g., Rethink Priorities uses a lower estimate, having used ~600x recently and even 33x before (the latter seems implausible to me in the other direction), which would change who wins out.

I like the Shrimp Welfare Project a lot and often donate to it (although I’m super uncertain about phenomenal consciousness and think there’s some nonnegligible chance shrimp aren’t conscious). My current guess is that it’s plausibly even more cost-effective to donate to chicken welfare campaigns.
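
For concreteness, here is a minimal sketch of how the assumed excruciating-to-disabling pain ratio drives the comparison. The per-dollar figures are hypothetical placeholders (not numbers from Grilo or Rethink Priorities), chosen only so that the ranking flips somewhere between the ~600x and 10,000x assumptions, as the comment above suggests it can.

```python
# Toy sensitivity check on the pain-intensity multiplier.
# The per-dollar quantities are made-up placeholders, NOT real estimates.
SHRIMP_EXCRUCIATING_MIN_PER_DOLLAR = 10_000    # hypothetical: minutes of excruciating pain averted per $
CHICKEN_DISABLING_MIN_PER_DOLLAR = 20_000_000  # hypothetical: minutes of disabling pain averted per $

def disabling_equivalent(excruciating_minutes: float, multiplier: float) -> float:
    """Convert excruciating-pain minutes into disabling-pain-equivalent minutes."""
    return excruciating_minutes * multiplier

for multiplier in (10_000, 600, 33):  # Grilo-style vs. recent and older RP-style ratios
    shrimp = disabling_equivalent(SHRIMP_EXCRUCIATING_MIN_PER_DOLLAR, multiplier)
    chicken = CHICKEN_DISABLING_MIN_PER_DOLLAR
    winner = "shrimp stunning" if shrimp > chicken else "chicken campaigns"
    print(f"{multiplier:>6}x: shrimp={shrimp:.1e} vs chicken={chicken:.1e} disabling-min/$ -> {winner}")
```

With these placeholder numbers, shrimp stunning wins only under the 10,000x assumption, which is exactly the sensitivity the comment is pointing at.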


Yeah, Grilo’s may be an overestimate. But I think it still wins out even with slight changes to the numbers.


This, on top of the somewhat dubious 20-minute estimate (the best evidence I can find, including SWP's own statements, says 2-3 minutes, not to mention that some of the counterfactual is salt baths), makes welfare campaigns more appealing to me. OTOH, these corporate campaigns take a while to show impact, whereas stunning machines seem quicker.

I'm excited to see SWP's estimates for its welfare campaign work, given that some portion of each marginal donation goes to shrimp welfare campaigns AFAIK.


I think a lot of people have a tribal kind of moral attitude. We all have serious moral obligations, sure, but only to beings in our tribes; or so it is implicitly thought. A being is in my tribe when they are a family member, close friend, and so on. Some people have big tribes; others have tiny ones. In mafia movies, mafia people are depicted as having small tribes and a tribal morality: I have a strong moral obligation towards my family and closest friends, but fuck everyone else. Everyone else should have their own damn tribes that look out for each other. It’s up to them, not us, to defend themselves. Think of *Goodfellas* or *The Godfather*.

If this is psychologically accurate—and all I have is a suspicion that tribal morality is commonly implicitly adopted; I have no real evidence—then I think a lot of people would say we have no moral obligations towards shrimp, no matter how much they suffer.

I’m not saying that tribal morality is right! I’m just speculating about how some people might respond to your post—which was fantastic.


There is but one truly serious philosophical problem and that is sui- I mean, shrimp.

Stuff like this is why I respect utilitarian / EA frameworks despite my Camus sympathies. I wasn't even aware of their sentience, as I conflated crustaceans with mollusks (if I recall correctly, Peter Singer made a point about bivalves); thank you for the post!


Donated $10. I appreciate the empathy for the suffering of another species and form of intelligence.


Awesome!


Great Article. Just donated 🦐


Awesome! I'm trying to figure out how much people gave as a result of the article--if you don't mind me asking, how much?


I will share my honest, emotional, totally unjustifiable reaction to this argument.

I think it would not be a good idea at all to move to the Bay Area, start heavily using psychedelic drugs, join strange "human potential" spiritual communities, and become polyamorous.

But I feel somehow, on a deep irrational level, that accepting this shrimp argument would begin to obligate me to accept that the polyamorous DMT Bay Area scene lifestyle is the optimal (therefore obligatory) way to live life.

I may still donate to the shrimp, but it feels to me like taking one step down a path I don't want to go down and that I have seen end in disaster. Similar feelings (or different but similarly irrational resistances) may be an explanation for why some people seem to reject, dodge, or ignore this argument rather than engaging.


As of an hour ago, I donate $30 per month to shrimp, and I don't use psychedelics, am a committed Christian, and find polyamory vaguely yucky and unromantic. I also have no intention of moving to the Bay Area, though I do need to visit San Francisco, simply because I haven't yet witnessed its art museums.

So, from personal experience, I promise you that donating to shrimp won't be that first catastrophic step down a slippery slope.

Update, one month later: nothing has changed.


Same as Forrest - donated to them, and I've never done drugs in my life, am in France, and I don't even know what "human potential" spiritual communities are.

So rest assured!


Why do you say "/the/ optimal"? You seem to be injecting Plato unnecessarily by assuming that there must be exactly one optimal way for all people to live.


While I strongly believe lexical priority is true, the amount of good being done so efficiently is nevertheless incredible. $30 per month, without question! And a subscription is nice too :)

https://www.every.org/@ibrahim.dagher/farmkind


Through a comedy of errors involving PayPal failing to load I ended up signed up for _three_ recurring donations, then cancelled the two "extra" ones, only to find they are listed in a _different order_ on the two different pages and so the one still going may not be the one where they actually have my payment information. I am pretty sure they are at least getting my $30 donation this month though. And assuming I remember next month I can try again. Can I get a passing grade for effort?


Sure, what's your email (dm me receipts)?


It is unlikely they are more sentient than an iPhone.

https://forum.effectivealtruism.org/posts/FjiND3qJCvC6CtmxG/super-additivity-of-consciousness

Both in neuron counts and in brain mass they are at around 0.1% of human levels; and, as commented before, everything suggests consciousness is superadditive.
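
As a toy illustration of what superadditivity would imply (an assumed power-law form for illustration only, not a model from the linked post): if consciousness scaled as relative brain size raised to an exponent k, a brain at 0.1% of human scale gets a linear (k = 1) weight of 0.001, while any superadditive exponent (k > 1) drives the weight far lower still.

```python
# Sub-linear, linear, and superadditive scaling of a "consciousness weight"
# with relative brain size. The power-law form and exponents are assumptions
# for illustration, not taken from the linked post.
relative_size = 0.001  # the comment's figure: ~0.1% of human neuron count / brain mass

for k in (0.5, 1.0, 2.0):  # sublinear, linear, superadditive
    weight = relative_size ** k
    print(f"k={k}: weight relative to one human = {weight:.2e}")
# k=0.5 -> 3.16e-02, k=1.0 -> 1.00e-03, k=2.0 -> 1.00e-06
```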


This is not what the report says. They recognize that neuron counts are a main ingredient in their own weights. Additionally, the ratio between arthropod and human neuron numbers is something like 200,000 for a cockroach vs 16,340,000,000 for humans (0.01%), for brain-only neurons.

But remember that Rethink Priorities does not even engage with Integrated Information Theory. And complexity measures tend to be superadditive (often massively so).

Additionally, when you invoke the authority of “neuroscience”, I have to remind you that we have no science of sentience beyond the epistemic circle of ourselves and those who are very similar to us and whose reporting we can trust.

Nobody really knows “what it is like to be a bat”, so I don’t know what science you are speaking about: the Rethink Priorities article is full of nuance, of course. They know we don’t (scientifically) know; we have intuitions and complexity theory but nothing close to observations. Consciousness is noumenal, except your own.

Using what we have, it seems quite likely to me that a shrimp is not more sentient than my iPhone.


It's true that they don't agree with IIT but that's for a good reason: IIT is almost surely false. https://scottaaronson.blog/?p=1799

RP uses neuron counts a bit in their final estimate, but says they're a very poor general proxy for moral worth and even for calculating the amount of information processed.

The science I'm speaking of is described in the RP report: there isn't a correlation between neuronal activation and sentience, either in a particular region or in general, and often in regions the correlation goes the other way.


First of all, Tononi provided in 2016 the first “consciousness-predictive” model:

https://www.amazon.es/Sizing-Consciousness-objective-capacity-experience/dp/0198728441

Their model was able to use neurological information to perfectly distinguish deep (non-conscious) sleep from wakefulness or dreaming. Scott Aaronson's criticism refers to a concrete measure of integrated information, but he is the first to recognize that, in terms of attacking the “pretty hard” problem of consciousness, IIT is the only game in town. Perhaps not the right game, but, for the time being, the only one.

So we are back to attributing more than 0.01% of human consciousness to a neural network that is 0.01% the size of the human neural network. This is pointless if complexity causes consciousness. And what else could cause consciousness?


There are loads of other theories of consciousness--global workspace, for instance, and this one that I find promising https://philarchive.org/archive/MCFTCE#:~:text=The%20cemi%20field%20theory%20proposes,its%20influence%20on%20the%20world.

False theories can often make a few right predictions, but implying a giant matrix is conscious is a reductio.


But in the end, when you reject IIT because “a large matrix” should not be conscious, you are recognizing that integrated information flows and complexity should be core to consciousness.

In the end, if some Scott Aaronson builds some sort of simple physical system that is supposed to be conscious under your preferred CEMI theory, it would be treated as a counterexample.

So we are back to the beginning: complexity and information integration are what we demand, and the shrimp is 0.01% of a human by that measure.

Comment deleted (Nov 15)

While this claim seems valid on the face of it, I don't think it actually stands up to a more rigorous dissection.

Note: I am not a vegan, but I empathize with their views, so I may be misrepresenting their beliefs; in general, though, my understanding is that vegans hope to reduce suffering.

If that's the case, then pro-choice veganism still holds up in most cases. And this is because abortion reduces suffering. Yes, it does involve the death of a potential human being, but that death is quick and happens to something that's not particularly conscious yet. Compare this to the suffering that would follow if that baby were born to parents who did not want it, or who could not support it. That outcome increases the suffering of both the parents and the now living and suffering child. While some parents could obviously provide a good life for their child, there is also a significant number of people who would not be able to feed, educate, or provide a good life for theirs.

I'm sure this argument is not good enough for all vegans, but most that I know are just trying to find the best outcome in a deeply unfair world, and by that metric I think it's plausible.


Why? The embryo is quite unlikely to be conscious, at least in the first 3 months. I understand that you can be against torturing an adult pig for one year and still not care about a human embryo that has only a rudimentary nervous system.

Comment deleted (Nov 15)

Yes, of course animal advocates who count insects as morally relevant conscious beings would be against abortion, but in my experience the majority of them are mostly focused on industrial animal farming.


James Rachels's *Created from Animals: The Moral Implications of Darwinism* is a good book on animal rights.


Thank you, Matthew.


Great article! Shared.


If you were given the choice between saving the lives of 1,000 shrimp and one human being, would you regard it as difficult? If not, is it because you believe it is obvious that we should save the shrimp or the human?


Humans live longer and have many more great goods in their lives. But if the choice were, say, between burning a human or 1,000 shrimp with a poker, I'd rather burn the human.


"That means it’s equivalent to making a human death painless every year for only two cents!" The word equivalent is doing some pretty heavy work here.


It may be doing "heavy work", but the word "equivalent" is accurate, at least with respect to moral equivalency.


It’s a majorly debatable point that there *can* be moral equivalence between human and non-human beings.


Sure, it has been and still is widely debated, and a lot of people disagree with this point -- but that tells us nothing about the soundness of the "equivalency thesis". Whether students should be obligated to read Shakespeare in English classes is also something "majorly debatable" that a lot of people disagree about, but you can still take a categorical stance on it. Something being "debatable" or "controversial" is irrelevant when it comes to philosophy/ethics.

What should move us in one direction or another are the arguments. And, after more than 4 years of reading and researching this very topic, I've found no reasonable arguments against the (in principle) moral equivalency of humans and non-humans -- much to my dismay at first, for I was an avid meat-eater and did not want to concede that. The view that there cannot in principle be any moral equivalency between humans and non-humans (i.e. speciesism) is as arbitrary and unjustifiable as the view that there cannot in principle be any moral equivalency between blacks and whites, men and women, etc. Read the first chapter of Peter Singer's Animal Liberation for a brutal philosophical takedown of speciesism.


"after more than 4 years reading and researching about this very topic, I've found no reasonable arguments against the (in principle) moral equivalency of humans and non-humans"

The word reasonable is doing some pretty heavy work here.


Naturally, it’s a pretty vexing philosophical problem, but I’m not convinced that shrimp have sentience/subjectivity/consciousness/whatever you want to call it. Phylogenetically it seems like a vertebrate development, and it doesn’t seem like any of the findings or behaviors listed in that article would require sentience; it could just be an add-on. Can’t know for sure though, and so I understand the precautionary principle!


But if there's evolutionary reason to think they feel pain and they behave in every expected way like they're in pain (tradeoffs, responding to anesthetic, avoiding areas where they were hurt, etc.), then it seems like a good bet to think they feel pain. Even a low credence in pain makes SWP absurdly high impact.


Yes, I hear that. The skeptical argument would be that you can develop a "pain" system, or an essentially aversive system, where sentience is not actually a necessary component. It is one thing to have a pain system, and it is another thing for an organism to feel pain. The phylogenetic/evolutionary argument is that the neurological correlates of consciousness (midbrain, PAG, RAS) are absent in shrimp, and their ecological niche is not complex enough to require the teleology of affective states.


It would sure be a coincidence if creatures that weren't in pain, but had every evolutionary reason to develop it, had some other system that just happened to behave like pain.


Great article, really thought-provoking stuff! But I worry you may be missing the point of the question above. I'd say it's not obvious that the visible behaviors which we assume correlate with subjective, phenomenological pain in humans necessarily correlate with subjective, phenomenological pain in other organisms. Plants also respond to stimuli, but do not have a nervous system - are these plants conscious? Maybe, maybe not. Also, it's not a coincidence that shrimp avoid unpleasant, painful stimuli, and there would be evolutionary pressure to do so, because these stimuli tend to damage the organism rather than benefit it. The relevant question here is whether this observed behavior correlates with first-person suffering from the shrimp's point of view. Would love to hear your thoughts on this!


Yes, I agree that it's possible. But it's also possible that you aren't conscious and just behave like you are. So long as we're not epiphenomenalists, we should think it's pretty unlikely that creatures act in many diverse ways like they're in pain but don't feel any pain.

For instance, it would be unlikely that mere nociception would produce longstanding aversion, response to anesthetic, and so on.


Thanks for your reply. I'll have to continue reading through some of the linked literature and try to form a fuller picture. However, as far as I can tell, this boils down to differences in opinion around items 4-8 on the framework for evaluating sentience. You and the authors of that paper seem to think that it's unlikely that these behaviors can be instantiated in a non-conscious system, but I don't see the support for this. What gives us any confidence that these are good criteria for evaluating sentience? I guess I don't see any problems with imagining that these properties could be implemented in a non-conscious system or organism.
