33 Comments
JoA (4d, edited)

I think many pro-animal EAs aren't keen on the term "speciesism", even though it was popularized by Peter Singer - and I find it pretty imperfect too. But here, a concept that highlights how flippant we tend to be when it comes to the experiences of animals that we find weird, boring, or too small to matter, seems important to make sense of the "discourse" (even though it should never be used as an insult to shut down conversations).

Similar feelings about critics of the RP Moral Weights project, who come frighteningly close to saying "the project was conducted by people who had the intuition that animals matter somewhat, so it must be massively biased". I don't assume RP's welfare ranges to be perfect, yet I can only agree with the final image!

JerL

I broadly agree with you, but this rubbed me the wrong way a little:

>>"Ethics should not be done by PR agencies. We shouldn’t revise our assessments of how many people die from Tuberculosis because it sounds offensive to say that tuberculosis is hundreds of times worse than 9/11."

While you obviously shouldn't revise your estimates about tuberculosis deaths, I do think it's important to worry about being offensive when saying things like this, especially when you're not doing academic philosophy but rather public advocacy for a cause.

Not to say that you've struck a bad balance, or your critics are right, necessarily, but you do sometimes affect a... certain adversarial mode in these debates, and I think it's not unreasonable to take the initiative a little more to make it clear that caring about insect suffering isn't meant to diminish human suffering and blah blah blah: avoid directly saying that insect suffering is "worse" than human suffering, and just use the large numbers to point out that, however it compares to human suffering it's probably immense--to use human suffering more to set a sense of scale than to imply a prioritization.

nandwich

"Prioritization of pain response: if an organism seems to prioritize pain response over other important evolutionary functions, then that seems to indicate it is feeling more intense distress—enough to overcome, in various contexts, its evolutionary pre-programming."

This seems confused to me.

Pain response is not something that "overcomes evolutionary pre-programming"; pain response *is* something that's pre-programmed by evolution. The extent to which a creature prioritizes pain response (which is importantly different from the strength of a creature's pain) is going to be exactly whatever level of prioritization was the most adaptive in the ancestral environment. It more or less can't be otherwise (unless implementing the optimal response would be too expensive or require moving through a local maximum), because any deviation from the optimal level of pain-response prioritization would have already been bred out of the animal (regardless of how strongly the animal does or doesn't suffer).

For example, if humans prioritized their pain response to an unhelpful degree, that would have been bred out of them when the least stoic were eaten by lions. If humans neglected their pain response to an unhelpful degree, *that* would have been bred out of them when the most stoic healed poorly, developed a limp, and then were eaten by lions.

Vikram V.

> I thus have to look at the published literature on the subject.

This should have embarrassed you even before the italics.

I suppose the only criticism you'll accept is a full response to a report that you reference by incorporation?

The report you rely on is nonsense. The most important question, whether certain animals are conscious, is severely underexamined. You don't list the criteria here. I found a list of them in the Rethink Github.

Let's take a look at the 44 criteria:

1. Physiological responses to nociception or handling

Not predictive of consciousness. There are plenty of reflexive responses to stimuli that involve no conscious thought. It's trivial to think of non-conscious things that would respond to damage. All of the chemical muscle movements and instinctive memory storage could run with the intelligence of a computer, with the same level of conscious thought (zero).

2. Protective behavior

Unclear what Rethink means here. Assuming that it's referring to the protection of others, this offers at best limited evidence for consciousness. Small animals forming up or taking actions when exposed to danger could well be caused by instinct rather than any conscious desire to protect. Evolution could absolutely cause that to happen for self-preservation reasons. Same thing with chemically marking predators (or perceived predators) as enemies and orienting behavior against them. Now, I concede that more complex decisions about whom to protect, long-term protection, and expressions of grief could be some evidence for consciousness.

3. Defensive behavior/fighting back

Also no evidence for consciousness. Basic flight or attacking responses once a threat is processed as such can totally happen without conscious information processing. Consciousness may even hinder fighting back, since instinct can operate faster and with less variance.

4. Noxious stimuli related vocalizations

Reacting to unusual or harmful stimuli with vocalizations seems pretty ordinary among any species that can vocalize in the face of those stimuli. Absent any communicative content to those vocalizations, which I am sure some animals do exhibit, this means very little.

5. Movement away from noxious stimuli

Any basic information processing system would do this. Why does this imply consciousness?

6. Paying a cost to receive a reward

My Minecraft trading bots do this. If they are conscious, I am definitely going to hell. Even if the knowledge of what leads to a reward has to be trained, all that shows is that memory exists and changes behavior. That is not sufficient for consciousness either.

7. Paying a cost to avoid a noxious stimulus

See above. Once again, avoiding stimuli remembered to be harmful just shows that memory exists and is connected to stimuli. Unless the claim is that memory storage intrinsically leads to consciousness, this is not evidence for consciousness.

8. Self-control

"Self-control" is just making a payment to avoid a harmful stimuli. These criteria are duplicative.

9. Predator avoidance tradeoffs

See above.

10. Selective attention to noxious stimuli over other concurrent events

This just shows that "noxious stimuli" are, for whatever reason, generating more pain/stimulation than another event. A system designed to respond to the largest source of stimulation may prioritize that source, but that system does not need to be conscious. All it needs are chemicals to direct activity (like movement) away from a source of stimulation deemed negative. Heck, it seems like focusing on a larger task and pushing through painful stimuli would be more effective evidence of consciousness, since it relies on willpower and disregarding biological imperative for a goal...

11. Classical conditioning

Only shows memory. Also works on my Minecraft bots.

12. Operant conditioning

Also shows memory. Slightly more memory, to be sure, but still only that. Maybe there is some threshold at which advanced recall supports cognition, but it seems like that has to be paired with some kind of meta-level reasoning, like recognizing that the phenomenon of operant conditioning is happening. That takes you beyond mere memory and implies some understanding of purpose.

13. Operant conditioning with unfamiliar action

Not sure what this refers to. If it's what I described above, then it is some evidence of consciousness.

14. Sensitization

There are plenty of purely physical explanations for increased sensitivity. Sensitization could also be caused by memory and reinforcement learning rather than any deeper consciousness. I also don't think sensitization AND habituation can both be evidence for consciousness. That would imply that any change in behavior is evidence, which is wildly overinclusive.

15. Habituation

We know that there are neurological and physical reasons for habituation. Maybe this is slightly stronger evidence since it demonstrates some resistance to normal biological responses. But, again, if those biological responses are to just reduce chemical receptivity, then this is 0.

16. Contextual learning

Not sure what contextual learning means in the animal context. If the study analyzed whether animals can be taught certain causal relationships and then generalized the principles behind those relationships to apply to things in their own life, then that is some (weak) evidence of consciousness. Such activity would imply that enough information processing was occurring to pair two different external stimuli together for some purpose. If the information was different enough, maybe that implies that there's some meta-level internal sense of purpose and information beyond physical memory.

17. Observational or social learning

Memorization by observation or socialization is still only memory. If animals are communicating information to each other, maybe that could be evidence of consciousness. It would depend on whether the language is more than a set list of instinctively defined signals.

18. Taste aversion behavior

Aversion to bad/painful/poisonous tastes has an obvious chemical or instinctual explanation. Rethink's claim that the naturally obvious explanation for taste reactions is disgust is nonsense. The purpose of taste is to separate foods worth eating from those not worth eating. If you eat food identified as bad or even lethal, the body would naturally respond. If I ate a vial of salt, I am not consciously choosing what happens next...

Truncated by max length. I'll finish this later today...

Daniel Greco

What background assumptions do you think we have to make to think bee experience intensity and human experience intensity can be measured on the same scale?

I can certainly see that if you think phenomenal consciousness is metaphysically fundamental, it's plausible. Planets and shovels both have mass in just the same sense, so they can be straightforwardly and meaningfully compared for massiveness. If you think intensity of pain is like mass or charge, which is how I tend to understand dualists, then this is much the same.

But if you're a physicalist, I think it's less obvious. Bees and humans might be different enough that trying to measure the intensity of our experiences on the same scale is like comparing how democratic Bhutan is to how endangered the snow leopard is. If you told me that Bhutan was 70% more democratic than the snow leopard is endangered, even if you measured a lot of stuff relevant to democracy and endangerment, I'd still probably be skeptical about the meaningfulness of the comparison.

Bentham's Bulldog

I agree there might be some degree of vagueness if physicalism is true. Like, take something else physical--eyesight. How much better is human eyesight than shrimp eyesight? The answer is likely to be vague, but nonetheless, we can get rough approximations. There are a range of reasonable ways to measure eyesight, and none of them produce the result that our eyesight is 10 trillion times better, for example.

(Also, physicalism is false, but that's another story).

Daniel Greco

Eyesight is a good example; it's clearly non-fundamental, and also clearly comparable across species (albeit potentially with some vagueness, if they see in different spectra, and complete different visual discrimination tasks). I guess the reason I'm more comfortable with measuring eyesight across species is that I have a clearer sense of how you measure "goodness of eyesight" in a potentially species-neutral way (e.g., if humans can distinguish which box has the prize from 20 feet away, while eagles can do it from 2 miles, that's a great basis for a quantitative comparison). I have less of a sense of how you do it in the case of pain intensity.

Bentham's Bulldog

What do you make of the following: the intensity of pain is a function of how much you would reasonably prudentially not want to experience some pain. If you had to live the life of a shrimp, and you'd value them not being tortured 1% as much as you not being tortured, then their torture is 1% as significant.

Daniel Greco

My knee jerk reaction is positive--that sounds like the right sort of thing to say.

My second reaction is skeptical; maybe there are deep, metaphysics of personal identity reasons why I could not live the life of a shrimp. E.g., suppose you've got a kind of psychological view of personal identity, so that some future individual only counts as me if it's sufficiently psychologically connected/continuous with me (it's got enough of my memories, personality, etc.) Then there's no way I could have to live the life of a shrimp. I could imagine being really small and in the ocean and stuff like that, but I'm still imagining something that's mentally like me, and superficially like a shrimp, leading a shrimpy life.

If we go full shrimp--it's not like myself being shrunk down and put in the ocean, but with the same memories, mental life, etc.--then I'm not sure the "imagine yourself leading its life, and how much you'd value avoiding this or that experience" approach gets much traction.

Bentham's Bulldog

I guess I'd want to say that even if the scenario is strictly impossible, this seems like one of the just barely impossible counterexamples that can still be a legitimate counterexample.

If you're a physicalist (type-A, right?) then presumably you won't think consciousness is irreducibly private. In theory, then, you could ask how much an agent would rationally want to avoid shrimp experiences if they falsely believed they'd have them (knowing, as they did, what they are like).

Bentham's Bulldog

Oh, one other question: do you grant that pain within humans can be compared? If so, I don't see the difference between comparing pain across humans and comparing it between humans and other species.

Daniel Greco

Mostly yes, but I probably think it's vaguer than most people would. E.g., imagine two people experiencing the same normally painful procedure. One is outwardly reacting--grunting, squirming, etc.--much more than the other. Two hypotheses:

1. It's equally painful for both of them, but the less reactive one is more stoic; they're suppressing their reactions.

2. The less reactive one really is experiencing less pain.

I think there are probably plausible, well-motivated ways you can flesh out a functional story about pain in humans that really does distinguish these hypotheses. E.g., maybe if it's hypothesis 2, then being distracted shouldn't make a difference. But if hypothesis 1 is true--maybe the less reactive one is exercising mental discipline not to cry out, etc.--then if they become distracted or tired, they'll react more.

But I don't think this is easy/obvious. And I think the more different the potential manifestations of pain get across the individuals whose pains you want to compare, the harder it becomes to draw distinctions like this.

Bentham's Bulldog

Even if it's vague, so long as there are often some imprecise comparisons, I think you should say the thing about shrimp.

JerL

This is my main objection to BB's article too, but I think your analogies overstate the case:

Bees and humans share enough evolutionary history, and (if the research BB cites is accurate, which I've made no effort to verify but seems reasonable) exhibit "pain-type" behavioural responses similar enough that we can recognize them in bees, for me to think that the democracy of Bhutan vs. the endangerment of the snow leopard is too extreme.

Maybe: is Bhutan more democratic than a bee hive? Still probably not well-posed enough for a precise, definitive answer, but with enough similarity of structure to make some useful comparisons.

Moreover, note that even in physics, it is not always obvious what can and can't be compared: historically, it might have seemed nonsensical to ask "am I more massive than the energy of a bomb detonating?" but in fact mass and energy, despite being seemingly completely different quantities, are not, and can be interconverted.

Which is just to say, it may well turn out that a physicalist specification of how consciousness works might reveal that yes, there is an unambiguous way to compare intensity of conscious experience--I wouldn't have huge confidence in this, but I think the fact that many people (including myself, a physicalist!) have intuitions based on introspection that this is the case might be weak evidence for it.

Daniel Greco

Yeah I think I agree with all that. What I'm not comfortable with is effectively presupposing that the question is meaningful and that our ignorance is just quantitative, so that once we've got a point estimate with error bars (based on a detailed, published report), we have all we need to start doing expected value calculations and acting on the results.

JerL

Yeah, like I say, it's my biggest beef with BB. But, on the other hand, I think _if_ true, it's important to estimate, and even if our estimates are pretty bad, given the scale of insect farming, even some basic sensitivity analysis can be useful.

Like, if you treat this as an EV calculation, yeah, sure... But if you treat this as some light quantitative support for the first and third premises in the argument

1. Bees suffer.

2. There are lots of bees being made to suffer.

3. Even if the amount that individual bees suffer is small, there are so many suffering bees that it's still an important issue.

Therefore, you should care somewhat about bee welfare

then I think it's correct.
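JerL's "basic sensitivity analysis" remark can be made concrete with a tiny sketch. All numbers below are hypothetical placeholders chosen only to show the structure (the bee count and the per-bee weights are assumptions, not figures from the thread):

```python
# Hypothetical sensitivity check: sweep the per-bee moral weight across
# several orders of magnitude and see whether the aggregate still looks large.
n_bees = 1e12  # assumed order-of-magnitude count of bees affected

for per_bee_weight in (1e-5, 1e-4, 1e-3, 1e-2):
    total = n_bees * per_bee_weight  # aggregate, in human-equivalent units
    print(f"per-bee weight {per_bee_weight:g} -> total {total:g}")
```

Even at the lowest assumed weight the aggregate is around 1e7 human-equivalents, which is exactly the point of premise 3: the conclusion survives large uncertainty in the per-bee number.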

K. (3d, edited)

Bees and other arthropods would probably be very upset about the ethicists trying to exterminate them, even when they are in dire situations. Most bees would probably treat the ethicist as the enemy and even commit suicide in trying to sting him, not minding the Varroa destructor mite infestation on her. (Similarly, most patients in the organ transplant case would likely be abhorred to receive the organs of someone being (humanely) butchered for them and would maybe even try to fight the doctor. In both cases they probably wouldn't want the ethicist's help,

which seems paradoxical. Is there even any research on the (prevalence of the) wish to never have been born?)

By the way, bees may even count double, as through metamorphosis there could be two individuals in one body: one before and one after metamorphosis.

Joe

I agree that the possibility (even if relatively unlikely) of high levels of consciousness results in expected values worth caring about. On the discourse side of things, I'm wondering whether discussions here might be struggling because humans are bad at quantifying stuff.

I often hear people say "I could believe animals have 10% the consciousness of a human, but I don't think 10 animals matter as much as a human, because I don't think you can just multiply it like that", and I wonder if people are actually imagining a lower consciousness-level than 10% but aren't good at identifying the number. Consider the (logarithmic) earthquake magnitude scale – maybe that same person might say "yeah, I could believe that if human suffering is a magnitude 10 earthquake, then animal suffering is a magnitude 2.5 earthquake", which would be a belief that *does* fit their overall judgments.

I don't have a particular point here or anything, just thought it was interesting (tbh it's not even particularly relevant to the comments you included, but I do think in some discussions it's worth emphasising that the point of disagreement might be "even though this percentage sounds low, it's actually radically high, and here's why", whereas sometimes people seem to go along with it but then reject "that you're allowed to multiply", which seems like a more harmful belief).
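Joe's earthquake analogy can be quantified. On a Richter-style logarithmic scale, each whole unit of magnitude is a factor of 10 in amplitude, so the "magnitude 2.5 vs. magnitude 10" intuition corresponds to a linear ratio vastly below the 10% people say out loud (the magnitudes here are Joe's illustrative numbers, not measurements):

```python
# Convert the illustrative log-scale magnitudes into a linear ratio.
# Richter-style convention: +1 magnitude = 10x amplitude.
human_mag = 10.0
animal_mag = 2.5

linear_ratio = 10 ** (animal_mag - human_mag)  # 10^-7.5, about 3.2e-8
print(linear_ratio)  # far smaller than a stated "10%"
```

So someone comfortable with the earthquake phrasing is implicitly endorsing a weight of roughly three parts in a hundred million, not one in ten.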

TheKoopaKing

>It is a somewhat concerning epistemological failure that lots of people reject the results of the most comprehensive report done on valenced experiences as obvious nonsense because it conflicts with their unreflective intuitions,

Agreed, but them being wrong doesn't mean you're right to trust the methodological assumptions of the Rethink Priorities report.

Tyler Kolota

You had a proper argument against honey which pushes me to avoid honey flavoring in almond milk, but then I started to worry about the pollinated almonds in the almond milk. For anyone with similar concerns…

800 almonds per kg

1 bee pollinates 20 almonds

Avg factory farmed bee lifespan 18 days

Assume 40% days are suffering

Bees feel pleasure/pain at about 7-12% of humans

36 days suffering per kg almonds

20-40 almonds per half gallon almond milk

1 day suffering per half gallon almond milk

Regular milk is 2kg per half gallon

0.2 days suffering per half gallon of milk

Almond milk may be 5x worse than regular milk

I may switch to all banana milk, regular milk, & some coconut milk.
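The arithmetic above can be reproduced step by step. Every input is the commenter's own assumption, not an established figure; in particular, the dairy figure of 0.1 suffering-days per kg of milk is only implicit in the ".2 days per half gallon" line, and the ~12% moral weight is the upper end of the quoted 7-12% range:

```python
# Back-of-the-envelope reproduction of the commenter's estimate.
# All inputs are the commenter's assumptions, not established figures.
almonds_per_kg = 800
almonds_per_bee = 20            # almonds pollinated per bee lifetime
bee_lifespan_days = 18          # average factory-farmed bee lifespan
suffering_fraction = 0.40       # assumed fraction of bee-days spent suffering
moral_weight = 0.12             # upper end of the quoted 7-12% range

bees_per_kg = almonds_per_kg / almonds_per_bee                   # 40 bees
bee_days = bees_per_kg * bee_lifespan_days * suffering_fraction  # 288 bee-days
human_days_per_kg = bee_days * moral_weight                      # ~34.6, i.e. the "36 days"

kg_per_half_gallon = 30 / almonds_per_kg       # midpoint of 20-40 almonds
almond_milk_days = kg_per_half_gallon * human_days_per_kg  # ~1.3, i.e. the "1 day"

dairy_days = 2 * 0.1   # 2 kg milk/half gallon x 0.1 days/kg (implicit) = 0.2
ratio = almond_milk_days / dairy_days  # ~6.5x unrounded; ~5x with the rounded "1 day"
```

The "5x worse" headline comes from dividing the rounded 1 day by 0.2; without rounding, these same assumptions give a ratio closer to 6.5x.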

Bentham's Bulldog

Oat milk is also fine.

Tyler Kolota

Part of this preference likely comes from an article from many years ago talking about negative human health impacts of lectins from different grains

https://tim.blog/2010/09/19/paleo-diet-solution/

Tyler Kolota

Personal preference, I don’t really like oat milk

Bentham's Bulldog

:(

Alex Power

Your defense of the estimated welfare range for bees relies heavily on a form of Pascal's mugging. When you argue that we should assign 3% odds to "all conscious creatures have equal welfare ranges" and then let that drive your expected value calculations, you're allowing extreme tail scenarios to dominate your moral reasoning.

You seem to work backward from a desired conclusion (caring about trillions of insects matters enormously) rather than accurately grappling with uncertainty. Why shouldn't we assign non-trivial welfare weights to computers, plants, paper clips, or any system we can't definitively rule out as conscious?
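For concreteness, the expected-value structure Alex is objecting to looks like the sketch below. The 3% probability is from the comment; the 0.01 welfare range in the low scenario is a hypothetical placeholder, since any sufficiently small value gives the same qualitative result:

```python
# Sketch of the expected-value calculation criticized as a Pascal's mugging:
# a small-probability "equal welfare ranges" scenario dominates the total.
p_equal, w_equal = 0.03, 1.0    # 3% odds bees' welfare range equals humans'
p_low, w_low = 0.97, 0.01       # otherwise a much smaller range (assumed)

expected_weight = p_equal * w_equal + p_low * w_low   # 0.03 + 0.0097 = 0.0397
tail_share = (p_equal * w_equal) / expected_weight    # ~0.76: the tail dominates
```

The 3% tail contributes roughly three quarters of the expected weight, which is precisely the "extreme tail scenarios dominate" pattern the comment describes.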

Akber Khan

I find the idea of trying to justify consuming a product you don't need because you enjoy how it tastes and because the suffering it causes probably isn't *that* bad to be pretty awful, and that is more or less what every criticism of the honey article I have seen is saying. Just don't eat honey. It's not hard.

Hugh

It would be cool to hear your take on the two envelopes problem for moral weights (I’m still trying to get my head around it and don’t have an opinion yet myself).

Humble Inquiries

Thank you for another deeply interesting piece of writing and the pro-animal work you are doing 🩵

I thought about sharing some thoughts here on pain and consciousness from my own neuroscience- and neurobiology-based perspective, but instead I decided to add something else that seemed more urgent to me today.

Just a few hours after reading your piece—which got me thinking deeply about consciousness and our relationship to small animals—I opened one of my research alerts and saw shocking news: a massive die-off has in recent months wiped out nearly 60% of U.S. honey bee colonies. 😭

I can’t help but wonder: is this collapse not just a tragedy, but a warning from nature? A signal that we can no longer ignore the needs of these sentient, social creatures we rely on so heavily?

There are many thoughts swirling in my mind in connection with your writings and mission and this tragedy, but what struck me most is how the measures being proposed in response to this crisis say absolutely nothing about the welfare of these animals and our own role in contributing towards this massive die-off.

Instead, the conversation is dominated by talk of developing “next-generation pesticides” to control the mites and viruses blamed for the collapse.

Do you/I/we have any way to influence this way of thinking here? To open people's eyes to the fact that doing the right thing for bees—the ethical thing—also turns out to be the most rational and sustainable thing for ourselves and our food systems?

This is one of the biggest honey bee die-offs in U.S. history (as far as I have checked), and it isn't just about viruses or mites—it's about an industry built on harm and exploitation.

Commercial colonies are subjected to relentless stress: trucked across thousands of miles in suffocating boxes, force-fed sugar syrups instead of their own honey, and doused in chemicals to keep them alive in an unnatural system. These practices don’t just exploit and hurt bees—they break them. Stressed, malnourished colonies are defenseless against parasites like Varroa destructor and the deadly viruses it spreads.

Can this tragedy move us to change these harmful agricultural practices into ethical ones that respect the welfare of bees?

🌸🐝

West Coast Philosopher

First, I should say: I'm much less intelligent than you, which makes arguing with you like playing Stockfish. I know I'm going to lose, but it's interesting to see how. But don't be flattered: you've never met anyone as stupid as I am.

Second, imagine I argued like this: (1) on your view, we should reprioritize how we organize society so that we try to optimize non-human animal welfare over human welfare (there are many more life-years in the non-human members of the animal kingdom than the human members); (2) thinking that reducing the moderate suffering of all the bees is more important than preventing all humans from having a migraine is absurd; (3) therefore, there is something wrong with your view.

Now, you could argue against (1); you could point out that, unlike all the non-human animals, only humans have a chance of getting rid of all suffering in the future. Or you could argue against (2); you could point out that it's just a weird dogmatism to think that there's anything absurd about it. But in both cases, I end up reaching my native state, which is loud confusion.

First, if you argue against (1) in the way I imagined, then I don't see why veganism is important at all. Eventually, humans will come up with lab-grown meat, at which point we won't even need to argue for veganism, and the problem will basically take care of itself. Sure, a lot of bad stuff happens in the interim, but that's going to be true about why we shouldn't reprioritize how we organize society right now, too.

Second, if you argue against (2) in the way I imagined, then you're basically saying, yes, we should reorganize society in that way, and what's weird about that? This would be in keeping with your point about how it's just dogmatism to think that we can intuit that bees don't feel 7-15% of what we feel (and btw, my ChatGPT thinks it's probably more like 25-30%). But at some point, I want to ask: aren't there *any* claims where it's ok to say, "look, I can't really argue for X, but it seems extremely obvious to me that X is true, and if your view ends up with the conclusion that X is false, then I see that as a strike against your view rather than against X." Like, can't we ever Moore shift? Or at least Moore shart?

TheKoopaKing

>Perhaps my critics have some quasi-mystical faculty that enables them to directly intuit how intensely bees suffer, but I am bereft of such a faculty. I thus have to look at the published literature on the subject.

If computer scientists were trying to figure out how LLMs were generating responses by studying the ambient heat produced by the CPU, the appeal to authority would fail for the same reason yours does.

Alex C.

I agree with your analysis. Just curious about a couple of things.

Where do we draw the line between organisms whose potential suffering carries moral weight and those whose responses to stimuli remain beneath the threshold of ethical concern? I assume that single-cell organisms (bacteria, amoebas, etc.) aren't conscious at all, but I'm willing to consider evidence to the contrary.

Also, I'm curious to know how your behavior has changed in light of some of the consciousness-related findings you discuss in your blog.

Bentham's Bulldog

It's made me more concerned about small animals.

I mean, I just don't think any bacteria behavior gives us reason to think they're conscious.

JerL

Fwiw, I read once (I can try to dig up the paper if you want) that a number of single-celled organisms are susceptible to anesthesia; obviously this is pretty weak, but I think it's not totally crazy to basically say, if a bacterium can be knocked _unconscious_ by an anesthetic, then maybe it must be, to some small degree, _conscious_ otherwise.

Even if you buy this though, I don't know that it suggests they can feel pain without anything like a nervous system.
