61 Comments
Jessie Ewesmont:

>The virtuous person cares about the interests of others even if he cannot care for them emotionally. His moral response is not contingent on his having a strong emotional reaction.

This is a bit of a nitpick, but virtue ethicists tend to think it is very important to have the right emotional reactions. Aristotle says that someone who doesn't want to do what is right, but does it anyway out of a sense of duty, is a "continent" person, one rung below genuinely virtuous people on the goodness ladder. Genuinely virtuous people feel great pleasure in being virtuous, and they enjoy and cherish the exercise of their virtues deeply. Hursthouse says that virtues are dispositions to act *and feel* a certain way. And so forth.

But this doesn't have much bearing on the conclusion of your post, other than to suggest that people who care about shrimp welfare for virtue-ethics reasons should probably consider psychologically training/conditioning themselves to genuinely feel bad for shrimp and genuinely feel good about saving them. Or they could write a bunch of blog posts about the plight of shrimp - that probably conditions their emotions too. :)

Bentham's Bulldog:

Even if you ought to have the right emotional reactions, if you don't have them, you still ought to do the right thing.

Aegis:

I am against donating to Shrimp Welfare for a few reasons. If someone could persuade me in the other direction I would be open to donating money.

After reading this post I paused and attempted to predict the number of neurons the average shrimp has. I told myself that if a shrimp has at least ~10 million neurons, I could buy much of the reasoning in this post. I looked it up, and it turns out that the average shrimp has... 100k neurons.

For reference, ants have ~250k neurons. There are estimated to be QUINTILLIONS of ants (1,000,000 times more than the ~1 trillion shrimp that die each year). The average human likely kills millions or even billions of ants a year indirectly through food consumption (pesticide use kills many trillions of ants a year), living in homes constructed on former colonies, buying consumer products, etc.

Would you pay 1 dollar to prevent the suffering of 15,000 ants? Would you cut back on all the pleasures of life to save millions or billions of ants? I think even the most extreme animal welfare people wouldn't do this.

From GiveWell's numbers, you can save a human life for ~$5,000. Are you really telling me you would rather anesthetize 75,000,000 shrimp, which have half as many neurons as ants, than save a human life?

And let's say you use a pure utilitarian argument: you claim that shrimp suffering counts 1/1,000,000 as much as human suffering, so spending $5,000 preventing the suffering of 75 million shrimp is 75x better than saving one human life for the same amount of money. My response is: what about ants? Why care more about shrimp suffering than ant suffering, when ants have a higher neuron count and likely quadrillions die a year to humans, versus trillions of shrimp?

John:

Comparing to GiveWell is not the right comparison regarding the amount of suffering averted. The right analogy would be more like this: assume there's a charity called DieWell and they will painlessly anesthetize terminally ill humans in their final hours of life, giving them a painless death instead of an agonizing one. Under that kind of comparison it does seem more plausible that there is some amount of shrimp suffering that would be better to avert, on the margin, than a human in an equivalent situation (sure to die, but faced with either agonizing or painless death).

GiveWell *is* the right comparison if you're talking about "where should I donate $5000," but any sort of cause comparison faces some version of the extremism problem: why not donate EVERYTHING to the VERY BEST cause? Shrimp vs. bednets isn't intrinsically any trickier a comparison than fish farming vs. bednets, or even deworming vs. bednets. I think a portfolio approach is most reasonable, under some model of moral uncertainty, and while I would be suspicious of someone who gave EVERYTHING to the shrimp, it seems very reasonable to allocate some modest percentage of total giving to shrimp. But I think of it under a virtue ethics perspective, so moderation and practical wisdom are part of the package!

Aegis:

I think at the margin it's better to give to a more effective charity rather than a less effective charity. Even if I was giving 80% of my charity money to spreading malaria nets, I think the other 20% is still better spent doing something highly effective versus Make-A-Wish or something.

Under a strict EV-maximizing mindset, you would donate to a portfolio of causes because there's uncertainty about which is the most effective one. So I might donate to spreading malaria nets AND vitamin A supplements. But in cases where there is one clear winner, I think your money is better spent on the more effective charity.

I don't see how the suffering of the ~trillion shrimp a year, creatures with 100,000 neurons, matters anywhere near as much as the suffering of the hundreds of billions of fish farmed a year (with many millions of neurons), or the ~80 billion chickens (with 200 million neurons) factory farmed a year.

Aegis:

Also let me clarify, I don't mean to say that donating to Shrimp Welfare is bad in itself, but that there are many more effective options you can donate to. By choosing to donate to Shrimp Welfare, you are choosing not to donate your money to causes that save human lives as effectively as possible, or fight for fish welfare, or fight against factory farming cows, pigs, and chickens.

blank:

Children and teenagers think the priest is more virtuous. Sane adults realize he is not. Strict adherence to consistent laws satisfies the conscience of the priest, but it does not make for good living. The villagers have a better sense for what should be done.

Shrimp do not attract much sympathy because they are bugs. Bugs are small things that nature deigned to die by the millions. Whales crush and bludgeon and suffocate billions of shrimp each year. This does not motivate me to kill whales to save shrimp. A whale is just more valuable.

Abe:

You're never going to be motivated to save shrimp. The point, hopefully, is that you're motivated to uphold a consistent moral framework, rather than simply acting according to capricious emotions. The author admits he feels nothing for shrimp. Nature inevitably dictates what you care about, but if you agree that the suffering of sentient beings is bad, then nature itself is bad, because it kills and tortures sentient beings, and has made our minds in such a way that we only care about that under rare circumstances. We should embrace our capacity for reason and stand against suffering in all its forms because it is so incontrovertibly and self-evidently bad.

blank:

Alternatively, use a moral framework that does not enshrine suffering as the root of badness so you don't waste your time obsessing over shrimp.

Bentham's Bulldog:

But suffering is obviously bad. Any person who has experienced intense suffering has become acquainted with this fact.

blank:

Suffering can impart a lesson. Suffering can be inflicted on something else to achieve a goal.

Bentham's Bulldog:

Yes, but torturing shrimp doesn't do that! Suffering can be good if it leads to something else good, but by itself it is bad.

blank:

A pure absence of suffering results in feebleness. It does not seem at all desirable.

Gumphus:

Suffering is, in some cases, a price worth paying for some greater payoff - but it is always the price, never the thing purchased! And when nothing is offered in exchange, it is simply a loss.

Abe:

Not wanting to obsess over shrimp isn't a reason to discard a moral framework. Sometimes good faith reasoning leads you to places that seem absurd, because your emotions are there to help you survive, not to help you do good. If you think suffering shouldn't be the basis of badness, then you need a better reason.

TheBlackReiter:

How do you define 'good living?' What are the characteristics of a 'good life,' what makes it good, and how did you come to that conclusion?

Ari Shtein:

> You get a lot more virtue points if you do the right thing but don’t get credit for doing it—if you’re willing to risk seeming like a weirdo for the sake of doing the right thing.

Doesn’t this have some really silly implications? Simon Laird proposed the theory of Cheesecaketarianism—a view that says we should try to maximize the amount of blueberry cheesecake in the universe—when I argued with him. Even though this sounds like a less plausible value than “torturing things is bad,” it’s certainly much weirder and much less likely to earn you any credit. So wouldn’t we then appraise a devoted Cheesecaketarian to be pretty virtuous, maybe even more virtuous than a shrimp welfarist?

At the very least, there has to be some theory that's so weird, but still plausible enough, that its followers would be considered extremely virtuous in expectation (if we accept weirdness counting toward virtuousness), despite endorsing crazy things like cheesecake-maxing.

Abe:

If there's a strong argument to be made that a cause is worthwhile, advocating for it in spite of social pressures confers virtue. The worth of cheesecaketarianism is not supported by a strong argument, and anything supported by a strong argument cannot be crazy. For this objection to hold, you'd have to come up with a conclusion that is well-supported by argument but so absurd that it discredits the entire enterprise. I don't think this is possible.

Ari Shtein:

I'm not sure that I need to give you a specific example. It's a reductio, so all I have to say is that you can make a theory weirder without making it quite as much more implausible. Then at some limit, there's a theory that's extremely weird and also extremely implausible, but much more weird than it is implausible, and so very virtuous.

It seems pretty obvious to me that you can get weirdness without a comparable increase in implausibility—in fact, that's what Bentham's entire argument rests on. Otherwise the weirdness of some conclusion *couldn't* actually count for anything; it'd always be perfectly offset by an update against the theory itself. But if we think shrimp welfarism's weirdness should count toward its virtue, we have to hold that it isn't perfectly balanced by an associated increase in implausibility.

So crank up the weirdness for a while, and eventually you'll find an extremely implausible theory that's also extremely weird, and so fairly virtuous despite being insane. Valuing weirdness will always have this problem.

Abe:

I see, that makes sense. I understood Bentham to be suggesting that the weirdness only begins to count for virtue once a specific threshold of plausibility is passed. Or that's how I read it anyway.

Ari Shtein:

Is there any non-arbitrary threshold? And do we mean objective plausibility or sincere personal belief? Virtue seems intuitively much more closely tied to the latter, but that’s extremely susceptible to hacking by psychopaths and deceived people and so on.

Abe:

We can assign virtue to someone championing a weird cause that passes their threshold for plausibility (we all have an intuitive threshold past which a view begins to seem plausible to us). They may be mistaken in thinking it's plausible, but that does not detract from their virtue. If someone believed in the plausibility of cheesecake maximizing and then spent all their time trying to maximize cheesecake, I would call that virtuous behaviour -- they are acting like a virtuous person, they are just mistaken about the world. This is part of why I am not a virtue ethicist; I think utilitarianism makes more sense. But if one is a virtue ethicist, one ought to commend shrimp-welfare donations.

To use another example from this thread, I think a lot of the LessWrong people working on AI alignment are virtuous, but mistaken about the world and wasting their effort. It still speaks to virtue to go against the grain and fight hard for a cause you believe will improve the world.

Ari Shtein:

Yeah, I mean in that case, see the thread with Urban Shirk… Basically, why is a strongly but incorrectly held belief that compels immoral action made more virtuous by being socially shunned?

blank:

AI is said to be capable of providing infinite utility. If utility is good, then anything that assists in creating AI therefore is worth a percentage of infinite utility.

This is a dumb argument, like the shrimp one. But lots of people believe in it anyway!

Abe (Mar 15, edited):

My whole point is that the argument shouldn't be dumb; the conclusion should be dumb. This argument is dumb because it is premised on so many unknowns -- Is superintelligent AI even possible to build? If so, is it even possible for humans to align it? What are the relevant steps? etc. -- that the worth of devoting one's time to it is devalued by all the uncertainty. No premise of infinite value can create an infinite fraction of prospective value in the real world unless we have some reason to suspect that such a premise may be true. There are no good reasons for believing in the forthcoming emergence of superintelligent value-optimizing AI, despite what the LessWrongians will tell you, and in any case they all believe alignment is functionally impossible.

If you can come up with a strong argument that supports an absurd conclusion, let me know.

Silas Abrahamsen:

I don't know whether any virtue ethicist would hold this, but I wonder whether we might think that what confers virtue is caring about what is *actually* right despite it being weird--not just caring about what you *think* is right.

That would of course make it somewhat impossible to ever assess whether someone (including yourself) is virtuous, but it might be worth it.

[Comment deleted, Mar 15]

Jessie Ewesmont:

A sincere Cheesecaketarian would probably be extremely vicious, because if they only care about cheesecake, they'd be willing to rob, cheat, murder, etc. in order to get their hands on more cheesecake (or produce more cheesecake).

Ari Shtein:

Sure, that’s reasonable, but how does this cash out for theories that have more obviously terrible consequences? Orthodox Jews also believe in extremely strict sex-segregation—is it more virtuous to behave that way if you really zealously believe it? They also don’t like vaccinating their children—is that more virtuous because society tells them it’s wrong?

I mean obviously there are other unvirtuous features of these behaviors. But why is it they’re made more virtuous by general social rejection? It seems extremely unintuitive that a reprehensible behavior could be made more ethical by broad reproach for the people who practice it.

Ash:

Nitpick: Orthodox Jew here, and we vaccinate. Around 90 percent of Orthodox Jews vaccinate. It's hardly fair to declare that we don't like to vaccinate based on ten percent.

In contrast, 100 percent of Orthodox Jews eat kosher.

[Comment deleted, Mar 15, edited]

Ari Shtein:

Yeah, you make good points.

Maybe this is just a breaking point for me with virtue ethics, then. It seems insane that, even if X is an obviously bad/unvirtuous thing, doing X in a society that abhors X is ethically better than doing X in a society that accepts and promotes X. Seemingly, it would imply that society ought to adapt itself (insofar as it can, through government or "influencers" or whatever) to always reject and shun whatever the majority of people want to do.

Of course, maybe that reasoning is too consequentialist already. But this is all just so damn silly. I dunno, I'm sticking to utility.

ETA: And I think it also implies that, insofar as I can choose what I believe, I should, ceteris paribus, always choose to believe the less popular thing. Which seems silly, given that an idea's popularity is (usually) a weak signal that it's a good thing to believe. Why would moral virtue work *against* epistemic virtue like that?

[Comment deleted, Mar 15, edited]

Ari Shtein:

This seems susceptible to the same reductio! Can't we imagine a moral belief that's so extremely unpopular that the bravery of advocating it outweighs the relative irrationality of holding it?

I mean, really, Cheesecaketarianism is wildly unpopular. Most people don't spend *any* of their money on cheesecake ingredients or *any* of their time baking it. The lack of cheesecake in the world is clearly a massive moral emergency! As much as you might think it's strange for me to care so much about cheesecake, the neglectedness of this cause is so extreme and the looks I'd get for advocating it so dirty that I really just absolutely have to stick up for what I think is right.

[Comment deleted, Mar 15, edited]

Ari Shtein:

Yeah, I agree that it makes sense in the slavery situation. But I think my intuitions only say "whoa, virtuous" because it advocated for positive consequences in the world. Biden's equivalent belief simply wasn't as consequentially important (you can consider the counterfactual where Biden was a slaver—no way that actually happens—whereas Adams' abolitionism had a much more non-zero chance of creating positive change.)

> Probably rationality is a lot more important.

This seems analogous to the original point about plausibility that Abe made. Why can't I construct a belief that's so ridiculously brave that it's ok that it's reasonably irrational too?

The conclusion that that would be possible makes me think my intuition of "virtue" in the slavery situation is actually much more consequentially founded.

Mike Lawrence:

This must have been said before, as it seems pretty obvious, so point me to an answer if one already exists.

What is the rebuttal to: shrimp are animals and not humans, they lack souls, they are not made in the image of the Lord, and they have been placed under our dominion. We’re free to kill them by the trillion.

You quote proverbs, and I guess want to argue that shrimp are to be heard as normal subjects. This doesn’t seem at all the right application of it but I can see the argument coming so I’ll stave it off. They obviously are not normal human subjects. They aren’t made in the image of God. It’s not speciesism, it’s just a given.

Gavin Pugh:

Rather than virtue ethics or utilitarianism, I follow a selfish reason for donating to animal welfare (at least some of which goes to shrimp):

If the shrimp are taken care of, maybe BB will write about something else.

TheBlackReiter:

>speciesism is a vice

Why? The reason I care about the scarred and malformed human (physically or intellectually) is for reasons of my religion, which states that God loves them and that they have an immortal soul, which means I have truly existing obligations towards them. The vice of racism is injustice towards people on account of their race, and sexism is injustice on account of sex, where justice is defined as fulfilling your obligations towards them.

What obligations do I have towards shrimp, and on what grounds have these obligations been established?

Ash:

If you believe in God, wouldn't you believe that God cares about some suffering more than others? If I'm a member of a religious group that prioritizes human suffering, shouldn't I donate to save one human life over even millions of shrimp?

Furthermore, I personally don't want to donate to save shrimp until shrimp are altruistic enough to donate to save me.

I think this shrimp example, which should serve as a reductio ad absurdum showing that rationalism is not a viable ethics, is here being taken literally.

Connor Jennings:

I like how you interweave stories in this post to better illustrate your point. Good read!

Garreth Byrne:

Throwing shrimp into a -1°C ice slurry, where they go numb and unconscious in less than 30s and die within 2 minutes or so, does not strike me as torture. Perhaps you are just very soft.

Pete McCutchen:

You could be a villain in an Ayn Rand novel.

Oldman:

This is starting to get boring. You should find another concept.

Will Barmby:

New here, so apologies if you've already addressed this somewhere, but can you clarify the most basic moral principle you're operating on here? Is it just that "conscious suffering is wrong in all forms equally, and worse when it's more intense and persists over a longer interval"? The idea that it is speciesist to value the suffering of humans more than that of another animal which appears to experience suffering seems to assume that the conscious experience of humans is the same as that of other beings. That's not obvious to me. Can you clarify your chain of reasoning?

Bentham's Bulldog:

My idea is that suffering is bad. It may not always be bad, and it may not be bad if it leads to other greater goods, but in the absence of greater goods, extreme suffering is bad. It may not be the only bad, but it's certainly one of the bad things.

TheKoopaKing:

Let's grant shrimp feel pain. I claim all the pain shrimp feel is characteristic of pain asymbolia due to their underdeveloped nervous system. What evidence would you use to convince me I'm wrong and that their pain is in fact hurty?

John:

What level of credence do you have in that belief? The simplest explanation is that shrimp feel pain and suffer because of it since (a) they have the neural hardware to feel pain; (b) they behave the same way other animals, including humans, do when they experience pain (guarding an injured area, avoiding painful stimuli, etc.); and (c) almost all humans who satisfy properties a & b also suffer when they perceive pain.

Given how little we know about consciousness, it doesn't seem reasonable to assign extremely high credence to any non-human animal having pain asymbolia since we have no idea what level of neural sophistication is required for consciousness. More than zero, but certainly less than an extremely disabled human. You should also have some level of caution: it's worse to assume something is unable to suffer, when in fact it is, than to assume it can suffer when in fact it cannot.

And because of the math (trillions of shrimp x small amounts of suffering x very low cost per instance of suffering averted) you need extremely high credence that shrimp do not suffer to cancel out all of that. And then because there's a smooth continuum of neural complexity in animals, you get trapped in the grains-of-sand problem: at what precise neural sophistication is it acceptable to pointlessly torture a creature when it could be easily averted?

Utilitarian reasoning really does get trapped here and, as BB has pointed out, the utilitarian case for shrimp welfare is extremely sound if you can't find an extremely high credence way to assign shrimp suffering a value identically equal to zero.

TheKoopaKing:

The simplest explanation is also that you can play Fortnite on a toaster because it has a Turing-complete microcontroller like a regular PC. Why not believe this? Because toaster microcontrollers and the integrated system they're a part of don't display any of the behavioral similarities of playing a Fortnite game on a PC or console or smartphone. This doesn't mean it's impossible to create a Fortnite-playing toaster microcontroller - but there are so many intermediate steps you would need to operationalize before such a claim gets taken seriously. And you would of course have to do things like functional decomposition to isolate the relevant parts of a PC that are involved in Fortnite processing. BB hasn't tackled either horn, and neuroscience more generally isn't advanced enough to deliver proclamations like "Shrimp feel hurty pain" - or at least if it has, I want to see the evidence.

Contemplating What?:

It’s probably wrong to think the debates and trends on Substack are always (ever?) relevant to the broader culture but I do wonder if the focus on God’s nichest creatures is good for animal welfarism as a political rights movement. I’m a full blown believer but I wouldn’t be surprised if the carnists of the world find it off putting.

Bentham's Bulldog:

Yeah that's possible. If I were running a campaign for animal rights I wouldn't write about this stuff. But I'm not, and getting people concerned about the shrimp is high impact.

Contemplating What?:

Yeah, that seems right. The question, though, is whether think pieces are more like private lobbying or like rights campaigns.

Contemplating What?:

Yeah, completely agree with your first point. I think the second is in tension with it though. If writing about shrimp is high impact, you are running some kind of campaign for animals.

Bentham's Bulldog:

Analogy: suppose there were some really unpopular policy that was very important to pass. It might be worth writing about if you might influence people to pass it, but it wouldn't be worth having a major party campaign on it.

[Comment deleted, Mar 15]

Bentham's Bulldog:

Wow you're so smart. I didn't know that.

[Comment deleted, Mar 15]

Garreth Byrne:

Worst thought experiment I've ever read.

>There are mentally handicapped people.

>We show them sympathy and compassion.

>If trillions of them were being tortured this would be bad.

>Ergo shrimp welfare
