“If you think there’s a 20% chance that they feel intense pain and that the 19% estimate is too much by a factor of 10, a dollar given to the shrimp welfare project still averts as much agony as giving painless deaths to 76 humans.”
I think this is the most powerful critique of the whole argument for shrimp welfare. Why should I trade welfare that could be going to certainly effective methods of reducing pain in creatures that can certainly experience pain for something so (for lack of a better word) hypothetical?
If someone came up to me asking to donate to a charity that helps starving people across the world and said: “Your $10 donation will help a thousand people! But if you don’t believe us and think we’re overestimating that number, it’ll still help a hundred people, and that’s a lot!” I’d laugh in their face. Not being able to remotely quantify the benefit, and basing the justification on a thought experiment, makes the whole argument silly in my view.
This critique isn’t a refusal to quantify either. It’s a refusal to accept incredibly imprecise quantification that readily admits it can be off by an order (or orders) of magnitude. I’m not against charity for animals, but the farther we get from the concrete experience of the average person (Family —> Culture —> Race —> All Humanity —> Cute Mammals —> All Mammals —> Vertebrates —> All Animals —> Bacteria?) the less the argument holds any actual meaning as far as human sympathy goes.
It seems obviously irrational to just ignore things because you can't quantify them precisely. Imagine that you could prevent every single instance of suffering in hexapods in world history for one penny. It would be silly to say "Why should I trade welfare that could be going to certainly effective methods of reducing pain in creatures that can certainly experience pain for something so (for lack of a better word) hypothetical?"
At some point, you just have to maximize expected value. If a dollar had a low chance of preventing thousands of painful deaths, it would be well spent. So long as the shrimp welfare project is better than that, a dollar given to it is still well spent.
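For what it's worth, the arithmetic behind the quoted "76 humans" figure appears to be a simple expected-value product. The baseline number of shrimp helped per dollar isn't stated in this thread, so the ~20,000 figure below is an assumption reverse-engineered from the quote; only the 20% sentience discount and the factor-of-10 intensity discount come from the quote itself. A minimal Python sketch:

```python
# Back-of-envelope reconstruction of the quoted "76 humans" figure.
# The shrimp-per-dollar baseline is an assumption inferred from the quote,
# not a number given anywhere in this thread.

shrimp_helped_per_dollar = 20_000    # assumed baseline
p_intense_pain = 0.20                # skeptic's chance that shrimp feel intense pain
intensity_vs_human = 0.19 / 10       # the 19% estimate, knocked down by a factor of 10

human_death_equivalents = shrimp_helped_per_dollar * p_intense_pain * intensity_vs_human
print(human_death_equivalents)  # 76.0, matching the figure in the quote
```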
I’m not suggesting ignorance.
Layering multiple probabilities we really have no way of actually estimating, and coming up with a final probability, is just an exercise in imagination. At what point am I 99.999% sure I’m wasting my money, and should instead choose a quantifiable amount of suffering I can definitely end?
When you get into the realm of abstract probabilities based on intuitions, you’re not actually being rigorous in your estimate. The heavy use of probability reeks of rationalization, not an argument. To me it seems the conclusion has been reached (shrimp probably feel pain) and now let’s come up with some numbers (estimates based off almost nothing) to justify it. That’s fine to do for having a consistent worldview, but breaks down when we start to pretend those numbers are rigorous or meaningful, which they self-admittedly aren’t.
The argument that shrimp might be sentient and we should sympathize with them is the only actual argument here. The numbers, and the comparisons based on those numbers, are the charade.
Coming up with an imagined extreme does little to justify what is an attempt at a practical comparison either. You literally estimate shrimp welfare in terms of human deaths, so eliminating suffering of all hexapods for literally nothing doesn’t factor into any reasonable comparison.
There are only two things that need to be true for SWP to be a good bet:
1) Shrimp are conscious.
2) They can feel decently intense suffering.
The probability of these together doesn't strike me as low (certainly not below 30%, and I'd guess above 50%), and if they're both right, it ends up insanely effective.
The question is about 1), and whether 2) means anything at all if 1) is false. Or at least whether shrimp consciousness is materially close to 0 on a scale from, say, a worm with 100 brain cells to a human.
Your writing in the SWP sequence has convinced me to roughly double the SWP allocation to around 1/6 of my annual donation. I have two problems, though.
One is direct. The numbers look *really perverse* if you run them in the other direction. To wit: it's morally good to make one human's death agonizing if you can thereby stop a small trashbag of shrimp being dumped onto ice. Even if it's two or three humans, it's probably still net good. I find that really hard to swallow! Even if "a small trashbag" is replaced with "a shipping container," boy would I not feel justified murdering someone to stop them from brutalizing all those shrimp. You have to get up to lots and lots of shrimp before I can even entertain this. (A similar objection applies to other measures of animal-suffering-moral-worth, but those numbers feel closer to intuitively workable.)
The second is: in general, if invertebrates have nontrivial moral worth, and if you aren't allergic to population ethics, then it is incredibly morally impactful to increase or decrease the number of invertebrates that will live. Good if invertebrate lives are mostly good, bad if they're mostly bad.
If that's so---if the planet is crawling with a quintillion bundles of mute agony, then this world is a horrid carnival, a concentrated universe of *pain*, the worst thing in the galaxy, a torture chamber, and the kindest thing you could do would be: destroy it. The best thing we've done so far as humans is decimate the biosphere, and the best thing we could do---before we jet off to another planet, or before we wink off ourselves---would be to blow the whole thing up, to prohibit life from miserably teeming ever again.
And I don't buy this! I think the existence of the world is good. I think the existence of the natural world is good. I think blowing up the planet---even if we learned "yep, no more progress, humans go extinct soon but the biosphere rolls on in its way until the sun goes boom, just life as normal for another billion years"---I still think blowing up the planet is, to put it lightly, supervillain shit. But under the view "invertebrate life matters and is mostly bad," it would be the only really big good thing ever done.
This is something I wonder about with Brian Tomasik, in particular. For him, is nature a horrid mistake? Must it be abolished? And if we'll never have the power to David-Pearce-out everything that lives, should we pull the plug?
This is implausible to me. I can *slightly* more believe the view "oh bug lives are good, so we should step back as humans to let more bugs live"---but only slightly. And it seems to me that any view saying "bugs (i got tired at the end of typing 'invertebrates,' so pretend they're synonyms) are significant moral patients, plus population ethics is not forbidden" goes one way or another. (Forbidding population ethics still makes things weird.)
What's the solution? Is there none? Is the secret slogan of Utilitarianism, "abolish nature by any means necessary"?
I don't think so! Even if there's a real "nature bad" view among the tescreal-y (yes i know i know) parts of the blog world, the significant attention given to factory farming (rather than the way more tractable "pave over the Amazon") makes me think that isn't the mainstream "most important thing."
Mostly I'm confused. What do you believe? What do y'all believe? Have I erred in these inferences?
er sorry i kinda elided the ice-slurry / asphyxiation distinction here. but ykwim
I don't buy most of BB's arguments but nature is terrible. If you were born 10k years ago you would probably starve to the point of exhaustion before getting mauled to death by a predator - you would live a life like most animals today and in history have lived - a pretty awful one.
Every year, 25 trillion wild shrimp are killed: https://rethinkpriorities.org/publications/shrimp-the-animals-most-commonly-used-and-killed-for-food-production.
Please check out https://www.abolitionistapproach.com/about/the-six-principles-of-the-abolitionist-approach-to-animal-rights/.
The welfarist approach to animals is not good. It will lead to even more animals being killed, because the welfarist approach makes people feel like animal agriculture is morally justified. We should focus on animals' right not to be treated as a commodity.
People are going to eat them. Demand will be supplied, farmed or wild.
I simply don’t care about shrimp and don’t think they are in any way comparable to humans in terms of suffering or “good”. So I’m still going to mock and laugh at this ridiculous idea.
Suggesting a shrimp's feelings should be valued at 3% of a human's doesn't feel right. Going just based on neurons, a shrimp has 10k (per Google's AI). A human has 80B. Treating a shrimp as 10k/80B of a human, and taking the 15k shrimp per dollar number, this suggests that we ease the death of one human for $5333. Perhaps you can say that this is worth it, but it becomes much less obvious that helping shrimp is as good as other charities.
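A minimal sketch of that neuron-weighting calculation, plugging in the figures the comment states (all of them the commenter's assumptions rather than established numbers). As written, those inputs actually work out to roughly $533 per human-equivalent; the $5333 figure would follow if the per-dollar number were 1,500 shrimp rather than 15k:

```python
# Neuron-count weighting, using the inputs stated in the comment above.
# 10k neurons per shrimp, 80B per human, and 15k shrimp helped per dollar
# are the commenter's assumptions, not established figures.

shrimp_neurons = 10_000
human_neurons = 80_000_000_000
shrimp_per_dollar = 15_000

weight_per_shrimp = shrimp_neurons / human_neurons        # 1.25e-07 humans per shrimp
human_equivalents_per_dollar = shrimp_per_dollar * weight_per_shrimp
dollars_per_human_equivalent = 1 / human_equivalents_per_dollar

print(round(dollars_per_human_equivalent))  # ~533 with these inputs; 1,500 shrimp
# per dollar would give the $5333 quoted above.
```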
Yes, and it’s likely that conscious awareness is emergent, so 10k neurons might get you effectively nothing, or some extremely minimal level. Adding neurons may have exponential value.
In fact I think it’s likely that the level of awareness shrimp do have is much, much closer to a bacterium reacting to light or something. There is no conscious, subjective experience that it is “like” to be a shrimp. If we were transported into a shrimp and back out, we might as well have been in a coma or under deep anesthesia: the bodily functions still work, but there is no awareness.
I think you still haven't really addressed the strongest objection, which is simply that shrimp do not in fact feel pain because they have no conscious experience at all. In other words, it's true that a pain-like signal occurs in their bodies, but because their brains are so simple, there is "no one home" to experience it.

I think this is clearly the case for things like the heat sensor in a laptop. The sensor detects a condition, elevated temperature, that can harm the laptop, so it takes action, shutting down, to avoid the potential harm. This signal is pain-like, and you could in a loose metaphorical sense say that the laptop feels pain when it gets too hot. The laptop, however, doesn't really feel pain because it isn't conscious, so it doesn't feel anything at all. This means that the laptop isn't a moral subject, so you can do anything you like to it no matter how "painful".

The fact that something produces pain-like responses is not enough to establish that anyone has any moral obligations toward it. This is clearly true of at least some living organisms, such as plants and fungi. I'm not sure exactly where the line between conscious and not is in the animal kingdom, but shrimp are quite plausibly below it.
>Whether something is similar to us seems morally irrelevant.
This depends on the foundation of your morality. Contractarians care greatly about whether something shares human-like morality; in this construction, there is no moral imperative to care about the suffering of shrimp, as shrimp do not comprehend or care about human morality and as such can't (even with a partial veil of ignorance to deal with their lesser capabilities) engage in a mutually-beneficial social contract with humans. In this view, subsapient animals like shrimp are effectively DefectBot or RandomBot and the correct response to DefectBot/RandomBot is all-defect.
(No known RL examples exist of a species with sapience but no morality, but to give a fictional example, 40K Orks are also outside the moral community in this view and can be guiltlessly exterminated due to their inability to co-operate with humans.)
>Fourth, people propose that what matters morally is being part of a smart species. But if we discovered that the most mentally disabled people were aliens from a different species, their pain obviously wouldn’t stop being bad.
There's an argument in this area that you may be missing, which is the argument from practicality. It is a lot easier to tell the difference between a shrimp and a normal human adult than it is to tell the difference between a human adult with shrimp-level mental capacity and a normal human adult, so there's more of a risk of getting it wrong in the latter case. This means that even if you have no regard for human adults with the brains of shrimp, you probably don't have enough information to know for sure that any particular human is such a creature, so one generally wants to avoid chucking suspected shrimp-brained humans in a Bottomless Pit of Endless Suffering. Shrimp are different; it is easy to know with 99.9999999999999999999999999999+% accuracy (conditional on no Cartesian-Daemon-esque scenarios, but in those you can't know anything about morality) that a shrimp does not have human-level brain function.
>Animal welfare is more neglected than longtermism.
Last I checked, there's a semi-major political party in my country (the Animal Justice Party) that is single-issue animal rights, and I'm not aware of any political party in said country that has significant longtermism in its policy let alone a single-issue party for it.
The link does not clearly explain how the money is used to enhance shrimp welfare or why a dollar goes so far. Those details are essential.
Yes! This is what I’m most curious/skeptical about.
I wish that, instead of making baseless objections, people would just admit that they would rather prioritize themselves & other humans, because humans are sentient beings who are more likely than shrimp to do things that would help them.
Unfortunately, shrimp have little power to bring utility to humans in ways other than "be eaten".
(If I could wave a magic wand, many ways humans harm animals *would be* socially unacceptable, but it's just really hard to make people care about those other than themselves.)
I haven't read any good reason why I should assume shrimp are conscious, or that whatever proto-consciousness they have is anywhere near comparable to even a bat's, let alone a human's. I like the rest of your argument, assuming they have sophisticated consciousness. I am nowhere near assuming that. Pain requires more than whatever you have proven shrimp are capable of.
What about the argument that flow-through effects from helping shrimp are less than flow-through effects of helping humans? (If you prevent humans from dying they will make the world better whereas if you help shrimp die in a less painful way not much downstream will change)
Re the pain is bad meme: I think the bottom gymnast is effectively what you did in the ubiquitous pain thesis post. You did not demonstrate to us which animals feel pain, but instead relied on things correlated with pain behavior. There is no categorical difference between your approach and the gymnast's approach, just that the gymnast uses different things that are associated with pain (species membership) to argue against your thesis.
"given that the shrimp welfare projects is thousands of times more neglected than longtermism"
This seems like the wrong comparison: one org vs an entire cause area. You should be comparing to the best longtermist giving opportunity instead. I don't think the "thousands of times more neglected" claim will hold up there.
(I don't think this would necessarily change your conclusion to give to SWP.)
Animal welfare is more neglected than longtermism. But even within animal welfare, suffering of invertebrates is almost entirely ignored.
I think that's reasonable. I would guess "thousands of times more neglected" is an overestimate given any reasonable way to measure.
No answer to the God objection. Looks like you’ve gone woke. If you can’t argue, just censor!
At some point I get annoyed by the scores of people commenting beneath every post mentioning a bad thing, "so why does your God allow a bad thing." It adds nothing and is just annoying.
“This leads to Moral skepticism” was one of your arguments when you were an atheist. If you’re not going to answer it under the shrimp welfare posts, answer it in a theodicy post, at some point.
Let's say we limit ourselves just to the realm of animal welfare. Are you arguing that contributing to the Shrimp Welfare Project is the _optimal_ way to spend your resources? My personal inclination is that promoting veganism is the best use of our time and money, vs. making incremental changes to animal welfare. Promoting veganism has the added benefit of reducing greenhouse gases.
Promoting veganism is insanely ineffective comparatively and certainly doesn't avert tens of thousands of painful deaths per dollar.
Perhaps. But can you answer the first question? Are you arguing that contributing to the Shrimp Welfare Project is the optimal use of dollars (assuming we're limiting discussion to just animal welfare)?
Yes
So what are we to do with all the other animal charities that are in existence? Stop contributing until all shrimp are stunned prior to being killed?
I'm talking about what should be done at the margin. At some point, the marginal value of the SWP will go down.
So under your system, we'd end up closing all animal shelters and farm sanctuaries until the marginal value of the SWP reaches some specific point. Correct?
The important thing is that the money isn't going to undeserving poor people, amirite?