Slightly Contra Jesse Singal On Effective Altruism
Racism against shrimp has gone on for long enough!!!
Jesse Singal is one of my favorite journalists. He writes an excellent newsletter and cohosts the funniest podcast on the internet, Blocked and Reported (BARPOD). I at one point showed BARPOD to my mother—among the least online people in the world—and despite it being about bizarre internet bullshit, she found it generally tolerable. This is a testament to just how funny it is; it would be like getting Nick Fuentes to enjoy a DEI program or Kanye to like a celebration of Jewish pride.
Singal is broadly on team effective altruism. He gives away a portion of his newsletter’s revenue to effective charities, and sometimes defends effective altruism against dumb criticisms. Thus, Singal and I are very broadly in agreement when it comes to effective altruism, and while in this article I’m going to criticize some of the things he says about it, I appreciate that he actually puts his money where his mouth is and gives to effective charities. All too many people criticize some of the weird components of effective altruism, and then use this as an excuse not to spend money on malaria nets. But this is silly; malaria nets save the lives of children for just a few thousand dollars. So I appreciate that Singal doesn’t use qualms about certain kinds of EA orgs as an excuse to forswear effective giving entirely.
(See here for context for the above meme).
Anyway, Singal’s basic view, which he’s articulated here, here, and here, for instance, is that when it comes to effective altruism, everyone gets too fixated on the weird stuff. Some EAs are strange hyperutilitarians who buy into very odd moral arguments, like that we should kill half the world if it would slightly improve the quality of the future. A few of them fund strange, galaxy-brained programs based on this errant moral calculation, and their critics spend all their time harping on the weird moral implications of this. Instead, everyone should just chill out about the weird 10-D chess moral philosophy and fund things like bednets that are uncontroversially effective and good.
I understand the appeal of this line of reasoning. Each month, I give some of my money to GiveWell charities. I certainly agree with Singal about the critics of EA; if you think that shrimp welfare is dumb, for instance, then don’t spend your money on shrimp welfare. But you should still, at the very least, spend it on bednets so that poor children don’t get malaria (before editing, this said “so that poor children get malaria,” oops!).
Where I disagree with Singal is that I think the weird galaxy-brained moral philosophy stuff is hugely important. I’m glad that EAs spend some money and effort on the weird stuff. Stuff can be weird and good.
Take shrimp welfare as an example. It sounds super weird; when I tell people at da clerb that I give 100 dollars per month to help the shrimp, I get very strange looks. But helping the shrimp is really important! Trillions of shrimp suffer in horrendous conditions for their entire lives, before being slowly suffocated to death.
For every dollar the shrimp welfare project receives, it makes about 15,000 shrimp deaths painless—deaths that would otherwise have been excruciating—by stunning the shrimp before slaughter. If you could spend a penny to give 150 lobsters a painless death, instead of a death from being boiled alive, that would be a pretty good use of money. The shrimp welfare project is as effective as that! It becomes especially clear how good it is in light of the fact that our best evidence points to these creatures feeling relatively intense pain. While we think of helping shrimp as a frivolous enterprise, if you were one of the trillions of shrimp who will be painfully tortured to death this year, shrimp welfare reforms couldn’t come soon enough.
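(To spell out the arithmetic behind that comparison, taking the 15,000-shrimp-per-dollar figure at face value: 15,000 stunned shrimp per dollar ÷ 100 pennies per dollar = 150 stunned shrimp per penny—hence the lobster example.)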
Singal has used shrimp welfare as an example of EA run amok. But I think that shrimp welfare shows what happens when EA goes right—when it does the opposite of running amok, when it walks a-cleanliness! Most people don’t care about shrimp welfare, but this is for bad reasons. It’s because they’re racist against shrimp!!! People mostly ignore the interests of the poor, oppressed shrimp because caring about them is inconvenient and they look weird. Deference to common sense is a good enough heuristic most of the time, but not when something is part of common sense for bad reasons.
The arguments for caring about shrimp welfare don’t result from 4-D utilitarian underwater chess. They result from a more basic principle, one that is actually common sense, but that most people don’t take very seriously: extreme suffering is bad. The argument for giving to shrimp welfare in a nutshell is:
Extreme suffering is bad.
It’s good to prevent lots of bad stuff at minimal cost.
Giving to the shrimp welfare project prevents lots of extreme suffering at minimal cost.
Therefore, giving to the shrimp welfare project is good.
We normally ignore the extreme suffering of shrimp. But this is just a failure of empathy. For the shrimp who slowly freeze and suffocate to death in a bucket of ice, it probably feels roughly like drowning feels to us. When we really, truly empathize with the poor shrimp, I think we can come to see that the fact that every single year, horrors beyond comprehension are inflicted on trillions of shrimp—a population far greater than the number of humans who have ever lived—is a genuine crisis.
The above shows a dramatic reenactment of the fate of trillions of shrimp. Undercover reporting from the shrimp farms found the following dialogue between a shrimp named Jack and one named Rose:
“ROSE: I love you, Jack.
JACK: No...don’t say your goodbyes, Rose. Don’t you give up. Don’t do it.
ROSE: I’m so cold.
JACK: You’re going to get out of this...you’re going to go on and you’re going to make baby shrimp and watch them grow and you’re going to die an old lady shrimp, warm in your [estuaries, coastal areas, rivers, lakes, and the ocean, which is apparently where shrimp live]. Not here...Not this night. Do you understand me?
ROSE: I can’t feel my body.
JACK: Rose, listen to me. Being brought into this shrimp farm was the best thing that ever happened to me. It brought me to you. And I’m thankful, Rose. I’m thankful. You must do me this honor...promise me you will survive...that you will never give up...no matter what happens...no matter how hopeless...promise me now, and never let go of that promise.
ROSE: I promise.
JACK: Never let go.
ROSE: I promise. I will never let go, Jack. I’ll never let go.” (The letting go was metaphorical because shrimp don’t have hands).
We shouldn’t be that surprised if we discover that good things are sometimes weird. A very consistent lesson of history is that our empathy is inconsistent and erratic. For much of history, people thought that conquering and enslaving enemy nations was a really awesome and virtuous way to spend a Tuesday. Our present moral opposition to conquering would have been regarded as weird and in conflict with obvious norms. It’s not particularly unlikely that stuff we now regard as obvious will, in a more enlightened future, be regarded as monstrous.
In light of this, it’s not enough to dismiss ideas just because they’re weird. Almost every good idea was once weird. Imagine trying to explain computers to a fifth-century peasant. Podcasts about wokeness in the knitting community would have seemed weird to almost everyone who ever lived. We therefore have to consider weird ideas and see whether they’re defensible. When we do this for shrimp welfare, the ideas turn out to be correct.
Another target of Singal’s criticism is Longtermism. Longtermists think that because the future could have so many more people than the present, making sure it goes well is very important. There are two kinds of Longtermists: Weak Longtermists and Strong Longtermists. And no, this is not about how much they lift—if it were, almost all Longtermists would be weak Longtermists, with the exception of Will MacAskill.
(A Strong Longtermist if I’ve ever seen one).
Weak Longtermists think that we should do a lot more to make the future go well. That’s it—that’s all they’re committed to! Strong Longtermists, in contrast, think that the future is so important that making it go well is a lot more important than things in the present. Longtermists of both types sometimes give their money to organizations trying to protect the world from existential pandemics and deadly AI.
Now, Strong Longtermism is controversial. It’s counterintuitive that the future matters so much more than the present. But there are two important things to note about it:
It’s supported by various extremely powerful arguments. The future will have a lot of everything that matters. Certainly if things go badly, the future could be filled with horrors beyond comprehension. Making sure that it has good stuff and doesn’t have bad stuff in it is important—and because the future could last so long, this will end up dwarfing everything else in importance. Many philosophers have come around to Longtermism, even when they weren’t initially attracted to it.
To support the stuff Strong Longtermists support, you don’t have to be a Strong Longtermist. Strong Longtermists do exactly the same stuff that the Weak Longtermists do—funding research to reduce existential threats, pushing for policies that guard against existential pandemics, and so on. Even if you’re not a Longtermist of any sort, as Thornley and Shulman have recently argued, efforts to reduce existential risks are horribly neglected. To accept that the Longtermists are doing good work, you don’t have to be a weird utilitarian; you just have to think that it would be bad if the world ended from nuclear war, biotechnology, or AI. Sounds pretty commonsensical!
Singal has articulated why he doesn’t think Longtermist arguments hold up, writing:
I have to confess that I have never been able to understand the logic here. I just can’t grok why we should view a future potential life as equal to a currently existing one. Someone who currently exists can suffer and feel pain and be mourned by their loved ones. Someone who doesn’t exist yet… doesn’t exist yet. If they never come to exist, they are not experiencing any negative consequences. My brain sort of returns a divide-by-zero error when I try to stack up a happy life against a nonexistent life. You can’t compare them! The nonexistent life doesn’t exist, so it has no happiness level. (If you find this stuff interesting, the nonidentity problem is a good way into it.)
I think this is wrong for a few reasons.
First, in order to think that making sure the future has a lot of people is important, you don’t have to think that “we should view a future potential life as equal to a currently existing one.” All you need to think is that a future person coming into existence and living a good life is somewhat valuable. Compare: I don’t think people breaking their legs is as bad as people dying, but if there were some chance that everyone in the world would break their leg, and their descendants would, and their descendants would, and so on, stopping this would be pretty important.
But the idea that it’s good when a person comes into existence and lives a happy life is, in my judgment, pretty commonsensical. Have you ever seen a baby? They’re cute! It’s good that they exist. When I reflect, it seems quite a fortunate thing that my parents had me—and not just because of my impact on others. It’s good that I exist; as a result, I get to have joy, love, dreams, and all the other valuable things in life. Maybe creating a life isn’t as important as saving one, but surely it counts for something!
(I actually have a paper on this that I’ve also written about here. In short, any view that rejects that it’s good to bring a well-off person into being implies certain great absurdities. See here for some other problems present in the philosophical literature for the view that there’s no positive reason to create a happy person).
Second, to be a Longtermist, you don’t even need to think that it’s important that the future has lots of well-off people. You might just think it’s really important that it doesn’t have bad things. This, at least, seems like common sense. But given how vast the future could be, steering it away from bad things that could span the next few million years seems astronomically important. For instance, it’s worth trying to make sure we know how AI consciousness works so that we don’t inadvertently cause huge numbers of artificial agents to suffer for vast periods of time. I similarly think it’s worth ending factory farming, so that it doesn’t proliferate across the galaxy over cosmic timescales!
Third, as I was saying before, even if you don’t think it’s important to steer the future in a positive direction, the actions taken by Longtermists end up looking pretty good. Thus, I think Singal is lapsing into the error he criticizes others for: fixating on the weird hypothetical philosophy rather than on what’s actually being funded.
Strong Longtermists say weird things about philosophy. You know who else does? Catholics! They mostly think that huge numbers of people will go to hell forever, and saving a person’s soul is way more important than making sure they have less important stuff like food. Still, it would be silly to criticize a Catholic charity that feeds the poor on the grounds that they have weird moral beliefs. Similarly, even if the Longtermists think weird things—which, to be clear, I don’t think they necessarily do—you should still be pro-Longtermism.
I agree with Jesse that there is some obviously good, normal stuff that EAs do, and other weirder and more controversial stuff. I agree that if you don’t like the weird stuff, you should just do the uncontroversial stuff. But I think that when one takes the time to carefully consider the arguments for the more controversial stuff, the conclusions turn out to follow unavoidably from extremely commonsensical premises! I know it sounds weird, but you really should help the shrimp!