Criticisms Of Effective Altruism Are Obviously Motivated By People Not Liking Individual Effective Altruists
One reason why they're so bad
The level of error in the typical piece criticizing effective altruism is genuinely impressive. From deBoer to Wenar to the Bulwark to Torres, the criticisms are about as persuasive as the typical flat-Earth screed. In fact, they generally don’t contain any arguments at all, just snark, just pointing and sputtering. For example, they’ll point to Longtermism and then declare, without argument, that it “amounts to angels dancing on the head of a pin” (deBoer said that one). Readers who already hate EA will feel vindicated because the article writes mean things about it eloquently, but nothing remotely in the vicinity of an actual argument will be given.
But maybe this criticism of mine is missing the point. Criticizing these people for their lack of arguments is like criticizing pornography for its poorly developed plot. That’s not really what it’s about. The aim isn’t to argue that effective altruism is wrong or harmful; it’s some kind of weird performance art.
The critics of effective altruism have a problem on their hands: effective altruism is mostly uncontroversial. Most of what the movement does is provide antimalarial bednets and other disease-preventing interventions to poor people. Much of the rest goes toward improving conditions on factory farms, so that, for example, female pigs have enough room to move when they give birth. It’s not a good look to defend confining pregnant pigs in crates so small that they can’t move, squirming in agony as they give birth.
Effective altruism also has a third major component: Longtermism. This says we should try to do things that make the far future a lot better. But the projects pursued under this banner are mostly uncontroversial: a lot of the work aims to reduce the risk of nuclear war. Again, it’s hard to write persuasively in defense of nuclear war.
Of course, some parts of effective altruism are controversial. Actions taken on AI risk garner lots of controversy, for many people don’t think AI is a risk, or, if it is, think it isn’t the type of risk that effective altruists’ research is effective at addressing. But effective altruists have written many thousands of pages arguing for the significance of AI risk, arguments the typical journalist cannot begin to engage with.
So as a result, the critic’s only option is to be snarky. They cannot substantively engage with anything. Of course, they can take passing shots at random EA projects—hahahahaha, you guys bought a castle. Never mind that it was a good investment, that it made it cheap to host a bunch of retreats, that it has since been sold, and that even if the purchase was a mistake, it was made by a single organization and says nothing significant about the movement as a whole. They can fire off snarky sentences about Silicon Valley tech nerds, fanaticism, and random things said in the Ph.D. theses of prominent EAs, but actually arguing against the core of effective altruism would require either taking the wildly implausible stance that it’s good when kids get malaria or engaging in detail with the arguments for Longtermist projects.
So why don’t they just ignore it? Why do so many journalists and philosophers feel the need to write long articles criticizing effective altruism, despite having no clue what they’re talking about? The answer is simple: most of them don’t like effective altruists. Critics are motivated largely by spite.
Effective altruists have a very particular way of talking. We tend to be disproportionately nerdy and autistic and likely to talk about expected value. If you’re a history of philosophy professor who thinks philosophy peaked with Aristotle, you’ll be very annoyed by these mostly young, irreverent utilitarians who don’t take ancient philosophy seriously. Of course, this is an issue with individual effective altruists, not effective altruism—it wouldn’t be an objection to domestic violence hotlines that many of the people who work there are annoying (even if that were true). But nonetheless, it’s not hard to see why Mary Townsend and Justin Smith-Ruiu are annoyed.
Similarly, journalists are mostly annoyed by the kinds of tech nerds who are in EA. Not that EAs are vicious—in fact, the ones I’ve met are almost exclusively superhumanly nice. But if a journalist’s exposure to effective altruism is largely from Twitter—which is where journalists spend most of their time these days—they’ll be annoyed by much of the messaging, just like they’d probably be annoyed by the obscure memes of hardcore Star Trek fans.
On top of this, effective altruists aren’t just random people. They’re doing significant things in the world. Worst of all, they’re saying you should do those significant things as well. So unless journalists want to give away 10% of their money to save kids from malaria—and they don’t, they have cocktail parties to attend—they have to come up with some rationalization about why effective altruism is actually terrible.
They are stuck between two considerations: effective altruists are natural targets for critique, being annoying to many of the people writing critical articles. But their problem is that it’s very hard to critique them, because the stuff EAs do is mostly uncontroversially good. So as a result, they point and sputter. They get very outraged and employ wholly unprecedented levels of snark, but ultimately have nothing to say. It would be amusing if it weren’t so tragic. How many people have been put off taking the giving pledge—which would save many lives—by one of these articles giving EA a bad name? While for the journalists writing these articles it’s just a Tuesday, for the people suffering and dying from easily preventable diseases, what those journalists say actually matters. As Richard says:
Seems like the truly objectionable thing about EA that motivates these attacks is the same thing that makes people object to veganism so strenuously - if adopted, they require you to make difficult changes to your life.
Btw, I’m back on Twitter (I know I said bad things about it, but those should be mitigated by using it sparingly)—feel free to give me a follow.