Selling Nonsense: How Émile Torres Misrepresents Longtermism
Another bad critique of EA and longtermism
Émile Torres has made a career out of taking increasingly desperate, dejected, and disastrously confused swings at EA and longtermism. As support for EA has recently surged, so too has the number of people looking for critiques of it — Torres has filled that market, writing a spate of articles suggesting longtermism is worse than Stalinism.
Torres wrote an article for Salon about this. The article took a variety of piss-poor swings at EA, as is typical of so many of its critics. Given that many EAs seem too nice to carefully dissect really bad criticism, the task of doing so has fallen to me.
Longtermism emerged from a movement called "Effective Altruism" (EA), a male-dominated community of "super-hardcore do-gooders" (as they once called themselves tongue-in-cheek) based mostly in Oxford and the San Francisco Bay Area.
The male-dominated charge is true — however, lots of different communities fall weirdly across gender lines. Veganism is female-dominated; golf, construction work, philosophy, and history are male-dominated; and so on. While it may be good for EA to attract more women, the fact that the movement is disproportionately one sex doesn't mean it isn't a good thing — particularly in light of the fact that it's saved hundreds of thousands of lives. This comment is just a cheap barb — not a real argument.
Although the longtermists do not, so far as I know, describe what they're doing this way, we might identify two phases of spreading their ideology: Phase One involved infiltrating governments, encouraging people to pursue high-paying jobs to donate more for the cause and wooing billionaires like Elon Musk — and this has been wildly successful.
This is totally untrue. While some longtermists work in government — a career path that 80,000 Hours has advocated — there is no specific two-step process. Instead, a variety of different longtermists are doing a variety of different things, broadly in service of improving the long-term future. Torres presents it as though there's some cookie-cutter playbook, totally misrepresenting what's being done. The word "infiltrating" is also clearly just there to provoke — a loaded term that adds nothing beyond the insinuation that longtermism is some grand, nefarious cabal.
Phase Two is what we're seeing right now with the recent media blitz promoting longtermism, with articles written by or about William MacAskill, longtermism's poster boy, in outlets like the New York Times, the New Yorker, the Guardian, BBC and TIME. Having spread their influence behind the scenes over the many years, members and supporters are now working overtime to sell longtermism to the broader public in hopes of building their movement, as "movement building" is one of the central aims of the community. The EA organization 80,000 Hours, for example, which was co-founded by MacAskill to give career advice to young people (initially urging many to pursue lucrative jobs on Wall Street), "rates building effective altruism a 'highest priority area': a problem at the top of their ranking of global issues."
Again, there is no specific two-step process. Some longtermists work on building EA; others work on other things. Only a small percentage of the movement's resources goes toward growing EA — movement building is thus not half of what longtermists do, contrary to Torres' two-phase framing.
As MacAskill notes in an article posted on the EA Forum, it was around 2011 that early members of the community began "to realize the importance of good marketing, and therefore [were] willing to put more time into things like choice of name." The name they chose was of course "Effective Altruism," which they picked by vote over alternatives like "Effective Utilitarian Community" and "Big Visions Network." Without a catchy name, "the brand of effective altruism," as MacAskill puts it, could struggle to attract customers and funding.
It's easy for this approach to look rather oleaginous. Marketing is, of course, ultimately about manipulating public opinion to enhance the value and recognition of one's products and brand. To quote an article on Entrepreneur's website,
if you own a business, manipulation in marketing is part of what you do. It's the only way to create raving fans, sell them products and gain their trust. Manipulation is part of what you do, so the trick isn't whether you do it or not — but rather how you do it.
Why is this a bad thing? Becoming a committed longtermist is a pretty significant step — not the type of thing one stumbles into totally by accident. Thus, a good name and otherwise appealing marketing won't trick people into becoming major longtermists — it will just make the idea more palatable to people who are already interested in it. Marketing of this sort — aptly naming organizations, for example — is a good thing, not a bad one, and it is not manipulative.
Let's imagine that EA had been named the Effective Utilitarian Community. Well, that could have turned off the solid percentage of EAs who are not utilitarians — with disastrous results.
This is exactly what we see in the ongoing promotion of MacAskill's new book "What We Owe the Future," which offers an easy-to-understand version of longtermism designed for mass consumption.
Consider the word "longtermism," which has a sort of feel-good connotation because it suggests long-term thinking, and long-term thinking is something many of us desperately want more of in the world today. However, longtermism the worldview goes way beyond long-term thinking: it's an ideology built on radical and highly dubious philosophical assumptions, and in fact it could be extremely dangerous if taken seriously by those in power. As one of the most prominent EAs in the world, Peter Singer, worried in an article that favorably cites my work:
Viewing current problems through the lens of existential risk to our species can shrink those problems to almost nothing, while justifying almost anything that increases our odds of surviving long enough to spread beyond Earth.
It's unfortunate, in my view, that the word "longtermism" has been defined this way. A much better – but less catchy – name for the ideology would have been potentialism, as longtermism is ultimately about realizing humanity's supposed vast "longterm potential" in the cosmos.
Torres is mistaken about longtermism. Longtermism isn't just about making sure that humanity survives in the cosmos — longtermists also tend to support actions that expand the circle of moral concern, for example, in order to decrease the risk of a terrible future.
Does Torres object to the pro-choice movement being called pro-choice, which no doubt evokes positive emotions? If not, why object to longtermism on the grounds that it has a nice-sounding slogan?
Also, Torres' proposed name would be totally bizarre and contrived — imagine having to ask whether people are "potentialists." What a strange-sounding question. Of course movements will have nice-sounding names; they want to draw people in. That's the reason the Affordable Care Act was called the Affordable Care Act, rather than some complex technical description of exactly what it did.
The philosophical assumptions are not highly dubious — as I've argued throughout my five-part series on the matter. Linking to a random paper that argues for a procreation asymmetry, while ignoring the wealth of papers arguing against it, is sloppy journalism, and quite misleading. As Beckstead shows, the procreation asymmetry has really implausible implications. If it's only good to make people happy, rather than to make happy people, then we should be indifferent between three options: creating a world where everyone lives to 10,000 and is never miserable; creating a world where everyone lives to 100 and is never miserable; and letting humanity go extinct after this generation. After all, on this view we have no reason to make happy people — and if we have no reason in any of the cases, our reasons can't differ across the three of them.
I've already replied to Torres' disgracefully bad objections to longtermism, wherein they claim it's the world's "most dangerous secular credo." On top of this, one doesn't need to think the far future matters more than the present to be a longtermist — just that it matters a lot, and that we should do more than we currently do to safeguard it.
The point is that since longtermism is based on ideas that many people would no doubt find objectionable, the marketing question arises: how should the word "longtermism" be defined to maximize the ideology's impact? In a 2019 post on the EA Forum, MacAskill wrote that "longtermism" could be defined "imprecisely" in several ways. On the one hand, it could mean "an ethical view that is particularly concerned with ensuring long-run outcomes go well." On the other, it could mean "the view that long-run outcomes are the thing we should be most concerned about" (emphasis added).
The first definition is much weaker than the second, so while MacAskill initially proposed adopting the second definition (which he says he's most "sympathetic" with and believes is "probably right"), he ended up favoring the first. The reason is that, in his words, "the first concept is intuitively attractive to a significant proportion of the wider public (including key decision-makers like policymakers and business leaders)," and "it seems that we'd achieve most of what we want to achieve if the wider public came to believe that ensuring the long-run future goes well is one important priority for the world, and took action on that basis."
The weaker first definition was thus selected, essentially, for marketing reasons: it's not as off-putting as the second, and if people accept it, that may be enough for longtermists to get what they want.
Torres' outrage at this is misplaced and badly confused. Consider the following analogy.
Suppose I think factory farms are the worst thing ever, and that ending them is the world's leading moral priority — I might describe that thesis as "probably right." However, if I wanted to build a movement to shut down factory farms, I wouldn't need to convince people of this controversial thesis, which doesn't bear on the immediate actions being taken. Doing so would just be needlessly divisive.
Or imagine someone is an anti-natalist, and they want to get pro-choice ballot initiatives passed. It wouldn’t be helpful to call it the “anti-birth” policy — even if that’s their motivation. A movement’s slogan should represent what one has to think to be in the movement — not what its founders happen to think.
The importance of not putting people off the longtermist or EA brand is much-discussed among EAs — for example, on the EA Forum, which is not meant to be a public-facing platform, but rather a space where EAs can talk to each other. As mentioned above, EAs have endorsed a number of controversial ideas, such as working on Wall Street or even for petrochemical companies in order to earn more money and then give it away. Longtermism, too, is built around a controversial vision of the future in which humanity could radically enhance itself, colonize the universe and simulate unfathomable numbers of digital people in vast simulations running on planet-sized computers powered by Dyson swarms that harness most of the energy output of stars.
The EA Forum is somewhat public-facing — anyone can access it, but it's primarily for effective altruists. It is not, however, some super-secret underground EA chatroom, the way Torres suggests.
Torres is also wrong about what longtermism "is built around." Some longtermists may adopt those views, but one doesn't need to think any of those things to be a longtermist. I happen to believe those things and to be a longtermist, but the former isn't required for the latter.
One can disagree with the petrochemical idea — I think MacAskill does these days. But it would be worth actually quoting MacAskill's defense and responding to it, rather than using it as cheap invective to showcase the allegedly crazy things that longtermists believe.
One could have a similar derisive attitude towards modern society — describing it as a world where people spend time on digital networks, use algorithms to discover information, and move around in vehicles powered by fossil fuels. However, that would miss what really matters about society — namely, how well lives are going for the people in it. If there were lots of people living unfathomably excellent lives, that would be a very, very good thing. For reference, here is MacAskill's actual defense of the petrochemical case:
As noted, if Sophie were not to take the petrochemical engineering job, someone else would. So Sophie only makes others worse off if more CO2 is produced as a result of her working in that job than as a result of her replacement working in that job. But it seems unlikely that Sophie’s taking that job would result in more CO2 being produced than would have been produced had her replacement taken the job. She might even cause less CO2 to be emitted. After all, Sophie has an altruistic character: she takes this job in order to donate as much money as possible, in order to help other people as much as she can. She therefore cares about the fact that CO2 emissions harm others. The typical petroleum engineer, in contrast, does not. So it’s possible that, without compromising her aims as a philanthropist, she could work in such a way that she produces less CO2 than her replacement would have done. And, even if she cannot accomplish that, there is no reason to think she will produce more CO2 than her replacement would have done.
So there is no-one who is made worse off if she decides to pursue that career. But some people are better off: namely, the beneficiaries of her charitable donations. And, if she is able to produce less CO2 than her replacement would have done, then those set to be harmed from climate change are also made better off.
This is particularly so given that lives can be saved for a few thousand dollars — so earning more is actually really important.
Torres seems to think that it's a good objection to longtermism to find random things said by longtermists and then mock them — they don't even see fit to actually argue against those things. They just describe the ideas sneeringly, noting (correctly) that some of them sound pretty weird.
No explanation of why people believe them is provided. Nor is any argument against them given. All Torres can offer is cheap derision.
For most people, this vision is likely to come across as fantastical and bizarre, not to mention off-putting. In a world beset by wars, extreme weather events, mass migrations, collapsing ecosystems, species extinctions and so on, who cares how many digital people might exist a billion years from now? Longtermists have, therefore, been very careful about how much of this deep-future vision the general public sees.
Well, if a utopia of digital people could exist for billions of centuries, and we might ruin it, then we should try really hard not to ruin it. If utopia could be quadrillions of times better than current life, then it's astronomically important that we don't squander it. I argue for these conclusions at more length in my defense of longtermism. Again, Torres quotes no longtermist defenses of this — just more sneering. This pattern is pervasive in Torres' critique.
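To make the scale of that claim concrete, here's a rough back-of-the-envelope sketch. All of the numbers are illustrative assumptions of mine (the symbols V and ε are just labels I've picked), not estimates any particular longtermist is committed to:

```latex
% A toy expected-value calculation -- every figure here is an assumption.
% Let V be the value of a flourishing long-term future, measured in units
% of "present-worlds," and let eps be the reduction in the probability of
% permanently ruining that future achieved by some intervention.
\[
  V \approx 10^{15}, \qquad \varepsilon \approx 10^{-9}
\]
% Expected value of the intervention:
\[
  \varepsilon \cdot V \;=\; 10^{-9} \times 10^{15} \;=\; 10^{6}
  \quad \text{present-worlds}
\]
```

On these made-up numbers, even a one-in-a-billion reduction in the risk of ruining the future is worth about a million present-worlds in expectation — and you can discount V by several orders of magnitude without changing the upshot.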
Additionally, longtermists tend to favor actions that reduce climate change, prevent ecosystem collapse, and so on. After all, those things reduce the expected value of the future — not to mention harm people alive today. Longtermists thus have good reason to think such problems are really important.
For example, MacAskill says nothing about "digital people" in "What We Owe the Future," except to argue that we might keep the engines of "progress" roaring by creating digital minds that "could replace human workers — including researchers," as "this would allow us to increase the number of 'people' working on R&D as easily as we currently scale up production of the latest iPhone." That's a peculiar idea, for sure, but some degree of sci-fi fantasizing certainly appeals to some readers.
But does MacAskill's silence about the potential for creating unfathomable numbers of digital people in vast simulations spread throughout the universe mean this isn't important, or even central, to the longtermist worldview? Does it imply that criticisms of the idea and its potentially dangerous implications are — to borrow a phrase from MacAskill's recent interview with NPR (which mentions my critiques) — nothing more than "attacking a straw man"?
I don't think so, for several reasons. First, note that MacAskill himself foregrounded this idea in a 2021 paper written with a colleague at the Future of Humanity Institute, an Oxford-based research institute that boasts of having a "multidisciplinary research team [that] includes several of the world's most brilliant and famous minds working in this area." According to MacAskill and his colleague, Hilary Greaves, there could be some 10^45 digital people — conscious beings like you and I living in high-resolution virtual worlds — in the Milky Way galaxy alone. The more people who could exist in the future, the stronger the case for longtermism becomes, which is why longtermists are so obsessed with calculating how many people there could be within our future light cone.
This is a really bad line of reasoning. Peter Singer has, at various points, claimed that factory farms might be the worst things in existence. But criticizing that further belief of Singer's would not be a good criticism of animal rights activism. Even if a writer holds belief X that relates to cause Y, and writes a book about cause Y, criticism of belief X is not by extension a criticism of cause Y.
Torres, predictably, has yet to identify a single thing being done by longtermists that they think is bad — they just vaguely smear the movement. Zero targeted criticisms, more sneering: the Torres modus operandi.
Furthermore, during a recent "Ask Me Anything" on Reddit, one user posed this question to MacAskill:
In your book, do you touch on the long-term potential population/well-being of digital minds? I feel like this is something that most people think is too crazy-weird, yet (to me) it seems like the future we should strive for the most and be the most concerned about. The potential population of biological humans is staggeringly lower by comparison, as I'm sure you're aware.
To this, MacAskill responded: "I really wanted to discuss this in the book, as I think it's a really important topic, but I ended up just not having space. Maybe at some point in the future!" He then linked to a paper titled "Sharing the World with Digital Minds," coauthored by Nick Bostrom, who founded the Future of Humanity Institute and played an integral role in the birth of longtermism. That paper focuses, by its own account,
“on one set of issues [that] arise from the prospect of digital minds with superhumanly strong claims to resources and influence. These could arise from the vast collective benefits that mass-produced digital minds could derive from relatively small amounts of resources. Alternatively, they could arise from individual digital minds with superhuman moral status or ability to benefit from resources. Such beings could contribute immense value to the world, and failing to respect their interests could produce a moral catastrophe, while a naive way of respecting them could be disastrous for humanity.”
This suggests that digital people are very much on MacAskill's mind, and although he claims not to have discussed them in his book due to space limitations, my guess is that the real reason was concern that the idea might sound "too crazy-weird" for general consumption. From a PR standpoint, longtermists at Bostrom's Future of Humanity Institute no doubt understand that it would be bad for the movement to become too closely associated with the idea of creating enormous populations of digital beings living in virtual-reality worlds throughout the universe. It could cause "brand damage," to borrow a phrase from MacAskill, as critics might well charge that focusing on digital people in the far future can only divert attention away from the real-world problems affecting actual human beings.
Torres has no reason to assume MacAskill lied; the book is already quite long, and a thorough treatment of digital minds and digital people would be nearly impossible given space limitations. Making the book an extra 60 pages longer would be a bad idea.
But let's say that Torres is right — MacAskill didn't discuss digital people because they're too weird. So what? He was marketing to a general audience. This is like objecting to Animal Liberation on the grounds that it doesn't talk enough about wild-animal suffering, on the basis that Singer does have views on the subject. Maybe he does, but one cannot and need not discuss all the intricacies of a view in a book like MacAskill's.
Even if MacAskill thinks that digital minds are an important part of why longtermism is generally good, that would not mean that criticisms of digital minds are, by extension, criticisms of longtermism. On top of this, Torres provides no criticism of digital minds — just more sneering. The pattern is really getting to be quite tiresome.
When 80,000 Hours first launched, we led with the idea of earning to give very heavily as a marketing strategy; it was true that we used to believe that at least a large proportion of people should aim to earn to give long-term; earning to give is much simpler and more memorable than our other recommendations; and earning to give is controversial, so the media love to focus on it.
Yet, MacAskill adds, "giving too much prominence to earning to give may nevertheless have been a mistake." As the EA movement gained more attention, this marketing decision seemed to backfire, as many people found the idea of working for "evil" companies in order to donate more money to charity highly objectionable.
This foregrounds an important point noted by many in the EA community: Movement-building isn't just about increasing awareness of the EA brand; it also requires strategically enhancing its favorability or, as some would say, the inclination people have toward it. Both of these can be "limiting factors for movement growth, since a person would need to both know what the movement is and have a positive impression of it to want to become involved." Or to quote another EA longtermist at the Future of Humanity Institute:
Getting movement growth right is extremely important for effective altruism. Which activities to pursue should perhaps be governed even more by their effects on movement growth than by their direct effects. … Increasing awareness of the movement is important, but increasing positive inclination is at least comparably important.
If a movement is good, then focusing on expanding the movement is also good. Again, suppose that one is an anti-natalist. They might think that convincing people to be anti-natalists is good but would be bad PR, so instead they focus on increasing abortion access. That wouldn't be wrong — and neither is this. Also, quoting one person is not a good gauge of what the movement generally believes — particularly given that the author of the quoted piece says he no longer agrees with its contents.
Thus, EAs — and, by implication, longtermists — should in general "strive to take acts which are seen as good by societal standards as well as for the movement," and "avoid hostility or needless controversy." It is also important, the author notes, to "reach good communicators and thought leaders early and get them onside" with EA, as this "increases the chance that when someone first hears about us, it is from a source which is positive, high-status, and eloquent." Furthermore, EAs
should probably avoid moralizing where possible, or doing anything else that might accidentally turn people off. The goal should be to present ourselves as something society obviously regards as good, so we should generally conform to social norms.
Is this objectionable? If so, Torres doesn't explain why. This is just general advice that EAs shouldn't be divisive — why in the world is that a bad thing? It's only bad if it expands EA and EA is bad; but EA isn't bad! Torres never argues that EA is bad overall — they prefer vague smears of the views of random EAs. Given the vast number of lives the movement has saved, one would be hard-pressed to think it an overall bad.
The first is that neoliberalism "was extremely successful, rising from relative outcast to the dominant view of economics over a period of around 40 years." And second, it was "strategic and self-reflective," having "identified and executed on a set of non-obvious strategies and tactics to achieve [its] eventual success." This is not necessarily "an endorsement of neoliberal ideas or policies," the author notes, just an attempt to show what EA can learn from neoliberalism's impressive bag of tricks.
No objection is given here — Torres just plays word association, linking EA to neoliberalism, which they don't like.
Yet another article addresses the question of whether longtermists should use the money they currently have to convert people to the movement right now, or instead invest this money so they have more of it to spend later on.
It seems plausible, the author writes, that "maximizing the fraction of the world's population that's aligned with longtermist values is comparably important to maximizing the fraction of the world's wealth controlled by longtermists," and that "a substantial fraction of the world population can become susceptible to longtermism only via slow diffusion from other longtermists, and cannot be converted through money." If both are true, then
“we may want to invest only if we think our future money can be efficiently spent creating new longtermists. If we believe that spending can produce longtermists now, but won't do so in the future, then we should instead be spending to produce more longtermists now instead.”
Such talk of transferring the world's wealth into the hands of longtermists, of making people more "susceptible" to longtermist ideology, sounds — I think most people would concur — somewhat slimy. But these are the conversations one finds on the EA Forum, between EAs.
Well, it's unsurprising that a movement will sound slimy if you scroll through random posts on the EA Forum — the EA equivalent of Reddit — and pluck the most nefarious-sounding out-of-context quote to place in your article. If you think longtermism is good, then, phrasing aside, lots of money being spent on it is plausibly good — just as, if you think improving global health is good, you should want lots of money to be spent on that.
So the grift here, at least in part, is to use cold-blooded strategizing, marketing ploys and manipulation to build the movement by persuading high-profile figures to sign on, controlling how EAs interact with the media, conforming to social norms so as not to draw unwanted attention, concealing potentially off-putting aspects of their worldview and ultimately "maximizing the fraction of the world's wealth controlled by longtermists." This last aim is especially important since money — right now EA has a staggering $46.1 billion in committed funding — is what makes everything else possible. Indeed, EAs and longtermists often conclude their pitches for why their movement is exceedingly important with exhortations for people to donate to their own organizations. Consider MacAskill's recent tweet:
While promoting What We Owe The Future I'm often asked: "What can I do?" … For some people, the right answer is donating, but it's often hard to know where the best places to donate are, especially for longtermist issues. Very happy I now have the Longtermism Fund to point to!
How is this a grift? Longtermists think longtermism is important, so they fund longtermist organizations. Then they advocate that people donate to those organizations, because they tend to be pretty effective. Why in the world is this a bad thing?
In fact, EAs have explicitly worried about the "optics" of self-promotion like this. One, for example, writes that "EA spending is often perceived as wasteful and self-serving," thus creating "a problematic image which could lead to external criticism, outreach issues, and selection effects." An article titled "How EA Is Perceived Is Crucial to Its Future Trajectory" similarly notes that "the risk" of negative coverage on social media and in the press "is a toxic public perception of EA, which would result in a significant reduction in resources and ability to achieve our goals."
When writing a journalistic article, it's very easy to hop from point to point and, through negative phrasing, make it look as if one is raising devastating objections when, in reality, none have been given. Torres has mastered this craft. Noting that EAs sometimes describe the risks of bad PR in frightening-sounding ways isn't an argument — it only mimics one, capitalizing on the tendency for the way things are described to influence our moral judgments.
Another example of strategic maneuvering to attract funding for longtermist organizations may be the much-cited probability estimate of an "existential catastrophe" in the next 100 years that Toby Ord gives in his 2020 book "The Precipice," which can be seen as the prequel to MacAskill's book. Ord claims that the overall probability of such a catastrophe happening is somewhere around one in six. Where did he get this figure? He basically just pulled it out of a hat. So why did he choose those specific odds rather than others?
First, as I've noted here, these are the odds of Russian roulette, a dangerous gamble that everyone understands. This makes it memorable. Second, the estimate isn't so low as to make longtermism and existential risk studies look like a waste of time. Consider by contrast the futurist Bruce Tonn's estimate of human extinction. He writes that the probability of such a catastrophe "is probably fairly low, maybe one chance in tens of millions to tens of billions, given humans' abilities to adapt and survive." If Ord had adopted Tonn's estimate, he would have made it very difficult for the Future of Humanity Institute and other longtermist organizations to secure funding, capture the attention of billionaires and look important to governments and world leaders. Finally, the one-in-six estimate also isn't so high as to make the situation appear hopeless. If, say, the probability of our extinction were calculated at 90%, then what's the point? Might as well party while we can.
There's no way to conclusively refute claims that people are lying about what they really believe; however, several things count against this one.
First, MacAskill has a much lower estimate — closer to 1%. So maybe Ord is being dishonest, but most longtermists aren't.
Second, if you read The Precipice, the estimates that seem right based on the evidence Ord provides are roughly the estimates Ord gives — they aren't pulled out of a hat.
Third, similar estimates have been provided by lots of people — Tonn's estimate is the extraordinarily low outlier. To quote the Future of Life Institute:
Many prominent researchers, scientists, and government officials believe that this threat is high and intolerable (Bostrom, 2002; Highfield, 2001; Leslie, 1996; Matheny, 2007; Rees, 2003; U.K. Treasury, 2006). For example, based on his review of trends and situations facing humanity, Rees estimates 50–50 odds that our present civilization will survive to the end of the present century (Rees, 2003). Bostrom (2002) asserts that the probability of human extinction exceeds 25% while Leslie (1996) estimates that the probability of human extinction over the next five centuries is 30%. The Stern Review (U.K. Treasury, 2006), influenced by environmental risks such as climate change, reports an almost 10% chance of extinction by the end of this century.
Thus, lots of people would have to be in on it to manufacture that number.
Worse yet, the EA community has also sometimes tried to silence its critics. While advertising themselves as embracing epistemic "humility" and always being willing to change their minds, the truth is that EAs like the criticisms that they like, but will attempt to censor those they don't. As David Pearce, an EA who co-founded the World Transhumanist Association with Bostrom back in 1998, recently wrote, referring to an article of mine: "Sadly, [Émile] Torres is correct to speak of EAs who have been 'intimidated, silenced, or 'canceled.'" In other words, cancel culture is a real problem in EA.
This seems bad if true. I have not found it to be true in my experience — the EAs I've interacted with tend to be extraordinarily open to criticism. To recount one anecdote: while I was tabling for my university's EA club, a critic of EA came up and started describing the terrible things about EA. The other EAs patiently listened to her criticisms and suggested that she come to a meetup to express those concerns.
These were very poor criticisms — I felt a desire to argue with her, but the other EAs didn't. It was pretty extraordinary how open to criticism they were.
Still, if Torres' charge is true, it's bad! Torres gives an example of a case in which this allegedly happened — I haven't looked into the details in any great depth. But this is not a criticism of what longtermists do broadly — just of something allegedly done by a small number of longtermist organizations. One could no doubt find similar behavior from environmentalist organizations — ones Torres would presumably support.
This yields a very troubling picture, in my view. Effective Altruism and its longtermist offshoot are becoming profoundly influential in the world. Longtermism is ubiquitous within the tech industry, enthusiastically embraced by billionaires like Musk, encroaching into the political arena and now — in what I'm calling Phase Two of its efforts to evangelize — spreading all over the popular media.
To understand what's really going on, though, requires peeking under the hood. Marketing, PR and brand management are the name of the game for EAs and longtermists, and this is why, I would argue, the general public should be just as skeptical about how EAs and longtermists promote their brand as they are when, say, Gwyneth Paltrow's Goop tries to sell them an essential oil spray that will "banish psychic vampires."
Skepticism is good. Fortunately, Torres gave no specific reasons to be skeptical of EA — in their place were snark and derision. This seems rather typical of criticisms of EA.
It's amazing how much mileage they think they can get by describing marketing in ways that make it sound nefarious. Every single thing they complain about is something that every successful movement in the history of the world has done.
It's also very funny that they accuse EA of cancel culture when most of their other critiques of EA amount to, "This person associated with EA was canceled."