Against TESCREALISM
The term TESCREAL is a cheap smear; also, dissecting a very dumb Washington Spectator article
Suppose I wanted to argue against Democracy without actually having anything useful to say about why Democracy is bad. I’d probably say something like this: “Democracy, while appearing innocent, is just part of the pernicious Demotist ideology based around rule of the people. Demotism includes fascism, communism, and Democracy. We know that Demotism gets tons of people killed and is a terrible idea whenever it’s tried out—therefore, we shouldn’t risk this whole Democracy thing.” In fact, we have a test case: people who try to argue against Democracy without having any decent criticisms say exactly this.
Suppose I were trying to argue for the dangers of some very specific, moderate branch of Islam. I’d probably lump it in with the rest of Islam, and then argue that Islam is dangerous—just look at ISIS. In fact, we know that this is exactly what those trying to smear moderate Muslims do. Or if I were trying to rail against some particular atheist, I’d rail against atheism in general—pointing out the number of people killed by various godless regimes.
But this is a dumb way to argue. Arguments of the form “X is part of an arbitrary set of things, many of which are bad” are some of the worst arguments ever, because it’s always possible to make up a set that includes what you’re railing against along with a bunch of bad things, tied together with the loose red string of a conspiracy board. An argument that can be used to discredit anything proves too much!
This brings me to the acronym TESCREAL. Coined by Torres and Gebru—two people with whom I have major disagreements—it bundles together Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. Torres has a Twitter thread in which they explain why it makes sense to include them all together. I don’t even know what half of those things are, and neither do most other EAs I know, but apparently, we’re all one big happy family, according to Torres and Gebru.
The first reason Torres gives for smushing them all together into one heavy-handed acronym is that, allegedly, they all came out of the eugenics tradition. This is completely wrong—effective altruism traces its roots to MacAskill and Ord, neither of whom are eugenicists, and its ideas more broadly to Peter Singer, who is, once again, not a eugenicist. Torres’ explanation of why they all come out of eugenics is that the first person to use the term transhumanism was apparently in favor of eugenics, and the others drew from some people who were transhumanists. Apparently, coming out of something is transitive—so if one movement originally came from a eugenicist, and that movement then mingles with another movement, the eugenics infects them all like a virus. By this logic, if my great-great-grandfather was a eugenicist, then I too would be a eugenicist—because I would ultimately have my origin in eugenics. Should we apply the Nazis’ grandfather rule to eugenics too?
The second reason Torres suggests grouping them together is that their members overlap. Apparently, many cosmists are also extropians (I don’t really know much about either), and so on. But the mere fact that two movements often intermingle does not make it helpful to bunch them together and then smear them all. Sophisticated critiques should be specific, not amorphous smears resting on the genetic fallacy. But with this vague grouping, people like Torres can claim that the TESCREAL bundle came out of eugenics just because some people in it also belong to another movement whose label was first used by a eugenicist (not a helpful term, btw).
The third reason is that they are all apparently “techno-utopian” ideologies: they want to bring about utopia. Now, I didn’t think that being pro-utopia was a controversial view, but apparently in Torres-land it is. When all of one’s views are a pernicious combination of status quo bias and whatever one has to believe to disagree with EAs on everything without running afoul of social justice platitudes, this is apparently the view one arrives at. Additionally, many in EA aren’t techno-utopians—many just want to help people and animals. Tomasik, for example, primarily wants to prevent extreme suffering, as do many other S-risk people.
This is why it helps when critiques are specific. If one has a critique of techno-utopianism, then that would be an argument against those who favor tech utopias. But attempting to mix all these diverse ideologies into one globular paste so that one can smear them is intellectually lazy and gives one no reason to distrust the many specific actions being taken by EAs. Even if utopia were the worst thing since dystopia, that would be no argument against nearly everything done by EAs, rationalists, and so on.
Next, the Torres tweetstorm smears Luke Muehlhauser for having worked for an organization funded by Peter Thiel. If bad people donate to your organization, their badness is transmitted by their money and turns the organization bad too! By this logic, if Peter Thiel donated to the Red Cross, then all Red Cross people would be bad.
Torres claims that the TESCREAL people want to “subjugate the natural world, maximize economic productivity, create digital consciousness, colonize the accessible universe, build planet-sized computers on which to run virtual-reality worlds full of 10^58 digital people, and generate “astronomical” amounts of “value” by exploiting, plundering, and colonizing.” I’ve never heard anyone say that their aim is to maximize economic productivity—ever. Some, like Cowen, argue that boosting short-term growth is a top cause area, but this is far from a universal view, and no one supports maximizing it. It’s certainly not true that all TESCREALists want to create digital consciousness and colonize the universe—Tomasik, for example, is very opposed to both. Some don’t even think digital consciousness is possible.
The claim that they want to exploit, plunder, and colonize is particularly bizarre. The action Torres cites as requiring all this is spreading across the universe. So then whom are they colonizing? The aliens? As for subjugating the natural world, this is far from a universal view in EA, rationalism, and transhumanism. Eliezer (crazily) doesn’t even think animals are conscious, so he certainly doesn’t support destroying the natural world for its own sake. Is Eliezer not a TESCREAList? I support environmental preservation in the short term to reduce existential risks, but I think that, all else equal, it would be good if there were less nature. Does this make me not a TESCREAList?
The fourth reason is that, according to Torres, TESCREAL ideologies are very influential in AI. So, no doubt, are Democratic politics. Should they also be part of the bundle? Or do the standards simply not apply to Torres’ sacred cows?
When analyzing something, it is sensible to be maximally granular, maximally specific. Lumping all of these ideologies together into some demented frankenideology is a tool of lazy scholarship; rather than having to say anything about why the EA position that one should donate lots of money to the Against Malaria Foundation is wrong, people can just lump it together with the TESCREAL bundle, which is very evil (we know it’s evil because the person who coined the term for one of the things in the bundle was a eugenicist). Should Rawls’ support for eugenics mean that all his ideas, and all ideas stemming from Rawls’ ideas, should be discounted? Presumably not; the genetic fallacy is only a fallacy when applied to causes that Torres likes.
So I’m dubious about the usefulness of the term TESCREAL.
But there’s another reason I’m against the TESCREAL term: it seems that every time people use it, they’re arguing for something deeply idiotic. For example, Dave Troy has written a deeply terrible article titled “The Wide Angle: Understanding TESCREAL — the Weird Ideologies Behind Silicon Valley’s Rightward Turn.” The article is filled to the brim with falsehood and deception. I won’t respond to all of it, as some of it is so clearly wrong that it needs no explanation.
The title of the article is, of course, absurd. Effective altruism is disproportionately left-wing—72% of effective altruists identify as Left or Center Left politically—and my sense is that the other groups are too: disproportionately libertarian and left-wing, with the majority of all of them almost certainly voting Democrat. This is my impression from hanging around them, and it’s also what one would expect from demographics; EAs and LessWrongers are disproportionately highly educated, which correlates with being left-wing.
The article admits that “by many measures (like donations by Big Tech employees to political candidates), the industry has been aligned with the Democratic politics that dominate the San Francisco Bay Area.” The industry in question is Silicon Valley tech. So the author admits that by most objective metrics, Silicon Valley seems to be left-wing. How does he counter this? Well, he says:
But contrarian alternate worldviews held by prominent voices like Elon Musk and Sam Bankman-Fried have emerged that not only counter old narratives but are actively merging with right-leaning political movements.
Musk seems like a centrist who is currently voting for Republicans. But Bankman-Fried? Really? He literally donated millions of dollars to Democratic politicians and has said that he’s left of center politically. Troy provides no evidence that he’s right-wing, but apparently, anyone who is part of the TESCREAL bundle is automatically right-wing. I’ll have to immediately alert my socialist friend who is worried about AI and thinks it’s good to donate to animal charities! He’s apparently a right-wing socialist!
The article next cites Gebru and falsely claims that she was fired by Google. What really happened was that she gave Google an ultimatum, threatening to resign unless they met her demands, and they accepted her resignation.
Next, the article explains why these ideas should be lumped together, and the explanation gets weird. Cosmism is (allegedly) a view coming from Russian philosophers advocating both space colonization and Russian nationalism. I do not know any EAs or rationalists who are fans of Russian nationalism, but nonetheless, because it sounds sci-fi-ish, it’s just thrown into the bundle. Singularitarianism is a view advocated by Kurzweil that seems to be accelerationist—and many Rationalists and EAs hate accelerationism. But again, the logic seems to be “this ideology sounds like science fiction, so throw it in the bin with the other weird nerds who use their own private lexicon and also talk about science-fiction-sounding stuff.”
The article has a truly bizarre smear of Rationalism. Now, as I’ve said before, I’m not the biggest fan of the Rationalists. I think they are wrong about lots of things and exhibit some crackpot tendencies. But arguing for this requires actually disputing their views rather than cheaply smearing them, and arguing about controversial issues with intelligent people is not something our author seems capable of. He says:
Attracting mostly (but not exclusively) young men, the rationalist community has a tendency for hierarchy and a desire to “perfect” one’s understanding and application of reason. And according to some former members, some rationalist communities have exhibited signs of cultish behavior and mind control.
So why is Rationalism bad? Well, because Leverage Research is part of the Rationalist space, and they’re bad and culty (the linked Medium post is just about why one small group of twenty or so people called Leverage Research is bad). This is like claiming that Sarah Lawrence College is a bad college because it had a small cult a few years ago, or that progressivism is bad based on some culty tendencies in a group house of people working for the Democrats.
Imagine how this would sound to people outside of Troy’s echo chamber. Suppose someone described spending about an hour a day on LessWrong and thought that it had lots of good ideas. Would they be convinced to stop by “well, like 20 LessWrongers were in a cult, so it’s very bad”? Should they be? Of course not; this is just a cheap smear, an attempt not to argue against any core idea of the movement but merely to associate it vaguely with culty behavior. They should be no more convinced by this than an atheist should be convinced that atheism is bad based on the existence of a few atheist mass shooters. Pointing to a few members of a group who do bad things does not discredit the group. About Effective Altruism, Troy says:
Effective Altruism aims to reframe philanthropy in terms of both efficiency and ultimate outcomes. Rather, say, than giving a blanket to the freezing person right in front of you, it might make more sense to devise systems to insure specific people get different resources to maximize their long-term chance of impacting the world. There’s a lot of hand-waving and rationalization here that I won’t attempt to parse now, but it’s a bit like if Ayn Rand was put in charge of a homeless services program.
Well, presumably if you have a blanket you should give it to the freezing person—blankets are cheap and can’t be sold. The claim that Effective Altruism says one should “devise systems to insure specific people get different resources to maximize their long-term chance of impacting the world” is just bizarre. Effective altruism has a few components, but the global health one is almost exclusively about giving money or medicine to the poorest people on earth. What is the author talking about? He seems to be conflating a claim from Nick Beckstead’s thesis, longtermism, and global health charities. Effective Altruism has redirected tons of money—enough to save hundreds of thousands of lives—and it doesn’t advocate paying rich executives to do specific things, unless those specific things involve working for organizations that help poor people or animals or that reduce existential risks. EA orgs have famously low overhead. The Ayn Rand claim is so bizarre that it betrays a fundamental lack of understanding of either EA or Ayn Rand. Ayn Rand was against philanthropy. Any article that compares Peter Singer’s movement—which advocates giving away all money above necessity—to Ayn Rand, who opposed charitable giving entirely, is ridiculously confused. The followers of Ayn Rand hate EA with the burning passion of a thousand suns.
The author then falsely claims that these ideologies are generally pro-natalist and think falling birth rates are a big problem. The evidence? Well, Musk has said so, and Musk once endorsed MacAskill’s book, so this makes it a representative belief among EAs. Apparently, if someone endorses your book, all of their views become your views. He then lies about Eliezer being pro-terrorism, when what Eliezer actually supported was fairly standard enforcement of an international agreement prohibiting GPU clusters. Once again, I don’t like Eliezer very much, but these claims about him are flat-out lies.
Max Tegmark, an AI researcher at MIT, has also called for halting AI development in order to seek “alignment” — the idea that machine intelligence should work with humanity rather than against it.
Such alarmist arguments, which originate in science fiction and are quite common in the TESCREAL world, are rooted in a hierarchical and zero-sum view of intelligence. The notion is that if we develop machine superintelligence, it may decide to wipe out less intelligent beings — like all of humanity. However, there is no empirical evidence to suggest these fears have any basis in reality. Some suggest that these arguments mirror ideas found in discredited movements like race science and Eugenics, even as others reject such charges.
The arguments do not originate in science fiction. They come from lots of smart people thinking about the trajectory of AI. The claim that there’s no empirical evidence is bizarre—we don’t have superintelligent AIs yet, so of course the arguments are not based on observing one. The instrumental convergence thesis implies that nearly all of the goal sets a superintelligence could have would give it reason to wipe out humanity. Troy doesn’t engage with this at all—he just calls it science fiction, erroneously asserts there’s no empirical evidence, calls it eugenicist (with no argument), and moves on.
Next, the author asserts that there’s a lot of overlap between these ideologies and the manosphere. Why think this? The author gives no reason—he just asserts it as if it’s obvious. All the guys Troy doesn’t like must hang out together in the bars where all the bad guys hang out. He then addresses the claim that bundling TESCREAL together is reductive, and he basically agrees that it is—but says that if others are being reductive, why can’t he? This is not an exaggeration—this is what he says:
Combining complex ideologies into such a “bundle” might seem to be dangerously reductive. However, as information warfare increasingly seeks to bifurcate the world into Eurasian vs. Atlanticist spheres, traditionalist vs. “woke,” fiat vs. hard currency, it’s difficult not to see the TESCREAL ideologies as integral to the Eurasianist worldview. I also independently identified these overlaps over the last few years, and thanks to philosopher Émile Torres and Dr. Gebru who together coined the TESCREAL acronym, we now have a shorthand for describing the phenomenon.
Finally, we get to the most bizarre part of the article, which shows just how absurd it is to combine all these ideologies. He claims that TESCREAL advances illiberal ideologies like Russian nationalism and Putin’s agenda?!
As you encounter these ideologies in the wild, you might use the TESCREAL lens, and its alignment with Eurasianism and Putin’s agenda, to evaluate them, and ask whether they tend to undermine or enhance the project of liberal democracy.
TESCREAL ideologies tend to advance an illiberal agenda and authoritarian tendencies, and it’s worth turning a very critical eye towards them, especially in cases where that’s demonstrably true. Clearly there are countless well-meaning people trying to use technology and reason to improve the world, but that should never come at the expense of democratic, inclusive, fair, patient, and just governance.
Now, I’m not an expert on Cosmism, and maybe it is illiberal and pro-Putin. But really? Effective altruism is pro-Putin? The only things on the EA Forum about Putin are about how his brutal invasion of Ukraine has raised the probability of the end of the world. I know no transhumanist, effective altruist, or rationalist who is pro-Putin. But when these are grouped together, one can just collectively smear them all based on the bad things that the Cosmists allegedly do.
Ultimately, the TESCREAL label is an excuse for lazy scholarship and bad arguments. It allows people to thingify complex constellations of ideas, and then criticize them based on the ideas of one member of the group. It imagines that all the guys who think hard about the future and tech must hang out in secret bars plotting to bring about 10^20000000 digital minds. No evidence is needed for such claims.
The people who bemoan the pernicious TESCREAL ideology should be seen just like those who bemoan Demotism or criticize moderate Islam based purely on the actions of Islamic extremists. They are fundamentally unserious people guided by ideology and incapable of providing targeted criticisms.