68 Comments
Chasing Ennui:

A good rule of thumb is that anyone who rejects an institution or theory based on its origin should probably be ignored.

Alistair Penbroke:

There's an even quicker heuristic: woke academic -> trash can.

Zachary Jones:

Just started reading this book for a critical review, thank you for freeing me from the burden of doing so.

Trivial or False:

I caught them on Non-Zero and was shocked by how unpersuasive their arguments were. My jaw hit the floor when they said they don't even use it. Like, I can understand that you don't like it, but how do you expect me to take your opinion about the capabilities of these things seriously when you haven't used them? Insanity!

There is an X account I have followed for a while because he goes kinda hard and has weird opinions. But I saw this tweet yesterday, and when I read the replies I thought I was in the Upside Down.

https://x.com/duns_sc0tus/status/2018050561086525864

Luke Cuddy:

Reminds me a lot of those cultural critics who used to criticize video games as being the downfall of humanity even though they never played them and had no deep understanding of them.

Trivial or False:

When Robert Wright asked how they know that AIs aren't conscious, given that (he claims) you can't even really know if other people are conscious, at least not with certainty, Bender replied that she doesn't "have conversations with people who don't posit my humanity as an axiom of the conversation."

Yes, this was very weird!

Also, did you catch the part where she corrected something he said, he joked that he was going to cut his mistake out so it would sound like she was correcting him for no reason, and she said she had a recording of her own? Wright was obviously joking, and she was obviously NOT "yes-anding." She was dead serious. What a humorless scold.

Tom Hitchner:

I do not predict that “Majority World” catches on.

Israelite Introspective:

Stop wasting your time arguing with idiots. It's like "debunking" flat earthers.

IMP:

I like it + these authors aren't marginal internet crazies

Alfie:

Except when you ignore ideas that are too stupid to be worth debunking you end up with RFK as the head of HHS.

Matt Beale 2:

I wouldn't put the people who worry about AI in the same bucket as flat earthers. Without even looking hard at AI, you should know to be suspicious of something so powerful and valuable. There are past lessons from fossil fuels, lead in gasoline, DDT, ozone-depleting chemicals, and social media. Some applied wisdom would have been helpful in all of these, and valid concerns came long before implemented solutions.

Heck, the freaking radio of all things played a big part in two of the biggest genocides. Mass-communicating lies can lead to dangerous responses, especially early on, when people haven't built up any antibodies.

Israelite Introspective:

I think this book is making two false claims: 1) that AI is useless/a sham, and 2) that it will have bad consequences.

These are different arguments that just happen to be generated by the same motivated reasoning, so we lump them together. My flat earth analogy is more for the first claim. In a way, it is worse than flat earth because, as you said, it is self-evident that AI is powerful, while you at least need basic reasoning or trust to accept a round earth. I agree with you that there are legitimate concerns about AI's consequences, however.

Alistair Penbroke:

His point is that idiots like Hanna and Bender don't believe or care about their own theories; they're just engaged in tribal in-group signaling. Same thing that drives a lot of flat earthers. That's why their answers are full of non sequiturs and sound very much like the output of a small, badly trained language model. It's all just a random stream of woke-sounding phrases designed to impress their peers, who are presumably all just as stupid.

Israelite Introspective:

Or alternatively, don't correct your enemy when he is making a mistake.

Charles Stewart:

> AI can invent novel math proofs that impress the best mathematician in the world

I think this is a misstatement: Tao regards this result as the most impressive proof by an AI. He's on the fence as to whether the production of this proof was autonomous. He thinks AI's best current use is as an assistant to a mathematician.

It's also worth noting that this progress has all come since late November, and Tao's opinion has been in flux since then. It usually takes a few months between the finalisation of a book's contents and its publication.

Michael M:

This is… so weird.

I think what must be going on is that there is a large audience for books that communicate something to the effect of “the world isn’t complicated and unpredictable and there are currently no substantial changes happening in it. You should know that the world remains exactly as it has always been: a constant barrage of bad things being leveled on innocent people somewhere far away from you. But you aren’t complicit in that project at all, and need not worry about anything because you are a good person who isn’t doing the bad things. And because the bad things won’t actually change anything (they’re literally the same bad things from 1000 years ago) you also don’t have to worry about preparing for any kind of change in the world around you.”

And people find this so comforting that it’s actually a very reasonable thing to create and consume, sort of like drinking cough syrup or something.

All I can say is, I was doing a job in 2023 that is so totally unrecognizable from the job I do today that, even if everything stops right here, it would take the rest of my lifetime before the consequences of that *alone* (just my industry, just my type of role in it) settled out across the economy…

Camille B.:

Remarkable specimen of analytic / continental miscommunication, colorized 2026.

Joke aside, here's a crazy theory, but hear me out: the authors are not making claims about the world in the sense that you understand 'making claims' to mean. The authors see themselves as affecting the coordination/attention system and the distribution of power within it, and they believe they are doing so via the different assertions they are making.

If you use logic and logically arrive at the conclusion that longtermism is true, thereby consolidating the power and legitimacy of Elon Musk (who is bad because, e.g., he cut PEPFAR), socio-historical epistemology has it that you (by creating positive attention) have actually consolidated the power of Elon Musk and have done a bad thing, whatever logic initially says. In order to diminish his power, you have to pick a bottom line and fight for it until it becomes true. Repeat often enough that AI is a bubble, and the bubble will pop. This is why listing both upsides and downsides is not a valid move in (naive) socio-history (don't get me wrong, there are way more refined people on their own side): it would mean literally advocating for the status quo. Describing reality, in this paradigm, is naming what you want or don't want, which in turn affects whether it happens or not.

As for the broader pattern of rejecting arguments based on their origin: again, "statements about the world" (as you understand them) do not exist in their discourse. The authors only see social actions: attempts at denigration or assertions of power. If a Bad Person says the sky is blue, does it matter whether the sky is blue? It doesn't. What matters (for them) is the social effect of saying this sentence. Maybe it was to belittle a woman of color. Maybe it was an act of resistance. In general, the social effect of assertions is not considered a mere side effect of analysing the world but, specifically, what analysing the world is all about to begin with (whether politics or math). Calling out those social effects allows one to counter them. They live in Conflict Theory. They live in a Fight. What matters is how you posture in the social space, the only valid posture is egalitarianism, and pointing out errors to people (without the right sequence of social moves) is not egalitarianism.

Not only this, but the social and past historical connections of concepts orient your attention to certain things and blind you to broader social and historical dynamics: you focus on birth rates now, but that concern is historically related to eugenics; therefore, the eugenics meme is dormant within people who share your opinion (they just don't say it out loud, or keep it at the periphery of their consciousness), and this meme is to be unleashed at some point, and you wouldn't notice if it were, since you're focused on birth rates as a separate issue. According to them.

Criticizing this paradigm in turn on the basis that it is factually inconsistent will be ineffective. The authors do not share your cognitive operations. They have their own. They are playing a game of Blood on the Clocktower while you are solving equations, so to speak. Your only hope is to build trust and mutual respect first (!!) and then proceed to carefully, thoughtfully question them, with multiple markers of your deep care for the core values you agree on, and historical reflexivity about where your opinions come from.

Sorry if this all feels ranty/dismissive/crackpot or hastily written, I'm just vaguely trying to gesture at what I think is the core of the disagreement, again. I'm happy to talk more if none of this makes sense to you.

June Kur:

I think there are better ways to do this than knowingly spreading falsehoods.

Eli:

> you have to pick a bottom line and fight for it until it becomes true.

That is not how you affect the real world, and insofar as it "works" on the political world, it only does so insofar as the political world remains detached from the real world, weakening everyone participating in that political world.

Monika Putri:

This is my first instinct as well; however, they talked about their frustrations with academic publishing as one of the reasons they published this book (in a 'Disrupted Science' podcast I found from the book's website), which makes me think that their critique comes from a valid frustration with the general system and utilization of AI, while their indifference to the science of AI is what makes their discussion really messy and ideological. Both are lecturers and researchers; OP's line "The AI Con is what you get when a thesis you’ve been stochastically parroting for years is decisively disproven by the evidence" is so mean but so perfect. Still, the concerns they raise are a lot of present-day people's concerns that still have only unsatisfying answers (such as accountability and biases); they just kind of shoot themselves in the foot with their indifference to the science, I guess.

Steve L:

"Yet human neurons also follow deterministic physical laws. "

If you could prove that, you could pick up your Nobel Prize in physics and another one in neurobiology. Nobody knows whether the world is deterministic.

"As it happens, I think we have an immaterial soul ..."

Is my immaterial soul deterministic too?

I was quite surprised to see you express belief in an immaterial soul, after giving such impassioned arguments (here and previously) that we are stochastic parrots just like the LLMs.

Do you think LLMs have immaterial souls? If they do, how do souls arise from algorithms executing on computer hardware? And if not, it seems to me that you have disavowed most of what you've been writing on the topic. LLMs may be wonderful, but they are not conscious. Not intelligent in the best meaning of the word, the human meaning of the word.

Allocate:

Re "billions in profit", the only profitable AI company is Nvidia. OpenAI and the rest have only lost money on AI. As far as I am aware there are no profitable AI model companies yet.

Peter:

The risk of superintelligent AI coming from LLMs is 0. They just aren't designed for it.

Mister_M:

It depends what you mean by LLM. Likely you've never used a "pure" LLM in your life, unless you've built one. Are modern multi-modal models LLMs? Only in part. What about agents that use LLMs with reinforcement learning modules? Same answer.

Are you trying to say that the "current paradigm" in AI won't lead to superintelligence? The current paradigm is much bigger than LLMs, and is expanding rapidly. A statement like "superintelligence won't come from LLMs" is true or uncertain depending on your definitions, and regardless isn't very useful for assessing what's coming in the next few years.

Ben Schulz:

That's like saying cars can't be self-driving in 2026.

Lavander:

Surely people know what kinds of designs actually produce superintelligences, and carefully avoided that while designing LLMs? It's not like someone would just try to make a system as smart as possible with any methods, by doing random intuition-motivated experiments, right?

Peter:

It ain't coming from ChatGPT lol. A stochastic parrot is a very good stochastic parrot at the end of the day.

Donald:

> a very good Stochastic parrot

It's already good enough to write basic code. So either "stochastic parrot" is a false hypothesis, or it's such a vague hypothesis that it's possible for a really good stochastic parrot to do some impressive stuff.

Can a Really Really good stochastic parrot do novel AI research? I don't see why not?

Peter:

I mean, it's a great stochastic parrot; it just isn't sentient.

Donald:

Ok. And what specific externally verifiable task do you think it can't possibly do because of that?

Because if an LLM is inventing nanotech, or the LLM codes something else which invents nanotech, that's still kind of a singularity.

Red Ambition:

AI is a second-order technology. Its improvement curve has a positive second derivative:

1.) AI improves

2.) The rate of AI improvement increases

3.) The rate at which the rate of AI improvement increases *also* increases

This is the first time we've dealt with a recursive technology. It's also why these safety and alignment advocates will never win: they are fighting a second-order technology with first-order tactics.
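One toy way to formalize the recursion claim (my notation, not the commenter's): assume capability C(t) feeds back into its own rate of improvement. Under that assumption, every derivative is positive, not just the second, which compresses points 1-3 above into one line:

```latex
% Toy feedback model (an illustrative assumption, not an established law):
% capability C(t) improves at a rate proportional to itself.
C'(t) = k\,C(t), \quad k > 0
\;\implies\; C(t) = C(0)\,e^{kt}
\;\implies\; \frac{d^{n} C}{dt^{n}} = k^{n}\,C(t) > 0 \quad \text{for all } n \ge 1.
```

Whether AI progress actually obeys a feedback law like this is, of course, exactly what is in dispute.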

Mark A:

Now go read Romain Brette.

Steve L:

Thank you for that reference, I'm reading his blog now. Good stuff.

Mark A:

He's brilliant. It's the first time I've found someone who knows CS, neuroscience, and philosophy and can see where the application of engineering concepts to the brain goes wrong and how this leads to questionable research programs. Enjoyed your comment above.

Matthew Harvey:

Hey! Great article, fun read.

I have a somewhat pugnacious reply—it sounds like you're reading Bender and Hanna with a bit of the same overstatement-prone zeal you ascribe to them.

If you're not in the mood for a nitpicky, argumentative take from an AI hater, read no further. If you are, it's on Medium. (Sorry, it immediately got way too long for a comment.) https://medium.com/@acornapocalypse/arguing-past-each-other-738e686f8940

Edit: This sounds like the most obnoxious self-aggrandizement. Sorry. Summary: I list what I think are overstatements in this article ("it can automate away big chunks of coding") and try to couch them within a point about how unwritten assumptions can undermine persuasion.

Benjamin Prior:

Hi Matthew,

Funny enough, I wrote a response to your piece that makes roughly the same criticism you make here. I agree with you on some points, but I think several of the factual claims you make don't hold up. If you have the time and are interested, I'd welcome you to check it out:

https://benjaminprior.substack.com/p/arguing-past-the-facts

Matthew Harvey:

Great piece! And I appreciate you taking the time for a careful answer.

You're right about my attitude, of course, and my dismissiveness of certain ideas. You're also right that I didn't offer nearly enough citation on the money stuff—although you're WAY too charitable, these companies are IMMOLATING cash—and didn't actually lay out the argument on knowledge (re: TI84s etc.). And about the "biggest theft in history" thing—colonialism IS probably bigger, that's true. I am happy to settle for third or fourth (or fifteenth) largest. It's still a massive, egregious crime by means of which a small number of tech assholes are enriching themselves without consequence—which is also part of the answer on equality, btw, along with bias and misinformation and the utility of AI for authoritarianism.

Can you link me to what you had in mind about pharmaceutical lit reviews? All I can find are cautious reports on how hardcore you gotta fact-check everything it produces. I've personally tried to use it for lit reviews many times, and never gotten a result more useful than just searching Scholar.

I'll leave aside the stuff about what is and isn't debated in cog sci. You're right, but a lot of shit ideas are still debated, and the behavioral sciences are VERY young and famously concerned with definitional questions. Chomskyan linguistics is still around, and that stuff's just pure fiction.

Perhaps most substantive, though, is the question of what we each take for granted.

Bluntly, I just hate this stuff. I hate the technology, I hate its boosters, I hate its funders, I hate its consumption and waste, I hate that we're all pretending like it isn't ruining college for an entire generation, I hate that the evil sociopaths who run silicon valley are in charge of it, I hate that it's beloved at Davos and in the White House, I hate that it's being added to our web browsers and email accounts and word processors in an effort to recoup bad investments by Microsoft and Google, I hate that it's flattening and homogenizing every corner of human intellectual life, and I hate the text and images it produces.

So I'm not available to be persuaded by the idea that AI is good. I AM available to be persuaded that many specific machine learning tools, some of them LLMs, are hella useful in niche contexts.

I'm not sure what BB (or you) are available to be persuaded of. Are you open to the idea that this is all a huge historical catastrophe?

David Wilmot:

I read the book last year and posted this on the latest podcast interview on YouTube. Given that it was only written last year, it's amazing how outdated it is.

Another criticism I would have is that it gives no agency to anyone who's non-white, apart from heroic POC anti-AI scholars or exploited victims of AI such as workers in Kenya. China's AI sector is nearly the same size as the US's, with major labs and companies. The rest of Asia is likely about the same size as, or larger than, Europe. Yet the whole book is permeated by talk of Whiteness and Western imperialism. I would love to hear them make the same arguments at a big AI conference in China (or anywhere in Asia, really) with Xi Jinping and other senior heavyweights and see what the reaction would be: "You only have a national AI strategy because you have internalised whiteness. Don't you know it's based on white supremacy? As POC in the Global Majority, you can't possibly have thought about the issues yourself or decided your own policy, and all these academics and tech workers can't really believe in what they are doing."

From watching the podcast on YouTube:

"One thing that was missing in Emily Bender's discussion of meaning is that, according to her paper, LLMs have only the form of words in relation to other words. To have some basic form of meaning would mean saying "switch on the lights," and it does it. Most newer LLMs models are multimodal. If it has a generative component, such as a diffusion model, and you ask it to draw a lightbulb with the bulb on, it will. So there isn't only a relation between words but also between words, images, videos, audio, etc. Firth and distributional semantics were only about word co-occurrence and not building relationships across modalities. More interesting work on world models (which usually also have LLM components) does just that: ask a character in Minecraft to chop down a tree, and it learns that it needs to get an axe and a saw, take them to a tree, and use them on it. So there's an association among the task, the objects, the tools, and the actions. Look at all the social media stuff about Lingbot or Genie 3 the last few days, which is admittedly different in creating an interactive 3D world rather than interacting in an existing one but they are similar in linking modalities and actions.

There's also this, even in plain LLM agents: if someone asks something relative to the present, like what happened last year, the model learns to call the SYSTEM_TIME tool to get the current year; if something is a maths question, it formulates an equation and sends it to a calculator. Or if you ask a model to check whether a quote is real, it calls a tool to download the page and then compares the downloaded text against the generated text. (A minimal sketch of this tool-dispatch pattern follows the quote.)

The key part of the Chinese Room, or the Octopus, is that there are only symbols. If the model has video of a Chinese marketplace with people shopping, talking, and buying things in Mandarin, or can even chat directly with people in the marketplace and see what happens, then it is completely different. If someone asks for Guo Kui and someone else puts that object into a box, the word can be associated with what the object looks like. It could also be noticed that it came out of a pan with oil and not a steamer. So if someone asks how Guo Kui is made, Guo Kui associates with a pan with oil, a pan with oil associates with frying, and so the model infers fried. *

* For anyone Chinese: I know they can be both fried and baked. Double frying tastes better, though.

All the big labs have been talking a lot about world models for the last couple of years, so it would have been good to follow up on that. I've read the book, and in it they pretend people aren't aware of this issue and that this research isn't going on. I read the book earlier last year, so I could be misremembering, but I don't remember them discussing tools, actions outside the model, multimodality, or world models at all."
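A minimal sketch of the tool-dispatch pattern the quoted comment describes, under stated assumptions: the tool names (including SYSTEM_TIME, which the comment itself mentions) and the keyword router are illustrative stand-ins, not any real agent framework's API; in real agents the model itself emits structured tool calls rather than matching keywords.

```python
# Toy sketch of agent tool dispatch. Tool names and route() are
# illustrative stand-ins, not any real agent framework's API.
import datetime

def system_time() -> int:
    """Stand-in for the SYSTEM_TIME tool: return the current year."""
    return datetime.date.today().year

def calculator(expression: str):
    """Stand-in for a calculator tool: evaluate simple arithmetic."""
    return eval(expression, {"__builtins__": {}})  # toy only; never eval untrusted input

def route(question: str) -> str:
    """Crude keyword router standing in for the model's tool-choice step."""
    if "last year" in question.lower():
        return f"Last year was {system_time() - 1}."
    if any(op in question for op in "+-*/"):
        return f"That equals {calculator(question)}."
    return "No tool needed; the model answers directly."

print(route("What happened last year?"))  # routes to the time tool
print(route("12 * 7"))                    # routes to the calculator
```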

Nate:

I was honestly most shocked by the definitional arguments they made. Bender is a professional linguist. Not conflating fuzzy definitions with fuzzy ontology is pretty day-one stuff in Phil of Lang.