12 Comments
Carlos

I think it's likely that the jaggedness of AI intelligence continues. Sure, it's improving on some formal metrics, but it's still bafflingly bad at playing Pokémon Red. We could get a weird outcome where software development gets automated away, but we still don't have AGI, much less ASI.

This is my favorite forecast regarding AI, which still predicts things getting crazy within our lifetime:

https://deepforest.substack.com/p/15-years-to-agi

Highlights some persistent limitations of our current approach.

Elite Human Chatter

Good post. I mostly agree with the hype: an intelligence explosion is near, and AI will be extremely transformative, for better and for worse. That said, I'm skeptical of extremely high GDP growth. I think you are underestimating the extent of regulatory power, physical constraints, adoption rates, etc. Even as we get better at scaling, building new data centers takes time, and more importantly it requires a lot of energy. Both Sam Altman and Mark Zuckerberg have spoken about power as the biggest constraint they are facing. I also think there's something to Cowen's argument that as some parts of the economy see rapid growth from AI adoption, other parts won't adopt as well and will slow down overall growth.

Bentham's Bulldog

I address both of those in the piece.

Ben Schulz

This week there were three recursive self-improving models posted on GitHub. Coding and computer-use skills for AI agents can now be downloaded for free. Model merging works, self-distillation works, and embeddings are another scaling sigmoid. The writing is on the wall. The No Free Lunch crowd was hilariously wrong. LeCun was wrong.

Alex C.

BB—what's your opinion of Gary Marcus? He always has a lot to say about this topic, and it's usually at odds with the conventional wisdom from inside the AI industry.

Bentham's Bulldog

Probably more bullish on LLMs than him, but I don't have a super strong view.

Mark

"There are only two ways to react to an exponential—too early or too late". Given that an intelligence explosion leading to AGI/ASI could lead to misaligned AI by default (leading to takeover and extinction/disempowerment), if it's true the intelligence explosion is imminent, then Pausing AGI development (or preventing the intelligence explosion) is imperative for all human life/freedom.

Grantford

> Another objection to an intelligence explosion: research isn’t the only thing one needs for an economic explosion. You also need to do experiments and to build stuff.

In many fields, experimentation is the core research task. I'm curious how much of a divergence we may see in research progress between fields that require physical experimentation vs. fields that do not rely on experimentation or can rely heavily on in silico experiments. Even if robotic experimenters can conduct physical experiments at scale, some types of studies (e.g., longitudinal studies of human and animal development) inherently take a long time to complete. So even with an intelligence explosion, I could picture something like biomedical progress advancing more slowly than something like AI development.

comex

I think an intelligence explosion is pretty likely, but I also think it’s important to differentiate intelligence from raw compute time.

As LLMs stand today, adding more tokens or more parallelism can improve success rates, especially at narrow and easily verifiable problems – but with exponentially diminishing returns, and hard caps on harder problems where agent swarms outgrow their own ability to coordinate. Therefore, if LLM intelligence were to hit a wall tomorrow, I don’t think we would ever get a true intelligence explosion regardless of how much hardware is built or how much numerical increase there is in “AI cognitive labour” (as PREPIE puts it). We would get improved productivity, especially as we got better at coordinating the models, but not an explosion.
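
A minimal sketch of that diminishing-returns claim, assuming each extra attempt is an independent draw with a fixed per-attempt success rate (both the rate and the independence are illustrative assumptions, not measurements): each successive doubling of the number of attempts buys a smaller additional gain.

```python
# Toy best-of-n model: each attempt solves the task with probability p,
# and attempts are assumed independent (a strong simplification).
# P(at least one success in n attempts) = 1 - (1 - p)^n,
# so each successive doubling of n buys a smaller additional gain.

def success_rate(p: float, n: int) -> float:
    """Probability that at least one of n independent attempts succeeds."""
    return 1.0 - (1.0 - p) ** n

if __name__ == "__main__":
    p = 0.5  # hypothetical per-attempt success rate, chosen only for illustration
    prev = 0.0
    for n in (1, 2, 4, 8, 16, 32):
        rate = success_rate(p, n)
        print(f"n={n:2d} attempts -> success ~ {rate:.3f} (gain {rate - prev:+.3f})")
        prev = rate
```

On easily verifiable problems this curve climbs quickly toward 1, which is the "improved success rates" part; the flattening tail is the "exponentially diminishing returns" part.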

However, the numbers argument goes both ways. LLM intelligence is not going to hit a wall tomorrow. And if LLMs start to outgrow these limitations and match humans, then I think we *would* get something pretty explosive regardless of how *little* hardware was built. A swarm of LLMs built a web browser in a week. Imagine if they could build one that actually worked.

And in reality, the amount of hardware built will not be little. So we’ll get a big explosion. But only if LLMs can break the barriers. They’ve already broken many.

Muhammad Wang

It would be nice if people whose jobs are writing Substack articles and Twitter posts about why they expect LLMs to automate large parts of the economy included sections in those posts discussing the nature of work in the commercial economy and the manner in which they expect that automation to occur.

Just saying "LLMs are probably going to continue to improve, and by some metrics this improvement will seem 'exponential'" seems quite boring to me. I would like to see some close analysis of the manner in which adoption will occur, or of the way that an "intelligence explosion" will occur, preferably from people with technical expertise who make arguments that rely on that expertise and that couldn't be made by a CS/math layman who has read a few LessWrong posts.

Interesting post though!

BearlyLegible

I think that most people are extrapolating the perceived economic impact of AI (generating slop, a fun chatbot, somewhat better Google search) rather than extrapolating the intelligence of the AI. From this perspective, AI hasn't even automated simple jobs like customer service, so automating something like AI research seems very far away.

But the returns from intelligence are nonlinear. GPT-3.5 and GPT-4 seemed to be progressing extremely quickly, partially because they were, but partially because the progress made them more useful: GPT-4 was a better chatbot than GPT-3.5. GPT-5.2, however, is only a somewhat better chatbot than GPT-4; if all you use it for is a replacement for Google, it's not all that superior to GPT-4 with search tools.

There's a massive capabilities gulf between "useful chatbot" or "useful tool that must be constantly babysat" and "useful independent agent". There are very few returns to be found inside that gulf (programming seems like about the only major one), but once the gulf is crossed there are PLENTIFUL returns.

Put another way, the "perceived additional power per additional unit of intelligence" is currently quite low, but we have reason to think that it'll get extremely high in the future, meaning we could have very rapid shifts even without a significant increase in how quickly we're gaining additional units of intelligence.
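
One toy way to picture that, assuming perceived economic power is a steep logistic function of capability (the threshold, steepness, and capability units below are made up purely for illustration): constant one-unit capability gains look unimpressive for a long stretch and then suddenly translate into large jumps in perceived power near the threshold.

```python
import math

def perceived_power(capability: float, threshold: float = 10.0, steepness: float = 2.0) -> float:
    """Toy logistic map from 'units of intelligence' to perceived economic power."""
    return 1.0 / (1.0 + math.exp(-steepness * (capability - threshold)))

if __name__ == "__main__":
    # Constant +1 capability steps, but very different perceived jumps near the threshold.
    for c in range(6, 15):
        print(f"capability {c:2d} -> perceived power {perceived_power(c):.3f}")
```

Under this sketch, the marginal perceived value per unit of intelligence is tiny below the "useful independent agent" threshold and spikes around it, which matches the rapid-shift intuition in the comment above.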