Why The World Is Underestimating AI Progress
Underrating exponential growth
The basic case for rapid AI progress can be made with reference to a few statistics:
The maximum length of tasks AI can complete has been doubling roughly every seven months for several years.
The number of AI copies we can run has been growing roughly 25x per year, meaning that once AIs reach human level, it will be as though the population of researchers were growing 25x per year.
Both compute and algorithmic efficiency have been improving by more than 3x per year.
For there not to be extremely rapid economic growth, multiple trends that have held for about a decade would have to grind to a halt. Even if they only slowed down by a factor of, say, five, we’d still get explosive progress.
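To make the compounding concrete, here is a back-of-the-envelope sketch in Python. The per-year multipliers are rough assumptions lifted from the numbers above, and "slowing by a factor of five" is read here as each doubling time stretching fivefold; treat it as an illustration, not a forecast.

```python
# Back-of-the-envelope projection of the trends above over a decade.
# The per-year multipliers are rough assumptions taken from the text;
# "slowing by a factor of five" is read here as each doubling time
# getting five times longer (i.e. the exponent is divided by five).

YEARS = 10
SLOWDOWN = 5

trends = {
    "max task length (2x every ~7 months)": 2 ** (12 / 7),
    "number of AI copies we can run":       25.0,
    "compute":                              3.0,
    "algorithmic efficiency":               3.0,
}

for name, per_year in trends.items():
    full = per_year ** YEARS
    slowed = per_year ** (YEARS / SLOWDOWN)  # same rate, exponent cut by 5x
    print(f"{name:38s} x{full:,.0f} over {YEARS} years "
          f"(x{slowed:,.0f} if growth is 5x slower)")
```

Even the slowed-down column compounds to large multiples within a decade, which is the point the paragraph above is making.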
However, a lot of people are skeptical of rapid AI-driven growth. A scenario with a fast AI takeoff, where GDP might double in a year, just seems so crazy. So far AI isn’t near that level. Can such speculative arguments really establish any reasonable probability of explosive growth?
Will MacAskill (who has a substack by the way) has given a good analogy for thinking this through. In the early days of COVID, many people—many of them EAs—were predicting, even when COVID was just in Wuhan, that it would soon spread worldwide and infect millions. However, at the time, the mainstream media was writing things like “experts caution that the novel coronavirus has only infected 300 people outside of China, and it is thus too soon to be alarmed.”
How did the EAs get things so right? The answer is that they relied on exponential growth trends. If something is growing exponentially, then even if it is modest today, it’s unlikely to stay that way. If something doubles twice a year (roughly the rate at which maximum AI task lengths have been growing), then in a decade it will grow by a factor of about a million (2^20 ≈ 1,000,000). This is why it made sense to be concerned even when there were only a few hundred COVID cases: if cases are doubling rapidly, they won’t stay rare for long.
(Graph stolen from McBride).
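For the doubling arithmetic itself, a toy calculation makes the point. The 5-day doubling time for cases below is an illustrative assumption, not real epidemiological data.

```python
import math

def time_to_reach(start: float, target: float, doubling_time_days: float) -> float:
    """Days needed to grow from `start` to `target` at a fixed doubling time."""
    return doubling_time_days * math.log2(target / start)

# A few hundred cases doubling every ~5 days (an illustrative figure, not data)
# pass a million in about two months.
print(time_to_reach(300, 1_000_000, doubling_time_days=5))  # ≈ 58.5 days

# Two doublings per year sustained for a decade is 2**20: about a million-fold.
print(2 ** 20)  # 1048576
```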
There are other similar examples. The growth of solar power was systematically underestimated because forecasters failed to account for exponential growth. Humans have a bias towards extrapolating linearly. But if a trend is exponential, then its continuation means growth that compounds massively.
AI capabilities have been going up exponentially on almost every conceivable metric, and in many cases the growth rate is large. This is a reason to predict that things will get crazy soon even if they’re not crazy yet. And when sophisticated people have tried to extrapolate the trends in AI capabilities, even under conservative assumptions, they’ve quite consistently found that things get very crazy very soon.
Of course, it’s possible that progress will plateau. There could be diminishing marginal returns. The case for continued exponential growth is not as robust for AI capabilities as it was for COVID. But when progress has been consistent for this long, there aren’t any extremely strong arguments for being confident that it will level off. At the very least, we shouldn’t be extremely confident that progress will stall. The odds of super rapid growth remain non-trivial.
But you definitely shouldn’t be confident that AI capabilities won’t become very advanced simply because they aren’t very advanced yet. That is an all-too-common error in forecasting exponential growth. It is the growth rate that matters, not how much progress there has been so far!
If this is right, then it is fairly likely that we are about to see an era of ridiculously rapid growth in AI capabilities leading to insane rates of economic growth. The world must wake up to this fact if we are to survive such a destabilizing time. New deadly technologies, GDP doublings, takeover of space, and other things that have never been witnessed before are potentially on the horizon. Pandora’s box has been opened. Who knows what awaits us when all its contents spill out.



What I find frustrating about this topic is how confident many people are in their ability to predict where capabilities will top out in the current paradigm. LLMs have blown past what I and many LLM skeptics would have predicted their limits to be back in the GPT-3/early ChatGPT days. Does the current approach have some fundamental limitation that causes capabilities to plateau? Quite possibly, but imo any reasonable view should be substantially (>15-20%) uncertain about this. Instead, a lot of people talk as though they're 95% confident that LLMs have fundamental limitations that make a plateau imminent, despite having just made 5 incorrect predictions of what these limitations would be. Nicholas Carlini has an excellent post that articulates what I think is the correct epistemic view here. (https://nicholas.carlini.com/writing/2025/thoughts-on-future-ai.html)
(many AI bulls have the same overconfidence problem in the opposite direction, but I think the bear view deserves more pushback at the moment because it seems to be very widespread among semi-technical audiences)
It's so crazy. I remember in early 2022, before AI was on most people's radar, looking at forecasts of spending by AI companies and seeing "Oh, if we just project this out, then based on current trends, in 3 years' time, AI will be like 1% of the entire economy." I believed the graphs, but didn't really _feel_ the graphs. I feel them now. https://am.jpmorgan.com/us/en/asset-management/adv/insights/market-insights/market-updates/on-the-minds-of-investors/is-ai-already-driving-us-growth/
Now I think more people are starting to get to where I was 3 years ago. They believe it, but they don't feel it. We should try to get people to feel it as soon as we can, or I'm worried we're going to run into big, big problems that we are not prepared for.