Discussion about this post

Joshua Tindall:

What I find frustrating about this topic is how confident many people are in their ability to predict where capabilities will top out in the current paradigm. LLMs have blown past what I and many LLM skeptics would have predicted their limits to be back in the GPT-3/early ChatGPT days. Does the current approach have some fundamental limitation that causes capabilities to plateau? Quite possibly, but imo any reasonable view should be substantially (>15-20%) uncertain about this. Instead, a lot of people talk as though they're 95% confident that LLMs have fundamental limitations that make a plateau imminent, despite having just made 5 incorrect predictions of what these limitations would be. Nicholas Carlini has an excellent post that articulates what I think is the correct epistemic view here. (https://nicholas.carlini.com/writing/2025/thoughts-on-future-ai.html)

(many AI bulls have the same overconfidence problem in the opposite direction, but I think the bear view deserves more pushback at the moment because it seems to be very widespread among semi-technical audiences)

James:

It's so crazy. I remember in early 2022, before AI was on most people's radar, looking at forecasts of spending by AI companies and thinking, "Oh, if we just project this out, then based on current trends, in 3 years' time, AI will be like 1% of the entire economy." I believed the graphs, but didn't really _feel_ the graphs. I feel them now. https://am.jpmorgan.com/us/en/asset-management/adv/insights/market-insights/market-updates/on-the-minds-of-investors/is-ai-already-driving-us-growth/

Now I think more people are starting to get to where I was 3 years ago. They believe it, but they don't feel it. We should try and get people to feel it as soon as we can, or I'm worried we're going to run into big big problems that we are not prepared for.

