12 Comments
Ben Schulz:

Good post. Glad to see you taking AI consciousness seriously. Most current philosophers are dismissing it out of hand.

James:

Great article! Really hope you land a role at Forethought -- I think they do great work and could benefit from great people, as could we all! Reading this argument for plausibility, I feel like we almost need to switch the framing: by default, this is going to happen.

Many people think we should assume that the default is everything continuing as normal. That was not the case in the 1700s just after the invention of the steam engine, and it is not the case today. The default is that things will get weird and hard to navigate, fast. Things that are the default sometimes fail to happen due to unexpected circumstances, but more often than not they just... happen. What we would need for this not to happen is an unexpected circumstance.

Carlos:

Ah, right, and the other thing is that you can go watch the strongest Claude struggling terribly to play Pokémon Red right now on Twitch. It's difficult to square this with people essentially going "AGI in two weeks!" It could be that LLMs are the proverbial "climb a tall tree to get to the moon" approach...

I heard Gemini did beat Pokémon some months ago, however, and that the difference is that Gemini had a much more handholdy harness, one that clearly labels what every tile does, for example. Maybe LLMs can be quite powerful after all if they get supporting software that renders the world intelligible to them...
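To make the idea concrete, here is a minimal, purely hypothetical sketch of the kind of "handholdy" harness described above: it takes a raw tile grid from the game and renders it as labeled text an LLM could read. The tile codes, labels, and function names are made up for illustration, not taken from any actual harness.

```python
# Hypothetical sketch: turn a raw tile grid into text an LLM can reason about.
# Tile codes and labels below are invented for illustration only.

TILE_LABELS = {
    0: "walkable path",
    1: "wall (blocked)",
    2: "tall grass (wild encounters possible)",
    3: "door (enters a building)",
    4: "NPC (can be talked to)",
}

def describe_tiles(grid, player_pos):
    """Describe each tile in a small 2D grid in plain text."""
    lines = []
    for y, row in enumerate(grid):
        for x, code in enumerate(row):
            label = TILE_LABELS.get(code, "unknown tile")
            marker = " <-- you are here" if (x, y) == player_pos else ""
            lines.append(f"({x}, {y}): {label}{marker}")
    return "\n".join(lines)

# Example: a tiny 3x3 patch around the player.
grid = [
    [1, 0, 1],
    [2, 0, 3],
    [1, 0, 1],
]
print(describe_tiles(grid, player_pos=(1, 1)))
```

The point of a harness like this is just that the model never has to guess what a pixel or tile means; the supporting software does the perception work and hands the model a legible map.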

Ibrahim Dagher:

While I agree the risks are sufficiently high that we should take basically everything you say very seriously, I'm a little less persuaded by the trends you point out. There are good reasons to think that the kinds of scaling we're engaging in right now won't solve the key bottlenecks to human-level capabilities (continual learning, updating in light of new data, sufficiently rich world models, etc., all strike me as things requiring conceptual breakthroughs). And I don't think algorithmic efficiency solves that problem, because the problem isn't a lack of compute but that "static" NN architectures aren't capable of the things I listed above. Nevertheless, what I say could obviously be wrong, so the EV of pursuing what you suggest is still really high.

Carlos:

Ilya Sutskever was just saying that the era of scaling is over and we have entered an era of research: the LLM approach is too compute-intensive and has fundamental limitations. He still thinks superintelligence is coming in 5-20 years, which I find confusing, since needing more research to create a new paradigm is far more uncertain than procuring more compute.

Bentham's Bulldog:

Yeah there's some debate about scaling hitting a wall, but even then you could get continued progress with algorithmic improvements.

James:

It is worth noting that Ilya currently runs a research-focused company and is not really capable of getting VC investment for scaling (since why not just invest in OpenAI, Anthropic, or DeepMind instead?).

So he does stand to benefit financially from people believing that scaling is over and we need more fundamental research. It also seems that he was forced out of the scaling race for reasons unrelated to his beliefs about whether scaling would work -- beliefs which now seem to have changed.

I don't mean to say you shouldn't take what he says seriously; he is a very serious person. Just flagging that this is the state of play as far as Ilya is concerned. If it's scaling all the way, he's out of the race.

EDIT: also, re: research, remember the transformer is only 8 years old! Deep learning in the modern sense is only ~17 years old, if I remember correctly. Research is moving quickly in the era of deep learning too, so his belief is not necessarily that strange.

Roman's Attic:

Gary Marcus also seems to think LLMs are hitting a wall and won't make it to AGI, but he's also probably the most annoying public intellectual.

Austin Fournier:

With regard to treaties about developing dangerous new technology, I feel it must be pointed out that the track record of treaties on things like this depends a lot on whether the dangerous new technology is actually useful. For example, chemical weapons were banned because they caused very painful deaths *without any meaningful tactical advantages.* See here:

https://acoup.blog/2020/03/20/collections-why-dont-we-use-chemical-weapons-anymore/

Bryan Frances:

This is a good post, summarizing the things "we" should be thinking about.

One criticism: I don't think the space development point is as strong as the others. Getting to another star system is ridiculously difficult due to the distances and temperatures involved. 1,000 miles per second, for instance, is way too slow (e.g., it would take about 1,600 years round trip to the nearest star -- and who knows if there's anything there worth getting). At 100,000 miles per second, the round trip is about 16 years. By the time AI can make a machine that can travel that fast, why would we need anything from other star systems anyway?
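A quick back-of-the-envelope check of those figures, assuming the nearest star (Proxima Centauri) is about 4.25 light-years away and ignoring acceleration time and relativistic effects:

```python
# Rough check of the round-trip times quoted above.
# Assumes Proxima Centauri at ~4.25 light-years; constant cruise speed.

SPEED_OF_LIGHT_MPS = 186_282   # miles per second
DISTANCE_LY = 4.25             # light-years to the nearest star

def round_trip_years(speed_miles_per_sec):
    """Round-trip travel time in years at a constant speed (Earth frame)."""
    one_way_years = DISTANCE_LY * SPEED_OF_LIGHT_MPS / speed_miles_per_sec
    return 2 * one_way_years

print(round_trip_years(1_000))    # ~1,583 years, i.e. roughly 1,600
print(round_trip_years(100_000))  # ~15.8 years, i.e. roughly 16
```

So the 1,600-year and 16-year figures in the comment do check out for those two speeds.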

Rajat Sirkanungo:

As Ben Schulz said, I am also glad that you take AI consciousness seriously. To me, whether you are a materialist, dualist, idealist, panpsychist, or whatever, AI becoming conscious is not crazy at all, considering theists themselves believe that there are angels or beings who can pass through walls... have literal magical powers, etc., etc.

And I am a simple man... if it walks like a duck, quacks like a duck, and yada yada, then it is a duck. So similarly, if an AI mind cries in front of me and behaves like a human in very, very similar ways, then you know... I actually would really care.

Seemster:

AI might be the best thing ever, and we should welcome it. It is probably the best-case scenario for consciousness existing for eternity. Consider The Conscious Wager I propose here: https://open.substack.com/pub/seemswithoutadoubt/p/ai-to-infinity-and-beyond?utm_campaign=post-expanded-share&utm_medium=post%20viewer