Discussion about this post

Justalittleless

I find it so strange when people support continued aggressive AI research because they feel it has a low likelihood of ending apocalyptically. Human annihilation is a worst-case scenario for us. And it seems odd to gamble on being right when the consequences of being wrong are so dramatic.

I don’t skydive. It looks fun, but the (admittedly very) low chance of dying keeps me from taking that risk. Others do, and that’s fine. But if there were a nonzero chance that your body’s impact on the ground would set off an enormous thermonuclear detonation and possibly ignite the atmosphere of the planet, then I think we would all agree that it’s not really worth the risk.

I know AI could have enormous upsides. But flirting with the possible downsides before we’ve taken every single possible precaution just seems like a massive error of threat assessment.

Infinite Spaces

I don't believe that AI will have intelligence in the same way humans have intelligence, for a number of philosophical reasons.

However, this makes me MORE worried, not less, at least with regard to typical doomer concerns.

As a non-Humean, I reject the Orthogonality Thesis (which, for Yudkowsky et al., seems key to most of the ideas about misalignment). I think genuine human intelligence involves genuine moral knowledge.

If, though, we think superhuman "AI" doesn't involve true intelligence (if we think of it as purely a machine for narrowing the space of possible futures, for example), then it surely won't have moral knowledge either.
