Discussion about this post

Malcolm Storey

"Regarding AI risk, there has only been one previous technology that serious people thought could end the world for reasons that weren’t obviously ridiculous: nuclear weapons."

The first nuclear test: they weren't 100% sure it wouldn't ignite the atmosphere.

There was a pretty good case for concern about mobile phone signals in the early days.

https://www.bbc.co.uk/news/health-12541117

But we seem to have decided it's ridiculous (despite documented changes to the brain) and can now just keep increasing signal strength until we discover otherwise.

The Grey Goo nanobot apocalypse: https://en.wikipedia.org/wiki/Gray_goo

Antilife (mirror life: artificial organisms built from D-isomer amino acids instead of L-, and/or the reversed chirality for their sugars)

Plus all the usual ones: GM viruses, ozone layer depletion, nuclear waste, the extinction crisis, climate change, forever chemicals.

It's the precautionary principle. Like insurance: you've never used it, so why do you keep buying it? (Assuming there are any insurance companies still in business after the terrible LA fires.)

Woolery

As a new subscriber, I thought your argument against the analogy between AI risk and past technological leaps was really useful and clear. Thanks. I wasn’t expecting the theological turn but I’m beginning to understand this is part of the package (and it’s certainly your right to include it). Since you’ve come to the conclusion that theism is a sensible explanation for many puzzling things, and that many of the arguments against theism are unreasonable, do you think we should consider making theism a greater part of our standard teaching curriculum?
