
"Regarding AI risk, there has only been one previous technology that serious people thought could end the world for reasons that weren’t obviously ridiculous: nuclear weapons."

The first nuclear test: the physicists involved weren't 100% sure it wouldn't ignite the atmosphere.

There was a pretty good case for concern about mobile phone signals in the early days.

https://www.bbc.co.uk/news/health-12541117

But we seem to have decided it's ridiculous (despite documented changes to the brain) and can now just keep increasing signal strength until we discover otherwise.

The Grey Goo nanobot apocalypse: https://en.wikipedia.org/wiki/Gray_goo

Antilife, i.e. mirror life (artificial life built from D-isomer amino acids instead of the natural L-, and/or L-sugars instead of the natural D-)

Plus all the usual ones: GM viruses, ozone layer depletion, nuclear waste, the extinction crisis, climate change, forever chemicals.

It's the precautionary principle. Like insurance - you've never used it so why do you keep buying it? (Assuming there are any insurance companies still in business after the terrible LA fires.)


As a new subscriber, I thought your argument against the analogy between AI risk and past technological leaps was really useful and clear. Thanks. I wasn’t expecting the theological turn but I’m beginning to understand this is part of the package (and it’s certainly your right to include it). Since you’ve come to the conclusion that theism is a sensible explanation for many puzzling things, and that many of the arguments against theism are unreasonable, do you think we should consider making theism a greater part of our standard teaching curriculum?


I generally don't think that you should teach very controversial views as truths.


I agree. But the great majority of Americans don’t see theism as controversial. It’s the majority position (god is a good explanation). You just arrived at it from a different direction. At least 99% of our elected federal representatives are theists. I’m not sure I agree that theism is controversial. It’s certainly less controversial than atheism, though of course atheism isn’t taught either.

It would be nice if we could reach a consensus. And it sounds like you feel pretty strongly that reason might be able to settle the matter in favor of god. I haven’t been able to follow your reasoning on this so far, but your claim is too important for me to dismiss prematurely. Thanks for sharing.


Interesting point about how this relates to arguments for God. Do you think the fact that the best arguments for God are new is actually itself inductive evidence that we will continue to get new arguments for God but not atheism?

I do wonder, though, whether the newness of the arguments for God should actually be a point against them, not in their favor. In the one past example we have of a strong abductive argument for God that was later shown to be faulty by science, there was a large gap in time between when the phenomenon was first observed and when science explained it naturalistically. So maybe the current arguments are just too new for philosophers to have discovered good refutations of them. Shouldn't it be a bit concerning that most of the arguments for God that convinced very smart philosophers in the past (e.g., Aquinas's Five Ways) are now considered rubbish even by your own lights? Of course, there is always the response that while the inductive argument might provide some evidence that current arguments for theism are mistaken, the current arguments are just so good that they overpower that evidence.


//Do you think the fact that the best arguments for God are new is actually itself inductive evidence that we will continue to get new arguments for God but not atheism?//

Yep!

Your other point is interesting, but I don't think it's right. Our best arguments for most things were invented after 1980--the best arguments for utilitarianism, for example. I just think that, philosophically, we're at the apex of history, and we have better arguments almost across the board than we ever have had.


I agree that this sort of induction isn't good evidence, but I do think it could still lead to a reasonable higher order doubt about the thing in question.

If people have historically been quick to make a certain type of judgement, then I do think it should make you less confident in any judgement of the same type. While it obviously doesn't affect the evidence for the case in question, it *should* make you skeptical that you're correctly assessing the evidence, or make you suspect that there is some evidence you're missing--there is presumably some underlying reason why people's judgements in this domain are systematically misguided.

Now for the examples you discuss, there is also the problem of whether it's empirically correct that people have made these mistakes, which is obviously pretty important.


Very glad to see this point raised in the comments, though I think it's even more important than you say here. If you see a lot of bad arguments for X in particular, then I think it's very important for your epistemic health to understand why (or to take into account the different possible explanations for why). If it's because more resources are going into searching for and promoting evidence and arguments for X than for not-X, then you need to do more than simply evaluate the quality of the arguments to remain unbiased; you also need to take into account that all the resources put into arguments for X didn't come up with an even better argument. Depending on the amount of selection pressure acting on X-arguments (and on how compelling arguments for the wrong conclusion can be made), I think this sort of consideration can have a pretty overwhelming impact: for example, it can take an otherwise compelling argument and turn it into anti-evidence, precisely because it wasn't the most compelling argument you've ever seen. Admittedly, it's difficult to give anything more than a toy model of the sort of argument space you'd need a prior over to do this in a rigorous Bayesian way (see the sketch below).
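Here's a minimal sketch of such a toy model, in Python. The assumptions are mine and purely illustrative: argument "strengths" are drawn from Normal(1, 1) if X is true and Normal(0, 1) if X is false, and advocates present the best of the n candidates they searched. The point is just that the same observed argument flips from evidence to anti-evidence as the search effort n grows:

```python
import math

def normal_pdf(x, mu):
    # Density of Normal(mu, 1) at x.
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def normal_cdf(x, mu):
    # CDF of Normal(mu, 1) at x.
    return 0.5 * (1.0 + math.erf((x - mu) / math.sqrt(2)))

def best_of_n_pdf(s, n, mu):
    # Density of the maximum of n i.i.d. Normal(mu, 1) argument
    # strengths: n * f(s) * F(s)^(n-1).
    return n * normal_pdf(s, mu) * normal_cdf(s, mu) ** (n - 1)

def log_lr(s, n, mu_true=1.0, mu_false=0.0):
    # Log likelihood ratio for "X is true" vs "X is false", given that
    # the best argument found after searching n candidates has strength s.
    return math.log(best_of_n_pdf(s, n, mu_true) / best_of_n_pdf(s, n, mu_false))

for n in (1, 10, 1000):
    print(f"n = {n:4d}: log-LR = {log_lr(1.5, n):7.1f}")

# n =    1: log-LR =     1.0   (a decent argument: mild evidence FOR X)
# n =   10: log-LR =    -1.7
# n = 1000: log-LR =  -298.5   (the same argument, as the best of a big
#                               search: overwhelming evidence AGAINST X)
```

On these made-up numbers, a strength-1.5 argument supports X when it's the only one anyone found, but it's crushing evidence against X when it's the best that a 1000-candidate search produced, because if X were true, that much searching should have turned up something far stronger.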

So I think the empirics are really important here, contra OP. I think enough bad arguments for evolution really should make you sceptical of The Origin of Species. Though I do think the empirical case being argued against hasn't been made very strongly, which is, ironically, a small amount of evidence that we're actually seeing selection for arguments against AI doom! All that said, there is a version of this argument that I haven't done the research to endorse but, you know, feel in my bones, and this post didn't really update me on it: it feels like there's something about current internet and (to some extent) academic culture that selects for tales of crisis and doom, as, for instance, criticised (though not much theorised about) over on Slow Boring. The overall pattern is plausibly due to non-rational forces, and AI doom scenarios seem to fit the pattern, so at least some of their popularity is plausibly due to non-rational forces.


"Now, like crime in multistory buildings, this is wrong on a number of levels."

I laughed way too much at this. Damn it!

Perhaps the difference between "God of the gaps" and genuine in/abductive reasoning is whether any consideration is given to *which* explanation of the phenomena in question is actually likely.

It admittedly takes some effort for me to grasp the anthropics/SIA argument, so I definitely won't comment on it, but every time I've seen "God of the gaps" reasoning actually deployed, it's been something like:

"Isn't it absurd that [inaccurate understanding of evolution]? Do you *really* believe that? Doesn't God seem likelier?"

Which, to me, is a mix of an argument from incredulity and an argument from ignorance, as opposed to any genuine in/abductive reasoning, no?


I think Bentham's Bulldog is right that, for most of the strong philosophical arguments for God's existence, like fine-tuning, the God of the gaps charge doesn't really work. There are certainly bad arguments for which it does work, e.g., "Dark matter can't be explained by science - it must be God causing galaxies to spin faster and extra gravitational lensing to be observed!" but philosophers tend not to make those.* I would say that "God of the gaps" counts as a fallacy in a case where there's no reason to think that the phenomenon in question is more likely under theism than naturalism, or where there are so many potential explanations that there's no reason to think God is the best one. (I put this criterion in Bayesian terms below the footnote.)

*Actually, there is one special case where I do see philosophers sometimes making them - this is when they use God to explain some *philosophical* puzzle rather than a scientific one. For example, philosophers have long been puzzled by what exactly abstract objects are and what mathematical truths are about. Some try to invoke God as an explanation for them. I think this is a terrible explanation and amounts to a category error, and based on his tier list, I think BB also considers these arguments to be very bad. But these arguments are also fairly unpopular among philosophers.
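To put that criterion in Bayesian odds form (my formalization, to be clear, not BB's):

```latex
% T = theism, N = naturalism, E = the puzzling phenomenon
\frac{P(T \mid E)}{P(N \mid E)}
  = \frac{P(E \mid T)}{P(E \mid N)} \cdot \frac{P(T)}{P(N)}
```

The charge sticks exactly when the likelihood ratio P(E|T)/P(E|N) is close to 1: if E is no more expected under theism than under naturalism, then however puzzling E is, observing it can't move the odds toward God. And when there are many rival hypotheses under which E is about equally expected, nothing singles God out as the best of them.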


You phrased it better than I can!

To me, whether it's a fallacy definitely depends on the context (i.e., whether there are many potential explanations, and whether theism is *the* likely option), but I think the charge has been preemptively levied so often that it's a case of crying wolf. Sad!


Fine-tuning relies on a poor understanding of cosmology, so it definitely seems like an extended God of the gaps argument.


Without saying which I think is right/wrong, I think almost everyone pattern-matches the "AI doom" debate onto their experience with "arguments about climate change."


That is the best newsletter name I have encountered in a long while. Might even consider voting, now.


Thanks! I kinda like it even more now that it's 2025


AI risk arguments are stupid on the face of it. The risk from nuclear weapons is obvious: demonstrations of nuclear weapons showed big explosions; industrial civilization concentrates large numbers of people in small spaces; big explosions in a small space kill lots of people. AI risk arguments instead use this train of logic:

Humans took over the world because they are smarter than other animals. So a superintelligent AI will take over the world even more because it is smarter than humans.

This logic is stupid because it presumes the only factor needed in human expansion was intelligence. By this logic, a superintelligent AI would have no problem taking over a world filled with animals. But in reality, what would happen is the AI would fruitlessly flash messages on a screen while animals milled around, until the computer it was running on eventually broke down. So obviously intelligence alone is no superweapon.


But if an AI is able to manipulate humans into doing things, it can just use whatever advantages humans have to its own ends, so I’m not convinced this would really matter


The ability of a superintelligence to manipulate humans is going to be limited, despite what AI-in-a-box people say. Why? Because humans, who are much more intelligent than dogs or chimpanzees, have an extremely limited ability to manipulate such creatures through pure use of language.


>None of the best arguments was more than 50 years old.

Interestingly, you didn't list Pascal's wager among your arguments for God. Is that just because it's not really making God theoretically more probable, but only more pertinent to you?
