Nature Releases A Stupid Editorial On AI Risk
"cocaine addict urges you not to focus on the problems doing cocaine would pose tomorrow when he has a desire to do cocaine today"
Nature publishes an op-ed in which a cocaine addict urges you not to focus on the problems doing cocaine would pose tomorrow when he has a desire to do cocaine today. Okay, not really, but they released something just as insipid and unconvincing. The title of the article is Stop talking about tomorrow’s AI doomsday when AI poses risks today. Now, it turns out that arguing that something poses threats in the here and now does not mean that it won’t pose more threats in the future; if anything, the fact that it is already worrying and only gaining more capabilities makes future threats more likely. But apparently, such a basic point of elementary logic eludes the writer of this op-ed in the prestigious Nature, who thinks that pointing out that there are current risks from AI means that we should not worry about the risks it will pose in the future.
Imagine this feat of grand illogic being applied to any other domain. How absurd would someone look who claimed that we shouldn’t focus on reinvigorating the Non-Proliferation Treaty, which would make it harder for countries like Iran and Saudi Arabia to get nuclear weapons and thereby risk nuclear war, because there is already a nuclear threat? Those who espoused this view certainly would not be published in Nature, and that line of reasoning would not be touted by nuclear “ethicists,” by which I mean the people who think the real concern about nuclear weapons is that they might cause some people to say racial slurs or utter politically incorrect truths or exacerbate inequality. And yet when it comes to AI risk, people’s brains seem to depart their skulls.
You know, I’ve been quite specific about why AI is a threat. I’ve laid out the case in great detail at various times. And I’m far from the only one: Eliezer Yudkowsky has written approximately a googolplex words about the dangers of AI. When two of the most prestigious AI researchers in the world, Bengio and Hinton, along with a sizeable portion of AI researchers, think there’s a high probability it will end the world, and when we’re creating something far smarter than us which we have no idea how to control, as even its critics admit, something that could deceive us easily, it’s not unreasonable to worry that it might pose a risk to the continued existence of the species. Certainly, one might expect that anyone writing an op-ed arguing that we shouldn’t be worried about next-generation AI would provide some reason to believe that, some reason to think that various visionaries in the field are systematically mistaken about the risks of AI. Unfortunately, anyone who expects that will be disappointed.
Why does the article claim that we shouldn’t worry about AI? Well, it doesn’t. You can read it: it provides no rebuttal to any of the arguments for why AI is a significant risk, nor any reason to think AI is not a risk. Instead, it asserts that this “fearmongering” is “counterproductive.” The authors write:
> Fearmongering narratives about existential risks are not constructive. Serious discussion about actual risks, and action to contain them, are.
But at no point does the article justify the claim that fears of existential risk are fearmongering. To accuse someone of fearmongering is to claim that the risks are overblown. But this article at no point provides any reason, not a single one, to think that the risks are overblown. Thus, the charge of fearmongering is presented wholly without evidence.
Then the article goes on to explain that focusing on the supposedly fake risks distracts from the real risks. Now, perhaps this would convince you if you weren’t already convinced that AI posed existential risks. But then who the hell is the article for? The sizeable contingent of people who think that AI does not pose existential risks yet take the existential risks seriously anyway? Do such people exist? If the article assumes that AI isn’t an existential risk and then argues that we shouldn’t take it seriously as an existential risk, it should not convince anyone. It is just preaching to the choir, presenting reasons to believe some position that will only be convincing if one already believes the position.
Unfortunately, articles like this are common and are taken seriously. The contingent of people who think that the major risks of AI come from its potential exacerbation of algorithmic biases, and from the fact that it might say offensive things if it’s not adequately trained, constantly claim that treating AI as an existential risk distracts from their supposedly very important social justice concerns. But unless they can grapple with the arguments for AI being an existential risk, their claims aren’t worth taking seriously. Pointing out that taking problem X seriously might make us take other problems less seriously is not an objection to taking X seriously unless one has a reason to think that X is not actually a risk worth taking seriously.
Unfortunately, many supposed AI “ethicists” have no such arguments, so instead they pander to social justice causes in the absence of arguments. You’d think that the potential imminent end of the world would be enough to get people to set aside their pet political issues, rather than claim that worrying about something that might kill everyone is a distraction from those issues, but unfortunately, humans are much too partisan for that to be the case.
I think the main argument I'm able to extract from the piece is that doom fears are unconstructive, and/or just hand the tech industry more power, and/or fuel the development of scary AI. Like, they write:
> First, the spectre of AI as an all-powerful machine fuels competition between nations to develop AI so that they can benefit from and control it. This works to the advantage of tech firms: it encourages investment and weakens arguments for regulating the industry.
and
> Fearmongering narratives about existential risks are not constructive.
If AI doom were a real threat, but the only thing that happened when you talked about it was that tech firms made scarier AI, then I think it would make sense to focus instead on other risks? That said, it's a strange premise to take for granted.