Nathan Robinson’s Primary Argument Against AI Risk Is That It Sounds Like Science Fiction, Which Is Not, in Fact, a Good Reason to Dismiss AI Risk
The anti-longtermism left is deeply confused
The lovely Garrison Lovely has a lovely tweet describing an article he wrote for Jacobin defending socialist longtermism. He argues that, because socialism is good—something which socialists should agree with—it will be good for the long-term future, and thus left-wingers should get on the longtermist bandwagon. In the article, he mentions AI risk.
Nathan Robinson replied:
Imagine if one made this argument against things that Nathan Robinson believes. Climate change above 5 degrees—sounds like science fiction, thus is not worth worrying about. Nuclear war—has never happened, sounds like science fiction, is not worth worrying about.
Let me describe the risk. The most thorough report on the subject, as well as the majority of AI researchers, predicts that AI will become generally intelligent in the near future. And when AI gets very good at a domain, it effortlessly outperforms humans. So we have good evidence that, in about fifty years, AI will be orders of magnitude smarter than humans in every domain.
More than half of the top 100 most-cited AI researchers say that there’s a substantial risk (at least 15%) that AI will be either on balance bad or an existential catastrophe. Top AI researchers, people who have written textbooks about AI, Elon Musk, Stephen Hawking, and Bill Gates are all very worried about AI. The basic argument for AI risk is the following:
1. We’re likely to get superintelligent AI soon.
2. Superintelligent AI poses major risks.
We have no idea how to make a thing much smarter than us safe. To quote an earlier article I wrote on the subject:
As Grace says, “the probability was 10% for a bad outcome and 5% for an outcome described as ‘Extremely Bad’ (e.g., human extinction).” How would this happen? Well, AI will be programmed with goals. Unfortunately, humans are notoriously bad at figuring out how to get AI to do exactly what we want. This is a very challenging problem called the AI alignment problem. It seems easy. It is sadly not.
Suppose we want an AI to cure cancer. Well, destroying the world cures cancer. There isn’t a good way to program into an AI “DO AS I WANT, NOT AS I SAY.” The AI merely tries to optimize its utility function; if doing so harms humans, then it will harm humans.
You might think that we can just program into the AI, “Don’t harm people.” The problem is: how do we define harm? If an AI cures cancer, it will change the course of the world in dramatic ways—ways that will cause lots of harm. US law is enormously complicated, and people still find loopholes. Programming into an AI every law that applies to humans would be near impossible. Additionally, many laws have aspects that are hard to resolve without a human in the loop. An AI couldn’t be a judge, because it doesn’t have an adequate understanding of the uniquely human interpretive frame through which we read the law.
This is just the tip of the iceberg. Getting an AI with a reward function to do even very simple tasks without ending the world has proved elusive.
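To make the specification problem concrete, here is a minimal toy sketch in Python (my own illustration, with made-up names and numbers, not code from any real AI system). The stated objective only measures cancer cells eliminated; everything we actually care about is missing from the reward, so a pure optimizer picks the catastrophic plan:

```python
# Toy illustration of reward misspecification (purely hypothetical, not a real system).
# The proxy reward only counts "cancer cells eliminated"; everything else we care
# about never made it into the objective, so a pure optimizer picks the catastrophe.

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    cancer_cells_eliminated: float  # what the stated objective measures
    humans_alive: float             # what we actually care about (not in the reward)

def proxy_reward(plan: Plan) -> float:
    """The objective the programmers wrote down: 'eliminate cancer'."""
    return plan.cancer_cells_eliminated

candidate_plans = [
    Plan("develop a targeted therapy", cancer_cells_eliminated=0.9, humans_alive=1.0),
    Plan("sterilize the biosphere",    cancer_cells_eliminated=1.0, humans_alive=0.0),
]

# A pure optimizer just maximizes the stated reward.
best = max(candidate_plans, key=proxy_reward)
print(best.name)  # -> "sterilize the biosphere": destroying the world does cure cancer
```

The point is not that anyone would write this exact function; it’s that every reward function we know how to write leaves out almost everything we care about, and a sufficiently powerful optimizer exploits exactly those gaps.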
To reject the AI risk argument, you either have to disagree with the majority of AI researchers about when we’ll get AGI (and be confident enough that the plurality of top researchers is wrong, even when their judgment agrees with the most thorough report on the subject, to dismiss the risk as science fiction), or you have to think that a superintelligent thing, far smarter than us, doesn’t pose major existential risks. We have no idea how to get an AI to do what we want, and AIs are coming soon.
If most astronomers and the most thorough report on the subject said that aliens with unpredictable goals would arrive on Earth in 50 years, it would be worth actually worrying about aliens rather than idiotically dismissing the risk as science fiction.
We do not know how to get an AI to do what we want.
We do not know what things we should get them to do in the first place, even if we could get them to do what we wanted.
Most goal systems that we could give an AI would end the world, often in unpredictable ways.
One has to have blind faith in corporations and governments to deny that AI is a risk—to think that things much smarter than us, able to think 1,000 times faster, don’t pose major risks to the world. We already know that AI can generate designs for chemical weapons at alarming speed. Once it becomes 1,000 times smarter than us, if it wanted to wipe us out, it could. And most goals it could have would involve killing us with impunity, just as we step on ants when they don’t feature in our plans.
Remember, the mean expert prediction is a 10% risk of AI wiping us out.
On the one hand, we have a very plausible mechanistic story, with each step of the argument having no clear rebuttals. On the other, we have Nathan Robinson’s assertion that it sounds like science fiction.
This is an utterly laughable way of reasoning about things. Any prediction about the future is going to sound like science fiction. ChatGPT would have sounded like science fiction when my parents were kids, before computers were available to private individuals.
What evidence could convince him? Obviously we do not have evidence from past AI systems destroying the world. But it turns out that we have lots of AI experts predicting it, thorough models predicting that AGI is coming soon, and good reason to think that once AGI exists, it will be quite hard for us to control it at all. Most reward functions we could give an AI result in it taking over the world or destroying us, because we are an obstacle to whatever it is optimizing for. It’s actually incredibly difficult to specify a reward function that doesn’t destroy the world in the hands of a suitably intelligent entity.
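Here is a second toy sketch (again mine, with made-up numbers, and a gross simplification of the real arguments): under almost any reward that accumulates over time, “prevent the humans from switching me off” scores higher than “let them switch me off,” because being switched off ends reward collection. That is the sense in which our interference is a problem for it:

```python
# Toy expected-reward comparison (hypothetical numbers, not a real system).
# Whatever the task reward is, being shut off ends reward collection, so a naive
# maximizer prefers any plan that prevents shutdown.

REWARD_PER_STEP = 1.0         # reward for each step spent pursuing its goal
HORIZON = 1_000               # steps available if it keeps running
P_SHUTDOWN_IF_ALLOWED = 0.9   # chance the humans switch it off if it lets them

def expected_reward(prevents_shutdown: bool) -> float:
    if prevents_shutdown:
        return REWARD_PER_STEP * HORIZON
    # If it allows shutdown, it only collects reward in the worlds where
    # the humans happen not to press the button.
    return (1 - P_SHUTDOWN_IF_ALLOWED) * REWARD_PER_STEP * HORIZON

print(expected_reward(prevents_shutdown=True))   # 1000.0
print(expected_reward(prevents_shutdown=False))  # ~100.0, so resisting shutdown wins
```

No real system would literally run this calculation, but any sufficiently capable optimizer of a reward like this has an incentive to keep itself running.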
But apparently, when presented with a powerful, rigorous cumulative case, the Nathan Robinson school of reasoning involves dismissing it because it sounds like sci-fi. It is this utterly feckless approach to the future that puts us in real danger, leading us to downplay AI risk. Robinson’s view seems to be that perhaps superintelligence would be a risk, but we have no reason to think it will exist.
But we do have reasons. We’ve seen very rapid progress in AI recently, generally outpacing the predictions of all but a few people. We have the combined view of a thorough report using the biological anchors model and of most AI experts. Even if you think there’s only a 20% chance that AGI arrives in that time frame (which seems absurdly low), it still seems clearly worth taking seriously.
All in all, Nathan Robinson, a man strangely taken seriously by lots of smart people, rejects a risk regarded as significant by many experts in the field, and supported by very powerful and rigorous evidence, with no counterargument beyond the fact that it sounds like sci-fi. That, and the fact that he likes to call it a “hypothetical computer monster.” Let’s not mince words: Robinson’s view on AI is just as ill-informed and idiotic as that of climate change deniers. But Robinson, unlike climate change deniers, is taken seriously by smart people. It is an utterly absurd situation. But one thing is abundantly clear—Robinson has nothing remotely useful to say on the topic.
To explain why AGI is dangerous, imagine two monkeys talking in a forest in the year 1777. One says "I think these humans with their intelligence could be a threat to our habitat someday. In fact, I think they could take over the world and kill our whole tribe!" The second monkey says "Oh, don't be silly, how could they possibly do that?"
"Well, uh... maybe they uh... hunt us with their machetes! Have you heard...they even have boom-sticks now! And they have saws that can cut down trees, maybe they will shrink the very forest itself someday!"
The monkey doesn't think about how the humans might build factories that produce machines which, in turn, can cut down the entire forest in a year... or about giant fences and freeways that surround the forest on all sides... or about immense dams that can either flood the forest or cut off its water supply. The monkey doesn't even think about the higher-order-but-more-visible threat of laws and social structures that span the continent, causing humans to work together on huge projects.
That will be AGI in relation to us. AIs today can outsmart the best humans at chess and Go, paint pictures hundreds of times faster than most human painters, and speak to thousands of different people simultaneously. AGIs could do all of that and many more things we could never do with our primitive brains. We can try to control them by giving them "programming" and "rules", but rules have loopholes and programs have bugs, the consequences of which are unforeseeable and uncontrollable.
And a world with AGI does not have just one AGI, but many. I think most of the risk comes from whichever of the many superintelligences runs on the most poorly designed rules. For instance, it might decide to kill everyone to ensure that no one can turn it off. It could design a virus that spreads silently without symptoms, only to kill everyone suddenly three months later. And this is just one of the ideas that we monkeys have come up with. What the most badly designed rules and programming will actually cause, we cannot predict.
(I originally dropped this explanation somewhere else, where everyone ignored it. This is the way the world ends: not with a bang, but with a mistake in a .cfg file.)
Agreed, except I do like Nathan Robinson on a lot of topics. He is quite good on most leftist issues.