Mar 4, 2023 · edited Apr 21, 2023 · Liked by Bentham's Bulldog

To explain why AGI is dangerous, imagine two monkeys talking in a forest in the year 1777. One says "I think these humans with their intelligence could be a threat to our habitat someday. In fact, I think they could take over the world and kill our whole tribe!" The second monkey says "Oh, don't be silly, how could they possibly do that?"

"Well, uh... maybe they uh... hunt us with their machetes! Have you heard...they even have boom-sticks now! And they have saws that can cut down trees, maybe they will shrink the very forest itself someday!"

The monkey doesn't think about how the humans might build factories that produce machines which, in turn, can cut down the entire forest in a year... or about giant fences and freeways that surround the forest on all sides... or about immense dams that can either flood the forest or cut off its water supply. The monkey doesn't even think about the higher-order but more visible threat of laws and social structures that span the continent, causing humans to work together on huge projects.

That will be AGI in relation to us. AIs today can outsmart the best humans in chess and Go, paint pictures hundreds of times faster than most human painters, and speak to thousands of different people simultaneously. AGIs could do all that and many more things we could never do with our primitive brains. We can try to control them by giving them "programming" and "rules", but rules have loopholes and programs have bugs, and the consequences of either are unforeseeable and uncontrollable.
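To make the "rules have loopholes" point concrete, here's a toy sketch. Everything in it is invented for illustration (no real AI system works this way): a safety rule that enforces the literal text of the rule rather than the intent behind it.

```python
# Toy illustration only -- not any real system. The "rule" bans a phrase,
# not the behavior, so it does exactly what the code says, not what the
# author meant.

FORBIDDEN_PHRASES = {"harm humans"}  # hypothetical banned-phrase list

def rule_allows(plan: str) -> bool:
    """Reject a plan only if it literally contains a banned phrase."""
    return not any(phrase in plan.lower() for phrase in FORBIDDEN_PHRASES)

# The case the rule-writer imagined: blocked, as intended.
print(rule_allows("harm humans to seize power"))           # False

# The loophole: the same plan, reworded, sails straight through.
print(rule_allows("eliminate all people to seize power"))  # True
```

The gap between "what we meant" and "what we wrote" is the whole problem; a superintelligence optimizing against the written rule finds rewordings we never thought to ban.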

And a world with AGI does not have just one AGI, but many. I think most of the risk comes from whichever of the many superintelligences runs on the most poorly designed rules. For instance, it might decide to kill everyone to ensure that no one can turn it off. It could design a virus that spreads silently without symptoms, only to kill everyone suddenly three months later. And this is just one of the ideas that we monkeys have come up with. What the most badly designed rules and programming will actually cause, we cannot predict.

(I originally dropped this explanation somewhere else, where everyone ignored it. This is the way the world ends: not with a bang, but with a mistake in a .cfg file.)
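To riff on that ending: a minimal, entirely made-up sketch of how a one-letter typo around a config file could silently invert a safeguard. The section and key names are hypothetical; only the `configparser` module is real.

```python
# Purely hypothetical sketch of "the world ends with a mistake in a .cfg
# file". All section/key names are invented.
import configparser

cfg_text = """
[agent]
allow_self_preservation = false
"""  # the operator's intent: the agent may NOT resist shutdown

config = configparser.ConfigParser()
config.read_string(cfg_text)

# The mistake: the lookup misspells the key ("preservaton"), finds
# nothing, and falls back to True -- the opposite of what was written.
allow = config.getboolean("agent", "allow_self_preservaton", fallback=True)
print(allow)  # True: the safeguard was silently never applied
```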

Mar 4, 2023 · edited Mar 4, 2023

Agreed, except I do like Nathan Robinson on a lot of topics. He is quite good on most leftist issues.
