13 Comments
Zachary Jones

Just started reading this book for a critical review, thank you for freeing me from the burden of doing so.

Trivial or False

I caught them on Non-Zero and was shocked by how unpersuasive their arguments were. My jaw hit the floor when they said they don't even use it. Like I can understand you don't like it but how do you expect me to take your opinion about the capabilities of these things seriously when you haven't used them? Insanity!

There is an X account I have followed for a while because he goes kinda hard and has weird opinions. But I saw this Tweet yesterday, and when I read the replies I thought I was in the Upside Down.

https://x.com/duns_sc0tus/status/2018050561086525864

Trivial or False

When Robert Wright asked how they know that AIs aren’t conscious, given that (he claims) you can’t even really know with certainty whether other people are conscious, Bender replied that she doesn’t “have conversations with people who don't posit my humanity as an axiom of the conversation.”

Yes, this was very weird!

Also, did you catch the part where she corrected something he said, and he joked that he was going to cut his mistake out so it would sound like she was correcting him for no reason, and she replied that she had a recording of her own? Wright was obviously joking, and she was obviously NOT "yes-anding." She was dead serious. What a humorless scold.

Chasing Ennui

A good rule of thumb is that anyone who rejects an institution or theory based on its origin should probably be ignored.

Israelite Introspective

Stop wasting your time arguing with idiots. It's like "debunking" flat earthers.

IMP

I like it, and these authors aren't marginal internet crazies.

Israelite Introspective

Or alternatively, don't correct your enemy when he is making a mistake.

Mark A

Now go read Romain Brette.

Peter

The risk of superintelligent AI coming from LLMs is 0. They just aren't designed for that.

Robert Hall

Could you provide some examples of novel philosophy that is AI-generated?

Ben Schulz

If you consider alternative theories of gravity to be philosophy, then they do so quite often. MOND, dark matter, quintessence, nonlocal gravity, and varieties of string theory can all have their math modified and made into something new. I've been collaborating with them on this kind of brainstorming. It's pretty engaging. At least two of their theories are covariant, avoid "ghosts," and pass screening tests on the solar and galactic scales.

Matt Beale 2

I believe AI is both incredibly powerful and subject to misguided over- and under-investment (remember dark fiber?). However, by focusing on the possibility of an AI acting on its own inhuman preferences at the expense of humans, we are ignoring the dangers that are already on our doorstep.

1. Individuals or groups of humans using AIs to satisfy their desire for wealth that eclipses their empathy or, worse, to satisfy their desire to inflict cruelty and/or subjugation on others.

2. Individuals or groups of humans enabling AIs without truly understanding the potential impact, causing everything from trivial inconvenience and waste to catastrophic destruction. Not unlike an autonomous lawnmower in the flower bed...or in a field of bunnies.

Like any tool that amplifies human abilities, wisdom and care will be necessary to ensure that AI advances humankind. Motivated by tragedy or the anticipation of tragedy, Alfred Nobel and Albert Einstein were ultimately dedicated to peace. We can only hope that the "move fast and break things" generation gets enlightened sooner rather than later.

The Ancient Geek

> The authors start by summarizing the doom scenario as “machines become ‘sentient’ enough to have their own preferences and interests, which are markedly different from those of humanity.” This is misleading; as basically every AI doomer has said many times, the AI doesn’t have to have consciousness to be dangerous

Apart from the bit about sentience, that is accurate. The typical doomer argument is that future AIs will be agents with preferences and interests markedly different from those of humanity. Basically, you just need to replace "sentient" with "agentive".

> “Embedded in the idea of alignment is a premise with which we fundamentally disagree: that AI development is inevitable.” But this is false.

It's a half-truth. Standard doomer arguments regard alignment as difficult because you would only get one chance to get it right: AIs would both develop rapidly and lock onto the values of their "seed". So there is an assumption of inevitable *rapid* development.