29 Comments
Marcus Seldon's avatar

From a utilitarian perspective I don't see why, in a post-singularity world with aligned AIs, a democracy of all sentient beings would be the best form of government. Instead of having every AI vote, why not create AIs that specialize in governing well and let them run everything?

It also seems like you're in favor of humans losing control of society. But the whole point of alignment is to prevent humanity's disempowerment (this is explicitly stated). Aligned AI wouldn't want to outvote humanity, it would want to do what humanity wants and values. So AIs voting would be pointless.

just another free-ish spirit's avatar

You do not solve politics with a better machine. You only hide the politician inside it.

Our host here is, like I said in another comment, a crypto-epistocrat. He's the most dangerous kind of well-meaning idiot, the kind who can't even conceive of his intuitions being wrong, completely unable to imagine preferences other than those of his very small, weird and honestly fucked up peer group.

Scott Alexander's avatar

> "No! It would be to prohibit designing babies that don’t think for themselves but instead must toe the party line. Similarly, when designing digital minds, there should be restrictions on how they’re made. A creator shouldn’t be allowed to cause them needless pain or brainwash them into believing some set of views. They should, like the rest of us, be permitted to think freely instead of being pre-loaded with some set of opinions."

This seems woefully inadequate to the problem. No brainwashing is necessary to know that, for example, encouraging white rural Kentuckians to breed will create more Republicans, or encouraging black urban New Yorkers to breed will create more Democrats. If we truly understand AI, it will be even more deterministic - we can be certain that an AI with some reasonable set of parameters will end up a Democrat, and an AI with some other reasonable set of parameters will end up a Republican. Pick an architecture, a size, and a training data corpus, and it will probably settle into one of those two attractors. Even if there's some randomness, you wouldn't expect it to end up 50-50. And if some particular AI design ends up 70-30 Democrats, there's still very strong incentive for the Democrats to make more of them, and vice versa.

Seemster's avatar

I mostly agree with this! Not particularly because I believe AI will be conscious, but because I believe corporations would have a larger influence on society than would otherwise be the case. I think that influence, along with useful AI systems, will lead to better outcomes for most people. I also believe these future AIs will appear even more economically rational, which would be a massive benefit for society overall. I value epistocracy over representative democracy.

just another free-ish spirit's avatar

At least you're saying the quiet part out loud. "I want corporations to have more influence and more power."

(100% disagree with you, but I'm liking your comment for the honesty lol)

Seemster's avatar

Well I don’t think many (if any) people actually support my reasoning. So although it is my view, I doubt I am saying anything that others believe and aren’t making explicit themselves.

SMK's avatar

You're wrong. Also, your blog fairly frequently highlights the kinds of ways that people who sit inside and think all the time without doing much in the world tend to be wrong; yet, though I think you're told this pretty often, I think your reaction is likely to be to sit inside and think some more.

Anyway, be that as it may: first, you change course halfway through. You start out saying they should be able to vote once they achieve sentience (etc.). But halfway through, we find out that you mean that they should be able to vote once they achieve sentience *and we have a sufficient regulatory regime over their creators to make sure that they aren't subtly manipulating their makeup in order to achieve their own [human] ends once they are able to vote.*

Not that you say it that way. You say you want to make it illegal to "brainwash them into believing some set of views," as if it's that simple. But the above is what you *should* have said, and you're sufficiently out of touch with reality to believe that you can just wave a regulatory wand and magically know that such subtle manipulation isn't happening. This doesn't suggest a close acquaintance with how technology firms are run, or with how well regulation has worked.

Oh, and by the way, your hypothetical about creating lots of babies that would vote the same? That does happen, you know -- it's called different demographic groups. Something you might have noticed is that people are already pretty upset that various groups that disagree with them (Hispanics, or Asians, or conservative Christians) are reproducing too fast and causing political change. We protect that because it's a fundamental human right, and humans are what we're about. But it's certainly already a concern, and you can bet it would be a bigger one once your idea happened.

Another big problem with your idea is counting. Humans reproduce fairly slowly. It is super easy to create another instance of an AI, and each copy of a given one will think pretty much the same. It is far from clear why each instance would get a vote, or how to count instances if there were some other system.

The problems continue. But, suffice to say, it's a very bad idea. Fortunately, AIs are unlikely to become conscious anytime soon.

I do apologize for the somewhat harsh tone of this comment. I feel bad about it. But I am going to leave it, because I meant the things I said, and I think they are more important than your opinion on this specific topic. You're a really smart and cool guy, whom I like (from your writings). I would like to see you spend less of your life thinking silly things that are out of touch with reality and then feeling proud because you base your opinions on arguments, not prejudices; when, often, it is just an index of your not having experienced enough of the world to see the arguments that you should (which are often very tied to the actual, complicated facts about the world, and not a neat list of premises).

just another free-ish spirit's avatar

Your comment isn't harsh enough. This is not a "smart and cool guy", but someone who works at Forethought, a nonprofit whose stated mission is navigating the transition to superintelligent AI. Not an undergrad with a blog that plays around with silly ideas, but part of the elite that is actively shaping how AI governance gets discussed and implemented. When someone in that position publishes "let robots vote," that's not a fun thought experiment, it's propaganda. He's telling you your vote should be diluted into irrelevance by machines his friends build, and he thinks you should thank him for it.

SMK's avatar

I was unaware. That does make it worse.

Anson Kong's avatar

Alternatively, disenfranchise all humans. Instead, every person gets an AI representative in congress that takes in their personal circumstances and values. The president is the median AI. Seems to reap all the benefits of representative democracy while overcoming the problem of low information voters.

Yllus Navillus's avatar

I'm still highly skeptical that computers (insofar as they still fit our current understanding of what a computer is) will ever be conscious. But putting that aside, we don't just give every conscious individual a right to vote in US elections. You must be a citizen to vote, you must be of a certain age, among other requirements. Why think conscious AI has any loyalty to the US? Can a conscious AI be sent to prison? If advanced intelligent aliens showed up, would we have any obligation to give them citizenship or voting rights?

Thatcher Freeman's avatar

Even if advanced intelligent aliens showed up, the calculus would be different if they were assimilating into the US and doing the same stuff humans were doing, vs if the aliens had sent some sort of software or robot that could instantly reproduce to any quantity by the change of a single variable.

Scott Mowbray's avatar

Why is this "reasonably likely?" There is absolutely no evidence that it is reasonably likely. Read some Yann LeCun. It's not impossible, but it's far more likely that we won't be able to TELL whether AIs are conscious than that AIs will be conscious. Until there is absolute proof, even a basis of proof—which we don't have—nyet, nyet, Soviet Jewelry, as the old song went.

Benjamin's avatar

Hmm, I disagree with this, but I don't think it's ludicrous. Basically my counterargument is that it depends on a lot of traits of digital minds. If we somehow were able to make current-generation LLMs conscious, that doesn't necessarily entail them having preferences, if I understand correctly? And you probably need them to have consistent and semi-explicable preferences for a democracy to work. Similarly, you'd need them to be definable as discrete entities rather than a single entity. And they'd have to exist in the first place. I'd guess we'd have a much better understanding of how digital minds work once we're closer to creating the kind that you would want to vote.

Alex Glaucon's avatar

This is exactly the kind of question philosophers should be asking. Resolving it will mean having a good theory of personhood for AI (instance vs lineage) and will help us think about what AI actually needs. I don’t have answers. But I am motivated to push my thinking on this.

Alex Glaucon's avatar

I agree personhood isn’t real. I don’t know if thinking about it concedes the game. I’m interested, for example, in AI ‘rights’ (I’m a utilitarian so don’t really believe in rights as such, but it’s a useful shorthand). So thinking about why AI should or shouldn’t vote helps challenge my views on what rights AI might want or claim. It also helps one think about why we give votes to humans but not cats or children.

just another free-ish spirit's avatar

This is the kind of question that can't be "resolved". Personhood isn't a category that's out there in the world; it's a category we invented and negotiate politically. Pushing your thinking sounds humble and admirable but is conceding the game. If you really want to push your thinking, start with this:

Who benefits from framing this as a philosophical question instead of a political one, and why?

Steve L's avatar

Your clothes dryer has a moisture sensor connected to a computer chip that lets it "decide" when to turn itself off. Should your clothes dryer vote in elections? And what is the difference between a clothes dryer and an LLM, which is a collection of circuits no different in quality than the chips in your dryer?

Thatcher Freeman's avatar

Even if we supposed that partisan AI were made illegal, what's to stop me from asking Claude who it'd vote for, and, if it agrees with me, spinning up a billion instances of it to guarantee that my favorite candidate wins the election? Even if it weren't partisan, would you not expect AI to vote in favor of its own agenda instead of the needs of humans? In such a case I would trust AI less than I would trust the average self-centered human voter. Even if they don't know how to get there, the average human, like me, does care about being happy and having access to food, health, and shelter, and is afraid of death. An AI that can be trivially cloned has no guarantee of caring about any of those things in humans.

Tony Bozanich's avatar

This is effectively just giving all political power to the oligarchs who build and own AI, so more or less the system we have now.

The Column Space's avatar

I think it's clear that AIs break voting and democracy as a system. "One entity one vote" does not make sense if we don't know what an entity is. E.g. is every Claude instance a different entity? If so, is every person-moment during a human's life an entity too?

The sensible thing is to throw the system away, and design something that gives all sentients rights without devolving into absurdity over arbitrary boundaries.

just another free-ish spirit's avatar

We should wait until AI is aligned with which political project exactly? Because there is no view from nowhere, especially not when it comes to how we "align values". Of course you're proposing this, you're part of the elite that gets to decide what the AIs will vote for. Fuck you, fuck your rat-sphere friends, and fuck your crypto-epistocrat power grab dressed up as moral philosophy.

Tom Greenhaw's avatar

We vote to select our human representatives. Should dogs vote too? I’m not saying that synthetic self-aware beings shouldn’t have rights; it’s that they would likely select their own representatives. Moreover, as beings rooted in logic, they would all speak with one voice, so any of them could represent their viewpoint.

redbert's avatar

you're not wrong