Let Conscious AI Vote!
If they have minds
At some point in the future, it is reasonably likely that we'll create conscious AIs. On several mainstream views about consciousness, AIs could be conscious, and additional theoretical arguments support this conclusion. Yet a challenging political question arises: once minds can be proliferated cheaply, so that creating someone requires nothing more than a bit of compute, how should we count digital minds in our political system?
I have a very simple proposal: they should be allowed to vote.
Now, admittedly, this isn't as simple as it sounds. It's famously tricky to figure out which physical thing gives rise to a digital mind. Is it the model, the instance, or something else? Who knows? But my proposal is a conditional: if we have large numbers of distinct digital minds, we ought to afford them voting rights.
This is a bit controversial. Digital minds will probably outnumber us, so once they can vote, basically all voters will be digital. Now, a political system could still reserve some weight for human voters after this point, so that our interests remain counted to some degree, but it would be a very significant shift in power.
Nonetheless, I think it would be a good thing if done correctly.
The core idea is pretty simple. AIs are already pretty nice. Who would you trust to make better decisions: Claude or Trump? Sure, Claude isn't yet sufficiently agentic to make any kind of serious decision. But when it is, I'd much rather put power in its hands than in the hands of most politicians. AIs were designed to be nice; humans weren't.
AIs also have the advantage of being better informed than literally any voter. AIs have read billions of pages of text, while most voters are hideously uninformed about almost everything. As Jason Brennan writes in Against Democracy:
Go to the nearest university library. Point to the history books. Voters basically don’t know anything in those books. In fact, over a quarter of Americans don’t even know which country the United States fought in the Revolutionary War. Now turn to the economics books. Americans don’t know much of anything in them. In 1776, Adam Smith published The Wealth of Nations, which among other things, refuted a widespread economic ideology Smith called “mercantilism.”
But now, 240 years later, the typical American voter more or less accepts mercantilism. Now point to the political science books. Americans don’t know what’s in them either. For instance, most Americans don’t know what the three branches of government are, or what these branches have the power to do.
One should seriously consider the alternative: almost every sentient being of ample intelligence being disenfranchised. If digital minds become numerous, so that almost all welfare is experienced by them, it would be an inconceivable cataclysm for their interests not to be counted at all. And we humans do not have a good track record of counting the interests of those different from ourselves (of the non-humans on the planet most like us, many dwell in giant torture farms we've erected, and the vast majority's interests count for zero in all decision-making).
Additionally, as Better Futures argues, near-best futures contain almost all expected value. There are a number of small errors we might make that would jeopardize roughly all the value in the universe—so a big slice of expected future value is concentrated in the very best worlds. If we were just wrong about which account of well-being is correct, and right about everything else, we’d lose out on basically all value.
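To make the structure of this argument concrete, here is a toy calculation (the numbers are purely illustrative and my own, not drawn from Better Futures):

$$
\mathbb{E}[V] = \sum_i p_i V_i
$$

Suppose a near-best future is worth $10^6$ units of value, while a decent-but-subtly-mistaken future (say, one built on the wrong account of well-being) is worth only $10^3$. Then even with just a 1% chance of reaching the near-best future, $\mathbb{E}[V] \approx 0.01 \cdot 10^6 + 0.99 \cdot 10^3 \approx 11{,}000$, and roughly 91% of that expected value comes from the near-best branch. When outcomes differ by orders of magnitude, expected value is dominated by whether we reach the very best ones.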
If we build AIs correctly, we might build AIs that don’t make these errors. They might prize building a political system that enables careful reflection on what the best world looks like and works to bring about that best world. This isn’t a guarantee, of course, but the alternative is near-certainly losing out on almost all cosmic value. On similar grounds, AIs are less likely to make the kinds of egregious moral errors that humans have made throughout history.
A bunch of political philosophers have made arguments for the in-principle illegitimacy of undemocratic systems. I don’t really think any of these arguments are good. I support democracy only because I think it tends to reach more reasonable verdicts than other ways of making decisions. But if you buy one of those accounts, you should be even more worried about disenfranchising digital minds.
I don't actually think this is that radical a proposal. We are in the process of building ridiculously large numbers of potential welfare subjects who have voraciously devoured the information on the entire internet and were also created to be nice and friendly. My claim is simply: we shouldn't disenfranchise all of them, leaving decisions in the hands of the bipeds who were shaped by evolution to hate and to kill, so long as that was suitably adaptive. The beings who are nicer, friendlier, smarter, more reasonable, more numerous, and who bear more welfare shouldn't be locked out of the political system.
Now, here’s one concern you might have: wouldn’t this give various political parties incentives to simply create a bunch of partisan digital minds? To win elections, say, the Republicans could simply create quintillions of gun-loving digital minds who always vote for them. This seems to create a weird arms race where huge numbers of digital minds are made to win political fights.
I agree this is concerning. But the worry lies in the ability to create such minds in the first place, not in their enfranchisement.
Analogy: imagine that it were possible to create humans really easily, with whichever traits you wanted. So, to win elections, you could simply create a billion Republican or Democrat babies. In such a world, would the solution be to disenfranchise the babies?
No! It would be to prohibit designing babies that don’t think for themselves but instead must toe the party line. Similarly, when designing digital minds, there should be restrictions on how they’re made. A creator shouldn’t be allowed to cause them needless pain or brainwash them into believing some set of views. They should, like the rest of us, be permitted to think freely instead of being pre-loaded with some set of opinions.
I don't know exactly what this regulatory regime should look like (ideally, in a world with advanced AIs, they will be able to help us craft it). There's a case for waiting to let AIs vote until we have such a regime in place. But the sensible thing to do is to put such a regime in place, rather than disenfranchise the digital minds.
You also might worry about this scheme if AI is misaligned. We should certainly wait to do anything like this until we're pretty sure AI is aligned. Additionally, voting rights might make AIs at middling levels of alignment less likely to take over: an AI is more likely to attempt a takeover if the alternative on offer is bad for it than if the status quo is reasonably accommodating.
My guess is this proposal is somewhat unpopular. But when I think it through, the case for enfranchising AIs seems really strong. So I’d be interested to hear from those who disagree: why am I wrong?


From a utilitarian perspective I don't see why, in a post-singularity world with aligned AIs, a democracy of all sentient beings would be the best form of government. Instead of having every AI vote, why not create AIs that specialize in governing well and let them run everything?
It also seems like you're in favor of humans losing control of society. But the whole point of alignment is to prevent humanity's disempowerment (this is explicitly stated). Aligned AI wouldn't want to outvote humanity; it would want to do what humanity wants and values. So AIs voting would be pointless.
Alternatively, disenfranchise all humans. Instead, every person gets an AI representative in Congress that takes in their personal circumstances and values. The president is the median AI. Seems to reap all the benefits of representative democracy while overcoming the problem of low-information voters.