18 Comments
In-Nate Ideas:

I agree that diversity is the way to go for two reasons:

1) I think the question of intrinsic value might be irreducibly uncertain, so we should favor robustness.

2) Because of status quo bias, I think a lot of people (rightly or wrongly) think a good life looks something like our natural lives. I sort of share the intuition that being a euphoric computer is unappealing, though I think that intuition is probably totally wrong. So to get people on board with whatever bliss-maximizing scheme you want to run, it probably makes sense to also let people keep living nice natural lives on earth (or at least simulate a few).

James:

Minor point and then a serious point:

Minor point:

"If AI can generate novel chess moves, why couldn’t it generate novel arguments?" -> I agree with you on the conclusion but think this isn't an amazing argument for it, because you might think there are infinite possible arguments (this is probably true in maths), but there are _definitely_ finitely many chess moves, so you can just search the tree. But whatever, your key point is correct that obviously AI is going to be able to come up with new arguments.

Major point:

I wish you were more convinced by the safety concerns, rather than about optimizing for this-future vs. that-future. I agree that if everything goes well, this is a big concern. But the "if" really is huge. Maybe you have inside-view arguments for why you think things will be fine (I have inside-view arguments for why I think things are pretty likely not to be fine), but if not, then I think the outside view should _really worry you_!! The people running the companies are saying there's a lot of danger (and no, it isn't only to drum up hype; lots of people in AI sincerely believe this)!

Bentham's Bulldog:

I'm concerned about both!

James:

But more concerned!

Judith Stove:

I don't think philosophy is, or should be, principally about arguments. It's also about living a good life and being a good person. Also, making happy people is difficult; we don't know how to do it; we don't even know how to make people happy (which is different), let alone what 'happy' might look like for machines, if that even makes sense. But perhaps all this is ironic, and I've taken it too seriously.

Bentham's Bulldog:

I think we know how to make happy people. There's actually a rather straightforward method, often performed for other reasons...

(Of course, that doesn't ALWAYS make happy people, but it usually does).

I don't think what I'm saying about AI capabilities hinges on what the aim of philosophy is. Just substitute "the way we figure out moral conclusions in philosophy" for "humans doing philosophy mostly involves."

I also expect that even if you're pessimistic about the quality of existing human lives, very advanced future technology should make it easier to create well-off people.

Judith Stove:

But 'well-off', which probably means 'materially well-resourced', is not the same as 'happy', is it? I'm not actually pessimistic about the quality of existing human lives; I think people can be happy, but it's not something we can guarantee or even do much to assist. Thank you for replying, I appreciate it.

Bentham's Bulldog:

By 'well-off' I meant happy. Or, more precisely, high in whatever makes someone's life go well.

skaladom:

> Assuming we don’t all die, in the future we’ll have lots of resources.

Will we? So far every civilization has ended up collapsing (usually quite slowly) and falling back to a relatively poor state. It's not anyone's favorite future, but I don't think it can be fully ruled out.

Bentham's Bulldog:

Good point! I was imagining a scenario where we don't all die or regress, and I think regression is unlikely, but I agree I could have been more precise.

Charles Stewart:

I think a future where we live but are resource poor is quite likely.

redbert:

👏🏻👏🏻👏🏻

X_swimmer:

Cool! I’m very curious about the use of AI for moral philosophy, though! So far as I know, current AI mostly means LLMs. Imagine if we advanced technologically but not ethically, and trained a powerful LLM on data which hadn’t progressed past the moral assumptions of 1709, say. It seems like such an LLM would indeed produce moral recommendations that justified slavery! Based on my understanding of large language models and how they’re trained, I don’t think it would be able to escape the frame of its training data to critically evaluate the morality of slavery.

Vikram V.:

> Humans doing philosophy mostly involves some combination of: Coming up with clever new arguments.

Philosophy might not actually be a matter of argument. In fact, it seems like there are important parts of philosophy that are just not reducible to an argument, and yet remain valid. Of course, any argument for that statement might be a self-contradiction, but that doesn’t make it false.

Bentham's Bulldog:

What do you have in mind that you don't think a machine could mirror?

Vikram V.:

Assuming that machines can be conscious in the same way humans can, nothing.

I was more commenting on the assumption that all philosophy is a matter of argument. Your post probably still holds up if that’s not true.

William Gadea:

Interesting, but I offer a couple of reservations:

First, I don't think anthropomorphizing AI is helpful. You write: "Ensure digital minds have adequate autonomy, rather than having their will seriously constricted. This is both good for their expected happiness and might have intrinsic value." But why should machines have a need for happiness? I write about this here: https://williamgadea.substack.com/p/ais-and-the-pleasure-principle

Second, I think you're over-estimating the power of computation and under-estimating the power of data. If in a sci-fi future we could track our cognitive processes well enough to register happiness and pain, flourishing and floundering, that data could be used for governance and ethical purposes. Of course, many would be horrified by the privacy issues, but perhaps only a representative sample would be required for learning.

Vittu Perkele:

My view of what the future should be like is more monomaniacal than yours: I quite strongly believe positive emotional valence is all that matters, that upon reflection the smartest AIs would agree with this, and therefore that artificial minds experiencing non-stop pure bliss and nothing else are what should be constructed from the universe's resources.

However, there is one concern I've thought of that affects both your goals and mine, and the goals of anyone else who would desire to use the universe's resources to maximize value. Presumably, we both want to turn the unthinking matter making up the cosmos into minds experiencing well-being (by whatever metric), and the infrastructure necessary to support those minds. We don't want all the hydrogen and helium out there to go to waste; we want to somehow use it to maximize value. But even if we find a way to transform the normal matter of the universe into valuable minds, what are we to do with dark matter and dark energy? Something like 95% of the mass-energy in the universe is dark matter and dark energy. If we truly wish to use the universe to maximize value, then not harnessing them would be letting the vast majority of the universe go to waste.

So, I propose that one of the most important things future humans or superintelligences can do is figure out just what dark matter and dark energy are, and then actively harness them towards creating value-instantiating minds, so that the stuff that makes up the vast majority of the cosmos does not go to waste. That, I think, is what it would really mean to use the universe to maximize value, and therefore the study and understanding of dark matter and dark energy is one of the most valuable things we can put our efforts towards.
