10 Comments
John grant

This is Dark Forest theory (Cixin Liu, The Three-Body Problem) and I am afraid there's no way round it. Anything technologically capable of getting itself to within striking distance of us is so powerful that the only prudent move is to strike it. No "We came in peace for all mankind" nonsense

comex

It has overlap with Dark Forest theory, but it’s pretty different from the “standard” version of Dark Forest.

Dark Forest theory usually posits a stable equilibrium where many civilizations can exist but they all have to hide. Thus the theory serves as an answer to the Fermi paradox.

This post posits an unstable scenario where humans or human-created entities will probably destroy everything. By implication, humans are probably the first intelligent life (within some local area), so the Fermi paradox is not solved.

The two scenarios overlap if either:

(a) We’re not first, but we’re very early, early enough that every single existing large-scale civilization in the area has managed to avoid going rogue so far; nevertheless, every single civilization has gone into hiding for fear of a preemptive strike from another civilization worried about them going rogue.

Or:

(b) We’re not early, but civilizations at our level can reliably be exterminated before becoming able to destroy everything. But if ASI is as close as it seems to be, then the exterminators are cutting it extremely close. And there’s no obvious reason for them to cut it close.

In any Dark Forest scenario that posits lots of civilizations, there’s a bit of a paradox. The available options for a civilization are not just “publicly advertise you come in peace” or “hide”. There are also the options “send a probe to move a safe distance away, then start broadcasting”, “preemptively build von Neumann probes to replicate enough you can’t be destroyed”, or a combination of the two. For these approaches to be insufficient, the feared threat would have to wipe absolutely everything out in a huge area, so that any reasonable distance traveled before broadcasting, and/or any achievable size the civilization grew to with von Neumann probes, would not be enough to protect the home planet. But the larger the scale of destruction, the harder it would be to avoid leaving evidence behind! (And that’s assuming the attackers even want to avoid leaving evidence behind.) Yet we haven’t observed any such evidence.

Ben Schulz

I prefer Hanson's "Grabby Aliens" model, and it certainly makes sense.

Ibrahim Dagher

Wow, this is incredible timing. I was literally going to write a blog post this week on how I think space could be a *solution* to the Vulnerable World Hypothesis. Hopefully I’m right!

River

Interesting. This all seems very speculative and I'd put very low probabilities on any of it. But I think where we really diverge is I just don't buy that things that happen in the near term will have some unusual lock in effect. Is there any reason for thinking that, which doesn't take as a premise the imminent creation of some magically powerful AI? I expect that the first institutions of space governance will probably have significant problems no matter how long we wait before going to the stars, and we will iterate on them and build better institutions over time, as we have with past institutions. We only learn by doing.

Tony Bozanich

Doesn't the President of the United States already have unlimited power to destroy the world with nuclear weapons? There are no legal restrictions on a POTUS giving orders to launch nukes.

Yllus Navillus

Not sure if you've read the Dune series by Frank Herbert (you'd love it if you haven't), but there is an event called The Scattering in the later books, whereby the human race scatters across the far reaches of space after three millennia of stagnant peace coercively enforced by the God Emperor Leto II. The reasons why this happens are pretty complicated and I won't go into them here, but the gist is that it makes the human race impossible to ultimately exterminate. It could be that developing space will, in the short term, benefit a small number of elite developers at the expense of everyone else, but that has been the story so far here on Earth too. It could be that developing into space will, in the long term, result in a dramatic expansion of human freedom as enterprises become too vast for a small number of people to control.

Doctrix Periwinkle

Von Neumann probes would make replicas of themselves out of what, exactly? Most of space is a vacuum. Sure, I could imagine Von Neumann probes being a catastrophe for a planet, but how do they manage to replicate themselves across the universe when the universe is mainly made of nothing?

Ethan Morales

Does the speed of light put a limit on this concern? The one concrete example here is the Von Neumann probes, but they would also be limited by light-speed travel. The distances between stars are so vast that there is a pretty substantial limit on how fast a risk can travel. Of course, that might just prevent galactic settlement absent some FTL technology (and if we have FTL technology, the probes could probably propagate at such speeds). But in a world where we settle the galaxy via generational ships and the risk is only developed later, I think the long time horizons might make this functionally similar to other end-of-the-universe concerns (which I guess is the final longtermism problem).
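
For a rough sense of scale, here is a quick back-of-the-envelope sketch (the ~4.24 light-year distance to Proxima Centauri and the ~100,000 light-year diameter of the Milky Way are standard figures; the speeds are just illustrative assumptions):

```python
# Rough sanity check on interstellar travel times at sub-light speeds.
# Assumed figures: Proxima Centauri is ~4.24 light-years away and the
# Milky Way is ~100,000 light-years across; the chosen speeds are illustrative.

def travel_time_years(distance_ly: float, fraction_of_c: float) -> float:
    """Years needed to cover a distance (in light-years) at a given fraction of c."""
    return distance_ly / fraction_of_c

for label, distance_ly in [("Proxima Centauri", 4.24), ("across the Milky Way", 100_000)]:
    for v in (0.01, 0.1, 0.5):
        years = travel_time_years(distance_ly, v)
        print(f"{label} at {v:.0%} of c: ~{years:,.0f} years")
```

Even at half of light speed, anything crossing the galaxy needs hundreds of thousands of years, which is the kind of long time horizon I mean.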

I mean there’s always the possibility of developing new technology or new physics that just instantly destroys the universe, but I don’t think we can put any credence on that either way. Plus, if we can posit that, you can probably also posit that a technology could be developed that is so easy that no regime could prevent its development, which just like, sucks if true.

I think there are other, far more immediately pressing concerns with space settlements from a longtermism perspective, related to governance in extremis (for which I would gratuitously self-cite an article I wrote for OUP), but I worry the VWH or VUH as formulated here is too speculative to be actionable, although it is worthwhile to think about.

Vikram V.

Not worried. Since the probability of God is 100% (as the only way to produce “unsetly many” beings), this state of affairs must be maximally conducive to the Good.