The Vulnerable Universe Hypothesis
The vulnerable world hypothesis might be true of space
What do you do if a lone private actor can destroy life on Earth?
It’s not an easy question, and it’s not obvious what regulatory structure you’re supposed to have in such a situation. Nick Bostrom famously posed this question in his paper The Vulnerable World Hypothesis, in which he muses about regulatory regimes for controlling technology that is both existentially destructive and easy to make. His proposals ended up sounding pretty Orwellian, though the alternatives might be worse.
There are possible worlds where a rogue agent could cheaply unleash a bioweapon that kills millions. If we live in such a vulnerable world, and there are no highly effective regulations for preventing such bioweapons, then eventually someone will build one, and that is the end of civilization. You can’t have counterterrorism so effective that no one ever slips through the cracks. If a single person slipping through the cracks spells the death of a sizeable fraction of a continent, then civilization is doomed unless it takes pretty aggressive measures like 24/7 surveillance of everyone.
Fortunately, so far, we’ve dodged a bullet. There is no technology that allows private actors to unleash this level of destruction at low cost. Private individuals can kill a few hundred people, but given how rare individuals primarily motivated by causing destruction are, that is not enough to threaten civilization. It’s looking like we don’t live in a vulnerable world. Unleashing civilization-wide destruction is pretty hard.
It might not be in space.
If this is so, then absent a tightly controlled space regime, things go hideously wrong. Or so my colleague Jordan at Forethought argues in his piece Interstellar travel will probably doom the long-term future. His more precise take is that interstellar travel will probably doom the long-term future if it’s possible to produce the sort of technology that could threaten an intergalactic civilization. Which it very well might be.
Suppose that, using less than the resources of a planet, any actor can bring about a chain reaction that ends an intergalactic civilization. Maybe, for instance, they can trigger an unstoppable reaction that destroys everything in its path. If so, then as you settle the galaxy, you have to make sure that no one ever does anything like that on any planet. And there are trillions of planets in the Milky Way alone. It’s hard to build a regulatory regime whose failure rate is below one in a trillion.
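To put rough numbers on this (a toy model of my own, not anything from Jordan’s piece): suppose each settled planet independently has some small probability p of producing a civilization-ending event. Then the chance the whole civilization survives is roughly (1 − p)^N, which for N in the trillions is brutal unless p is far smaller than one in a trillion.

```python
import math

# Toy model (my assumption, not from Jordan's piece): each of N settled
# planets independently has probability p of triggering the catastrophe.
# For small p, the survival probability (1 - p)^N ~= exp(-N * p).

def survival_probability(n_planets: float, p_failure: float) -> float:
    """Chance that no planet ever triggers the catastrophe."""
    return math.exp(-n_planets * p_failure)

N = 1e12  # assume a trillion settled planets
for p in (1e-9, 1e-12, 1e-15):
    print(f"p = {p:.0e}: survival chance ~= {survival_probability(N, p):.4f}")

# p = 1e-09: survival chance ~= 0.0000
# p = 1e-12: survival chance ~= 0.3679
# p = 1e-15: survival chance ~= 0.9990
```

Note that even hitting the one-in-a-trillion mark exactly leaves you with barely better than a one-in-three chance of making it.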
This is somewhat analogous to the vulnerable world hypothesis. Rather than a single-planet civilization being vulnerable to the actions of any individual terrorist, a multi-planetary civilization is vulnerable to the actions of any individual rogue planet. You need tight control over everything that happens in the galaxy.
Now, I don’t know if we live in such a world. Maybe technology that can destroy a civilization is impossible. But it’s not obvious that it is. Jordan lists, in his piece, a number of technologies that could do it. These include:
Von Neumann probes: these are self-replicating probes. One can be sent to a distant planet, where it makes a copy of itself; the copy then makes a copy, and pretty quickly we’re off to the races. Given exponential growth in Von Neumann probe swarms, it doesn’t take very long to gobble up a big chunk of the galaxy (see the sketch after this list). Then, once you’ve done that, you can go back and fight wars with the other planets and win. Now, maybe they have their own Von Neumann probes, but a world of endlessly expanding probe swarms gobbling up resources so that civilizations don’t get killed by their neighbors doesn’t sound great! Von Neumann probes are definitely possible in principle.
Strange matter: a hypothesized, more stable form of matter that converts normal matter into itself on contact. A seed of it would become an expanding sphere, transforming whatever it hits and eliminating all surrounding value.
Vacuum decay: a possible scenario in which there is a more stable vacuum state than our current one. If vacuum decay were triggered, it would kick off a bubble of unstoppable destruction expanding at nearly the speed of light.
Other: there’s probably a lot about physics we don’t understand. It wouldn’t be shocking if one of the things we don’t understand turned out to be a way to destroy galaxies.
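On the Von Neumann point, here is the promised back-of-the-envelope sketch. The star count is the standard rough ~100 billion figure for the Milky Way; the century-long replication cycle is my own loose assumption, chosen to be conservative:

```python
import math

# Back-of-the-envelope sketch: how many replication cycles does an
# exponentially growing probe swarm need to put one probe in every
# star system in the Milky Way?

STARS_IN_MILKY_WAY = 1e11  # standard rough estimate

doublings = math.ceil(math.log2(STARS_IN_MILKY_WAY))  # ~37

# Loose assumption: one full replication cycle per century.
YEARS_PER_CYCLE = 100
print(f"doublings needed: {doublings}")
print(f"replication time: ~{doublings * YEARS_PER_CYCLE:,} years")

# doublings needed: 37
# replication time: ~3,700 years
```

Thirty-seven doublings is nothing. The binding constraint is travel time (crossing a ~100,000 light-year galaxy at a tenth of light speed takes about a million years), and even that is an eyeblink on cosmic timescales.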
Jordan also lists a bunch of other possible scenarios. I don’t think these are anything like guaranteed. I’d probably put them—with the exception of Von Neumann probes—well below 50%. But probably above 10%. And that is a terrifying possibility. It means that an intergalactic civilization would, to survive, need an unprecedented level of top-down coordination. It would need some mechanism that prevents any rogue actors on any planets from ending the galaxy.
This is a level of coordination that could maybe be pulled off by superintelligent AI. An ASI could make sure the sub-AIs managing individual planets prevent anyone from going rogue and blowing up the universe. But absent something like that, it’s not obvious how the problem could be solved.
Settling on other planets has traditionally been thought of (e.g. by Elon) as a general solution to existential risks. Yet these dynamics illustrate that it’s not. Space warfare poses a range of serious existential challenges. If space is defense dominant, so that whoever grabs a resource can hold it, there are incentives to consume resources before anyone else can. If it is offense dominant, there are incentives to carry out devastating first strikes. Either way, coordination is difficult. Once we’re multiplanetary, we are not out of the woods. Our challenges are just beginning.
For this reason, I am now pretty pessimistic about the prospects of near-term space development. Even aside from existential risks, it is hard to see by what dynamics goodness could win out: evolutionary pressures favor amoral, expansionist actors. And if development comes soon, space institutions would have to be settled quickly, which would make it hard to get maximum value from space resources. My guess is that near-term colonization of space and grabbing of space resources would lose a big slice of expected future value, and we should not do it. That it might quickly destroy all life is one major downside, but far from the only one.

