Mathematics, rightly viewed, possesses not only truth, but supreme beauty—a beauty cold and austere, like that of sculpture, without appeal to any part of our weaker nature, without the gorgeous trappings of painting or music, yet sublimely pure, and capable of a stern perfection such as only the greatest art can show. The true spirit of delight, the exaltation, the sense of being more than Man, which is the touchstone of the highest excellence, is to be found in mathematics as surely as in poetry.

—Bertrand Russell, The Study of Mathematics

Some mathematical proofs are beautiful. The proof that there are infinitely many prime numbers is SO COOL. The diagonalization proof that there are more real numbers than natural numbers is similarly elegant. The truth about domains adjacent to math often admits beautiful, elegant solutions, free of weird asymmetries. String theory, I am told, is similarly beautiful. Because of this, we should expect the truth about anthropics to be decently likely to admit elegant solutions as well.

This is one reason to accept the self-indication assumption (SIA). The self-indication assumption says that, all else equal, a theory that predicts N times as many people exist is N times more likely. So, for instance, suppose there are two theories: one predicts that a single person will be created in a room, the other that ten people will be created, one per room. Upon waking up in a room, SIA instructs you to think the second theory is ten times likelier than the first, if they are otherwise equally probable.

Take SIA’s response to the doomsday argument, for instance. The doomsday argument says that you should think humanity will go extinct soon, because if humanity lasts a long time, you’re anomalously early. If humanity will contain a trillion people, the odds are only 11% that you’d be among the first 110 billion. If humanity will contain a quadrillion people, the odds are even worse! So you should think that humanity won’t last a very long time!
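The arithmetic behind that inference can be sketched directly; the population figures below are just the illustrative ones from above:

```python
# Doomsday-style arithmetic: you are among the first 110 billion
# humans ever born. Under SSA-style reasoning, treat yourself as a
# random draw from everyone who ever lives.
FIRST_COHORT = 110e9  # roughly how many humans have existed so far

for total_humans in (1e12, 1e15):  # a trillion vs. a quadrillion
    p_this_early = FIRST_COHORT / total_humans
    print(f"total {total_humans:.0e}: P(among first 110 billion) = {p_this_early:.2%}")
```

The longer humanity lasts, the more improbable your early birth rank looks under this style of reasoning, which is exactly the pull of the doomsday argument.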

This inference is pretty straightforward. If you do exist, then it’s less likely that you exist early if civilization lasts a long time. Theories according to which civilization only has 110 billion people predict with absolute certainty that you’d be one of the first 110 billion people. But nonetheless, it seems like you shouldn’t expect civilization to end soon just from this armchair speculation. To figure out if civilization will end soon, one needs to actually look at the things that might wipe out civilization, rather than merely philosophizing.

SIA’s solution is compelling and elegant. By holding that you’re more likely to exist if there are more people, it exactly cancels out the doomsday argument. If there are 11 trillion people, the total odds that you’d exist are 100 times the odds that you’d exist if there are only 110 billion. But given that you exist, the odds are only 1% as great that you’d be among the first 1% of people (the first 110 billion), so the probabilities exactly cancel out.
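Here is a minimal Bayes sketch of that cancellation, assuming equal priors and two illustrative worlds, one 100 times more populous than the other:

```python
# A toy Bayes calculation with made-up populations: a "short" world
# of 110 billion people and a "long" world 100 times as populous.
short_pop, long_pop = 110e9, 11e12
prior = {"short": 0.5, "long": 0.5}           # assumed equal priors
pop = {"short": short_pop, "long": long_pop}

# SIA: the likelihood that you exist scales with how many people exist.
# Given that you exist, the chance of being among the first 110 billion
# is 1 in the short world and short_pop/long_pop in the long world.
early = {"short": 1.0, "long": short_pop / long_pop}

unnorm = {w: prior[w] * pop[w] * early[w] for w in prior}
total = sum(unnorm.values())
posterior = {w: unnorm[w] / total for w in unnorm}
print(posterior)  # the two factors cancel, leaving 50/50
```

The existence boost and the earliness penalty are exact reciprocals, so the posterior equals the prior no matter how large the long world is.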

Not only is this a good way to avoid the doomsday argument, and other even more grotesquely counterintuitive results that afflict every view other than SIA, it’s an elegant way to do so. It exhibits the beauty and parsimony typical of truth. It doesn’t just undermine the inference; it precisely and exactly cancels it out. Beauty doesn’t always make something true, but true theories tend to produce elegant results.

There are two formulations of SIA. One says you should think that a theory that predicts that there are N times more people is N times more likely, all else equal. The other says that for beings who are epistemically indistinguishable from me (in other words, people whom I might, based on my current evidence, be), a theory that predicts there are N times more of them is N times better. And yet these end up being equivalent—they say the same things about the world!

To see this, imagine that a coin is flipped. If it comes up heads, a single person in a red shirt will be created. If it comes up tails, one person in a red shirt will be created and nine people in blue shirts will be created. I wake up wearing a red shirt. Both versions of SIA say the same thing about this case (and about all cases, though this one should help illustrate the principle).

The first formulation says that the coin coming up tails makes it ten times likelier that I’d exist. But given that I exist, the odds are only 1/10 that I’d have a red shirt. So this formulation says I should think tails and heads are equally likely.

The second formulation says the two theories are on a par: I’m a guy in a red shirt, and each predicts exactly one guy in a red shirt.
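A short sketch, assuming a fair coin, confirms that the two formulations give the same posterior in the shirt case:

```python
# The coin case: heads -> 1 red-shirted person; tails -> 1 red-shirted
# and 9 blue-shirted people. I observe that I wear a red shirt.
prior = {"heads": 0.5, "tails": 0.5}  # fair coin assumed

# Formulation 1: weight worlds by total population, then condition on
# the chance that a random person in that world wears red.
pop = {"heads": 1, "tails": 10}
p_red = {"heads": 1.0, "tails": 1 / 10}
w1 = {c: prior[c] * pop[c] * p_red[c] for c in prior}

# Formulation 2: weight worlds by the number of people I might, on my
# evidence, be -- that is, red-shirted people only.
reds = {"heads": 1, "tails": 1}
w2 = {c: prior[c] * reds[c] for c in prior}

def norm(d):
    s = sum(d.values())
    return {k: v / s for k, v in d.items()}

print(norm(w1), norm(w2))  # both come out 50/50
```

Weighting by everyone and then conditioning on the red shirt collapses into weighting by red-shirted people directly, which is why the two formulations never come apart.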

Again, this isn’t just plausible, it’s elegant. It’s elegant in the way that true theories tend to be. SIA invokes simple probabilistic reasoning about more people making your existence likelier. It has no need for grotesque theoretical fantasies like reference classes, nor any need to violate principles of probability like the law of conservation of expected evidence.

Or here’s another case: should you think that there are more people on far-away continents (prior to observing them)? If you reason the way SSA—the main rival to SIA—does, acting as if you’re randomly selected from all the people, then if there are more people on other continents, it’s less likely that you’d be on this one. SIA avoids this elegantly once again: the lower likelihood of your being on this continent is exactly canceled out by the greater likelihood of your existing!
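The same cancellation can be checked numerically; the population figures here are made up purely for illustration:

```python
# Two hypothetical theories about the far continent's population,
# with 1 million people (including me) on this continent.
here = 1_000_000
far_pop = {"sparse": 1_000_000, "crowded": 100_000_000}
prior = {"sparse": 0.5, "crowded": 0.5}  # assumed equal priors

unnorm = {}
for theory, far in far_pop.items():
    total = here + far
    exist_weight = total       # SIA: existence weight grows with total pop
    p_here = here / total      # chance a random person is on this continent
    unnorm[theory] = prior[theory] * exist_weight * p_here  # totals cancel

s = sum(unnorm.values())
print({t: v / s for t, v in unnorm.items()})  # 50/50 either way
```

Because the total population appears once in the numerator (existence) and once in the denominator (location), the far continent’s size drops out entirely.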

And this is just scratching the surface. SIA is elegant in another way: it corresponds with rational betting behavior. If you bet in accordance with SIA over and over again, you’ll maximize your long-run winnings. In the sleeping beauty problem, for instance, bets priced at the odds SIA recommends come out fair, while if you bet at the odds the halfer view recommends, you’ll lose money as the experiment is repeated.
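A quick simulation illustrates the betting claim, under one common way of setting up the bets: a ticket is offered at every awakening, priced at the halfer’s 1/2 or at SIA’s 1/3:

```python
import random

# Repeated sleeping beauty: heads -> woken once, tails -> woken twice.
random.seed(0)
heads_awakenings = tails_awakenings = 0
for _ in range(100_000):
    if random.random() < 0.5:
        heads_awakenings += 1   # heads: one awakening
    else:
        tails_awakenings += 2   # tails: two awakenings

n = heads_awakenings + tails_awakenings
frac = heads_awakenings / n
print(f"fraction of awakenings under heads: {frac:.3f}")  # ~1/3, as SIA says

# A ticket sold at each awakening that pays $1 if the coin was heads:
print("P&L at halfer price 1/2:", heads_awakenings - 0.5 * n)  # large loss
print("P&L at SIA price 1/3:  ", heads_awakenings - n / 3)     # ~0, noise aside
```

Tails worlds contain twice as many awakenings, so only a third of awakenings occur under heads; pricing per-awakening bets at 1/2 systematically overpays.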

None of this proves that SIA is right, but it’s at least some evidence. SIA produces the types of elegant solutions to long-standing problems that are typical of true theories yet atypical of false ones. SIA, unlike most of its competitors, is not some gerrymandered aberration, with numerous epicycles added in an attempt to rescue a few unreliable anthropic intuitions.

Doubt thou that the stars are fire.

Doubt thou that the sun doth move.

Doubt truth to be a liar.

But never doubt it to be beautiful.

Maybe I’m confused, but SIA doesn’t seem to solve for “we should expect civilization to collapse soon”.

Under SIA we apparently know that there is an unbounded, infinite number of people in existence, because that scenario is infinitely more plausible than any other. So the chance that we exist somehow, somewhere is 100%.

But we don’t just exist somehow, somewhere. We exist here and now as humans. And it still seems very odd that we are among the first 110 billion humans if humanity is destined to produce 10000000000000000000000000 humans eventually. Humanity’s producing more humans doesn’t make our existence “more likely” (in either scenario every possible being exists), but it does seem to make near-term civilizational collapse more likely.