Aron Wall is one of the most interesting people I know (his blog is here!) He’s a brilliant physicist, and also a devout Christian who seems to know an immense amount about philosophy and, frankly, everything under the sun—including, for example, other religions. He’s better at philosophy, by my estimation, than most professional philosophers, even though it’s not his area of specialty. He’s also an effective altruist and committed Bayesian whose recommended blogs include Just Thomism, mine, Astral Codex Ten, philosophyetc (update your recommendations, Aron—Richard’s new blog is Good Thoughts (I had to say that to make the title a successful pun)), and Ed Feser’s blog. While on the subject of flattering Aron, I also got the sense that he was an extremely good and decent human being, the sort of person who exemplifies the very best of Christian decency! I recently had a very interesting conversation with him about anthropics, interesting enough that I thought I’d write it up into a blog post. The objection that Aron raised has been raised in various forms by many other thoughtful people—including commenter Mark—so given the large number of brilliant people who seem to be worried about it, I thought I’d explain why I’m not convinced.

I adopt the self-indication assumption, as readers of the blog know. This view says that given that you exist, you should think there are more people—for example, if a theory says there are a billion people, and I come to exist, I should think, all else equal, that theory is a billion times likelier than a theory that says there’s only one person. Aron agrees that the self-indication assumption is very plausible and elegant when it comes to finite cases. The problem, in his estimation, is that SIA completely implodes when it comes to the infinite. (I will point out that it’s a bit amusing that I’ve recently had two conversations with two of the smartest theists—and people—that I’ve ever met, and they’ve both involved them trying to convince me that an argument I like for God’s existence doesn’t work).

For example, suppose that in some of the uncountably infinite worlds that God creates, induction doesn’t work. Now, this doesn’t require that God make completely non-inductive worlds with no predictable laws, but surely there are some worlds where various inductive predictions go wrong. But then how should I go about making my inductive predictions? Let’s say there are Beth 2 people—well, there are Beth 2 people for whom induction works and Beth 2 people for whom it doesn’t. So why should I trust induction? The probabilities seem undefined.

I agree that this is a real puzzle. SIA math gets weird when it comes to the infinite. This is, by far, the best objection to SIA—in fact, it’s one of only two objections that move me at all (the other is a clever money pump, though it’s probably counterbalanced by the many money pumps for halfers). But despite this, I’m ultimately unmoved by this objection, so I thought I’d explain why.

First of all, I’m optimistic that there will be some nice way to do the math surrounding probabilistic reasoning involving bigger infinities. It’s already possible with some small infinities—for example, there’s a coherent sense in which there are more odd numbers than primes, in the sense that they have greater density. Maybe something similar is possible with God’s worlds—if we imagine that God made the worlds one by one, and for each finite number of worlds he’d made, they’d mostly be inductive, then induction is reliable. Again, I don’t know if this is right, but I’m optimistic that when we get to heaven, God will have something convincing to say about how to do infinite anthropic reasoning (if we’re still concerned about that at that point). It might be that we’re just thinking about infinities *completely wrong* and there’s some elegant solution that will make the puzzles evaporate, a bit like Newton’s new paradigm eliminated the puzzles for the theories that came before him, according to which things only exerted force by pushing on each other.
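To make the density idea concrete, here’s a minimal sketch (the cutoffs and helper names are my own illustrative choices): the fraction of integers up to N that are odd hovers at 1/2, while the fraction that are prime shrinks toward 0, which is the sense in which odds have "greater density" than primes.

```python
# Natural density: the fraction of integers up to N that lie in a set,
# as N grows. Odds have density 1/2; primes have density ~1/ln(N),
# which tends to 0 -- so in this sense there are "more" odds than primes.

def sieve(n):
    """Boolean list where is_prime[i] says whether i is prime."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for q in range(p * p, n + 1, p):
                is_prime[q] = False
    return is_prime

def densities(n):
    """Return (fraction of 1..n that are odd, fraction that are prime)."""
    is_prime = sieve(n)
    odd = sum(1 for k in range(1, n + 1) if k % 2 == 1) / n
    prime = sum(is_prime[1:n + 1]) / n
    return odd, prime

for n in (10**3, 10**4, 10**5):
    odd, prime = densities(n)
    print(n, odd, round(prime, 4))
```

The odd fraction stays at 0.5 at every cutoff, while the prime fraction keeps falling (0.168 at a thousand, under 0.1 at a hundred thousand)—a well-defined comparison even though both sets are infinite.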

Second, infinity just seems to *break everything*! It breaks ethics, generating horrifying paradoxes; it creates a million paradoxes when applied to causality; it makes anthropic reasoning implode; and it spawns a bunch of other paradoxes with extremely non-obvious solutions. I thus agree with how Manley (unpublished) summarizes the situation: “as usual, infinities ruin everything.” I think this is a touch pessimistic overall (better solutions than I’ve discussed might well be available). Regardless, though, I actually think that the “as usual” here should come as some comfort to SIA-ers. That is: “uh oh: this potentially attractive view, developed in the context of finite cases, says weird/unclear/fanatical things in infinite cases” is a sufficiently common alarm bell (see e.g. expected utility theory, population ethics, and so on) that its ringing with respect to SIA is a weaker update (and it rings for SSA as well). Indeed, the fact that these problems for SIA—e.g., Pascal’s mugging-type cases, infinity issues—are so structurally similar to problems for expected utility theory and totalism seems, to me, some solace. They aren’t good problems. But in my opinion, it’s good (or at least respectable) company.

Given the weirdness surrounding infinity, I don’t think it’s a big problem when a very plausible view implies infinite weirdness. A lot of plausible theories imply weirdness when it comes to the infinite. Given this, I think the most likely possibility is either a) that infinity just sort of breaks the world and makes a bunch of paradoxes where there aren’t really precise facts, or b) that we’re thinking about the infinite wrong and there will be some elegant solution that makes the puzzles evaporate, or c) somewhere in between. If anthropics were the only thing that got wonky with infinities, that would be one thing, but when *everything does*, it’s time for a paradigm shift!

Third, most of these problems come from comparing the raw numbers of people with different properties. I agree that objective probability gets really weird when one is dealing with infinities. But objective probabilities are propaganda from big frequentism—what matters much more is subjective probabilities. And when it comes to those, I’m optimistic that there are some limited ways to reason.

For example, suppose we’re theists in an infinite world, trying to see if we should trust induction. Here’s an argument for why we should: we reason, “I’m loved by God, and God wants those he loves to generally not have false beliefs. Therefore, I should trust that I have mostly true beliefs, and that my mind is not totally faulty—that induction works.” Even if the number of people with reliable reasoning equals the number with unreliable reasoning…so what?

Fourth, it might be that every being in the universe has generally reliable reasoning. This is what we should expect if there is a God, for he has no reason to make anyone generally deceived. But if most of my reasoning is reliable, then it seems that I should trust my reasoning. (Note: this assumes that there’s an afterlife, but theism naturally predicts that).

Now, Aron had a response to this, which is roughly the following: it’s one thing to say that infinity is a problematic mess to be mostly ignored or figured out later, but it’s quite another to say that when your theory itself tells you that there are infinite people. SIA says that your existence gives you infinitely strong evidence for the existence of infinite people. As Aron put it, it’s one thing to say that your theory produces paradoxical results when you set the variable X=3, and quite another to say that and then *actually set X=3*.

But I think my arguments above show that, even if we’re very confused about it, there’s likely to be some acceptable way to reason about infinities. Views other than SIA don’t have an easy time doing that either—they generate similarly horrendous paradoxes. So while SIA poses some practical difficulties, given that it’s possible for infinite people to exist, the fact that SIA instructs you to think those infinite people are actual shouldn’t be an additional problem. If we know that there’s a possible world where X=3, then who cares whether the theory tells you that X actually equals 3 or only that it does in some possible world—either way, there’d have to be some way to reason about it.

And we’re not completely in the dark about SIA. If you notice, all of my arguments about why we can trust induction and stuff assumed theism. If one’s a modal realist, for instance, or adopts some other view on which there are infinite people other than theism, it’s unclear how they’ll be able to trust induction. Similarly, SIA tells us to think there are N people, where N is the most people there could be—whether that’s Beth 2, Beth 3, or unsetly many. This—surprise surprise—favors theism. There’s something a bit suspicious about this if SIA is false—it tells you to adopt a very specific picture of ultimate reality that happens to be independently supported.

The final point is that I think the problems for SIA are similarly problems for everyone! Suppose there are an infinite number of people—one in room 1, one in room 2, one in room 3, and so on forever. It seems I should have a uniform prior across being the people in those rooms—I should think it’s just as likely that I’m the one in the first room as the one in the 8456th room. Now, Aron actually rejects this, which I think is really crazy, but if you accept it then you get, in cases where there are infinite people, a credence of either zero or an infinitesimal in being any particular person. So I think the puzzles for SIA are just generic puzzles that arise if you have a uniform prior across the people you might be—which I think you should! But once you have infinite people and a uniform prior across them, all the paradoxes arise—should you give a higher credence to being in an odd room or a prime room? However you answer, the result is super weird. And it gets even weirder if there are Beth 1 rooms!

Now, Aron’s view is even weirder. In order to avoid these paradoxes, you have to think that your credence in being in various rooms rapidly drops off. So, for example, it’s very likely you’d be in one of the first billion rooms, but after that, it’s very unlikely that you’re in any later room (your credence has to get progressively smaller per room, because if it didn’t, your credences in the various possibilities would sum to infinity rather than 1).
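To see why the credences must drop off, here’s a minimal sketch—the geometric prior P(room n) = 2^(−n) is my own illustrative choice, not Aron’s actual proposal. Any constant positive credence would sum to infinity, but a rapidly decaying one sums to 1 and crams almost all the probability mass into the first few rooms.

```python
# Illustrative prior over countably many rooms: P(room n) = 2**-n.
# These credences sum to 1 (a uniform positive credence could not),
# but nearly all the mass lands on the first few rooms.

def prior(n):
    return 2.0 ** -n

# The partial sums approach 1: the tail past room 59 is negligible.
total = sum(prior(n) for n in range(1, 60))
print(total)

# Smallest N such that P(room <= N) exceeds 0.999999:
cumulative, N = 0.0, 0
while cumulative <= 0.999999:
    N += 1
    cumulative += prior(N)
print(N, cumulative)  # N = 20: you'd bet heavily on being in the first 20 rooms
```

Under this toy prior, you should be 99.9999% confident you’re in one of the first twenty rooms—which sets up the betting problem below: everyone past room 20 who bets this way loses.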

But this means that there’s some number N such that everyone should bet that they’re in the first N rooms. But—and here’s the kicker—if everyone bet this way, infinitely many people would lose their bets and only a finite number—N, to be specific—would win. But if your betting advice causes *100% of people to lose their bets* (in the limiting-frequency sense), then it’s bad advice! So for this reason, I think this is a puzzle for everyone. SIA at least has the dignity to *not give a clear answer*, while alternative views either give an unclear answer or a horrendously, grotesquely counterintuitive one—about as unintuitive as accepting that no one has arms!

I do not know what to say about anthropic reasoning with the infinite. I admit that it’s incredibly weird. But I don’t think it’s a special problem for SIA. By contrast, views other than SIA imply horrendous paradoxes even in cases without infinities, and thus are worth rejecting.

> First of all, I’m optimistic that there will be some nice way to do the math surrounding probabilistic reasoning involving bigger infinities.

I think this optimism is misguided. Mathematicians have devoted an unfathomable amount of effort over the past century to rigorously formalizing probability theory, and none of it really comes close to working at the level of generality you seem to want and/or need it to work at. In fact, there's a certain sense in which the entire story of the development of probability theory since Kolmogorov is one of mathematicians collectively realizing "hey, we can't naively apply these intuitive probabilistic concepts to situations willy-nilly, but instead need to very tightly delimit the places where they have some hope of working out, and figure out which other places exhibit behavior too pathological for us to say anything meaningful about." For example, all the early 20th century work that went into distinguishing between measurable and non-measurable sets, all the precise and non-trivial conditions sufficient for us to get "disintegration theorems" that let us conditionalize on events of probability zero, all the non-standard analysis research into Loeb measure-based probabilities on which we can incorporate infinitesimals but only in *very* specific and limited ways (see internal vs. external sequences and so forth), etc. There are many others.

None of this is a disproof, of course. Maybe some probabilist will come along tomorrow and present a revolutionary new theory that's vastly more applicable somehow, even to all the exotic, low-structure probability thought experiments that philosophers want to run—even though there's always been a ton of incentive to do this among thousands of the world's smartest researchers who would be motivated to do exactly this, none of whom would have needed CERN-level funding to carry it out if it were indeed possible. I just think the appropriate emotion here is pessimism rather than optimism.

> It’s already possible with some small infinities—for example, there’s a coherent sense in which there are more odd numbers than primes, in the sense that they have greater density.

There's a bunch of problems with this. First, density isn't really an intrinsic property of a set of objects by itself, even a merely countably infinite set. You also need to impose an ordering. In the case of natural numbers, there is a "natural" ordering by size. It wouldn't be difficult to rearrange these numbers so that the density of primes under the new order goes to 1, or to any other number between 0 and 1, or is undefined altogether. Now, you can (and many would) argue that the natural order on N should have special epistemic privilege or something over any more gerrymandered-seeming order, and that's fine, but the issue is that in thought experiments, there need not be any obvious/natural order at all. If God just tells you he's created a countably infinite number of epistemic duplicates of you such that X and a countably infinite number of other duplicates such that ~X, how are you going to come up with a density of X based on that? And, more importantly, *we are in fact always in this situation with respect to almost everything*, since there's always a small positive subjective probability that this situation I just named is actual!

The other issue is that densities arguably aren't really probabilities, even presupposing a privileged order. They violate countable additivity—which, OK, maybe you're willing to give up—but more importantly, they frequently fail to be defined for even basic questions. For example, on the natural order, what is the density of positive integers which (in base 10) begin with the digit "1"? The answer is that there isn't one: it keeps oscillating between two values (1/9 ≈ 11.1% and 5/9 ≈ 55.6%) as you increase the maximum cutoff of positive integers you're looking at.
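The oscillation is easy to check numerically—here's a quick sketch (the cutoffs and helper names are my own): just before each power of 10 the leading-1 fraction sits at 1/9, and just before each 2·10^k it peaks near 5/9, so the limit never exists.

```python
# Fraction of positive integers up to a cutoff whose base-10
# representation starts with the digit 1. The running "density"
# never settles: it swings between 1/9 just before each power
# of 10 and about 5/9 just before each 2 * 10**k.

def leading_one_fraction(cutoff):
    count = sum(1 for n in range(1, cutoff + 1) if str(n)[0] == "1")
    return count / cutoff

for k in range(2, 6):
    low = leading_one_fraction(10**k - 1)       # just before 10^k
    high = leading_one_fraction(2 * 10**k - 1)  # just before 2 * 10^k
    print(k, round(low, 4), round(high, 4))
```

At every cutoff of the form 10^k − 1, exactly 1/9 of the integers so far start with 1; at 2·10^k − 1, the whole block from 10^k to 2·10^k − 1 starts with 1, pushing the fraction up toward 5/9—so no single limiting value exists to call "the" density.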

Maybe your answer to this is to just say we simply need to hold our tongues in instances like this where the density approach doesn't turn up a unique, well-defined answer, but my point is that this is almost always going to be the case in real life as opposed to highly artificial toy scenarios in which we stipulate away every source of mathematical inconvenience!

There were more things I wanted to object to, but this comment is probably already too long.

> Am I crazy, or is this just a blunt appeal to faith? If you're wrong about this argument, then you're not going to get into Heaven. Obviously if you die and wake up in heaven, then it doesn't matter whether heaven is logically impossible, because you're already there. Yay!

This was just a poetic way of saying there’s some unknown solution. Don’t reject infinities, because a) math needs them, b) physics suggests an infinite world, c) space is likely infinitely divisible, and d) anthropics points to infinite people.