Eneasz Brodski Makes Silly Mistakes In 'Bentham's Bulldog Makes Silly Mistakes in His "Best Argument For God."'
My article is not, in fact, riddled with errors
1 Introduction
I recently appeared on the Bayesian Conspiracy podcast—great name!—to discuss my article The Best Argument For God. It was a fun conversation—Eneasz and Steven, the hosts, were friendly and amicable. The conversation was sparked by an episode they’d made calling the anthropic argument for God terrible, and displaying bafflement that a guy (me) who makes these sorts of arguments is taken seriously by smart people like Scott Alexander. Hopefully I cleared up the bafflement!
I’m not going to go into too much detail about the conversation—if you want to hear about it, I recommend you watch it. While Eneasz and Steven called the anthropic argument bad, it seems their actual problem was with the prior probability of theism. They said very little about why the argument doesn’t raise the probability of theism, arguing instead that theism is so improbable that we should look for alternative explanations of the phenomenon. But this is a weird bar for an argument to have to clear—if an argument for evolution makes evolution way more probable, but wouldn’t convince a creationist who starts out nearly certain it’s false, it may still be a successful argument. As I said in the article, my aim was to argue that if a person is agnostic or even pretty convinced of atheism, this argument should convince them. If your credence in theism is 1 in 100 trillion, then no single argument should convince you. But this is true of almost all arguments for all things.
Eneasz Brodski, one of the cohosts, has now written an article titled Bentham's Bulldog Makes Silly Mistakes in His "Best Argument For God," thus leading to this article’s witty and succinct title “Eneasz Brodski Makes Silly Mistakes In 'Bentham's Bulldog Makes Silly Mistakes in His "Best Argument For God."'”
I’m going to give a very simple explanation of the anthropic argument so that our dispute makes sense, but probably you’ll be confused unless you read the argument in more detail. In short, it begins with the idea that you exist! Next, it claims that your existence is likelier if there are more people—if there are 1000 people, all else equal, it’s 100 times likelier that you’d exist than if there were only 10 people. From this, we learn that the number of people that exist is the most there could be—some extremely large infinity—because if only finitely many people, or a smaller infinity’s worth of people, existed, it would be infinitely less likely that you’d exist. Finally, because God is good, and would want to make either all possible people or some huge portion of them, God’s existence better explains why so many people would exist than naturalism does. The probability of that many people existing is higher conditional on theism than on naturalism.
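If it helps to see the update mechanically, here’s a minimal Python sketch of the kind of Bayesian update the argument relies on, using the toy numbers above and assuming equal priors (the function name and numbers are purely illustrative):

```python
def sia_posterior(prior_big, n_big, prior_small, n_small):
    """Posterior probabilities of two theories after conditioning on your
    own existence, treating the likelihood of your existing as proportional
    to how many people each theory says exist (the SIA-style update)."""
    unnorm_big = prior_big * n_big
    unnorm_small = prior_small * n_small
    total = unnorm_big + unnorm_small
    return unnorm_big / total, unnorm_small / total

# 1000-person theory vs 10-person theory, equal priors:
print(sia_posterior(0.5, 1000, 0.5, 10))  # (~0.990, ~0.010), i.e. 100:1 odds
```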
2 Bogusly assuming infinite people exist
Okay, preliminary confusion out of the way, Eneasz’s first supposed error is that my argument bogusly assumes that infinite people exist. He says:
I agree this follows from the assumption that the universe is infinite, and I’ve heard that many cosmologists say the universe is infinite. Matthew erred in never saying this part explicitly in this original essay. Even a link back to a previous post where he had said this would do.
This demonstrates basic confusion about the argument. The claim is that if there are two theories, and one of them says that N times more people exist than another, your existence confirms that theory by a factor of N. This is the self-indication assumption—it’s overwhelmingly plausible—and Eneasz accepts it! But if this is true, your existence gives you infinitely strong evidence that infinite people exist. No physics is needed!
The argument wouldn’t be hurt by the universe being finite. The claim is that at least Beth 2 people—that’s a super huge infinity—exist somewhere in the multiverse. Beth 2 people couldn’t all fit in our universe (don’t worry if you don’t follow this sort of technical explanation, but I’ll explain it if people want: Beth 2 people can’t fit in one universe because for every two people bound in spacetime who are more than a point in size, there’s a sentence describing their spatial relation to each other. But there are only aleph null possible sentences, which is far fewer than Beth 2).
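For anyone who wants the notation pinned down, the beth numbers are defined by iterating the power set starting from the countable infinity—this is just the standard textbook definition, not anything special to the argument:

```latex
\beth_0 = \aleph_0, \qquad \beth_{n+1} = 2^{\beth_n},
\qquad\text{so}\qquad
\aleph_0 = \beth_0 \;<\; \beth_1 = 2^{\aleph_0} \;<\; \beth_2 = 2^{2^{\aleph_0}}.
```

In particular, the aleph null possible sentences mentioned above fall short of Beth 2 by two applications of the power set.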
So the size of our universe is flatly irrelevant to the argument. For one thing, the argument gives you a reason to think there are way more people than could fit in our universe. If the argument just established that there are infinite people and that they’re all in our universe, it wouldn’t be effective, because atheism doesn’t need weird contortions and epicycles to predict aleph null people—it’s the higher infinities that are a problem for atheism.
While finding out that the universe is infinite might bolster small parts of the argument in minor ways—it would, for example, rule out finitism as a philosophical view, and finitism obviously requires that one reject the argument—it’s not at all essential. The argument contends, on anthropic grounds, that there are at least a higher cardinality of infinity’s worth of people. The size of our universe doesn’t matter—the argument points to a mind-blowingly large, infinite multiverse.
The next bit of the article is odd. Eneasz weirdly postures as if my failure to claim the universe is infinite makes me a charlatan who misleadingly hides the weirdness in the argument (???!!!—oh, and follow-up, ?????!!!):
If you don’t defend it you won’t be taken seriously. If you don’t even say it explicitly you start to look charlatan-adjacent. Because if you were to say it explicitly you’d give someone the opportunity to say “Hold on, I’m not sure that’s valid” and a charlatan doesn’t want to ever let an argument be held up to scrutiny, and thus will avoid granting any such opportunities.
Oh yes, I’m really running away from the weirdness by *checks notes* claiming that there are Beth 2 people—more people than there are numbers! Eneasz claims that the idea that there are infinite copies of you, and related ideas “have literally no evidence to make them worth considering as anything other than fun hypotheticals.” But he accepts SIA! If you accept SIA, you think your existence gives you evidence there are more people—infinite evidence that there are infinite people! So of course there’s evidence—of an infinitely strong variety, on our best theory, in fact.
I conclude that Eneasz’s criticism in this section is badly confused.
3 Coinflips galore
In explaining the self-indication assumption, I gave the example of a coinflip. If a coin comes up tails, 1000 people get created, while if it comes up heads only 10 people get created. SIA proponents say that if you get created from this, you should think 100:1 odds the coin came up tails. Eneasz says:
Matthew states that the fact(??) that there are infinite people provides “infinite evidence” that his god exists. In the hypothetical the fact that is given near-infinite likelihood is “the coin came up tails”, and we already posited that there was a coin that was flipped. In Matthew’s extension of the analogy, this means “my god exists” is the equivalent of “the coin came up tails.” Which smuggles in the idea that a coin exists and was flipped IRL.
But I didn’t say this! At least, I don’t think I said it, because I don’t believe it and can’t find anywhere I did say it. My claim is rather that on the self-indication assumption, you get evidence that some huge infinity’s worth of people exist. Next, I claim that some huge infinity’s worth of people existing is super strong—not infinitely strong, but very strong—evidence for theism. The coinflip was used to illustrate the self-indication assumption—how you update based on there being more people. It was, however, just an illustration—I wasn’t arguing that God is like a coinflip. Eneasz is, therefore, badly confused when he asks “What exactly is the coin that is being flipped IRL?” No coin is being flipped—it was simply used for illustrative purposes (given that the only difference between heads and tails is that tails has more people, unless your existence gives you an update in favor of theories on which there are more people, you won’t think tails is likelier than heads).
Note, also, that the case for SIA doesn’t hinge on anything about coinflips—that was just for illustrative purposes. Lots of other similar cases can be constructed.
Eneasz next says:
…infinite people existing is equally likely in a natural universe. (In case this needs defending — Since our universe appears to be a natural universe, if we accept that it could be infinite in a way that allows infinite people to exist right now, then a natural universe obviously allows for infinite people to exist.
This is wrong in two ways. First, the argument doesn’t merely establish that there are infinite people, but that there’s a big infinite number of people—at least Beth 2, a modest lower bound on the number of possible people. So it’s not enough for a naturalist universe to be infinitely big and have aleph null people—it needs MOAR!
Second, this is probabilistically confused. The fact that A and B can be true together doesn’t mean that B isn’t evidence against A. I, of course, deny that our universe appears to be natural, if by natural one means lacking in God. If one simply means that it obeys natural laws, well, I claim that certain features of those natural laws—including, potentially, having infinite people—are more strongly predicted on theism.
One could similarly argue, after getting 10 royal flushes in poker: “since I appear to be playing fair poker, if we accept that I could get 10 royal flushes by playing fair poker, then fair poker obviously allows for 10 royal flushes.” This wouldn’t negate the possibility that 10 royal flushes is evidence of cheating.
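To put rough numbers on the poker analogy, here’s a quick sketch—with the caveat that the assumption that a cheater deals royal flushes every time is purely illustrative:

```python
from math import comb

# Probability of a royal flush on a fair 5-card deal: 4 royal flushes
# out of C(52, 5) possible hands.
p_royal = 4 / comb(52, 5)         # ~1.54e-6

p_ten_given_fair = p_royal ** 10   # possible, but astronomically unlikely
p_ten_given_cheat = 1.0            # illustrative assumption: the cheat always deals them

bayes_factor = p_ten_given_cheat / p_ten_given_fair
print(f"{bayes_factor:.2e}")       # ~1e58 in favor of cheating
```

The point is just that “fair play allows it” and “it’s overwhelming evidence of cheating” are perfectly compatible.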
In an even more confused and rather hilarious example, Eneasz later says:
A key part of the Self-Indication Assumption is the fact that one knows that previous to their creation, God flipped a coin!
No, no it’s not. The self-indication assumption says that the more people there are, the likelier it is that you’d exist; it says nothing about God having flipped a coin. This demonstrates quite profound confusion.
This section, claiming I’m falsely arguing by analogy, is therefore confused. I’m not arguing by analogy—I’m illustrating the self-indication assumption. Just as Bayesianism is often introduced by examples without anyone claiming those examples are analogous to all cases where one applies Bayes’ theorem, the same is true of the self-indication assumption. Thus, contra Eneasz, it’s not the case that I got so “excited by the SIA argument that [I] forgot that the hypothetical is giving likelihoods to a god-flipped-coin that has no analog, and thus rests on nothing at all.”
4 Was I mugged?
Next, Eneasz claims I got mugged! By Pascal! Eneasz writes:
Bentham’s Mugging
Mostly as an aside - in his rejection of Eliezer’s writings it seems Matthew has even evicted the now-widely-popular Pascal’s Mugging. The argument “the fact(???) that infinite people exist provides infinite evidence for my god” is just a restatement of that mugging. Whether you say “3^^^3” or “Beth 2” doesn’t much matter. I’m sorry Matthew got mugged. This is what happens when one has bad epistemics. :( It’s hard to call having bad epistemics a “silly mistake,” it’s a personal tragedy with far-ranging consequences on one’s entire life. Let’s just call it unfortunate.
Eneasz puts quotation marks around the phrase “the fact(???) that infinite people exist provides infinite evidence for my god.” This is confusing because, to the best of my knowledge, I never said this! For one, I don’t think the anthropic argument is infinite evidence for theism—just a lot of evidence. For another, I capitalize God (to distinguish God from, say, Zeus). For another, I don’t use the phrase “my god,” as I don’t claim to have any special ownership of God. I assume, therefore, that Eneasz was just putting quotation marks around the summary of the argument. But this isn’t a faithful summary.
On the self-indication assumption, you do get infinitely strong evidence, from the fact that you exist, that there are infinite people. Of course, this is the inside view—in reality, you shouldn’t be infinitely confident that there are infinite people because you should think there’s some chance that the self-indication assumption is wrong.
Is the self-indication assumption an instance of getting mugged, akin to Pascal’s mugging? No! Pascal’s mugging is about decision theory, not probability. In probabilistic reasoning, you should obviously update infinitely strongly on evidence that’s infinitely more strongly predicted on one hypothesis than another. If a random number generator is supposed to produce a random number between 1 and infinity, and there’s a theory that it’s rigged and will always output 1, it’s not a case of Pascalian mugging to think that getting 1 gives you infinite evidence for the rigging hypothesis over the fair lottery hypothesis.
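Here’s the same point in miniature, with a finite stand-in for the 1-to-infinity generator (the particular values of N are arbitrary):

```python
# For a fair draw from 1..N, observing a 1 favors "rigged to always output 1"
# over "fair" by a Bayes factor of N. Let N grow and the update grows without
# bound—this is ordinary probabilistic updating, not a mugging.
for n in (10, 10**6, 10**12):
    p_one_given_fair = 1 / n
    p_one_given_rigged = 1.0
    print(n, p_one_given_rigged / p_one_given_fair)
```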
Now, maybe Eneasz’s proposal is that we should do anthropics the way some people try to avoid Pascal’s mugging. People avoid Pascal’s mugging by either discounting low probabilities or having a bounded utility function, where utility above a certain threshold doesn’t matter very much. The problem is that the analogue of these views in the anthropic context is crazy!
Let’s first consider the analogue of the bounded utility views. On this picture, once there are a lot of people, doubling the population doesn’t double your odds of existing. Going from, say, 1 quadrillion to 2 quadrillion people doesn’t actually double your odds of existing.
First of all, this implies that if a coin will create 1 quadrillion people if it comes up heads and 10 quadrillion if it comes up tails, then upon being created you shouldn’t think tails is 10x likelier than heads; but if a coin creates 10 people if tails and only 1 if heads, then upon being created you should think tails is 10x likelier than heads. This is a weird kind of asymmetry.
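To see the asymmetry concretely, here’s a toy version of the bounded view—the cap value is something I’m making up purely for illustration, not anything Eneasz proposes:

```python
CAP = 10**12  # made-up saturation point: beyond this, extra people stop counting

def capped_likelihood(n_people):
    """Toy bounded-SIA likelihood: proportional to population, but saturating
    at CAP, so doubling a huge population no longer doubles the likelihood."""
    return min(n_people, CAP)

def odds_tails_over_heads(n_heads, n_tails):
    # Equal priors on heads and tails, so the posterior odds equal the
    # likelihood ratio.
    return capped_likelihood(n_tails) / capped_likelihood(n_heads)

print(odds_tails_over_heads(1, 10))           # 10.0 — small coin: normal update
print(odds_tails_over_heads(10**15, 10**16))  # 1.0 — quadrillion-scale coin: no update
```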
Second, these views are unmotivated. Where does the threshold come from? It seems incredibly bizarre and arbitrary.
Third, every argument that I give in the article for SIA is an argument against this view. I’ll just describe one of these, but the others carry over straightforwardly. On this picture, to avoid the muggings that Eneasz worries about, one must think that as the number of people approaches infinity, the increase in the probability of your existence from the population doubling approaches zero. If we go from a googol people to 2 googol people, your existence doesn’t become much more likely—let’s just say it becomes no more likely, to make the math easier, but the same point will apply, mutatis mutandis (look at me, using that phrase), if it has a small effect.
Suppose a googol people will get created. Then, after they’re created, a coin is flipped, and if it comes up tails, a googol more people get created. After being created, no one knows their birth rank. If I get created, I should, by this logic, conclude that heads and tails are equally likely. Now suppose I learn I’m one of the first googol people—something I can learn before the coin is even flipped. Because the odds of that are 1/2 if the coin comes up tails but 1 if it comes up heads, I should now think it’s twice as likely that a fair coin that hasn’t been flipped yet will come up heads. This seems wrong (and has even weirder implications).
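Here’s the arithmetic for that case, under the “no update from population size” rule being criticized (a googol is just 10^100; the work is all done by the 1 vs. 1/2 likelihoods):

```python
GOOGOL = 10**100

# Under the view being criticized, merely existing leaves the coin at 50/50.
p_heads, p_tails = 0.5, 0.5

# Likelihood of "I'm among the first googol people" on each outcome:
p_first_given_heads = 1.0                    # heads: only the first batch ever exists
p_first_given_tails = GOOGOL / (2 * GOOGOL)  # tails: half of all people are in the first batch

posterior_heads = (p_heads * p_first_given_heads) / (
    p_heads * p_first_given_heads + p_tails * p_first_given_tails
)
print(posterior_heads)  # 0.666... — 2/3 confident a fair, as-yet-unflipped coin lands heads
```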
Fourth, this view has weird implications regarding the relevance of far-away people. Ordinarily, if a coin is flipped that creates one person if heads and two if tails, then upon being created I should think tails is twice as likely as heads. But if there are already a bunch of people, then on this view that stops being the case. This is super weird. Tim Walz would hate it.
The second view, according to which you should discount probabilities that would be super low absent considering the anthropic evidence, is a little better, but still crazy.
First, the third point I raised against the last proposal applies here too—all of the arguments for SIA are also arguments against this view.
Second, the fourth objection also applies—whether you should think a coin doubled the population depends on whether you’re already in a low-probability segment of probability space.
Third, this is probabilistically aberrant—in other cases where something is infinitely more strongly predicted on one hypothesis than another, you infinitely update. Why should this case be any different?
Fourth, if we’re almost at the threshold below which you should discount probabilities, this view implies that even if a theory predicts googolplex times more people than the alternative theory, and is 99.99999% as intrinsically probable, you should discount it (this is for the same sorts of reasons, in a different context, that I described here: if you accept that, for any number of people, a theory saying there are 100,000,000,000 times more people with 99.9999999999999999% of the intrinsic probability ends up more probable, you’ll necessarily think you have infinitely strong evidence that there are infinite people). This implies, in line with the first objection, that you can get arbitrarily certain that you’ll get 100 consecutive royal flushes in a poker game you haven’t played yet, simply because if you don’t you’ll create a bunch of people.
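To see why the principle in that parenthetical snowballs, here’s a toy version of the chained comparison—the specific factors are just the ones quoted above, and the setup is my own illustration:

```python
POP_FACTOR = 10**11                  # each theory posits 100,000,000,000x more people
PRIOR_FACTOR = 0.999999999999999999  # ...at a barely lower intrinsic probability

# Compare theory k+1 against theory k: posterior ratio = (prior ratio) x (SIA
# likelihood ratio). It comes out around 1e11 at every step, so no finite
# population level is ever where the updating stops—the chain runs off to
# infinitely many people.
posterior_ratio = PRIOR_FACTOR * POP_FACTOR
print(posterior_ratio)  # ~1e11, comfortably greater than 1
```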
Fifth, even if this is right, it probably doesn’t hurt the argument. Perhaps you should discount theism if you think its probability is super low independent of anthropic considerations, but you simply should not think that. Remember, the argument is intended to convince people who are close to uncertain, not people who are extremely confident atheists. This is a general feature of arguments—they tend not to convince people who are almost certain that the view the arguments support is false.
I conclude, therefore, that the Pascal’s mugging charge is flatly unfounded.
5 Are we fundamental? Is goodness?
The final claim of error is that I assume that “Humans Are Fundamental Aspects of Reality.” Now, I don’t think this—humans aren’t fundamental but emerge from facts about consciousness combined with the physical laws. Eneasz says that the argument relies on thinking that goodness is fundamental, because if it’s not, then a being of unlimited goodness is complicated. If goodness is just a complicated human construction, it’s not the sort of thing that can be simply maximally embodied.
First of all, I do, in fact, think that moral realism is true and that goodness is fundamental. I’ve argued the point in various places like here, here, here, and here, and am convinced by the arguments given by many other people, like Parfit in On What Matters, and the Yetter Chappells in this paper. If morality is a human construct, then there aren’t really things that are worth caring about—things that matter even if we don’t think they do. I find that view nuts! Even if no one had ever cared about wild animal suffering, even if our moral language didn’t include reference to wild animals, wild animal suffering would still be bad. If a race of aliens all thought rape and torture were fine, and humans had never been around, rape and torture would still be wrong.
Second, even if you think moral realism is probably false, you shouldn’t be super confident in its falsity. A ton of incredibly smart people are moral realists, including most philosophers, and about a third of philosophers take goodness to be fundamental. You should, therefore, be no more than 90% confident that they’re all wrong. But then theism will have a sizeable probability of being simple! This means your overall prior in it should be low but non-trivial—perhaps 1%—which is much higher than the prior of alternatives that predict Beth 2 people (and those views also undermine induction).
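To make the arithmetic behind that “perhaps 1%” explicit—with the caveat that the 10% conditional prior for theism given fundamental goodness is an illustrative number I’m supplying here, not something argued for above:

```latex
P(\text{theism}) \;\ge\; P(\text{goodness is fundamental}) \cdot P(\text{theism} \mid \text{goodness is fundamental})
\;\ge\; 0.1 \times 0.1 \;=\; 0.01.
```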
Third, this is a side issue from the argument. The argument is intended to show that anthropic considerations drastically raise the probability of theism. You could still not be a theist if the prior is low enough, but that’s not a fault in the argument. It’s not a problem if an argument for theism doesn’t address independent reasons one might be doubtful of theism.
Fourth, even if one isn’t initially inclined to accept non-natural moral realism, one should think it’s likely because of its wonderful explanatory power. Eneasz suggests that my view is ontologically bloated, saying:
Matthew seems to believe (from what I can gather) that “goodness” is a fundamental aspect of reality, like Time or Space. I don’t know how many fundamental aspects of reality there are in Matthew’s view, but it seems to be at least four - Time, Space, Consciousness, and Goodness.
But I don’t think space and time are fundamental. There’s a deeper explanation—God. By positing that goodness is fundamental, we can, from a simple starting point, explain many complicated features of the world. Therefore, even if one wasn’t previously sympathetic to taking goodness as fundamental, because theism becomes so simple and explanatory if goodness is fundamental, this should cause them to think goodness is fundamental. Because theism has the potential to have so much higher a prior than naturalism, you should think that it is probably right (I elaborate on this point here and here).
6 Even the critics—I have Democrats, and they come up to me and say, Sir, even though we don’t agree with the Anthropic, right, we love anthropic, in terms of the argument, we think you did a beautiful job with it, in terms of the arguments, and many other things. They say—and these are tough people—they’ve never seen anything like it
Eneasz closes his article with some kind words!
I want to thank Matthew for coming on the podcast to talk about this. It unearthed several things about the argument for me, so it was valuable to me in that regard, and the conversation was fun (which is always of high value to me personally). I think he’s more intelligent than I am in raw brain power. I hate to see it squandered (IMO) on such basic mysticism. The amount of intellect it takes to sufficiently obfuscate arguments to this level is formidable. It’s also indicative of how bad the basic idea is if that much work is needed to cloak it. The ontological argument was always drek, and adding an anthropic-principle pascal’s mugging doesn’t improve it.
It is, however, impressive to watch, and I suspect Matthew will continue to make waves in whatever field he pursues.
Thanks Eneasz! While I think everything he said in the article was false (the argument resembles the ontological argument about as much as a squirrel resembles…the ontological argument), the Bayesian Conspiracy podcast is good, and Eneasz seems like a nice and interesting guy. Worth checking out his blog.