The Fine-Tuning Argument Simply Works
The laws, constants, and initial conditions are finely tuned to allow for life. Atheism has no good explanation of that. Exciting update at the end.
Fine-tuning explained
There probably is a God. Many things are easier to explain if there is than if there isn’t.
— John von Neumann (maybe the smartest guy ever).
My attitudes toward the fine-tuning argument follow the trajectory of the midwit meme. When I first heard about the fine-tuning argument, in middle school, I became a deist for about a day. Then, after giving it more thought, I concluded that there were good explanations of fine-tuning that were consistent with naturalism. Now that I’m older and wiser (currently very old and very wise), I have reverted to my original assessment: fine-tuning is ridiculously surprising and crazy strong evidence for God. There are lots of objections to it but none of them work.
The argument begins by noting that the parameters of the universe are finely tuned in the sense that they fall in a very narrow range needed for life. For instance, of the possible values of the cosmological constant, only about 1 in 10^120 is capable of sustaining anything valuable or complex—most values would result in everything simply flying apart. Fortunately for us, the cosmological constant happens to fall in that narrow range.
There’s broad agreement in physics that there is significant fine-tuning. It falls into three categories: fine-tuning of the laws, fine-tuning of the constants, and fine-tuning of the initial conditions. The laws are finely tuned in that if you tweaked or deleted any of them—with the possible exception of the weak nuclear force—no complex structures could arise. The Pauli exclusion principle is a further example indicative of fine-tuning.
The constants are the values that get plugged into the laws that determine their effects—for instance, the strength of gravity is determined, in part, by some constant G. These constants are the most widely cited example of fine-tuning: the cosmological constant, for instance, is finely-tuned to 1 part in 10^120—as improbable as throwing a dart randomly across the entire known universe and hitting just one atom. Lots of other constants are finely-tuned.
Finally, the initial conditions are what the universe looked like at the beginning—at the big bang. They’re finely tuned to an even greater degree—the low entropy state at the beginning of the universe represents only 1 part in 10^10^123 of the available values it could have taken on. That’s a truly staggering number—if you wrote the digit 9 on every quark in the universe, the number you’d write out would still be much smaller than 10^10^123.
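To make the comparison concrete, here is the arithmetic as a sketch, using the standard rough estimate of about 10^80 particles in the observable universe (that estimate is my addition, not a figure from the argument above). Writing a 9 on each particle yields a number with roughly 10^80 digits:

$$
\underbrace{99\ldots9}_{\sim 10^{80}\ \text{digits}} \;\approx\; 10^{10^{80}} \;\ll\; 10^{10^{123}},
$$

since the exponents themselves differ by a factor of about 10^43.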
If there is a God, then the universe being finely tuned makes sense. God would want to create a universe capable of sustaining life. He’d be decently likely to make a universe like this one, with predictable, stable physical laws that we can scientifically explore. If there is no God, then the constants, laws, and initial conditions could be anything, so it’s absurdly unlikely that they’d fall in the ridiculously narrow range needed to sustain life.
There are lots of objections to fine-tuning that people raise. Some are, on their face, more plausible than others, but I think that none of them succeed. I won’t address literally all the objections to it given that there are a ton—and some of them I’ve already addressed—but I’ll cover the main points. My conclusion: the argument is extremely successful and should dramatically raise the probability of theism. The objections range from completely terrible to interesting but extremely unpersuasive. The names of the objections will be in bold, so feel free to skip around.
There is no fine-tuning
He is before all things, and in him all things hold together.
—Colossians 1:17, predicting the fine-tuning of the cosmological constant.
People often claim that there is no fine-tuning. Often they justify this by appealing to some other point—I’ll address those points later—like that life could have arisen under different conditions, that there could be more fundamental laws, and that the constants could perhaps sustain life if they were drastically different. But sometimes people say that fine-tuning is just not well-evidenced in physics.
This is totally wrong. One can read the SEP page on fine-tuning which summarizes a large number of parameters in physics that are agreed to be finely-tuned. Three of the four fundamental laws, at least five constants, and three different initial conditions are listed as examples of fine-tuning, each of which is agreed to be finely tuned by a significant number of physicists. And this is just scratching the surface.
Therefore, in order for fine-tuning not to be real, a widespread consensus would have to be wrong—a consensus surrounding lots of different finely-tuned parameters, documented by lots of physicists: as the SEP page says, “Leslie 1989: ch. 2, Rees 2000, Davies 2006, and Lewis & Barnes 2016; for more technical ones see Hogan 2000, Uzan 2011, Barnes 2012, Adams 2019 and the contributions to Sloan et al. 2020.” Many of these people are atheists.
You don’t even have to know any physics to see that this can’t possibly be right. There’s a much more basic conceptual point at issue—it’s very hard to make a physical system produce anything interesting. Here’s a very simple law: all particles move in a circle at 1 mile per hour. Here’s another: they all move in a straight line until they hit each other and then they bounce off in a random direction. One can make an infinite number of laws like this by varying the speeds and making them follow other simple patterns.
Try writing a computer program that has stuff move around. You probably won’t get any interesting system: random disorder is much likelier than a valuable, well-ordered one.
Famously, John Conway made a game called the Game of Life (click the link if you want to explore the game). You can place down dots and then the dots move around according to the following rules:
Each cell with one or no neighbors dies, as if by solitude.
Each cell with four or more neighbors dies, as if by overpopulation.
Each cell with two or three neighbors survives.
For a space that is empty or unpopulated: Each cell with three neighbors becomes populated.
It’s a fun thing to play around with. It produces certain somewhat interesting interactions, though nothing that could produce life. But Conway had to work incredibly hard to design rules like the ones he ended up with, which are capable of producing anything interesting—most ways he tried to set up the game were incapable of producing any long-lasting system. And he needed several different rules: with just one, the game wouldn’t make anything interesting.
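For the curious, here’s a minimal sketch of the rules above in Python; the grid size, seed, and step count are my own illustrative choices. It also lets you run the experiment from a few paragraphs back: hand it randomly chosen birth and survival rules and watch almost every variant die out or dissolve into noise.

```python
import random

def step(grid, birth={3}, survive={2, 3}):
    """One generation of a Life-like cellular automaton on a wrapping grid.
    Conway's actual rules: a dead cell is born with 3 neighbors; a live
    cell survives with 2 or 3."""
    rows, cols = len(grid), len(grid[0])
    def live_neighbors(r, c):
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
    return [[1 if (grid[r][c] and live_neighbors(r, c) in survive) or
                  (not grid[r][c] and live_neighbors(r, c) in birth)
             else 0
             for c in range(cols)]
            for r in range(rows)]

# Start from a random "soup" and run 50 generations under Conway's rules.
random.seed(0)
grid = [[1 if random.random() < 0.3 else 0 for _ in range(30)] for _ in range(30)]
for _ in range(50):
    grid = step(grid)
print(sum(map(sum, grid)), "cells alive after 50 generations")
# Now try arbitrary rules, e.g. step(grid, birth={1, 2}, survive={0, 8}):
# almost every variant quickly explodes into noise or dies out entirely.
```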
Even a system like Conway’s isn’t enough. First of all, it requires certain special initial conditions. If you place dots down randomly, it won’t produce anything interesting. It also pretty quickly loops—leading to the same configurations repeating—or just destroys all the cells placed down. The range of physical systems that will continually evolve, for a long time, without looping is very small.
Thus, in order to have a life-permitting universe, the initial laws and conditions have to fall in a very narrow range. It’s very hard to get a physical system to produce anything interesting. So even if we knew no physics, we could see conceptually that producing an interesting physical system requires great fine-tuning and precision.
Let’s call this kind of fine-tuning nomological fine-tuning. The nomological fine-tuning argument, then, is the conceptual argument from the fact that it’s very difficult to get a physical system to produce anything interesting or valuable.
Remember this point. It will be important later. Arguably, it represents the main crux of the fine-tuning argument.
What if you change a bunch of constants at once?
One objection to fine-tuning that people are fond of making is that the share of possible constants that are life-permitting may not be small: though life couldn’t arise if you changed just one constant, perhaps it could arise under quite different conditions if you changed a bunch of constants at once. This has two problems: it is wrong about the physics, and even if it were right about the physics, it would be conceptually confused.
I won’t spend too much time on the mistaken physics, as Luke Barnes has already addressed it (see section 4.2.1, The Wedge is a Straw Man). Physicists have varied multiple constants at once and found that no life would arise.
The conceptual point is more interesting, however. Let’s start with an analogy: imagine a house with two ways in. The back door has a two-digit code—so if you try 100 combinations you can probably get in. The front door has a ten-digit code—so the odds of guessing it are 1 in 10^10.
Imagine that a burglar enters the front door’s code. There are two suspects: Jim, who set up the codes and knows both of them, and Todd, who knows neither. Imagine that Todd is, on independent grounds, 100 times more likely to break in, but would have no way of knowing what the front code is.
You confront them both and suspect that Jim did it. But Jim says: “hold on a minute, you suspect me because someone got into the house, but Todd had a decent chance of getting into the house anyway, so he’s just as likely to be guilty.” This wouldn’t be a good argument. Even though Todd might have had a decent chance of getting into the house somehow, the odds that he’d get in by entering the front door’s code are near zero.
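In odds form, Bayes’ theorem makes the point vivid. Here is a sketch using the story’s numbers, with my added assumption that Jim would be near certain to enter the front code correctly if he were the burglar:

$$
\frac{P(\text{Jim}\mid E)}{P(\text{Todd}\mid E)} \;=\; \frac{P(\text{Jim})}{P(\text{Todd})}\cdot\frac{P(E\mid\text{Jim})}{P(E\mid\text{Todd})} \;\approx\; \frac{1}{100}\cdot\frac{1}{10^{-10}} \;=\; 10^{8},
$$

where E is the front door’s code being correctly entered. The evidence swamps Todd’s 100-to-1 prior advantage.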
Similarly, no matter how many very different ways there are for life to arise, those don’t affect the probability that life like ours would arise.
Call sets of laws fragile if changing them slightly would make it impossible for life to arise, and nonfragile if they could be changed a lot. Proponents of this objection claim that our laws are fragile but that there are vast islands of nonfragile laws in possibility space, making the probability of life-permitting laws not terribly low. Note that for this objection to work, laws have to be mostly nonfragile—if they’re mostly fragile, then for every set of life-permitting laws, there are many sets that aren’t life-permitting.
But if most laws are nonfragile, then it’s inexplicable that our laws are fragile. As one ratchets up the share of life-permitting laws that are nonfragile, one raises the probability of life-permitting laws existing at all—but by exactly the same amount, one lowers the probability of having fragile laws like ours, conditional on having life-permitting laws.
Positing more life-permitting laws different from ours may raise the odds that life-permitting laws would exist in the first place, but it doesn’t affect the odds of our life-permitting laws arising one bit!
So this objection isn’t just wrong about the physics, it’s wrong probabilistically. It also fails to deal with the nomological fine-tuning point, that laws capable of sustaining life are inherently rare. It’s hard to make a system that produces anything interesting!
Couldn’t different kinds of life arise?
Here’s an objection to fine-tuning that arises constantly: okay, maybe if the laws and constants were different humans couldn’t arise, but other kinds of interesting life would arise. So no fine-tuning problem! If you ever make the fine-tuning argument, the ghost of Victor Stenger will appear and raise this objection, despite the fact that it is demonstrably mistaken about how the argument works.
First of all—and I know I’m saying this a lot, but it’s because this is a generic problem with every objection to fine-tuning—this doesn’t grapple with the nomological fine-tuning argument. Even if life could arise under dramatically different circumstances, the odds that we’d have a universe that makes anything interesting are super low. Even if different types of life could arise if you varied the constants, it would still be miraculous that we have laws that produce anything interesting.
Second of all, this just flatly misstates the physics (see this debate for an elaboration on the point). As the SEP notes:
A joint response to both worries is that, according to the fine-tuning considerations, universes with different laws, constants, and boundary conditions would typically give rise to much less structure and complexity, which would seem to make them life-hostile, irrespective of how exactly one defines “life” (Lewis & Barnes 2016: 255–274).
If the cosmological constant weren’t extremely finely-tuned, for instance, stuff just wouldn’t hang together. If the universe hadn’t been in a low entropy state, everything would be random disordered chaos. Going by the SEP’s claim, if the relative amplitude Q (I don’t quite know what that means but I trust the archons of this aeon who write SEP pages) had been smaller, the universe would have been structureless, while if it had been larger, everything would have quickly collapsed into black holes. The basic point is that if one just reads through the examples of fine-tuning, they’re mostly things that are needed for anything interesting to happen.
This argument also faces the same conceptual problem as the last one: if life could arise under tons of different conditions, it’s unlikely that it would arise under these conditions. As long as theism gives a decent probability of life like ours arising, because the odds on atheism are so low, we have good grounds for theism.
Can’t do probabilistic reasoning about the constants
Here’s a common class of objections: we can’t actually do probabilistic reasoning about something like the laws of physics. To do probabilistic reasoning, you have to have a range of possible values, but it doesn’t make sense to think of the laws of physics as having a range of possible values—there isn’t some machine that determines the laws of physics by rolling dice.
Here, we have to distinguish between objective chances and subjective chances. The objective chance of something is the share of outcomes that would turn out that way if you repeated the process a number of times approaching infinity. For example, the objective chance of a coin coming up heads is .5, because as the number of coins you flip approaches infinity, the share of them that come up heads approaches half. Subjective chances are the odds one would assign to something before seeing how it turns out. The subjective chance of Biden being elected next election might be 50%—that doesn’t mean that if you keep running elections he’ll get elected half the time; it means that given our uncertainty, we should think there’s a 50% chance he’ll get elected. For probabilistic reasoning of the kind invoked in the fine-tuning argument, subjective chances are what’s relevant.
To determine subjective chances, you should imagine a rational agent assigning probabilities to the possible outcomes before they know the actual outcomes. In the case of fine-tuning, for instance, a person who didn’t yet know what values the constants took would think there was a super small chance they’d fall in the finely-tuned range, because they could fall anywhere. Because there are at least 10^120 possible values of the cosmological constant alone, and, conditional on naturalism, nothing special about the life-permitting value it happens to have, the odds of the cosmological constant alone being finely tuned by chance are 1/10^120.
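Put schematically, with the indifference assumption that each possible value is equally likely a priori on naturalism:

$$
P(\text{life-permitting }\Lambda \mid \text{naturalism}) \;\approx\; \frac{\text{life-permitting values}}{\text{possible values}} \;\approx\; \frac{1}{10^{120}}.
$$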
Let me give an analogy: imagine that we discover that, several billion years ago, someone built a special machine. This machine will spit out a number, determined by some undiscovered laws of physics that can’t be different. The number could be any number from 1 to 100 billion.
Imagine that we know that the person who made the machine just loves the number 6,853. It’s his favorite number, he has shirts attesting to the greatness of the number, he was once caught, well, I won’t get into it, but it was gross and it involved that number, and he got a tattoo with the number. Now imagine that the machine spits out the number 6,853.
Clearly, this should give us some evidence that he rigged the machine. Even though the machine, on the hypothesis that it’s not rigged, is wholly determined by the laws of physics, it’s unlikely that the laws of physics would determine 6,853—there’s nothing special about it. In contrast, he’d be much likelier to pick that number. Therefore, if the number is picked, it favors the hypothesis that it’s rigged.
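Here’s the update as a toy calculation. The likelihoods come from the story; the even prior odds are my own illustrative assumption:

```python
def posterior_odds(prior_odds, p_evidence_if_rigged, p_evidence_if_fair):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * (p_evidence_if_rigged / p_evidence_if_fair)

p_if_rigged = 1.0      # assumption: if rigged, he picks his beloved 6,853
p_if_fair = 1 / 100e9  # unrigged physics: one value out of 100 billion
# Starting from even odds (my illustrative assumption):
print(posterior_odds(1.0, p_if_rigged, p_if_fair))  # 1e+11 to 1 that it's rigged
```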
I submit that this is relevantly like fine-tuning. In both cases, it happened in the past, so no future predictions are being made. In both cases, on the hypothesis that there isn’t design, the outcome is determined strictly by the laws of physics, and the laws of physics aren’t variable. If they’re analogous, then you should reason about them the same way, and conclude that fine-tuning gives evidence of design, just as the number gives evidence of rigging.
You might object that there’s something special about the anthropic situation. If there hadn’t been fine-tuning, we wouldn’t exist. But we can mirror that in this case too: imagine that this guy was one of the first humans, and would only have offspring if the number came up 6,853. If he hadn’t had offspring, the first humans would have died off. Nonetheless, this is still powerful evidence of rigging.
Perhaps the disanalogy is supposed to be that in this case, the law only applied once, but our laws apply generally throughout the universe. But we can modify the scenario to accommodate this again. Imagine that we build a bunch of other machines that all produce the same number—6,853—but we know the laws of physics are such that if he rigged the first one, they’d all be rigged in the same way. In such a case, there’s still strong evidence of rigging.
Here’s another analogy: imagine that, in science fiction fashion, we go outside the universe and see the universe from which ours was born. We come across aliens with extremely detailed plans for designing a universe—they describe wanting to make a cosmological constant like ours, and gravity, and so on. Every law and constant they describe is in accordance with ours.
However, those aliens are long dead. We don’t know if they succeeded. There are two possibilities: first that the aliens succeeded, second that they didn’t and the laws in our universe are just a copy of the laws in the progenitor universe (crucially, we have no idea what the laws in the progenitor universe were, and neither did the aliens). In such a case, the laws being exactly like those described in the blueprint surely provide very strong evidence that the aliens succeeded. But this is relevantly like the fine-tuning argument for theism: we have a hypothesis that some beings would design a universe in a certain specific way—upon finding out it is designed in that way, we get evidence that the beings succeeded in designing it that way.
The basic principle is simple: if there are a wide variety of different ways something can turn out, and none of them are special, if some theory naturally predicts it turning out the way it actually does, that theory is majorly supported.
Anthropic principle
Lots of people argue that the anthropic principle explains why we exist. If we didn’t exist, we wouldn’t have been here to wonder about it, so our existence can’t be evidence for anything. This has several problems:
I think it’s falsified by the examples I gave in the last section.
It’s just a non sequitur—the fact that we wouldn’t have been around to observe evidence if things had turned out differently doesn’t mean that things turning out one way can’t be evidence.
It’s obviously absurd in lots of cases. It implies that your existence isn’t evidence that your parents had sex or didn’t use effective contraception. It implies that if a man is fired at by 500 people and survives, that’s not evidence of a conspiracy—for if he hadn’t survived, he wouldn’t be around to wonder about it.
Deeper laws
A popular response to fine-tuning is to suggest that there might be deeper laws that predict fine-tuning. Perhaps there’s some more fundamental law that requires that all of the constants be in the range that they are. Now, there is no good solution in physics to how the heck this would work and plenty of big problems with it. But it faces a bigger conceptual problem.
Imagine that a person gets 10 royal flushes in a row in poker. You accuse them of cheating. “I’m not cheating,” they cry, “there is more fundamental physics that we don’t know about that explains why I got 10 royal flushes.” Clearly, they have gone wrong. While it’s possible that more fundamental physics would make them get a bunch of royal flushes, fundamental physics could result in them getting any sequence of cards, so it just pushes the improbability back a level—it becomes extremely improbable that the laws of physics would be such as to produce that particular sequence.
The same thing is true in fine-tuning. There could be more fundamental laws resulting in any specific arrangement. It’s thus monstrously improbable that they’d produce a finely-tuned arrangement rather than the overwhelmingly more populous non-finely-tuned arrangements.
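To put a number on the poker case, assuming ordinary five-card deals (the example doesn’t specify the variant):

```python
from math import comb

# Probability of a royal flush in one fair five-card deal: 4 suits out of
# C(52, 5) possible hands. Ten independent deals in a row multiplies through.
p_one = 4 / comb(52, 5)   # roughly 1 in 649,740
p_ten = p_one ** 10
print(f"{p_one:.2e} per hand, {p_ten:.2e} for ten in a row")  # ~1.54e-06, ~7.5e-59
```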
Stalking horse
A popular reply to fine-tuning—see here, for instance—is the stalking horse objection. It has more and less technical versions, but the basic idea is as follows: the mere hypothesis that there’s some agent or another doesn’t predict fine-tuning. Sure, the theist can add to their design hypothesis a more specific stipulation that makes fine-tuning likely, but then the naturalist can add to their non-design hypothesis a more specific stipulation to predict fine-tuning. Thus, if the theist gets to add to their hypothesis that there’s a mind disposed to finely-tune the universe, then the naturalist can add some auxiliary hypothesis to explain the data.
I have a few worries.
First, imagine that the initial conditions of the universe were arranged to spell out “made by God.” Surely that would be good evidence for the existence of God. But one could make the same stalking horse move there. One could say: “obviously if you build into your theistic hypothesis that the designer is motivated to say ‘made by God,’ then you predict the data, but if an atheist builds that into their hypothesis, they explain the data as well.” Something has gone wrong here. The same thing, I submit, has gone wrong with fine-tuning.
Second, for an argument to be successful, all it needs to establish is that the odds of the conclusion are higher than they’d otherwise be. And insofar as the fine-tuning argument supports the conclusion that God exists—capital G God refers to a perfect being—then so long as God wouldn’t be super unlikely to make life-permitting constants, the argument is successful. Sure, if you start out thinking that a capital G God is super unlikely—improbable on the order of a very tiny slice of naturalism’s probability space—the argument won’t move you, but if you were on the fence about God’s existence, it should move you. The argument supports the conclusion that God exists, not merely that some creative agent or other exists, and in showing that, it is successful.
Third, as the fine-tuning argument establishes, the odds of fine-tuning conditional on naturalism are extremely low. Thus, in order for this objection to go through, the odds that an agent would finely tune a universe, conditional on there being a powerful agent making a universe, must be similarly low. If you think an agent is equally likely to make any cosmological constant, then the odds of a finely-tuned cosmological constant will be comparably low. But so long as we have reason to reject this claim, the objection won’t go through. It seems we have at least three plausible routes to rejecting it—to thinking the odds that an agent would make life-permitting constants aren’t super low:
There’s something special about the cosmological constant value that gives rise to life: namely, it gives rise to life. So long as the prior probability of an agent caring about agents rather than chaos isn’t too low—on the order of 1/googol—then an agent is likelier to create a life-permitting cosmological constant value than some other constant value. But insofar as the odds really are that low—on the level of 1/googol—it seems we’d need a strong argument for that. You’d have to think that the odds an agent would want to produce a life-permitting constant value are on the order of the likelihood of hitting a particular atom with a dart thrown randomly across the known universe.
An argument is successful if it would convince an agnostic. Now, if the fine-tuning argument is designed to convince an agnostic that God exists, then we must imagine someone with a non-trivial credence in God’s existence. But if a person has a non-trivial credence in God’s existence, then necessarily they think that the odds of God existing, conditional on some agent creating the universe, are high. Thus, the argument ought to convince someone who is agnostic about God. In addition, the notion that if there’s a designer it’s likely to be God is quite motivated—God is the simplest kind of designer, because he’s simply an unlimited consciousness (a limitless consciousness has no limits on what it knows or could do, and thus would grasp and be motivated by the moral facts).
Thus, because the odds of fine-tuning given naturalism are so low, all we need is the conclusion that 1) the odds of the simplest kind of mind—God—are not super low conditional on there being a designer and 2) the odds of God making a finely-tuned universe aren’t super low. Both of these are quite plausible, and thus the stalking horse objection fails to derail the fine-tuning argument.
Even putting aside God, a commonly held thesis is that the good is self-motivating. If an agent sees that something is good, they’ll be motivated to bring it about absent a conflicting desire. But so long as this is right, and worlds that are life-permitting are the best worlds, an agent might be motivated to bring about a life-permitting world instead of one with, say, a cosmological constant value that prohibits life. Note that because the odds of a life-permitting universe are so low on naturalism, we just need some story of why a designer might finely tune the constants to permit life in order to favor the design hypothesis.
The multiverse
This is the best objection to fine-tuning. It says that if there are a bunch of universes with different constants, then some of them will be life-permitting. Naturally, we find ourselves in the life-permitting universe—no mystery there! There are different ways to make a multiverse: you can either have lots of universes exist at the same time or have the universes repeat with different laws (as Penrose suggests). These are a bit different, but similar in how to think about them, so I’m going to group them together.
For a while, I was convinced by this response. I thought atheists had a good explanation of fine-tuning. But I’ve since come to believe that it has very major problems: I’ll list six of them, which cover, I think, the main issues (the common charge that the multiverse commits the inverse gambler’s fallacy fails completely). It’s still the best reply on offer, but it’s very unconvincing.
First, it doesn’t deal with the nomological fine-tuning argument. A far greater share of the possible fundamental laws produce nothing interesting—say, just particles bouncing around in simple ways or doing nothing—than produce anything interesting. And generating a multiverse is an interesting thing for laws to do! If you just write a simple computer program that has some stuff follow laws, it’s unlikely to result in a multiverse being generated. The process required for generating multiple universes is complex and requires fine-tuning of its own!
Here’s an analogy: imagine you see a painting on the table. You infer it’s designed. Someone objects: there’s no design needed, maybe there’s just a machine that generates nice paintings. But that pushes the problem back a level: why is there such a machine? Similarly, positing that there’s a system that generates tons of universes doesn’t solve the problem—that’s an extremely unlikely way for the world to be, beaten in simplicity by literally an infinite number of other ways the world could be. In fact, Robin Collins and Luke Barnes have argued in various places that existing multiverse models themselves require fine-tuning—parameters falling in a very narrow range.
Second, the multiverse has to be very improbable in a variety of ways! There’s fine-tuning of the laws, constants, and initial conditions—so the multiverse has to be able to vary all of them, not just the constants. It’s very hard to get a system to randomize all of those things, and the vast majority of conceivable multiverses won’t, so even once there’s a multiverse, the problem isn’t solved. Varying the laws in a principled way is especially tricky.
Third, a multiverse might lead to the proliferation of Boltzmann brains. A Boltzmann brain is a conscious observer that blips into existence and dies after a few seconds—for example, a brain just randomly forming out in space and asphyxiating moments later.
Boltzmann brains are more likely than regular observers for three reasons (I’m pretty sure this is all right, but people who know physics can correct me). First, there are so many more possible non-life-permitting universes than life-permitting universes. However, the non-life-permitting universes can still have Boltzmann brains—they have no complex structure, but can still have observers randomly blip into existence.
Second, the most extreme kind of fine-tuning is for the universe’s low entropy state. But high-entropy universes can still have Boltzmann brains—they have random poorly ordered chaos, more than capable of producing brains that randomly fluctuate into existence. Thus, even of the universes finely tuned in other ways, most of them will have high entropy, and thus make tons of Boltzmann brains. As Pruss says of the multiverse objection to fine-tuning for low-entropy:
This doesn’t apply to the entropy argument, however, because globally low entropy isn’t needed for the existence of an observer like me. All that’s needed is locally low entropy. What we’d expect to see, on the multiverse hypothesis, is a locally low entropy universe with a big mess outside a very small area—like the size of my brain. (This is the Boltzmann brain problem.)
Therefore, the multiverse can’t explain the most troubling kind of fine-tuning—fine-tuning for low entropy. For it not to produce mostly Boltzmann brains, it must assume, rather than explain, a solution to the low entropy problem.
Third, this can just be seen conceptually. The multiverse is analogous to explaining why a book like Shakespeare’s exists somewhere by saying that there are an infinite number of monkeys typing on typewriters—some of whom are guaranteed to type it out. But before you get a full book of Shakespeare, you’ll get a lot of individual paragraphs or sentences of Shakespeare surrounded by random chaos. If you’re trying to explain order, positing a randomization process that produces lots of different outcomes makes small, local regions of order—like Boltzmann brains—way more likely than vast, global regions of order.
So for these reasons, a multiverse probably implies that there are a—to use the technical term—shit ton of Boltzmann brains. But if there are tons and tons of Boltzmann brains, then you’re probably a Boltzmann brain: the huge majority of observers with your experiences are short-lived brains that fluctuated into existence.
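The self-location step is simple proportion counting, on the assumption that you should treat yourself as a random member of the observers who share your evidence:

$$
P(\text{I am a Boltzmann brain}) \;=\; \frac{N_{\text{BB}}}{N_{\text{BB}} + N_{\text{ordinary}}} \;\to\; 1 \quad \text{as } \frac{N_{\text{BB}}}{N_{\text{ordinary}}} \to \infty.
$$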
So if there are tons of Boltzmann brains then you should think you’re probably a Boltzmann brain. But if you think that then you shouldn’t trust your reasoning, because most Boltzmann brains have defective reasoning, so the belief is self-defeating. Additionally, Boltzmann brains die after a few seconds, so if you notice yourself not dying quickly then you know you’re not a Boltzmann brain.
Therefore, a multiverse gives reason to think that there are many Boltzmann brains which gives you reason to think that you are a Boltzmann brain which gives you reason to think that your reasoning is defective and you’re about to die, but you’re not rapidly dying, so you should doubt that there’s a multiverse.
Now, there might be multiverse models that don’t produce many Boltzmann brains. But for the reasons described, the vast majority of multiverse models do, and producing mostly Boltzmann brains is the default, so this will increasingly narrow the range of viable multiverse models.
Additionally, it’s plausible that if a theory by default implies skepticism, then that’s a reason to reject it. For instance, if there were a theory that the universe was created by a demon who was very likely to deceive you, that theory should be rejected even if you can posit that the demon doesn’t like deception. If the default and most likely versions of a theory imply skepticism, arguably that’s a reason to reject it.
Fourth, even if a multiverse explains why we exist, it doesn’t explain fine-tuning for scientific discovery. Robin Collins has argued that various features of the universe are finely-tuned for scientific discovery. Many parameters fall in a very narrow range ideal for scientific discovery. In his piece in the book Two Dozen (or So) Arguments for God, Collins points to various different features of the universe ideal for science (this is my best understanding from reading it, though I might butcher some details—those who know about physics should feel free to correct me).
The fine structure constant determines the strength of the electromagnetic force. If it were stronger, fires and biofuel wouldn’t work, and so there would be no practical way of harnessing energy. If it were weaker, fires would burn through all the wood, and harnessing energy would also be impractical. It falls in a very narrow range needed for viable science.
We use the cosmic microwave background radiation to discover how the universe works and that the big bang happened. Our ability to do this depends on the baryon-to-photon ratio, “which is just the ratio of the number of baryons (i.e., protons and neutrons) to that of photons (particles of light) per unit volume of space.” The ratio is one to one billion, which is roughly optimal for discoverability—and it represents a share of the parameter space with probability on the order of 1 in a billion.
In particle physics, lots of things are in a narrow range important for discoverability. For instance, Collins originally thought his thesis was refuted by the Higgs Boson, and that it was outside of a narrow range needed for discoverability. He later, however, realized that he’d made an error in his calculation and that it was precisely ideal for discovery. He then tested 8 other parameters in particle physics to see if they were optimal and discovered they all were, writing:
This led me to focus on the discoverability effects of varying parameters in the SM that meet two criteria: (1) within a well-defined range, they do not have effects on life in the present universe; and (2) we can make reasonable determinations of the discoverability effects of varying these parameters. Such parameters would provide a near-ideal test case of the discoverability optimality hypothesis. Eight fundamental parameters of the SM met these criteria, such as the mass of the Higgs boson and the masses of the particles in bold in Table E.1. As far as I can tell, each parameter appears to be in its respective discernable-discoverability optimality range (DDOR), defined above as the range for which it is clear that we have more reason than not to think that some value of the parameter in that range is optimal for scientific discovery.
This is particularly impressive because Collins made predictions in advance that later turned out to be confirmed. He didn’t just point to things that seemed good for science: he guessed in advance that the particle physics parameters would be that way and was correct.
Now, Collins’ research is pretty speculative, though it has been looked over by some other physicists. But if it’s right, it poses a real challenge for the multiverse. While a multiverse can explain why we find ourselves in a finely-tuned universe—we couldn’t have been anywhere else—it can’t explain why we find ourselves in a discoverable universe; after all, we could have been in a nondiscoverable universe. In universes without discoverable parameters in particle physics, we’d still exist, but would simply be unaware of the fundamental laws and the big bang.
He could be wrong, but I trust his stuff because: 1) it’s been looked over by other people, 2) he’s a pretty skeptical guy and he is convinced by it, and 3) he’s ridiculously intelligent! This, while not as firmly established as traditional fine-tuning, does help reduce the probability of a multiverse model being viable.
Fifth, if atheism’s only way out of the problem is to invoke a multiverse, then that still favors theism. Imagine if there were somehow a naturalistic explanation of fine-tuning that invoked the fact that all the atoms said “made by God”—somehow their saying that would fix the other laws and constants. Even if that solved fine-tuning, it would still favor theism, because the probability of the atoms saying that is much higher on theism than on atheism.
Similarly, if a multiverse is naturally predicted by theism, then even if it’s the only atheistic explanation of fine-tuning, so long as it’s likelier on theism than on atheism, theism gets a big boost. The best case scenario for atheism is that we discover a multiverse—but discovering a multiverse favors theism.
This is because God would be likely to create a multiverse. It’s good to create a happy person! God wouldn’t just stop at one universe’s worth of happy people—he’d keep creating. So if there is a God then a multiverse is very likely.
We can phrase this as a dilemma: either there is a multiverse or there’s not. If there’s not, then fine-tuning is a fatal problem for atheism. If there is, then because a multiverse is very likely on theism and unlikely on naturalism, theism gets a big boost.
Sixth, even if a multiverse explains it, it still favors theism. The argument from fine-tuning to theism is that fine-tuning is much likelier conditional on theism than on naturalism. Naturalists respond that they can explain it too! But, so what? The fact that a theory can explain something doesn’t mean it doesn’t favor another theory. The theory that aliens faked the OJ evidence can explain all the relevant data, but because it’s unlikely if OJ didn’t commit the murder, the evidence still majorly favors OJ having done it.
Similarly, for the multiverse to eliminate the force of fine-tuning, the odds of a multiverse have to be high on atheism. But why in the world would one think that? It’s a very specific and well-ordered way for reality to be, and it isn’t especially simple. So the probability can’t be that high.
The task for the multiverse explanation is, therefore, momentous! It has to avoid the nomological fine-tuning problem or take a major hit in terms of probability. It has to somehow avoid the Boltzmann brain problem by, among other things, having a principled way of predicting a low entropy state (which is very unlikely, because for every conceivable multiverse model that predicts a low entropy state being ubiquitous, there’s one that predicts each specific high entropy state being ubiquitous). It has to explain fine-tuning for discoverability. It has to be able to vary the laws, constants, and initial conditions so that some universes are finely-tuned. And even after it does all that, it still majorly favors theism for two separate reasons.
The exciting update
At this point, I believe in God. I don’t, like, deeply feel it in my heart or anything, and there was no dramatic moment where I felt the divine presence, but over time it’s just increasingly seemed like God explains the world better than the alternatives and that there isn’t a satisfactory Godless picture of ultimate reality.