Reply To Scott Alexander Once More On Tegmark and The Multiverse
Comments on highlights from the comments on Tegmark's mathematical universe
1 Introduction
A few days ago, Scott Alexander wrote an article arguing that Tegmark’s mathematical universe—according to which every mathematical structure is concretely instantiated—deflates most arguments for God’s existence. I fired back with a response, and now Scott has replied with a response to my response. Here is my response to his response to my response to him!
I quite enjoyed Scott’s article. While I think that there are some major things it gets wrong, Scott always has interesting things to say. When I was becoming a theist, there was a period when I seriously considered the Tegmark view and modal realism as solutions to the atheist’s anthropic predicament. But I’m no longer particularly sympathetic for reasons I’ll explain in more detail in this essay.
2 Consciousness, moral knowledge, and psychophysical harmony
The title of Scott’s original article was Tegmark's Mathematical Universe Defeats Most Proofs Of God's Existence. In reply, I argued that Tegmark’s view defeats almost none of the arguments Scott listed and has nothing to say in response to several of the best arguments for God—particularly the moral knowledge argument and the argument from consciousness. Replying to me, Scott writes:
Bulldog mentions consciousness, psychophysical harmony, and moral knowledge as proofs he especially likes which MUH doesn’t even begin to respond to. I agree consciousness is the primary challenge to any materialist conception of the universe and that I don’t understand it. I find the moral knowledge argument ridiculous, because it posits that morality must have some objective existence beyond the evolutionary history of why humans believe in it, then acts flabbergasted that the version that evolved in humans so closely matches the objectively-existing one. I admit that in rejecting this, I owe an explanation of how morality can be interesting/compelling/real-enough-to-keep-practicing without being objective; I might write this eventually but it will basically be a riff on the one in the Less Wrong sequences.
So it seems Scott and I agree on the argument from consciousness being pretty good and Tegmark’s view having nothing to say about it. That’s progress!
Regarding the argument from moral knowledge, Scott isn’t convinced by it because he’s not a moral realist, and especially not a moral non-naturalist. But as I noted in my reply to Scott, the argument applies just as much to our priors—on naturalism, the fact that some states of affairs consistent with our experience are likelier than others doesn’t explain why we think they’re likelier. By definition, whichever of the hypotheses consistent with our experience is true, we’d have the same intuitions and priors—those are themselves part of our experience. For that reason, on naturalism, we have no basis for trusting any of our priors. But you need priors even to think the universe is more than five minutes old.
To give an analogy, imagine that there are some goblins who worship a being called the absolute. The goblins have an intuition that worlds with the absolute have a higher prior than worlds without it. But they know that they’d have that exact intuition whether or not the absolute existed—the intuition is explained not by the absolute’s actually being likely, but by some specific selection pressure for believing that the absolute is likely to exist. It seems they lose their justification for believing in the absolute. The same is true of all of our credences on atheism.
I elaborated a lot more on this in this post. See also Huemer’s article on a similar point. I even made sure to include the following passage in my reply to Scott:
The same point, by the way, applies just as much to other facts about math, modality, logic, and what priors are reasonable. Thus, being a moral skeptic doesn’t get you out of the argument’s threatening tentacles. Here, Tegmark’s view is no help.
Thus, I think what I already wrote explains why Scott’s claim that the argument is “ridiculous” is itself ridiculous! :) Being doubtful of robust moral realism is no help because there are symmetrical problems in other domains.
Scott next comments on the argument from psychophysical harmony. In a nutshell, this argument is about the harmonious mix between our behavior and our mental states. When you have a desire to move your limbs, your limbs move. This is surprising, given that there are infinite possible correlations between the mental and the physical, and most of them would not result in anything like a rich complex mental life. There could be someone behaviorally identical to you completely lacking in consciousness, or with radically inverted consciousness, or with very simple consciousness. It’s surprising that the mental pairs with the physical in a way that makes agents with rich internal lives possible. Of this argument, Scott writes:
Psychophysical harmony is in the in-between zone where it’s interesting. The paper Bulldog links uses pain as its primary example - isn’t it convenient that pain both is bad (ie signals bodily damage, and evolutionarily represents things we’re supposed to try to avoid) and also feels bad? While agreeing that qualia are mysterious, I think it’s helpful to try to imagine the incoherence of any other option. Imagine that pain was negatively reinforcing, but felt good. Someone asks “Why did you move your hand away from that fire?” and you have to say something like “I don’t know! Having my hand in that fire felt great, it was the best time of my life, but for some reason I can’t bring myself to do this incredibly fun thing anymore.” And it wouldn’t just be one hand in one fire one time - every single thing you did, forever, would be the exact opposite of what you wanted to do.
It sounds prima facie reasonable to say qualia aren’t necessarily correlated with the material universe. But when you think about this more clearly, it requires a total breakdown of any relationship between the experiencing self, the verbally reporting self, and the decision-making self. This would be an absurd way for an organism to evolve (Robert Trivers’ work on self-deception helps formalize this, but shouldn’t be necessary for it to be obvious). Once you put it like this, I think it makes sense that whatever qualia are, evolution naturally had to connect the “negative reinforcement” wire to the “unpleasant qualia” button.
Scott is incorrect that if pain were negatively reinforcing but felt good, you’d say “I don’t know! Having my hand in that fire felt great, it was the best time of my life, but for some reason I can’t bring myself to do this incredibly fun thing anymore.” Instead, you’d say exactly what you say in the actual world. Only your subjective states would be different.
The core insight behind psychophysical harmony: there are three stages in a conscious perception. First, there’s some brain state. Next, there’s some conscious state. Lastly, there’s some physical response. For instance, you might have a brain state A (C fibers firing), which gives rise to a conscious state B (being in pain), which gives rise to a physical state C (pulling your hand away and saying “please stop blowtorching my hand—that’s not a nice thing to do. It’s not considerate at all. It violates the categorical imperative. Now you might object…”)
The core insight behind psychophysical harmony is that it’s conceivable that B could be replaced with anything else. You could replace B with D—having the experience of eating a pickle. Or with E—having the experience of skydiving. Or with F—having the experience of pleasure. So long as you keep A and C the same, B is totally irrelevant. It’s effectively an epiphenomenon, even if epiphenomenalism is false—even if B causes stuff, it’s perfectly conceivable that some other mental state would do the same causing.
(If you’re thinking “no, if physicalism is right this isn’t really imaginable,” I’d suggest you read the paper on the subject, as this worry is addressed at length.)
In light of this, it’s utterly surprising that B involves a state that fits harmoniously with A and C. In other words, it’s surprising that the mental state produced by A involves feeling pain, rather than one of the other infinite conceivable experiences.
You could also have the psychophysical laws differ in other ways. The simplest sets of correlations would just have every physical state of a certain kind produce some very simple consciousness. For instance, every instance of integrated information could simply produce an experience of seeing a red wall—with intensity proportional to the integrated information. The fact that we don’t get those rubbish psychophysical laws, but instead valuable ones, is evidence against naturalism.
Scott says that evolution had to connect unpleasant qualia with negative reinforcement. But negative reinforcement is behavioral. You could easily have exactly the same behavior with different conscious states. By definition, evolution doesn’t select against that. So it’s totally useless in solving the problem. Crummett and Cutter explain this in more detail as does my friend Amos.
3 A few loose ends
Scott didn’t reply to most of my arguments. Now, I completely get this—there were a lot of commenters he had to reply to. I’m just briefly flagging this for anyone trying to track down whether Tegmark’s view actually resolves most theistic arguments. In particular, I argued that it doesn’t help with any cosmological arguments or with the argument from discoverability.
Now, I don’t think cosmological arguments are very good. For this reason, I don’t really care if Tegmark’s view resolves them. But I do care about the argument from the universe’s discoverability. In his original post, Scott claimed that Tegmark’s view resolves this:
Because in order for the set of all mathematical objects to be well-defined, we need a prior that favors simpler ones; therefore, the average conscious being exists in a universe close to the simplest one possible that can host conscious beings.
But this is not what the argument from discoverability is about. What its proponents argue is that certain features of the world are improbably discoverable even without being simpler. For instance, Robin Collins argues that certain constants fall in a very narrow range ideal for allowing us to discover the universe. Tegmark’s view here is no help.
Scott repeated his earlier claims in his more recent post. In replying to Ross Douthat, he said the argument from discoverability is refuted by Tegmark. But he didn’t address my objections to this.
4 Inductive collapse
Edit 2/25: Apparently I misunderstood Scott’s view. For my objections to his view, skip to “Now, it’s possible I’m misunderstanding Scott.”
This section will talk about math. As such, I would like to establish my mathematical bona fides: I did not get a 1 on my AP exam in calculus. Exactly what I did get will be up to the guess of the reader.
In my article, I raised the objection that Tegmark’s view collapses induction. The basic idea: on Tegmark’s view, there will be infinite people with every conceivable property. There’s no coherent way to talk about proportions of people with different properties—after all, you could arrange worlds so that each world is filled with any percentage of people with any property. You could make every world filled almost entirely with schizophrenics named Phillis who evolved from bats but have cognitive capacities similar to humans. Thus, all probabilities come out undefined.
Anyway, Scott writes:
That is, suppose that there are one billion real people for every Boltzmann brain. If there are infinite universes, then the ratio becomes one-billion-times-infinity to infinity. But one billion times infinity is just infinity. So the ratio is one-to-one. So you should always be pretty suspicious that you’re a Boltzmann brain. The only way you can ever be pretty sure you’re not a Boltzmann brain is if nobody is a Boltzmann brain, presumably because God would not permit such an abomination to exist.
I’ve talked about this with Bulldog before, and we never quite seem to connect, and I worry I’m missing something because this is much more his area of expertise than mine - but I’ll give my argument again here and we can see what happens.
I don’t think I expressed myself clearly, because almost all of Scott’s commenters, along with Scott, seemed to misinterpret my argument. So let me first be clear on what I’m not saying:
First, I’m not saying that nobody is a Boltzmann brain. For all I know, maybe some people are. I’m not, but maybe you are.
Second, I’m not saying that it’s impossible to take a mathematical measure over infinities. I’m aware that there are sophisticated ways of calculating probabilities over infinite sets. I can’t describe the math behind it, but I know it exists. (I will note, though, that on Tegmark’s view the collection of people is too large to form a set, and constructing a non-insane measure in that case might not be possible. I don’t know the math behind this, so don’t place too much stock in it—I’ve just heard it from my mathematician friends.)
My objection is that under the Tegmark view, there are infinite people with every property. The core problem is that if there are infinite people with every property, probabilities do not play nice. While you can construct an artificial mathematical function that gets them to play nice, they do not really play nice. Thus, when Scott says:
Consider various superlatives like “world’s tallest person”, “world’s ugliest person”, “world’s richest person”, etc. In fact, consider ten categories like these.
If there are a finite number of worlds, and the average world has ten billion people, then your chance of being the world’s richest person is one-in-ten-billion.
But if there are an infinite number of worlds, then your chance is either undefined or one-in-two, as per the argument above.
But we know that it’s one-in-ten-billion and not one-in-two, because in fact you possess zero of the ten superlatives we mentioned earlier, and that would be a 1-in-1000 coincidence if you had a 50-50 chance of having each. So it seems like the universe must be finite rather than infinite in this particular way.
But both Bulldog and I think infinite universes make more sense than finite ones. So how can this be?
We saw the answer above: there must be some non-uniform way to put a measure on the set of universes, equivalent to (for example), 1/2 + 1/4 + 1/8 + … Now there’s a finite total amount of measure and you can do probability with it again.
I completely agree. I think by default infinite universes don’t just break down the probability of you being a Boltzmann brain. They make all the probabilities come out undefined—of you being tall, short, and so on. Contrary to what Scott says I say, I don’t think they’d become 50%. It’s much worse: they’d be radically undefined.
Let me first illustrate this with aleph null people. Suppose that there are aleph null galaxies, each of which has five people with red shirts and one with a blue shirt. At what odds should you think that you have a red shirt (assume it’s too dark for you to see your shirt color)?
The intuitive answer is 5/6. But this answer is wrong! To see this, imagine moving the first five blue-shirted people to galaxy one, the next five blue-shirted people to galaxy two, and so on. Next, move the first red-shirted person to galaxy one, the next red-shirted person to galaxy two, and so on. Now every galaxy has five blue-shirted people and one red-shirted person.
Thus, a procedure like this doesn’t work. You can’t determine the probability of your having some property by taking the limit of the share of people with that property as you cover more and more space. If you do, you get the result that in two worlds containing the exact same people with the exact same properties, just distributed differently, your odds of having a property differ. But this is nuts! The odds that I have a red shirt don’t depend on where people are located; moving people around shouldn’t change my guess about my shirt color.
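To make the order-dependence concrete, here’s a small sketch (my own illustration, not from either post): the very same countably infinite population of shirt-wearers, enumerated in two different orders, yields two different limiting frequencies of red shirts.

```python
# The same countably infinite crowd, walked in two orders.
# Order A mirrors the original arrangement: 5 reds then 1 blue per galaxy.
# Order B mirrors the rearrangement: 1 red then 5 blues per galaxy.
# Each order eventually visits every person exactly once, yet the
# limiting frequency of "red" differs.

from itertools import count, islice

def order_a():
    for _ in count():
        yield from ["red"] * 5
        yield "blue"

def order_b():
    for _ in count():
        yield "red"
        yield from ["blue"] * 5

def red_frequency(order, n=600_000):
    # Empirical share of red shirts among the first n people visited.
    sample = list(islice(order, n))
    return sample.count("red") / n

print(red_frequency(order_a()))  # ~5/6
print(red_frequency(order_b()))  # ~1/6
```

Since there are aleph null people of each shirt color, a bijection between the two arrangements exists, so "just count and take the limit" gives no unique answer.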
The same point applies to bigger cardinalities of infinity. You can’t coherently hold that you’re probably in a simpler world, because the worlds can be rearranged so that every world is filled almost exclusively with people who originally came from more complex worlds. Even if you hold that you’re likelier to be in a simpler world, your probability of having any of the properties had by people in the simpler worlds turns out undefined.
Note: this reasoning argues for undefined probabilities rather than probabilities of .5. To see this, imagine that there are aleph null people with red shirts, aleph null people with blue shirts, and aleph null people with green shirts. Presumably, on the view that these probabilities are not undefined, I should regard the probability of my having a red shirt as 1/3. But if all the people with green shirts instead had blue shirts, then the probability of my having a red shirt would be .5. This is obviously wrong: the odds that I have a red shirt shouldn’t change when you change the shirt color of those who don’t have red shirts.
The core problem is that the way measures are taken depends on how the inputs are ordered. The reason we say that, for instance, there are more even numbers than primes is based on how we count numbers (1, 2, 3, 4, etc). When we do this, we notice that as you get to higher and higher numbers, primes get rarer but half of the numbers remain even.
But this is all a function of the order in which you count things. In an infinite world, the order in which you count is arbitrary—there are infinite worlds of every kind no matter how you count them. If we counted the number line three primes at a time followed by a composite, we’d conclude that primes are more numerous than even numbers (and we’d still end up counting every number using this procedure!).
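The primes example above can be checked directly. Here’s a sketch of my own (not from the post): two enumerations of the positive integers, one in natural order and one that lists three primes per non-prime. Both eventually count every number once, yet the running share of primes differs wildly.

```python
# Counting order determines apparent density: primes are rare in natural
# order but dominant in a "three primes, then one non-prime" order.

from itertools import count, islice

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def natural_order():
    return count(2)

def prime_heavy_order():
    primes = (n for n in count(2) if is_prime(n))
    others = (n for n in count(2) if not is_prime(n))
    while True:
        for _ in range(3):
            yield next(primes)
        yield next(others)

def prime_share(order, n=10_000):
    # Fraction of primes among the first n numbers visited.
    return sum(map(is_prime, islice(order, n))) / n

print(prime_share(natural_order()))      # ~0.12, and shrinking as n grows
print(prime_share(prime_heavy_order()))  # exactly 0.75
```

Natural density only looks privileged because we habitually count 1, 2, 3, 4, …; nothing in the bare set of integers forces that order on us.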
Now the question becomes: how does God solve this problem? If there’s a God, aren’t we still mired in inductive skepticism? After all, there will still be infinite people with every property. So why don’t probabilities break down in the same way?
The answer: God cares about us. He places each of us in a world ideal for our flourishing. Thus, there’s something—namely, the conditions ideal for our flourishing—that constrains the probabilities.
Suppose you’re in Hilbert’s hotel. Everyone rolls dice. You should think at 5/6 odds that you’ll get 1-5. This isn’t because more people get 1-5 than 6. Instead, it’s because you know some chancy process happened that probably turned out 1-5. After all, as you repeat the process, it will be 1-5 most of the time.
Here’s another analogy closer to the scenario in the real world if theism is right. Suppose that you’re in Hilbert’s hotel. You want to figure out the probability that your room has white sheets. By default, this will be undefined—there will be infinite people with white sheets and infinite people without. You can take a measure over the rooms, but this will be subject to how they’re arranged.
But now suppose that God is behind things. God cares about you. He knows you like green sheets. So now you have basis for thinking that probably God will give you green sheets. It’s not guaranteed, but it’s likely.
The way you conclude that you probably have green sheets isn’t by looking at the portion of observers with green sheets. There is no coherent portion of observers with green sheets. Instead, you know that something was done to you which made it so that you probably had green sheets.
The same point applies to other probabilities. Because God cares about us and places us in a situation ideal for our flourishing, the probabilities don’t break down. It’s not undefined how likely we are to be Boltzmann brains. Though there are infinite Boltzmann brains and infinite non-Boltzmann brains, every particular person is less likely to be a Boltzmann brain than not a Boltzmann brain.
One easy and simple model of this goes via the preexistence theodicy, which is independently motivated. God gives us the choice to voluntarily enter the world and be placed in a randomish location in some finite subset of the world. Maybe he allows some chance that we end up as Boltzmann brains, but he makes sure it’s low because he cares about us. Thus, when Scott says:
This isn’t just necessary for Tegmark’s theory. Any theory that posits an infinite number of universes, or an infinite number of observers, needs to do something like this, or else we get paradoxical results like that you should expect 50-50 chance of being the tallest person in the world.
It’s not 50-50. It’s undefined. And yes, I agree with this claim. My core argument is that infinite numbers of people break probabilities by default. Fortunately, on theism there’s a guy on our team who doesn’t want us to be in a skeptical scenario, so probabilities don’t break. In fact, I’ve argued here that if space is infinite then the same problem arises.
I think this problem is even worse for Tegmark’s view. I don’t think it’s coherent to talk about there being more worlds of type A than type B if they’re spatiotemporally disconnected. What would this mean? In both cases there are aleph null. It makes a bit of sense if one collection is a subset of the other, but otherwise it makes no sense. You could pair them one to one, two to one, three to one, etc., in either direction. Given that, what sense does it make to say there are more of the first than the second? It’s all a matter of how you arbitrarily choose to count the inputs. If I understand the terminology correctly, this means it violates something called permutation invariance.
Next Scott says:
So when Bentham says:
The simplest version of the Tegmark view would hold simply that all mathematical structures exist. But this implies that you’d probably be in a complex universe, because there are more of them than simple universes. To get around this, Tegmark has to add that the simpler universes exist in greater numbers. I’ll explain why this doesn’t work in section 3, but it’s clearly an epicycle! It’s an extra ad hoc assumption that cuts the cost of the theory.
… I disagree! Not only is it not an epicycle artificially added to the Tegmark theory, but Bulldog’s own theory of infinite universes falls apart if he refuses to do this! The fact that everything with Tegmark works out beautifully as soon as you do this thing (which you’re already required to do for other reasons) is a point in its favor.
This is actually a slightly different claim. Scott’s view seems to be that the simpler worlds exist in greater numbers which is why you’re likelier to be in simpler worlds. My objections to this are:
First, I think this is incoherent, as described before. As long as there are the same cardinality of people in different worlds, I don’t think it makes sense to talk about the proportion of people with different properties.
Second, even if it is coherent, I think it’s an epicycle. The most natural version of the Tegmark view would just have every possible world exist once, rather than simpler worlds existing in greater numbers. Putting aside the first worry, on theism it’s not an epicycle—on theism we actively expect God to put most observers in well-off, flourishing worlds. The fact that probabilities break down if every world exists only once doesn’t make that not the default version of the Tegmark view—it just means the default version of the Tegmark view breaks probabilities.
Edit 2/25: Scott has confirmed that the UDASSA view I’m about to describe is in fact his view.
Now, it’s possible I’m misunderstanding Scott. Maybe the proposal is not that there are more people in simpler worlds but that you have a greater probability of being in simpler worlds. This would make sense of Scott’s claim:
But I would also add that we should be used to dealing with infinity in this particular way - it’s what we do for hypotheses. There are an infinite number of hypotheses explaining any given observation. Why is there a pen on my desk right now? Could be because I put it there. Could be because the Devil put it there. Could be because it formed out of spontaneous vacuum fluctuations a moment ago. Could be there is no pen and I’m hallucinating because I took drugs and then took another anti-memory drug to forget about the first drugs. Luckily, this infinite number of hypotheses is manageable because most of the probability mass is naturally in the simplest ones (Occam’s Razor). When we do the same thing to the infinity of possible universes, we should think of it as calling upon an old friend, rather than as some exotic last-ditch solution.
But this is precisely because we have a normalizable probability distribution over hypotheses. A normalizable probability distribution is one where the probabilities sum to one. Rather than each hypothesis being equally likely, some are likelier than others. Thus, maybe Scott is proposing that you’re just inherently likelier to be in a simple world, independent of the number of observers in it, and that this distribution is normalizable. This is the basic idea behind a view called UDASSA. I won’t explore this possibility in detail because Carlsmith has already produced a series of objections to it that are, in my judgment, completely decisive.
If the UDASSA view is to maintain a normalizable probability distribution—one like the distribution .5, .25, .125, etc., that sums to 1—there must be some finite set of worlds such that you should think with probability 99.999999999% that you’re in one of those worlds. However, literally 100% of people do not find themselves in those worlds. This is an odd result. (I have a friend who knows math quite well and holds a view like UDASSA; he confirmed this result.) For the partial sums to asymptotically approach 1, the probabilities of further worlds have to get smaller and smaller, as they do in the above case. But that means you should be 99.99999999% sure that you’re in some specific finite set of worlds, rather than the infinitely more numerous other worlds.
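The mass-concentration point can be checked with arithmetic. Here’s a toy sketch (my own, with an assumed geometric prior like the .5, .25, .125 example): any prior of this shape stuffs essentially all of its probability into a small finite initial set of worlds.

```python
# A toy normalizable prior over countably many worlds: world k gets
# probability 1/2^k, which sums to 1. We ask how many worlds are needed
# to soak up 99.999999999% of the probability mass.

def prior(k):
    return 2.0 ** -k

def worlds_needed(target_mass):
    # Smallest n such that worlds 1..n jointly carry at least target_mass.
    total, n = 0.0, 0
    while total < target_mass:
        n += 1
        total += prior(n)
    return n

print(worlds_needed(0.99999999999))  # a few dozen worlds carry ~all the mass
```

So under a prior like this, you should be nearly certain you’re among the first few dozen worlds, even though infinitely many worlds (and, on Tegmark’s view, infinitely many people) lie outside that set.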
If you bite the bullet here, there’s another even weirder result. Suppose that you take some action that benefits the people in that finite slice of worlds but harms everyone else by a roughly equal amount. On this picture, every single person would rationally want you to take the action, even though it harms infinitely many people and benefits only finitely many. But it’s often supposed that if an action is good for everyone in expectation, then it’s good overall. This would therefore imply that an action that harms infinitely many people while benefiting only finitely many by an equal amount is good overall. That’s crazier.
I also just find the UDASSA view really weird. Why would the odds that I’m in some world depend on the simplicity of the world? If there’s a copy of me in a really complex world and another copy in a really simple world, why the heck would I be likely to be the one in the simpler world? UDASSA implies that if there’s a hotel, and one observer has a complicated symbol on his door and another has a simpler symbol, I’m likelier to be the one with the simpler symbol. What??
(This gets even messier if there are infinite worlds of each kind, but even ignoring this, the view is very crazy).
Lastly, Scott says:
Finally, I admit an aesthetic revulsion to the particular way Bentham is using “God” - which is something like “let’s imagine a guy with magic that can do anything, and who really hates loose ends in philosophy, so if we encounter a loose end, we can just assume He solved it, so now there are no loose ends, yay!” It’s bad enough when every open problem goes from an opportunity to match wits against the complexity of the universe, to just another proof of this guy’s existence and greatness. But it’s even worse when you start hallucinating loose ends that don’t really exist so that you can bring Him in to solve even more things (eg psychophysical harmony, moral knowledge). If there is a God, I would like to think He has handled things more elegantly than this, so that we only need to bring Him in to solve one or two humongous problems, rather than whining for His help every time there’s a new paradox on a shelf too high to reach unassisted.
I don’t quite get the complaint. I think God did set up some elegant system of psychophysical laws. I also think he had an elegant system for making it so that we’re each probably not Boltzmann brains. I don’t think God intervened to do a special miracle to solve each of these problems.
Note: these aren’t just random puzzles or stuff we don’t understand. They’re specific features of the world that are highly improbable on naturalism but expected on theism. This is how evidence generally works—you look for what’s likelier on a hypothesis than on its negation.
5 Boltzmann brains
One objection to the Tegmark view—and any multiverse view—is the Boltzmann brain problem. A Boltzmann brain is a brain that randomly fluctuates into existence in outer space. Imagine random space gunk randomly drifts together to form a short-lived brain—that’s a Boltzmann brain.
If there are too many Boltzmann brains then this makes it quite likely that we’re Boltzmann brains. After all, this would mean that most people with our exact pattern of experience are Boltzmann brains.
By default it seems like multiverses produce huge numbers of Boltzmann brains. The basic reason is that universes can keep generating Boltzmann brains forever: entropy goes up over time, so by default a universe will spend an infinitely long period in a high-entropy state. In addition, most universes have much higher entropy than ours, and thus will likely produce many Boltzmann brains.
Scott notes that it seems like it takes a while to generate a Boltzmann brain. But (if I’m understanding correctly) the basic Boltzmann brain problem is that even if they’re super rare, because the high entropy states last forever, by default most universes produce mostly Boltzmann brains. This gets even worse in a multiverse, because if there are Boltzmann brains that last forever, ~100% of observers end up being them.
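The "rare but forever" point is just arithmetic, which the following toy model illustrates (the numbers and names here are my own, purely illustrative, not from any cosmology source): ordinary observers arise only during a finite structured era, while Boltzmann brains appear at a tiny but nonzero rate without end, so their cumulative share tends to 1 as the time horizon grows.

```python
# Toy model: ordinary observers appear at a high rate, but only during the
# first `structured_era` units of time; Boltzmann brains appear at a tiny
# rate forever. Cumulative BB fraction approaches 1 as the horizon grows.

def bb_fraction(horizon, structured_era=1e4, ordinary_rate=1e6, bb_rate=1e-12):
    ordinary = ordinary_rate * min(horizon, structured_era)  # capped forever
    boltzmann = bb_rate * horizon                            # grows forever
    return boltzmann / (ordinary + boltzmann)

for t in [1e4, 1e12, 1e24, 1e36]:
    print(t, bb_fraction(t))  # fraction climbs toward 1 as the horizon grows
```

However rare a single Boltzmann brain is, an unbounded high-entropy era eventually swamps any finite population of ordinary observers.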
I checked this over with my physicist friend Aron (author of one of the greatest blogs on the internet). Here’s what he said:
1. Naively, it seems like BBs infinitely outnumber ordinary observers in the spacetime of an ordinary single universe, because our universe seems to last forever (and grow exponentially) after the heat death of the universe.
2. In inflationary multiverse scenarios, in addition to time lasting forever, there are also exponentially large quantities of space caused by regions where inflation lasted longer. Those who want to use SSA-like approaches need to somehow place a measure on this multiverse. All ways of doing so seem somewhat arbitrary, and most of them don't lead to results that work well. E.g. if you just cut off after a given proper time, then this overwhelmingly favors brains that appear as early after inflation ends as possible. These will be BBs, as normal evolution takes longer.
3. And it is impossible to say whether BBs outweigh normal observers in Tegmark's MUH, unless you have a principled way to put a probability distribution on the space of all possible mathematical structures. "All mathematical structures are equal, but some of them are more equal than others".
So actually BBs are a potential problem on almost all cosmological views, but the multiverses have the (dubious) advantage that people can't agree on how to use them to make predictions...
Regarding Scott’s more specific response to Climenhaga’s worry about Boltzmann brains, Aron said:
I think Scott probably thought he got around this by saying:
"I think of this as one of many paradoxes of infinity. But I don’t think there’s an additional paradox around fine-tuning or the multiverse. Among universes still in their “early” phase of having matter and stars, Boltzmann brains are less likely than real universes that got the fine-tuning right."
as a way of ignoring the late time problems. But this seems somewhat evasive as there is no obvious reason why we should only look at universes in their early time phases.
Also, I don't think it is very reasonable to give the MUH a pass on "paradoxes of infinity" since if MUH is true, paradoxes of infinity end up being the entire ballgame! To give a silly analogy, suppose (counterfactually) there were a well-known problem where philosophers didn't know how to reason properly about snakes, without it leading to paradoxes. And somebody came along and said "I propose the whole world is made of nothing but snakes. And since we don't know how to reason about them, let's just propose that this makes it so that for everyday physics questions, simpler hypotheses are more likely to be true". I don't think it would be reasonable to sidestep all objections to this theory by saying "After all, my theory only has snake-related problems, which also afflict every other theory". No, your theory also has the additional problem that it contaminates every other question with our inability to reason about snakes! And then to reason with the theory, you just ignored those problems and did something else.
It also seems a little off that Scott claimed that any counterexample is enough to refute a "proof", since this seems inconsistent with Scott's usual Bayesian approach to reasoning. For example, the fine-tuning argument is a probabilistic / abductive argument, it isn't supposed to be a deductive proof.
I haven't said anything about the specific numbers Scott randomly pulled out of Wikipedia and Lee Smolin's work; they seem off to me, but maybe not in a way that affects these arguments. Wikipedia currently says 10^(10^50), which is a very different number from 10^500...
This by no means exhausts my complaints about the Tegmark hypothesis, but I'm not writing my own blog post here, just trying to give you some reactions from a physicist.
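Aron's closing point about the figures is easy to sanity-check. Here is a minimal sketch of my own (not from Aron), comparing the two magnitudes via their base-10 logarithms, since 10^(10^50) is far too large to represent directly:

```python
# Compare 10^500 with 10^(10^50) by comparing base-10 logarithms.
# 10^(10^50) cannot be stored as a number, but its log10 can.
log10_small = 500        # log10 of 10^500
log10_large = 10**50     # log10 of 10^(10^50)

# Even the *exponents* differ by roughly 47 orders of magnitude,
# so these are not remotely comparable numbers.
print(log10_large // log10_small)  # 2 * 10^47
```

So whichever figure is right, the two candidate numbers differ so wildly that conflating them is not a rounding issue.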
6 Conclusion
I think Tegmark’s mathematical universe is a genuinely interesting proposal. My problem with it is that it seems to break probabilities by default. Even if it doesn’t do that, having simpler worlds exist in greater numbers is an epicycle, and it doesn’t help with most of the theistic arguments (e.g., the argument from consciousness). Thus, God is (still) the best explanation of the world.
Re: knowledge of priors - are you just rehashing the problem of skepticism? If you're infinitely skeptical (e.g. of logic itself), then not even God can solve your problem (any proof of the existence of God depends on logic). If you're less than infinitely skeptical, I think understanding that a few hyperpriors were instilled by evolution which had an incentive to get them right, plus https://www.lesswrong.com/posts/46qnWRSR7L2eyNbMA/the-lens-that-sees-its-flaws , gets you the rest of the way. I agree you can always argue that evolution somehow found a perfectly consistent set of priors which works for all pragmatic purposes but isn't true, but at some point this becomes its own "psychophysical harmony" problem (why are our false priors so perfectly consistent with true reality?) to the point where the probability becomes vanishingly low.
I'm not sure why you say conscious states are epiphenomenal even if epiphenomenalism is false - you just seem to be transparently pushing epiphenomenalism. But epiphenomenalism doesn't make sense, because we *know* our conscious states have effects in the world - most obviously, the effect of philosophers writing "Hey, we have conscious states, what's up with that?" The same fact about pain that causes you to write the words "pain is unpleasant" in your essay on psychophysical harmony also causes you to behave as if pain is unpleasant - they're both downstream of the actual unpleasantness of pain.
Yes, I'm asserting the thing that you call UDASSA, and not the thing that obviously doesn't work. Your "objections" to it don't make sense. You say "If the UDASSA view is to maintain a normalizable probability distribution—one like the probability distribution .5, .25, .125, etc that sums to 1—it must be that there exist some finite number of worlds such that you should think with probability 99.999999999% that you’re in one of those worlds. However, literally 100% of people do not find themselves in those worlds. This is an odd result."
This isn't an "odd result" - it's just how math works! If you divide the interval from 0 to 1 into regions, such that one region includes the first half of the space, the next the next 1/4, the next the next 1/8, etc., then you can find a finite number of regions such that 99.9999999% of the interval is in those regions, even though literally 100% of regions aren't among them. This may be "odd", but it's just a natural consequence of you doing the odd thing of making an infinite number of terms sum to a finite number.
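The interval analogy above can be checked in a few lines. A minimal sketch (my own, using the geometric measure 1/2, 1/4, 1/8, … quoted in the objection):

```python
# How many leading regions of measure 1/2, 1/4, 1/8, ... are needed to
# cover 99.9999999% of the unit interval? Finitely many, even though
# infinitely many regions (literally 100% of them) remain outside the set.
target = 0.999999999
covered, n = 0.0, 0
while covered < target:
    n += 1
    covered += 0.5**n   # the n-th region has measure 2^-n

print(n)  # 30 regions suffice
```

Thirty regions already cover the target fraction, while the uncovered tail still contains infinitely many regions: exactly the "finite set carries almost all the probability" situation the objection calls odd.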
I agree that if you try to make your moral weight reflect the probabilistic weight, you get weird results, but any infinite set of universes, including those with God, requires you to do this. Consider a situation where you can choose to kill a puppy. If there are infinite universes, then (absent some force pushing against this), there are an infinite number that match current-Earth atom-for-atom, and of those, there are an infinite number where you do kill the puppy, and an infinite number where you don't kill the puppy. Therefore, it seems like your choice to spare the puppy doesn't cause there to be any more living puppies than if you killed it! So who cares if you make the moral choice or not?! In order to avoid this, you need some sort of measure to make finite sense of the infinitude of worlds. Once you map worlds onto measure, sparing the puppy increases the puppy's measure (probably by quite a lot, since you're most likely to be in the most probable worlds) and can be justified again. I don't know how to justify acting morally in an infinite world without doing something like this, unless you abandon consequentialism entirely and say you should do it for the good of your soul or something.
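The measure move described above can be made concrete with a toy model (entirely my own illustration; the 2^-n measure over worlds is an arbitrary stand-in, not anything Scott specifies):

```python
# Toy model: worlds indexed n = 1, 2, ... carry measure 2^-n. Your decision
# procedure is shared by your exact copies, so choosing "spare" keeps the
# puppy alive in every world you control. Summing measure (truncated at 50
# worlds) makes the comparison finite even though both options involve
# infinitely many worlds.
measures = [0.5**n for n in range(1, 51)]
puppy_measure_if_spare = sum(measures)  # alive in every world you control
puppy_measure_if_kill = 0.0             # alive in none of them

assert puppy_measure_if_spare > puppy_measure_if_kill
```

Counting worlds gives "infinity vs infinity" and no verdict; counting measure gives roughly 1 vs 0, so sparing the puppy is again the better choice.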
This is part of the general concern that I don't think you've thought about the many ways that infinite worlds go wrong even *with* God and granting everything about God that you want to grant. Again, I'm not sure how you say that the probability of me being (my) world's tallest person is 1/10 billion, rather than 1/2 or undefined. In order to quantify likelihood of being in any class, including the very normal classes that we all think about every day, you need to do something about the infinite paradoxes. I'm not sure why you don't respond to this in the original post, but I think it's decisive and that a good response to this point would convince me much more than any of the other somewhat tangential things about psychophysical harmony or whatever.
Re: psychophysical harmony, if the physical aversive response of pain doesn't actually require the subjective qualia of pain, then isn't the fact that pain still exists actually a point against an omnibenevolent God? We could have had an aversive response that served our survival without the innate, immediate badness of emotional suffering, but God in his arrangement of our psychophysical harmony allowed that inherently negative mental state to exist. If the qualia of suffering is inherently bad, which I believe it is, then why would an all-good God saddle us with it when it in fact isn't necessary?