I'm going to ask an absurdly dumb question, because I just don't understand anthropic arguments and just want to give up when I hear them.
Don't SIA accepters accept the existence of a reference class? Otherwise, wouldn't SIA give us reason to accept panpsychism, since it would make it more likely that there are experiencing things? If you accept SIA, I'm assuming you don't believe it increases the chance that electrons are conscious?
Again, I apologize, I have no idea what's even happening.
Just saw this extensively addressed in Joe Carlsmith's SIA > SSA Part 1 sections II-IV. I see now that SIA simply uses the grouping of observers in a given epistemic situation very differently from reference classes in SSA. If anyone else was having this problem I'd recommend: https://joecarlsmith.com/2021/09/30/sia-ssa-part-1-learning-from-the-fact-that-you-exist
I'm surprised by the number of comments which are just "these questions are dumb and you should feel bad for asking them" rather than making any objective claim or response to anything.
Firstly, sometimes asking questions like these is just good fun.
Secondly, this does actually have some real world implications, e.g. doomsday argument, whether we should expect to live in a universe with lots of aliens, etc.
Longer answer: neither SIA nor SSA says the number of people is what matters. Instead, SSA says what matters is the % of people in your reference class that you might be, and SIA says what matters is the number of people that you might be. To accommodate cases like the one where 10 guys with red shirts and one with a blue shirt get created, you have to think that you're randomly drawn from the actually existing people compatible with your current evidence.
Suppose you combined SSA and SIA in this way: you construct a reference class (say, conscious beings) and then reason as if you're randomly selected from your reference class, while also thinking that theories that predict more people you might be are better. So in terms of anthropic updating, what you do is multiply by the number of observers that you might be and then multiply by the share of your reference class that's populated by observers you might be.
Let's apply this to a case. A coin is flipped. Heads means one guy with a red shirt gets created. Tails means 2 guys with red shirts and 10 guys with blue shirts get created. I have a red shirt. Tails gives the probabilistic value of 2 x 2/12 = 1/3. Heads gives the value of 1.
Now let's apply it to doomsday. Heads means one guy gets created, tails means 1,000,000 people get created. I'm the first guy. I would have a very massive update in favor of doom--it doesn't cancel.
Let's call this view double update SSA. It doesn't work to cancel out doom, for the reason described.
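If it helps, the "double update" rule can be written out as a few lines of arithmetic. This is a toy sketch of the rule as described above; the function names and the normalization step are my own framing:

```python
# Toy sketch of the "double update SSA" rule described above: weight each
# hypothesis by (number of observers you might be) times (their share of
# the reference class), then normalize. Function names are my own.

def score(might_be, ref_class_size):
    """Unnormalized 'double update' weight for one hypothesis."""
    return might_be * (might_be / ref_class_size)

def posterior(priors_and_scores):
    """Normalize (prior, score) pairs into posterior probabilities."""
    total = sum(p * s for p, s in priors_and_scores)
    return [p * s / total for p, s in priors_and_scores]

# Red-shirt case: heads -> 1 red shirt; tails -> 2 red + 10 blue shirts.
# I see that my shirt is red.
print(posterior([(0.5, score(1, 1)), (0.5, score(2, 12))]))  # heads favored 3:1

# Doomsday case: heads -> 1 person; tails -> 1,000,000 people. I'm person #1.
# The tiny tails weight shows the update toward doom doesn't cancel.
print(posterior([(0.5, score(1, 1)), (0.5, score(1, 1_000_000))]))
```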
Now, there is a view that's a combination of SSA and SIA. That view is just called SIA! In fact, in Bostrom's original formulation, what I call SIA is called SSA+SIA. SSA+SIA says that you reason as if you're randomly drawn from the pool of people in your reference class--where your reference class is people you might be--and then your existence gives you evidence that there are more of them.
Let's say that God flips a coin. Either way, He creates two red-shirted guys; only when it lands tails does He also create blue-shirted guys. I get created and am made aware of the rules of the game.
My view is that if I look down and notice that I am wearing a red shirt (I just did: turns out I am wearing a red shirt!), I should give even odds on the result of the coin flip, since I know that *something having the experience that I am having right now* is equally likely in either scenario.
It seems to rely on the number of people that can exist, but I only care about the number of red shirted people that can exist, because people wearing any other color of shirt don't affect my probabilities.
(Of course you can make almost any mind wear a literal red shirt; I am using red-shirtedness as a stand-in for any of my own identifiable characteristics, the most important of which is my approximate complexity.)
If God makes unsetly many people and has a reason to make one person like you, he has a reason to make infinite people like you, so theism means more people like you exist. Obviously the number of other people is not intrinsically relevant.
I think my existence is quite plausible under naturalism. Sure, I am only one of infinite possible minds, but the relative simplicity of my particular mind seems to make it that *it* in particular could easily arise naturally.
> Shouldn't it somehow be possible to accept both so that they cancel out? Just spitballing here.
This is exactly what happens in the Doomsday argument. Consider:
Suppose there are two indistinguishable bags with numbered pieces of paper. The first bag has 10 pieces of paper and the second has 10000. You were given a random piece of paper from one of the bags, and it happens to have the number 6. What should be your credence that you've just picked a piece of paper from the first bag?
Before you look at your paper, your credence is 1:1 between the two bags. After you see the 6, you should update a lot in favor of the first bag. This is how SSA treats your existence: initially you are indifferent between a short and a long history, but after you've learned your birth rank, you update a lot in favor of a short history.
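For what it's worth, the size of that update is easy to compute exactly (a quick Bayes sketch using the numbers from the bag example):

```python
# Exact Bayes update for the bag example: prior 1:1; the likelihood of
# drawing the number 6 is 1/10 from the small bag and 1/10000 from the big one.
prior = {"small": 0.5, "big": 0.5}
likelihood = {"small": 1 / 10, "big": 1 / 10_000}

unnorm = {bag: prior[bag] * likelihood[bag] for bag in prior}
total = sum(unnorm.values())
posterior = {bag: w / total for bag, w in unnorm.items()}
print(posterior)  # ~0.999 on the small bag: a large update toward it
```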
Now consider this modification:
> You pick the piece of paper from the bag yourself. The bags are sized so that when you put your hand into the second bag you always find a piece of paper immediately at the bottom, while in the first bag there is a lot of empty space, so you may not find a piece of paper immediately. You immediately find a piece of paper in the bag. It is numbered 6. What should be your credence that you've just picked a piece of paper from the first bag?
Here you are initially 1:1 before you put your hand into the bag. Immediately finding the piece of paper heavily updates you in favor of the second bag. And learning that the number is 6 heavily updates you in favor of the first bag. These two updates exactly cancel each other out, and therefore you are back to 1:1 between the two bags.
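One way to see the cancellation concretely: if we model "immediately finding a paper" as having likelihood proportional to the number of papers in the bag (my modeling choice, meant to mirror the setup), the two factors multiply out to a constant:

```python
# Sketch of the canceling updates. Assumption (mine): P(find a paper
# immediately | bag) is proportional to the number of papers N in the bag,
# and P(the paper shows 6 | bag) = 1/N for any bag with N >= 6 papers.
bags = {"small": 10, "big": 10_000}
weights = {bag: 0.5 for bag in bags}  # 1:1 prior

# Update 1: immediately finding a paper (likelihood proportional to N).
weights = {bag: weights[bag] * bags[bag] for bag in bags}
# Update 2: the paper is numbered 6 (likelihood 1/N).
weights = {bag: weights[bag] * (1 / bags[bag]) for bag in bags}

total = sum(weights.values())
posterior = {bag: w / total for bag, w in weights.items()}
print(posterior)  # back to 1:1 -- the two updates cancel exactly
```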
This is how SIA treats your existence: learning that you exist at all makes you very confident that there are infinite people, learning that you are only 60-billion-something person updates you back to normality, making you indifferent between short and long history. Pretty neat, right? But where are all the infinities coming from, then?
The thing is, there is a problem with such huge compensatory updates. Unless it's explicitly set in the conditions of the experiment, an SIA follower will virtually never believe that they are merely the 60-billion-somethingth person, because they are initially so extremely sure they are somewhere around infinity. They would instead believe in multiverses/simulations/huge alien civilizations/God creating infinite people/all of it at once - anything that would explain why they are not actually among the first 60 billion people in existence even though it looks like it, and that in practice does not allow them to update back to normality. So outside of the Doomsday Argument, SIA ends up quite crazy.
However, as a matter of fact, there is a very simple way around all this. You can discard both SSA and SIA. Consider this modification of paper picking experiment:
Pieces of paper are not given at random. You were guaranteed to receive the paper numbered 6, regardless of which bag it was picked from.
Here you are 1:1 before you were given the paper, 1:1 after you were given it and before you looked at it, and still 1:1 after you saw that it's indeed numbered 6. And this is how one is actually supposed to reason about existence in the Doomsday argument. You are not randomly selected from among all people destined to exist or all possible people, whatever that would mean. Your mind is not an immaterial soul which could be instantiated in the world at any time. Your mind is the product of your brain, which is part of your body, which was created due to a very specific sexual intercourse between your parents. And the same logic applies to your parents and all your ancestors as well. Therefore, you couldn't exist at any other time but now.
If you want to do serious probability then you need to learn the axioms of a sigma-algebra, specify a probability space, and do some measure theory. Contrary to intuition and popular use, the mathematical theory of probability is not some universally applicable guide to philosophical reasoning. "All possible observers" is simply not a class to which the axioms of probability can be applied. It's like a philosopher saying, "assume you are in the set of all sets." That sounds intuitively appealing, but mathematically such a thing is impossible and cannot exist.
Right, so generally what SIA says to do is count up the number of observers that you might be, and favor theories by a factor of N, where N is the number of observers that you might be on a given theory.
And what I am saying is that "the number of observers you might be" is always too large a number to self-consistently apply the axioms of probability theory to. It is like the situation mathematicians faced at the turn of the 20th century: unrestricted set theory was in vogue until Russell discovered his paradox, and they realized they were going to have to give some stuff up. So now we have ZFC, which is very powerful but still restricts what we can do with set theory.
One of the first things you learn when doing measure theory is that you can't get the theory you wanted in the first place. See the opening of Chapter 1 in Folland's Real Analysis, 2nd edition (edited for readability):
"Ideally, for n in the natural numbers, we would like to have a function u that assigns to each subset E of R^n a nonnegative, possibly infinite number u(E). Such a function should surely possess the following properties:
1. If E_1, E_2, .. is a finite or infinite sequence of disjoint sets, then u(E_1 U E_2 U ... ) = u(E_1) + u(E_2) + ...
2. If E is congruent to F (that is, if E can be transformed into F by translations, rotations, and reflections), then u(E) = u(F).
3. u(Q)=1, where Q is the unit cube.
Unfortunately, these conditions are mutually inconsistent."
No, I'm talking about the number of actually existing observers you might be. That will only be too large if you think there are infinite actually existing people.
Sure, but "actually existing observers you might be" is clearly infinite. There are an infinite number of scenarios where you are a brain in a vat, and so these are "actually existing observers you MIGHT be."
In other words, "observers you might be" is way too large; it is like the set of all sets. You will end up with contradictions if you try to apply probability theory or set theory. In restricted circumstances like the Sleeping Beauty problem, you assume for the sake of the problem that you are guaranteed to be one of a finite number of observers, so you can speak meaningfully about these.
The most general version of SIA I can think of is something like this: suppose we apportion our ur-prior p(w) over all possible worlds w in a certain class; these are supposed to represent our chance-based credences in each of those worlds being actual "before we know that we exist," whatever that's supposed to mean. Then in each world, after learning you exist and have evidence V about your situation, your posterior p(w|V) on world w being actual should be updated to p(w)*E[number of conscious observers with evidence V | w is actual] / E[number of conscious observers with evidence V]. Here, (conditional) expectations are taken with respect to our ur-prior.
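In a toy case with finitely many worlds, the rule above is just a reweighting of the ur-prior. The world names and expected observer counts below are made up for illustration:

```python
# Toy instance of the SIA-style rule above: reweight each world's ur-prior
# by E[number of observers with evidence V | w], then normalize (the
# normalizer is E[number of observers with evidence V] under the ur-prior).
ur_prior = {"w_small": 0.5, "w_big": 0.5}
expected_obs_with_V = {"w_small": 1.0, "w_big": 100.0}  # assumed E[N_V | w]

unnorm = {w: ur_prior[w] * expected_obs_with_V[w] for w in ur_prior}
normalizer = sum(unnorm.values())  # this is E[N_V] under the ur-prior
posterior = {w: x / normalizer for w, x in unnorm.items()}
print(posterior)  # the observer-rich world dominates, 100:1
```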
This works perfectly fine in toy scenarios where all these quantities are nice enough (e.g., all the expectations are finite, though this can be weakened a bit); this will define a new, valid probability distribution over our set of worlds. But I agree it seems to get really intractable when we open the floodgates and drop all simplifying cardinality assumptions about the number of worlds and the number of conscious observers. Most likely you'd try to patch it by simply confining yourself to some small-enough but still plausibly broad collection of worlds, e.g., those that have some finitely computable structure and initial conditions in some way. I doubt this can be made to work, but it would be interesting to see it attempted.
Right, we're thinking about this in the same way. And I don't think you will ever get a number of possible worlds which is 1) large enough to be interesting and 2) small enough to be workable.
Also, while I am sympathetic to the Bayesian approach to probability, and think you can meaningfully talk about probabilities for events which will only happen once (like a specific election), it feels like perhaps we're overreaching with the theory when the event is literally "which possible world is the actual world."
Hypothetical scenarios where God randomly decided between two possible universes is one thing, but it is not clear that axioms from that hypothetical can be imported over to why our universe exists.
>Right, we're thinking about this in the same way. And I don't think you will ever get a number of possible worlds which is 1) large enough to be interesting and 2) small enough to be workable.
It depends on what you mean by "workable." I think "worlds with computable laws of physics" may be workable in the sense that an ideal, logically omniscient reasoner could meaningfully try doing probability over it without running into size-related sigma algebra issues, etc. But it's non-workable for us mere mortals in that it's highly computationally intractable. Still, I'd guess Matthew would say that we don't actually need it to be tractable to draw certain qualitative inferences about what kinds of conclusions we'd probably arrive at if we *were* ideal observers and *could* use it, such as that we'd infer the probable infinitude of the number of people, or something.
Nevertheless, it does seem hard to avoid skepticism even if we take this route. Suppose you flip a light switch on in your kitchen. You might expect the lights to go on. But suppose (for example) you think there are worlds where there's guaranteed to be an infinite number of perceptual duplicates of you who flip the light switch on in their kitchens and nothing happens, for whatever reason. If you *also* think some set of those worlds has positive probability, however tiny, then the principle I enunciated above involves infinities that can't obviously be dispensed with and it seems we're completely stuck. And of course this applies to absolutely everything, since there's nothing special about turning on the lights! You'll most likely have to go instead with some approach based on taking limits of finite worlds, and this is going to get even hairier.
>It depends on what you mean by "workable." I think "worlds with computable laws of physics" may be workable in the sense that an ideal, logically omniscient reasoner could meaningfully try doing probability over it without running into size-related sigma algebra issues, etc.
I never learned that much theoretical comp-sci, and so I do not remember exactly what computable means here. The issue I see with this is that aren't there an infinite number of possible worlds which have computable laws of physics? Does "computable" imply that the infinity is at least countable?
So suppose you have some universe U with computable laws of physics. Now consider universe U', which is essentially two identical universe U's separated by a space so large, and moving away so fast, that the light cones will never intersect. Now consider universe U'', which is three universe U's. There is never any observation you could make that would tell you whether you are in universe U, U', or U''...
Also, what happens if you find evidence that the universe you inhabit is not in fact computable?
>I never learned that much theoretical comp-sci, and so I do not remember exactly what computable means here. The issue I see with this is that aren't there an infinite number of possible worlds which have computable laws of physics? Does "computable" imply that the infinity is at least countable?
It depends a little bit on how you actually flesh out the idea I was gesturing at, so I apologize for the ambiguity. Naively, yes, it's going to be a countable number, because there's only a countable number of finite strings/computer programs/whatever that successfully, uniquely, computably describe the physics of a possible world. And you'd attach a prior over them based on some measure of complexity, though even doing this the right way is not totally straightforward. However, for the record, you could also do more sophisticated things: for example, you could treat "the constant parameters" of some universe-description differently from "the laws" and assign continuous probability distributions to the former once you've fixed the latter, and this will give you uncountably many possible worlds where your credence is still reasonably well-behaved. You'd get a countably infinite mixture distribution of continuous probability distributions, which isn't too pathological as far as probability theory goes. Still obviously intractable in practice, of course.
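To illustrate the naive version, here is a sketch of a complexity-weighted prior. Very much a toy: the "programs" are placeholder strings, and a real construction would need a prefix-free encoding for the weights to be summable over a countably infinite enumeration:

```python
# Toy complexity-weighted prior over a finite slice of a countable set of
# world-descriptions: weight each description by 2^(-length in bits), then
# normalize. The description strings are placeholders, and this sketch
# ignores the prefix-free-code subtleties a real Kolmogorov-style prior needs.
worlds = ["0", "10", "110", "111000"]  # hypothetical program strings
weights = {w: 2.0 ** -len(w) for w in worlds}
total = sum(weights.values())
prior = {w: x / total for w, x in weights.items()}
print(prior)  # shorter descriptions get more mass
```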
>So suppose you have some universe U with computable laws of physics. Now consider universe U', which is essentially two identical universe U's separated by a space so large, and moving away so fast, that the light cones will never intersect. Now consider universe U'', which is three universe U's. There is never any observation you could make that would tell you whether you are in universe U, U', or U''...
Sure, but what's the problem with that?
>Also, what happens if you find evidence that the universe you inhabit is not in fact computable?
Yeah, that's definitely a limitation of the formalism, and really of trying to apply the mathematical concept of Kolmogorov complexity to areas of philosophy generally. All non-computable things automatically get priors of 0, at least in a certain sense. Everyone agrees this is an outstanding problem and hopes some research program will come along and fix it with some better generalization somehow.
The fact that you keep running into paradoxes and absurdities should be a sign to step back and question more basic premises, I think. The basic problem with all of this reasoning, especially when trying to reason about God, is that the state of the universe is determined by its prior states and the physical laws and systems involved, not by probabilities and random draws from a hat. The red/blue prisoner thought experiment is in principle totally possible, therefore unobjectionable. But many others are fantastical and aren't tethered to reality. Instead, they are just formal probability problems and paradoxes masquerading as statements about empirical reality. Reminds me a bit of how medieval ontological arguments made a priori claims about the world and inferred something must exist on that basis.
> The obvious answer is that you should be 90% sure your cell is blue. That’s because most people with your current evidence are in blue cells. You don’t know which of the people in the cells you are, but of the ones you might be, most are in blue cells.
I think even in this trivial example there is a potential confusion. The question "which person am I" may or may not be utter nonsense - what would it even mean to be me if I'm not myself? - and it's not immediately clear how to approach it. But we could definitely reason about which room I'm in, which is just a classical probability theory problem, and nothing really changes whether all the rooms are filled with other people or you are the only person, created in a random room among all these rooms.
> So your reference class is the class of entities that you should reason as if you’re randomly selected from. How do SSAers decide on a reference class? Answer: they just basically make it up to comport with their intuitions. There isn’t a principled basis for a reference class!
I really dislike the framework of reference classes because, in my opinion, it creates extra confusion. But if we talk about the matter in these terms, then the reference class is the class of entities you *actually could have been*, according to your knowledge of the causal process that led to your existence. In other words, you should reason about your existence as if you are a random sample from a class that you *actually are* a random sample of, to the best of your knowledge.
I don't understand how it doesn't immediately appear to be absolutely, obviously true. When you blindly pick marbles from a bag, you do not reason as if the marble you got is a random marble from all marbles in the multiverse. No, you reason about it as if it's a random marble specifically from the bag you are picking from. Likewise with your existence. The "principled basis for a reference class" is simply the nature of the causal process/the intention of the creator. If you were always intended to be created in a red jacket, you do not get to update from being created in a red jacket - your reference class is red-jacketed people. If you were intended to be created in a jacket of any color, you do update when you see which color your jacket is - your reference class is people in any jacket.
> Okay, aside from the totally made-up reference class, what’s wrong with SSA? Seems to make sense of our intuitions. But unfortunately, it implies some crazy things.
It seems there is an easy fix: you can just outlaw drawing reference classes throughout time, unless the conditions of the setting explicitly state that this is the case. Then there is no Adam and Eve, no Doomsday Inference, no moving boulders with your mind, and basically everything adds up to normality much better than with SIA.
> Here, I haven’t covered all the views in anthropics
You haven't even started to be creative with your exploration of possible anthropic theories; you're still stuck in the SIA/SSA false dichotomy. As an example of a completely non-presumptuous anthropic theory, here is a thing that I call "Anthropic Agreement Theory," according to which one has to update on anthropic evidence only if both SSA and SIA agree about it. So you can update in all the simple cases, such as the prisoners or God's equal coin toss, but not in the weird ones that lead to presumptuousness.
> But we could definitely reason about which room I'm in - which is just a classical probability theory problem and nothing really changes whether all the rooms are filled with some other people or you are the only person who is created in a random room among all these rooms.
I don't see this? It seems knowing that other people are assigned to rooms as well tells you something about how people are distributed to cells. For example, you could imagine the observers are randomly ordered, and the first ten observers are assigned to the red cells. If you knew this fact and you knew you were alone, then you could be confident you were in a red cell. However, if you knew there are 100 other people then you could be sure even with this assignment mechanism you weren't likely to be in a red cell. This shouldn't impact your probability much, but surely it should influence it a little bit?
I get this is probably orthogonal to the point you are making, but I'm taking this opportunity to point out something I just don't understand about these arguments.
There are n exam tickets and n students. Tickets are picked at random and every student will answer their own ticket. You are a student and have prepared for k of the exam tickets. What are your chances of getting a ticket you know if you go first? Can you improve these chances by going later?
If you go first your chances are simply k/n.
If you go second, then there is a k/n chance that the first student got a ticket you know, reducing your chances to (k-1)/(n-1), and a 1 - k/n chance that they got a ticket you don't know, improving your chances to k/(n-1). So your probability of getting a ticket you know when you go second is:

(k/n) · ((k-1)/(n-1)) + ((n-k)/n) · (k/(n-1)) = (k(k-1) + k(n-k)) / (n(n-1)) = k(n-1) / (n(n-1)) = k/n

Once again, k/n. The same goes for going third, fourth, and so on. Your probability of getting a ticket you know is always k/n; your order doesn't matter, and the existence of other students doesn't matter. What matters is the ratio between tickets you know and tickets you don't.
Of course, if you knew something more about the ticket assignment process, for example that the first person to go always gets the first ticket, you could do much better. But usually in these kinds of problems we do not have the extra information and have to reason simply on priors.
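The k/n claim is easy to check by simulation (a sketch; the specific n, k, and positions are arbitrary):

```python
import random

def p_known_ticket(n, k, position, trials=100_000):
    """Estimate the chance that the student drawing in slot `position`
    (1-indexed) gets one of the k tickets they know, when n tickets are
    handed out in a uniformly random order."""
    hits = 0
    for _ in range(trials):
        tickets = list(range(n))
        random.shuffle(tickets)
        # Tickets 0..k-1 are the ones you prepared for.
        if tickets[position - 1] < k:
            hits += 1
    return hits / trials

# All of these come out near k/n = 0.3, whichever slot you draw in:
print(p_known_ticket(10, 3, 1), p_known_ticket(10, 3, 5), p_known_ticket(10, 3, 10))
```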
So I think I understand and agree with the first four paragraphs! I agree that if you use the uniform prior, these are the outcomes you wind up with.
I think my question is: why do we *always* assume that the outcomes will be distributed according to the uniform prior? Can't our prior be decomposed into a weighted sum of probability mass functions (which represent distribution methods)?
Let me state what I'm thinking clearly so it can be refuted.
Suppose there are three possible distribution functions and we don't know which one is chosen, so we use a uniform prior. The first assigns prisoners to red rooms until the red rooms are filled. The second assigns prisoners to blue rooms until the blue rooms are filled. The third just randomly assigns prisoners to rooms. If there are a hundred prisoners, the odds of ending up in a blue room are 90% under each mechanism: under the first there is a 90 percent chance you are among the last 90 prisoners, under the second a 90 percent chance you are among the first 90 prisoners, and under the third a 90 percent chance of being placed in a blue room (according to the logic you include above).
Now suppose there is only one person. Then there is a 1/3 + (1/3)*(1/10) probability you will end up in a red room, a 1/3 + (1/3)*(9/10) chance you will end up in a blue room.
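For concreteness, here is the exact computation for both cases under the three mechanisms described above (10 red rooms and 90 blue rooms assumed; the function framing is mine):

```python
# Exact computation for the example above: 10 red rooms, 90 blue rooms, and
# three equally likely assignment mechanisms (fill red rooms first, fill blue
# rooms first, uniformly random). Assumes n_prisoners <= 100.
RED, BLUE = 10, 90

def p_blue(n_prisoners):
    """P(a random prisoner ends up in a blue room), averaged over mechanisms."""
    fill_red_first = max(0, n_prisoners - RED) / n_prisoners   # mechanism 1
    fill_blue_first = min(n_prisoners, BLUE) / n_prisoners     # mechanism 2
    random_assign = BLUE / (RED + BLUE)                        # mechanism 3
    return (fill_red_first + fill_blue_first + random_assign) / 3

print(p_blue(100))  # 0.9 with a hundred prisoners
print(p_blue(1))    # 1/3 + (1/3)*(9/10) with one person: about 0.633
```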
Why shouldn't I expect my prior to change based on the presence or absence of other people? Is my example misleading?
Edit: Just to give a little more substance to respond to, I think there are at least three counterarguments to the point I'm making with my example. The first is that the number of people changes the probability of a given outcome for a specific mechanism but when you combine all mechanisms together they cancel out. My example shows the opposite because it does not account for all possible assignment mechanisms. The second is that the uniform prior is just a useful heuristic and we need some standard. The third is that there is some Occam's razor like principle which rationally prevents us from speculating about the distribution mechanism.
The first would be the most compelling to me, but I think it would be hard to demonstrate this is always the case. If you know of a paper showing this, that would completely satisfy me! The second is a good reason for using a uniform prior, but it makes anthropic arguments much less interesting to me, as the uniform prior could be quite far from the true distribution. While this is fine for inference with many data points, updating on a single fact will likely not bring the prior close to the underlying distribution. The third seems implausible to me because we know the prisoners must be assigned somehow. I'm not positing anything extra, just thinking about a mechanism we know took place. I admit it is speculative, in the sense that the problem does not specify the range of possible assignment mechanisms, but there must be some assignment mechanism. I'm sure there are more.
> The first is that the number of people changes the probability of a given outcome for a specific mechanism but when you combine all mechanisms together they cancel out
> The second is that the uniform prior is just a useful heuristic and we need some standard.
Yes, this is the case. We can treat this as a rule according to which we should reason with a uniform prior about things we have no particular knowledge of. And this rule is grounded in the fact that when you can't privilege any particular hypothesis, all the alternatives cancel out.
Consider all possible rules of ticket assignment with regard to yourself. Let's assume that you are the only person taking the exam, for now. There are n mutually exclusive hypotheses:
1) you always get first ticket
2) you always get second ticket
3) you always get third ticket
...
i) you always get i-th ticket
...
n) you always get n-th ticket
When you don't have any information about which hypothesis is more likely than the others, you end up in a situation where all of them are equally likely: every reason to think that you get the i-th ticket applies equally to any other ticket as well.
Now consider a situation where there are an additional n-1 people with whom you are taking the exam. For every hypothesis about which ticket you get, there are (n-1)! sub-hypotheses about how all the other tickets are allocated among the other people. But, likewise, as long as you don't have any way to privilege one over another, they are equally likely from your state of knowledge. And so we can reduce this situation back to the previous example; accounting for the additional people doesn't change anything about your probability of receiving a specific ticket.
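This reduction can be checked by brute force for a small n: enumerate all equally likely full assignments and look at your own marginal (a sketch; n and your index are arbitrary):

```python
from itertools import permutations

# Enumerate every equally likely full assignment of n tickets to n students
# and check that *your* marginal over tickets is uniform: the (n-1)!
# sub-hypotheses about everyone else's tickets wash out.
n = 4
you = 0  # your index among the students
counts = [0] * n
for assignment in permutations(range(n)):  # assignment[s] = ticket of student s
    counts[assignment[you]] += 1

total = sum(counts)
print([c / total for c in counts])  # uniform: each ticket has probability 1/4
```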
> the uniform prior could be quite far from the true distribution.
Yes, absolutely! Probability theory is about reasoning under uncertainty, not some unlimited access to the pure truth of the universe. Sometimes you can reason correctly according to your state of knowledge and still be ridiculously off the mark, because your state of knowledge is just inadequate. This is true regardless of whether you are reasoning about anthropics or not.
I see! I completely agree with your reasoning for the first few parts. I'll express my confusion about your last paragraph because I think this is where I still have some confusion. It's going to sound overconfident, so I want to disclaim that I am more uncertain than how I will sound and I welcome counterarguments.
My impression was that probability theory was about reasoning in a purely formal universe where everything is comprehensible and composed of a sigma-algebra, a state space, and a probability measure. In contrast, statistical inference allows for some underlying uncertainty. In this way, reasoning about hypothetical scenarios, e.g. the Sleeping Beauty problem (which I agree with your take on, by the way), is intended to deliver truths about the world. In contrast, scientific research relies on priors which could be mistaken. But scientific research differs from anthropics in that the underlying probability distributions can be tested, challenged, and falsified. In contrast, the underlying assumption of indifference in anthropic reasoning cannot be challenged with empirical evidence. It is possible to perform sensitivity analyses on it with thought-experiment evidence (which is what I see myself as doing when I talk about different assignment mechanisms). And I suppose this is my problem: the results of anthropic reasoning seem to me to be very sensitive to assumptions which we cannot falsify, and any sensitivity analysis is heavily dependent on assumptions which we also cannot falsify. Therefore, it seems imprudent to rely too heavily on these kinds of arguments.
I'm sure I'm probably missing a lot, but I find the above case somewhat convincing.
> My impression was that probability theory was about reasoning in a purely formal universe where everything is comprehensible and composed of a sigma algebra, a state space, and a probability measure. In contrast, statistical inference allows for some underlying uncertainty.
Oh, this is a fascinating problem you are talking about - how can math describe the physical universe at all? Let's take a step back and look at a simple example.
Consider arithmetic. It describes a purely formal universe where everything is comprehensible and composed of numbers. The statement 1+1=2 is a formal tautology in this arithmetical universe. And yet, somehow it seems to describe the behaviour of physical objects in our universe! If I take one apple and put it next to another apple, there will be two apples. How come?
Now, I'm not going to spoil the answer to this question for you - it's a rare opportunity to solve such a philosophical conundrum yourself. For now it should be enough to understand that the same reason that allows arithmetic to describe the behaviour of apples allows probability theory to describe the reasoning a rational agent should have under uncertainty.
Essentially, we describe a mechanism that outputs outcomes from a sample space and can be run indefinitely. Every iteration is statistically independent from the previous ones, and the probability of outputting each outcome on each iteration equals the probability of the corresponding elementary event from the sigma-algebra over the sample space. And we make sure that this mechanism corresponds to our knowledge state about something that happens in the physical world. So when we have some physical process, we approximate it as an iteration of some probability experiment. The process doesn't have to be random, per se; the randomness usually comes from the imperfection of the approximation, which creates uncertainty about the actual nature of the process.
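A minimal sketch of this idea in Python (the coin and its weights are illustrative assumptions, not anything fixed by the discussion): a mechanism that can be run indefinitely, with independent iterations and fixed outcome probabilities, standing in for our knowledge state about some physical process.

```python
import random

# Sketch of a "probability experiment": a mechanism run indefinitely,
# each iteration independent, outputting outcomes from a fixed sample
# space with fixed probabilities. The weights encode our knowledge state
# about the process, not the process itself.
SAMPLE_SPACE = ["heads", "tails"]
WEIGHTS = [0.5, 0.5]

def heads_frequency(iterations, seed=0):
    """Run independent iterations; frequencies approach the weights."""
    rng = random.Random(seed)
    outcomes = [rng.choices(SAMPLE_SPACE, WEIGHTS)[0] for _ in range(iterations)]
    return outcomes.count("heads") / iterations

print(heads_frequency(100_000))  # close to 0.5
```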
> But scientific research differs from anthropics in that the underlying probability distributions can be tested, challenged, and falsified. In contrast, the underlying assumption of indifference in anthropic reasoning cannot be challenged with empirical evidence.
The reason why we have trouble testing anthropic probabilities empirically is that "your existence" happens only once. But this is the same problem as with other non-frequentist probabilities. Suppose that I'm to bet on the result of a specific coin toss. The coin is not necessarily fair. What odds should I name? Even though this particular coin toss happens only once, we can still apply the framework of a probability experiment and see where it points. The same goes for anthropics.
What steers me away from dealing with anthropics? Here's what I see:
*) Assume a ridiculous situation with two possible ways of reaching it: A or B. Which one more likely happened?
*) If you choose A, it leads to this ridiculous presumption. If you choose B, it leads to this other ridiculous presumption. But A "solves" more ridiculous situations than B, so you should choose A, right?
Me: Not necessarily. We are postulating ridiculous situations, so by definition it will have ridiculous conclusions. Why should I bother thinking about ridiculous situations with ridiculous conclusions in the first place? Maybe the more ridiculous answer is correct, based on the ridiculous situation we began with, on the basis that "ridiculous is as ridiculous does".
This is almost certainly a dumb question, but suppose that there are infinitely many people in the universe (sure, let it be Beth 2 or whatever). How are there also such large numbers of other things in the universe? For example, there's a chair across the table from me, but there could be a person there instead. If there were a person there, there'd be more people in the universe. Thus the universe can't contain all possible people. I'm sure this goes wrong somewhere, but where?
Thanks for the answer. I talked to a friend who knows more math than I do about this for a bit, and it seems like it makes sense so long as you accept some views about infinity.
But then, suppose that each of the Beth 2 people (philosophical persons, so including animals and whatnot) has a single util U (they probably vary some, but I doubt the existence of negative lives; maybe this works even if there are negative lives). Therefore, by the same logic, we should have infinite utility, specifically Beth 2 utils. So there's no reason to try to help other people if this is true, because there are already Beth 2 utils, so nothing you can do can make there be more utils, just like nothing God can do can make there be more people.
My guess is that you escape this counterintuitive result by denying that utils work like chairs and people do, but I'm not sure how exactly utils would work differently. Or is the solution to postulate negative lives?
All these hypothetical scenarios are self-licking ice cream cones for nerds. Similar to writing articles and papers about a hypothetical form of chess with different rules that no one will ever play.
Nice post. Thanks for integrating hyperlinks. I wonder if you could explain how you calculate the relative “weirdness” of various positions beyond pure intuition (or, if it is just intuition, why that isn’t a major flaw). Impact calculus, ya know.
Also, are these *really* arguments based off your own existence? It seems like you always introduce additional facts beyond existence (beyond even existence and the hypothetical) in order for your intuitions to hold weight. Imagine you have a purely disembodied mind with no external experiences whatsoever. Then it gets informed that there are two theories. On theory 1, 5 people get created; on theory 2, only 1 person is created.
Now in the post you link, you say that “person A” should have 5x credence in theory 1. But this adds an additional fact! This person knows that they are “person A”. I feel like the halfer intuition gets much stronger if you remove that additional piece of evidence.
I'm going to ask an absurdly dumb question, because I just don't understand anthropic arguments and just want to give up when I hear them.
Don't SIA accepters accept the existence of a reference class? Otherwise, wouldn't the SIA give us reason to accept panpsychism, by saying it is more likely that there are experiencing things? If you accept the SIA, I'm assuming you don't believe this increases the chance electrons are conscious?
Again, I apologize, I have no idea what's even happening.
Just saw this extensively addressed in Joe Carlsmith's SIA > SSA Part 1 sections II-IV. I see now that SIA simply uses the grouping of observers in a given epistemic situation very differently from reference classes in SSA. If anyone else was having this problem I'd recommend: https://joecarlsmith.com/2021/09/30/sia-ssa-part-1-learning-from-the-fact-that-you-exist
Yeah, not a dumb question, I was going to respond to your comment but I forgot.
I'm surprised by the number of comments which are just "these questions are dumb and you should feel bad for asking them" rather than making any objective claim or response to anything.
Firstly, sometimes asking questions like these is just good fun.
Secondly, this does actually have some real world implications, e.g. doomsday argument, whether we should expect to live in a universe with lots of aliens, etc.
It seems that SIA implies there are likely very many observers (presumptuous philosopher), whereas SSA implies fairly few (doomsday argument).
Shouldn't it somehow be possible to accept both so that they cancel out? Just spitballing here.
Good question! Short answer: no, as I explain here https://benthams.substack.com/p/alternatives-to-sia-are-doomed
Longer answer: neither SIA nor SSA says the raw number of people is what matters. Instead, SSA says what matters is the % of people in your reference class that you might be, and SIA says what matters is the number of people that you might be. To accommodate cases like the one where 10 guys with red shirts and one with a blue shirt get created, you have to think that you're randomly drawn from the actually existing people compatible with your current evidence.
Suppose you combined SSA and SIA in this way: you construct a reference class (say, conscious beings) and then reason as if you're randomly selected from your reference class, while also thinking that theories that predict more people you might be are better. So in terms of anthropic updating, you multiply by the number of observers that you might be and then multiply by the share of your reference class that's populated by observers you might be.
Let's apply this to a case. A coin is flipped. Heads means one guy with a red shirt gets created. Tails means 2 guys with red shirts and 10 guys with blue shirts. I have a red shirt. Tails gives the probabilistic value of 2x2/10=2/5. Heads gives the value of 1.
Now let's apply it to doomsday. Heads means one guy gets created, tails means 1,000,000 people get created. I'm the first guy. I would have a very massive update in favor of doom--it doesn't cancel.
Let's call this view double update SSA. It doesn't work to cancel out doom, for the reason described.
Now, there is a view that's a combination of SSA and SIA. That view is just called SIA! In fact, in Bostrom's original formulation, what I call SIA is called SSA+SIA. SSA+SIA says that you reason as if you're randomly drawn from the pool of people in your reference class--where you reference class is people you might be--and then your existence gives you evidence that there are more of them.
Let's say that God flips a coin. He always creates two red-shirted guys; only when it lands tails does he also create blue-shirted guys. I get created and am made aware of the rules of the game.
My view is that if I look down and notice that I am wearing a red shirt (I just did: turns out I am wearing a red shirt!), I should give even odds on the result of the coin flip, since I know that *something having the experience that I am having right now* is equally likely in either scenario.
That's SIA, my friend. Both theories predict 2 redshirted guys, so they're equal.
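A toy computation of the SIA verdict in this coin case (a sketch: the count of blue-shirted guys on tails is an arbitrary assumption, and it drops out, since only red-shirted observers are ones I might be):

```python
from fractions import Fraction

# SIA sketch: weight each world by the number of observers you might be
# (here, red-shirted observers), then normalize. The blue count on tails
# is an invented number; it never enters the calculation.
worlds = {
    "heads": {"prior": Fraction(1, 2), "red": 2, "blue": 0},
    "tails": {"prior": Fraction(1, 2), "red": 2, "blue": 10},
}

def sia_posterior(worlds, might_be="red"):
    weights = {w: d["prior"] * d[might_be] for w, d in worlds.items()}
    total = sum(weights.values())
    return {w: wt / total for w, wt in weights.items()}

print(sia_posterior(worlds))  # heads and tails both 1/2: even odds
```

Both worlds contain the same number of red-shirted observers, so the SIA weights are equal and the coin stays 50/50.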
Explain then how your anthropic theism argument works: https://benthams.substack.com/p/the-anthropic-argument-for-theism
It seems to rely on the number of people that can exist, but I only care about the number of red shirted people that can exist, because people wearing any other color of shirt don't affect my probabilities.
(of course you can make almost any mind wear a literal red shirt, I am using red-shirtedness as a stand in for any of my own identifiable characteristics. Most important of which is approximate complexity.)
If God makes unsetly many people and has a reason to make one person like you, he has a reason to make infinite people like you, so theism means more people like you exist. Obviously the number of other people is not intrinsically relevant.
I think my existence is quite plausible under naturalism. Sure, I am only one of infinite possible minds, but the relative simplicity of my particular mind seems to make it that *it* in particular could easily arise naturally.
> Shouldn't it somehow be possible to accept both so that they cancel out? Just spitballing here.
This is exactly what happens in Doomsday argument. Consider:
Suppose there are two indistinguishable bags with numbered pieces of paper. The first bag has 10 pieces of paper and the second has 10,000. You were given a random piece of paper from one of the bags, and it happens to have number 6. What should your credence be that you've just picked a piece of paper from the first bag?
Before you look at your paper, your credence is 1:1 between the two bags. After you see the 6, you should update a lot in favor of the first bag. This is how SSA treats your existence: initially you are indifferent between short and long history, but after you've learned your birth rank you update a lot in favor of short history.
Now consider this modification:
> You pick the piece of paper from the bag yourself. The bags are sized so that when you put your hand into the second bag you always find a piece of paper immediately at the bottom, while in the first bag there is a lot of empty space, so you may not find a piece of paper immediately. You immediately find a piece of paper in the bag. It is numbered 6. What should your credence be that you've just picked a piece of paper from the first bag?
Here you are initially 1:1 before you put your hand into the bag. Immediately finding the piece of paper heavily updates you in favor of the second bag. And learning that the number is 6 heavily updates you in favor of the first bag. These two updates exactly cancel each other out, and therefore you are back to 1:1 between the two bags.
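The two compensating updates in this bag example can be computed directly (a sketch with exact fractions):

```python
from fractions import Fraction

# Two-bag toy model. Priors 1:1; then two updates:
# (1) "found a paper immediately" - likelihood proportional to the
#     number of papers in the bag (the SIA-style update);
# (2) "the paper is numbered 6" - likelihood 1/(papers in bag)
#     (the SSA-style update).
bags = {"small": 10, "large": 10_000}
prior = {b: Fraction(1, 2) for b in bags}

after_find = {b: prior[b] * bags[b] for b in bags}            # update (1)
after_number = {b: after_find[b] * Fraction(1, bags[b]) for b in bags}  # update (2)

total = sum(after_number.values())
posterior = {b: p / total for b, p in after_number.items()}
print(posterior)  # back to 1:1 - the updates cancel exactly
```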
This is how SIA treats your existence: learning that you exist at all makes you very confident that there are infinitely many people, and learning that you are only the 60-billion-somethingth person updates you back to normality, making you indifferent between short and long history. Pretty neat, right? But where are all the infinities coming from, then?
The thing is, there is a problem with such huge compensatory updates. Unless it's explicitly set in the conditions of the experiment, an SIA follower will virtually never believe that they are merely the 60-billion-somethingth person, because they start out so extremely sure of being somewhere around infinity. They would instead believe in multiverses/simulations/huge alien civilizations/God creating infinite people/all of it at once - anything that would explain why they are not actually among the first 60 billion people in existence even though it looks like it, and that in practice does not allow them to update back to normality. So outside of the Doomsday Argument, SIA followers end up quite crazy.
However, as a matter of fact, there is a very simple way around all this. You can discard both SSA and SIA. Consider this modification of the paper-picking experiment:
> Pieces of paper are not given at random. You were guaranteed to receive the paper numbered 6, regardless of which bag it was picked from.
Here you are 1:1 before you were given the paper, 1:1 after you were given it and before you looked at it, and still 1:1 after you saw that it's indeed numbered 6. And this is how one is actually supposed to reason about existence in the Doomsday argument. You are not randomly selected from all people destined to exist or all possible people, whatever that means. Your mind is not an immaterial soul which could be instantiated in the world at any time. Your mind is the product of your brain, which is part of your body, which was created by a very specific sexual intercourse between your parents. And the same logic applies to your parents and all your ancestors as well. Therefore, you couldn't have existed at any other time but now.
If you want to do serious probability then you need to learn the axioms of a sigma-algebra, specify a probability space, and do some measure theory. Contrary to intuition and popular use, the mathematical theory of probability is not some universally applicable guide to philosophical reasoning. “All possible observers” is simply not a class to which the axioms of probability can be applied. It’s like a philosopher saying, “assume you are in the set of all sets.” That sounds intuitively appealing, but mathematically such a thing is impossible and cannot exist.
Right, so generally what SIA says to do is count up the number of observers that you might be, and favor theories by a factor of N, where N is the number of observers that you might be on a theory.
And what I am saying is that "the number of observers you might be" is always too large a number to self-consistently apply the axioms of probability theory to. It is like the mathematicians at the turn of the 20th century: unrestricted set theory was in vogue until Russell discovered his paradox, and they realized they were going to have to give some stuff up. So now we have ZFC, which is very powerful, but still restricts what we can do with set theory.
One of the first things you learn when doing measure theory is that you can't get the theory you wanted in the first place. See the opening of Chapter 1 in Folland's Real Analysis, 2nd edition (edited for readability):
"Ideally, for n in the Natural numbers, we would like to have a function u that assigns each subset of R^n a positive, possibly infinite number u(E). Such a function should surely possess the following properties:
1. If E_1, E_2, .. is a finite or infinite sequence of disjoint sets, then u(E_1 U E_2 U ... ) = u(E_1) + u(E_2) + ...
2. If E is congruent to F (that is, if E can be transformed into F by translations, rotations, and reflections), then u(E) = u(F).
3. u(Q)=1, where Q is the unit cube.
Unfortunately, these conditions are mutually inconsistent."
No, I'm talking about the number of actually existing observers you might be. That will only be too large if you think there are infinite actually existing people.
Sure, but "actually existing observers you might be" is clearly infinite. There are an infinite number of scenarios where you are a brain in a vat, and so these are "actually existing observers you MIGHT be."
In other words, "observers you might be" is way too large, it is like the set of sets. You are going to end up with contradictions if you try to apply probability theory or set theory. In restricted circumstances like the Sleeping Beauty problem, you assume for the sake of the problem that you are guaranteed to be one of a finite number of observers. So you can speak meaningfully about these.
The most general version of SIA I can think of is something like this: suppose we apportion our ur-prior p(w) over all possible worlds w in a certain class; these are supposed to represent our chance-based credences in each of those worlds being actual "before we know that we exist," whatever that's supposed to mean. Then in each world, after learning you exist and have evidence V about your situation, your posterior p(w|V) on world w being actual should be updated to p(w)*E[number of conscious observers with evidence V | w is actual] / E[number of conscious observers with evidence V]. Here, (conditional) expectations are taken with respect to our ur-prior.
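In a toy finite case, the update just described can be computed directly (a sketch: the world names, ur-prior, and expected observer counts are invented for illustration).

```python
# Toy implementation of the SIA update above: posterior on world w is
# proportional to the ur-prior p(w) times the expected number of
# conscious observers with evidence V in w. All numbers are invented.
ur_prior = {"few": 0.5, "many": 0.5}
expected_observers_with_V = {"few": 1, "many": 5}

def sia_update(prior, expected):
    unnormalized = {w: prior[w] * expected[w] for w in prior}
    z = sum(unnormalized.values())  # this is E[number of observers with V]
    return {w: u / z for w, u in unnormalized.items()}

posterior = sia_update(ur_prior, expected_observers_with_V)
print(posterior)  # the "many"-observer world gets 5x the credence
```

The normalizer z is exactly the unconditional expectation in the denominator of the formula above, so the posterior is a valid probability distribution whenever all these quantities are finite.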
This works perfectly fine in toy scenarios where all these quantities are nice enough (e.g., all the expectations are finite, though this can be weakened a bit); this will define a new, valid probability distribution over our set of worlds. But I agree it seems to get really intractable when we open the floodgates and drop all simplifying cardinality assumptions about the number of worlds and the number of conscious observers. Most likely you'd try to patch it by simply confining yourself to some small-enough but still plausibly broad collection of worlds, e.g., those that have some finitely computable structure and initial conditions in some way. I doubt this can be made to work, but it would be interesting to see it attempted.
Right, we're thinking about this in the same way. And I don't think you will ever get a number of possible worlds which is 1) large enough to be interesting and 2) small enough to be workable.
Also, while I am sympathetic to the Bayesian approach to probability, and think you can meaningfully talk about probabilities for events which will only happen once (like a specific election), it feels like we're overreaching with the theory perhaps when the event is literally "what possible world is the actual world."
Hypothetical scenarios where God randomly decided between two possible universes is one thing, but it is not clear that axioms from that hypothetical can be imported over to why our universe exists.
>Right, we're thinking about this in the same way. And I don't think you will ever get a number of possible worlds which is 1) large enough to be interesting and 2) small enough to workable.
It depends on what you mean by "workable." I think "worlds with computable laws of physics" may be workable in the sense that an ideal, logically omniscient reasoner could meaningfully try doing probability over it without running into size-related sigma algebra issues, etc. But it's non-workable for us mere mortals in that it's highly computationally intractable. Still, I'd guess Matthew would say that we don't actually need it to be tractable to draw certain qualitative inferences about what kinds of conclusions we'd probably arrive at if we *were* ideal observers and *could* use it, such as that we'd infer the probable infinitude of the number of people, or something.
Nevertheless, it does seem hard to avoid skepticism even if we take this route. Suppose you flip a light switch on in your kitchen. You might expect the lights to go on. But suppose (for example) you think there are worlds where there's guaranteed to be an infinite number of perceptual duplicates of you who flip the light switch on in their kitchens and nothing happens, for whatever reason. If you *also* think some set of those worlds has positive probability, however tiny, then the principle I enunciated above involves infinities that can't obviously be dispensed with and it seems we're completely stuck. And of course this applies to absolutely everything, since there's nothing special about turning on the lights! You'll most likely have to go instead with some approach based on taking limits of finite worlds, and this is going to get even hairier.
>It depends on what you mean by "workable." I think "worlds with computable laws of physics" may be workable in the sense that an ideal, logically omniscient reasoner could meaningfully try doing probability over it without running into size-related sigma algebra issues, etc.
I never learned that much theoretical comp-sci, so I do not remember exactly what computable means here. The issue I see with this is: aren't there an infinite number of possible worlds which have computable laws of physics? Does "computable" imply that the infinity is at least countable?
So suppose you have some universe U with computable laws of physics. Now consider universe U', which is essentially two identical copies of U separated by a space so large, and moving apart so fast, that their light cones will never intersect. Now consider universe U'', which is three copies of U. There is never any observation you could make that would tell you whether you are in universe U, U', or U''...
Also, what happens if you find evidence that the universe you inhabit is not in fact computable?
>I never learned that much theoretical comp-sci, so I do not remember exactly what computable means here. The issue I see with this is: aren't there an infinite number of possible worlds which have computable laws of physics? Does "computable" imply that the infinity is at least countable?
It depends a little bit on how you actually flesh out the idea I was gesturing at, so I apologize for the ambiguity. Naively, yes, it's going to be a countable number, because there's only a countable number of finite strings/computer programs/whatever that successfully, uniquely, computably describe the physics of a possible world. And you'd attach a prior over them based on some measure of complexity, though even doing this the right way is not totally straightforward. However, for the record, you could also do more sophisticated things: for example, you could treat "the constant parameters" of some universe-description differently from "the laws" and assign continuous probability distributions to the former once you've fixed the latter, and this will give you uncountably many possible worlds where your credence is still reasonably well-behaved. You'd get a countably infinite mixture distribution of continuous probability distributions, which isn't too pathological as far as probability theory goes. Still obviously intractable in practice, of course.
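A toy sketch of the naive, countable version (placeholder binary strings stand in for world-descriptions; the 2^(-2|d|) weighting is just one arbitrary choice that keeps the total mass finite, not the real Solomonoff-style construction):

```python
from fractions import Fraction

# Sketch of a complexity-weighted prior over a countable set of world
# descriptions: shorter descriptions get exponentially more mass.
# The "descriptions" are placeholder strings, not real physics.
def complexity_prior(descriptions):
    weights = {d: Fraction(1, 4 ** len(d)) for d in descriptions}  # 2^(-2|d|)
    z = sum(weights.values())
    return {d: w / z for d, w in weights.items()}

worlds = ["0", "1", "00", "01", "10", "11", "000"]
prior = complexity_prior(worlds)
print(prior["0"] > prior["00"] > prior["000"])  # True: simpler worlds favored
```

The normalization step is what makes this well-behaved even over a countably infinite set: with the 2^(-2|d|) weighting, the total mass over all binary strings converges, so a genuine probability distribution exists.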
>So suppose you have some universe U with computable laws of physics. Now consider universe U', which is essentially two identical copies of U separated by a space so large, and moving apart so fast, that their light cones will never intersect. Now consider universe U'', which is three copies of U. There is never any observation you could make that would tell you whether you are in universe U, U', or U''...
Sure, but what's the problem with that?
>Also, what happens if you find evidence that the universe you inhabit is not in fact computable?
Yeah, that's definitely a limitation of the formalism, and really of trying to apply the mathematical concept of Kolmogorov complexity to areas of philosophy generally. All non-computable things automatically get priors of 0, at least in a certain sense. Everyone agrees this is an outstanding problem and hopes some research program will come along and fix it with some better generalization somehow.
The fact that you keep running into paradoxes and absurdities should be a sign to step back and question more basic premises, I think. The basic problem with all of this reasoning, especially when trying to reason about God, is that the state of the universe is determined by its prior states and the physical laws and systems involved, not by probabilities and random draws from a hat. The red/blue prisoner thought experiment is in principle totally possible, therefore unobjectionable. But many others are fantastical and aren’t tethered to reality. Instead, they are just formal probability problems and paradoxes, masquerading as statements about empirical reality. Reminds me of bit of how medieval ontological arguments made a priori claims about the world and inferred something must exist on that basis.
> The obvious answer is that you should be 90% sure your cell is blue. That’s because most people with your current evidence are in blue cells. You don’t know which of the people in the cells you are, but of the ones you might be, most are in blue cells.
I think even in this trivial example there is a potential confusion. The question "which person am I" may or may not be utter nonsense - what would it even mean to be me if I'm not myself? - it's not immediately clear how to approach it. But we could definitely reason about which room I'm in - which is just a classical probability theory problem, and nothing really changes whether all the rooms are filled with some other people or you are the only person who is created in a random room among all these rooms.
> So your reference class is the class of entities that you should reason as if you’re randomly selected from. How do SSAers decide on a reference class? Answer: they just basically make it up to comport with their intuitions. There isn’t a principled basis for a reference class!
I really dislike the framework of reference classes because, in my opinion, it creates extra confusion. But if we talk about the matter in these terms, then your reference class is the class of entities you *actually could have been*, according to your knowledge of the causal process that led to your existence. In other words, you should reason about your existence as if you are a random sample from a class that you *actually are* a random sample of, to the best of your knowledge.
I don't understand how this doesn't immediately appear obviously true. When you blindly pick marbles from a bag you do not reason as if the marble you got is a random marble from all marbles in the multiverse. No, you reason about it as a random marble specifically from the bag you are picking from. Likewise with your existence. The "principled basis for a reference class" is simply the nature of the causal process/the intention of the creator. If you were always intended to be created in a red jacket, you do not get to update upon being created in a red jacket - your reference class is red-jacketed people. If you were intended to be created in a jacket of any color, you do update when you see which color your jacket is - your reference class is people in any jacket.
> Okay, aside from the totally made-up reference class, what’s wrong with SSA? Seems to make sense of our intuitions. But unfortunately, it implies some crazy things.
It seems there is an easy fix - you can just outlaw drawing reference classes across time, unless the conditions of the setting explicitly state that this is the case. Then there is no Adam and Eve, no Doomsday Inference, no moving boulders with your mind, and basically everything adds up to normality much better than with SIA.
> Here, I haven’t covered all the views in anthropics
You haven't even started to be creative with your exploration of possible anthropic theories - you're still stuck in the SIA/SSA false dichotomy. As an example of a completely non-presumptuous anthropic theory, here is a thing that I call "Anthropic Agreement Theory", according to which one has to update on anthropic evidence only if both SSA and SIA agree about it. So you can update in all the simple cases, such as the prisoners or God's equal coin toss, but not in the weird ones that lead to presumptuousness.
> But we could definitely reason about which room I'm in - which is just a classical probability theory problem, and nothing really changes whether all the rooms are filled with some other people or you are the only person who is created in a random room among all these rooms.
I don't see this. It seems knowing that other people are assigned to rooms as well tells you something about how people are distributed to cells. For example, you could imagine the observers are randomly ordered, and the first ten observers are assigned to the red cells. If you knew this fact and you knew you were alone, then you could be confident you were in a red cell. However, if you knew there were 100 other people, then even with this assignment mechanism you could be sure you weren't likely to be in a red cell. This shouldn't impact your probability much, but surely it should influence it a little bit?
I get this is probably orthogonal to the point you are making, but I'm taking this opportunity to point out something I just don't understand about these arguments.
Sure. Consider this:
There are n exam tickets and n students. Tickets are picked at random and every student will answer their own ticket. You are a student and have prepared for k of the n exam tickets. What are your chances of getting a ticket you know if you go first? Can you improve these chances by going later?
If you go first your chances are simply k/n.
If you go second, then there is a k/n chance that the first student got a ticket you know, reducing your chances to (k-1)/(n-1), and a 1-k/n chance that they got a ticket you don't know, improving your chances to k/(n-1). So your probability of getting a ticket you know when you go second is:
(k/n)*(k-1)/(n-1) + (1 - k/n)*k/(n-1) = k/(n(n-1)) * ((k-1) + (n-k)) = k(n-1)/(n(n-1)) = k/n
Once again k/n. The same goes for going third, fourth, and so on. Your probability of getting a ticket you know is always k/n: your order doesn't matter, the existence of other students doesn't matter; what matters is the ratio between tickets you know and tickets you don't.
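A quick Monte Carlo check of this derivation (a sketch; n, k, and the positions are chosen arbitrarily):

```python
import random

# Monte Carlo check of the k/n result: the student's chance of drawing a
# known ticket is k/n regardless of when they go. Tickets 0..k-1 stand
# for the tickets the student has prepared.
def chance_of_known_ticket(n, k, position, trials=200_000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        tickets = list(range(n))
        rng.shuffle(tickets)           # random order in which tickets get drawn
        hits += tickets[position] < k  # did our student draw a known ticket?
    return hits / trials

print(chance_of_known_ticket(10, 3, position=0))  # close to 3/10
print(chance_of_known_ticket(10, 3, position=7))  # also close to 3/10
```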
Of course, if you knew something more about the ticket assignment process - for example, that the first person to go always gets the first ticket - you could do much better. But usually in these kinds of problems we do not have the extra information and have to reason simply on priors.
So I think I understand and agree with the first four paragraphs! I agree that if you use the uniform prior, these are the outcomes you wind up with.
I think my question is: why do we *always* assume that the outcomes will be distributed according to the uniform prior? Can't our prior be decomposed into a weighted sum of probability mass functions (which represent distribution methods)?
Let me state what I'm thinking clearly so it can be refuted.
Suppose there are three possible distribution functions and we don't know which one is chosen, so we use a uniform prior over them. The first assigns prisoners to red rooms until the red rooms are filled. The second assigns prisoners to blue rooms until the blue rooms are filled. The third just randomly assigns prisoners to rooms. If there are a hundred prisoners, the odds of ending up in a blue room are 90%: there is a 90 percent chance you are in the last 90 prisoners (under the first mechanism), a 90 percent chance you are in the first 90 prisoners (under the second), and a 90 percent chance of being placed in a blue room (under the third, by the logic you include above).
Now suppose there is only one prisoner. Then there is a 1/3 + (1/3)*(1/10) probability you will end up in a red room and a 1/3 + (1/3)*(9/10) chance you will end up in a blue room.
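Both calculations can be sanity-checked with a short sketch. I'm assuming 10 red rooms and 90 blue rooms (implied by the 1/10 and 9/10 figures) and a uniform 1/3 prior over the three mechanisms; the function name is mine:

```python
def p_blue(num_prisoners, red=10, blue=90):
    """P(a randomly chosen one of `num_prisoners` prisoners ends up
    in a blue room), averaging over three equally likely mechanisms."""
    total = red + blue
    # Mechanism 1: red rooms filled first -> blue iff you are not
    # among the first `red` prisoners.
    m1 = max(0, num_prisoners - red) / num_prisoners
    # Mechanism 2: blue rooms filled first -> blue iff you are
    # among the first `blue` prisoners.
    m2 = min(blue, num_prisoners) / num_prisoners
    # Mechanism 3: uniformly random room assignment.
    m3 = blue / total
    return (m1 + m2 + m3) / 3

print(p_blue(100))  # 0.9
print(p_blue(1))    # 1/3 + (1/3)*(9/10) ≈ 0.6333
```

With 100 prisoners every mechanism gives 0.9, so the mixture is 0.9; with a single prisoner the mechanisms disagree and the mixture drops to about 0.63, which is the dependence on the number of people being asked about.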
Why shouldn't I expect my prior to change based on the presence or absence of other people? Is my example misleading?
Edit: Just to give a little more substance to respond to, I think there are at least three counterarguments to the point I'm making with my example. The first is that the number of people changes the probability of a given outcome for a specific mechanism but when you combine all mechanisms together they cancel out. My example shows the opposite because it does not account for all possible assignment mechanisms. The second is that the uniform prior is just a useful heuristic and we need some standard. The third is that there is some Occam's razor like principle which rationally prevents us from speculating about the distribution mechanism.
The first would be the most compelling to me, but I think it would be hard to demonstrate this is always the case. If you know of a paper showing this, that would completely satisfy me! The second is a good reason for using a uniform prior, but it makes anthropic arguments much less interesting to me, as the uniform prior could be quite far from the true distribution. While this is fine for inference with many data points, updating on a single fact will likely not bring the prior close to the underlying distribution. The third seems implausible to me because we know the prisoners must be assigned somehow. I'm not positing anything extra, just thinking about a mechanism we know took place. I admit it is speculative, in the sense that the problem does not specify the range of possible assignment mechanisms, but there must be some assignment mechanism. I'm sure there are more.
> The first is that the number of people changes the probability of a given outcome for a specific mechanism but when you combine all mechanisms together they cancel out
> The second is that the uniform prior is just a useful heuristic and we need some standard.
Yes, this is the case. We can treat this as a rule according to which we should reason with a uniform prior about things we do not have any particular knowledge of. And this rule is grounded in the fact that when you can't privilege any particular hypothesis, all the alternatives cancel out.
Consider all possible rules of ticket assignment with regard to yourself. Let's assume, for now, that you are the only person taking the exam. There are n mutually exclusive hypotheses:
1) you always get the first ticket
2) you always get the second ticket
3) you always get the third ticket
...
i) you always get the i-th ticket
...
n) you always get the n-th ticket
When you don't have any information about which hypothesis is more likely than another, you end up in a situation where all of them are equally probable: every reason to think that you get the i-th ticket applies equally to any other ticket.
Now consider a situation where there are an additional n-1 people with whom you are taking the exam. For every hypothesis about which ticket you get, there are (n-1)! sub-hypotheses about how all the other tickets are allocated among the other people. But, likewise, as long as you don't have any way to privilege one over another, they are equally likely from your state of knowledge. And so we can reduce this situation back to the previous example: accounting for the additional people doesn't change anything about your probability of receiving a specific ticket.
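The reduction can be verified by brute force for a small n: enumerate every full allocation of tickets to people and check that your marginal stays uniform. A sketch with n = 4 and you as person 0 (my own framing of the argument):

```python
from collections import Counter
from itertools import permutations
from math import factorial

n = 4
# Each permutation of tickets-to-people is one full sub-hypothesis;
# with no reason to privilege any of them, all n! are equally likely.
# perm[0] is the ticket you receive under that sub-hypothesis.
counts = Counter(perm[0] for perm in permutations(range(n)))

# Marginalizing over the other people's (n-1)! allocations leaves
# a uniform 1/n chance of each ticket for you.
for ticket in range(n):
    print(ticket, counts[ticket] / factorial(n))  # 0.25 for every ticket
```

Each ticket appears as yours in exactly (n-1)! of the n! allocations, so the extra people wash out of your marginal, as the paragraph above argues.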
> the uniform prior could be quite far from the true distribution.
Yes, absolutely! Probability theory is about reasoning under uncertainty, not about unlimited access to the pure truth of the universe. Sometimes you can reason correctly according to your state of knowledge and still be ridiculously off the mark, because your state of knowledge is simply inadequate. This is true regardless of whether you are reasoning about anthropics or not.
I see! I completely agree with your reasoning in the first few parts, so I'll focus on your last paragraph, which is where I still have some confusion. What follows is going to sound overconfident, so I want to disclaim that I am more uncertain than I will sound, and I welcome counterarguments.
My impression was that probability theory was about reasoning in a purely formal universe where everything is comprehensible and composed of a sigma algebra, a state space, and a probability measure. In contrast, statistical inference allows for some underlying uncertainty. In this way, reasoning about hypothetical scenarios, e.g. the Sleeping Beauty problem (which I agree with your take on, by the way), is intended to deliver truths about the world, whereas scientific research relies on priors which could be mistaken. But scientific research differs from anthropics in that the underlying probability distributions can be tested, challenged, and falsified; the underlying assumption of indifference in anthropic reasoning cannot be challenged with empirical evidence. It is possible to perform sensitivity analyses on it with thought-experiment evidence (which is what I see myself as doing when I talk about different assignment mechanisms). And I suppose this is my problem: the results of anthropic reasoning seem to me to be very sensitive to assumptions which we cannot falsify, and any sensitivity analysis is heavily dependent on assumptions which we also cannot falsify. Therefore, it seems imprudent to rely too heavily on these kinds of arguments.
I'm sure I'm probably missing a lot, but I find the above case somewhat convincing.
> My impression was that probability theory was about reasoning in a purely formal universe where everything is comprehensible and composed of a sigma algebra, a state space, and a probability measure. In contrast, statistical inference allows for some underlying uncertainty.
Oh, this is a fascinating problem you are raising: how can math describe the physical universe at all? Let's take a step back and look at a simple example.
Consider arithmetic. It describes a purely formal universe where everything is comprehensible and composed of numbers. The statement 1+1=2 is a formal tautology in this arithmetical universe. And yet, somehow, it seems to describe the behaviour of physical objects in our universe! If I take one apple and put it next to another apple, there will be two apples. How come?
Now, I'm not going to spoil the answer to this question for you - it's a rare opportunity to solve such a philosophical conundrum yourself. For now it should be enough to understand that the same thing that allows arithmetic to describe the behaviour of apples allows probability theory to describe the reasoning a rational agent should perform under uncertainty.
The "bridging component" between the math of probability theory and actual events in our world is the notion of a "probability experiment": https://en.wikipedia.org/wiki/Experiment_(probability_theory)
Essentially, we describe a mechanism that outputs outcomes from a sample space and can be run indefinitely. Every iteration is statistically independent from the previous ones, and the probability of outputting each outcome on each iteration equals the probability of the corresponding elementary event from the sigma-algebra over the sample space. And we make sure that this mechanism corresponds to our state of knowledge about something that happens in the physical world. So when we have some physical process, we approximate it as an iteration of some probability experiment. The process doesn't have to be random per se; the randomness usually comes from the imperfection of the approximation, which creates uncertainty about the actual nature of the process.
> But scientific research differs from anthropics in that the underlying probability distributions can be tested, challenged, and falsified. In contrast, the underlying assumption of indifference in anthropic reasoning cannot be challenged with empirical evidence.
The reason we have trouble testing anthropic probabilities empirically is that "your existence" happens only once. But this is the same problem as with other non-frequentist probabilities. Suppose that I'm to bet on the result of a specific coin toss. The coin is not necessarily fair. Which odds should I name? Even though this particular coin toss happens only once, we can still apply the framework of a probability experiment and see where it points. The same goes for anthropics.
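The one-shot coin bet can be made concrete. One way to model total ignorance about the coin (an assumption of mine, not the only choice) is a uniform prior over its bias; the single toss then becomes one iteration of a probability experiment in which Nature first picks the bias:

```python
import random

def single_toss_heads(trials=200_000, seed=0):
    """Simulate many runs of the experiment: draw an unknown bias
    from a uniform prior, then toss that coin once."""
    rng = random.Random(seed)
    heads = 0
    for _ in range(trials):
        bias = rng.random()       # uniform prior over the coin's bias
        if rng.random() < bias:   # the one toss of that coin
            heads += 1
    return heads / trials

# The marginal probability of heads comes out near 1/2, so fair
# (1:1) odds are the rational bet despite the toss happening once.
print(round(single_toss_heads(), 3))
```

The point is only that a probability can be well-defined for a one-off event given a state of knowledge; a different prior over the bias would name different odds.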
What steers me away from dealing with anthropics? Here's what I see:
*) Assume ridiculous situation with two possible ways of reaching it: A or B. Which one more likely happened?
*) If you choose A, it leads to this ridiculous presumption. If you choose B, it leads to this other ridiculous presumption. But A "solves" more ridiculous situations than B, so you should choose A, right?
Me: Not necessarily. We are postulating ridiculous situations, so by definition it will have ridiculous conclusions. Why should I bother thinking about ridiculous situations with ridiculous conclusions in the first place? Maybe the more ridiculous answer is correct, based on the ridiculous situation we began with, on the basis that "ridiculous is as ridiculous does".
This is almost certainly a dumb question, but suppose that there are infinite people in the universe (sure, let it be Beth 2 or whatever). How are there also such large numbers of other things in the universe? For example, there's a chair across the table from me, but there could be a person there. If there were a person there, there'd be more people in the universe. Thus the universe can't have all possible people. I'm sure this goes wrong somewhere, but where?
There can be Beth 2 people and Beth 2 chairs. Also, they're not all in the universe but in the multiverse.
Thanks for the answer. I talked to a friend who knows more math than I do about this for a bit, and it seems like it makes sense so long as you accept some views about infinity.
But then, suppose that each of the Beth 2 people (philosophical persons, so including animals and whatnot) has a single util U (they probably vary some, but I doubt the existence of negative lives; maybe this works even if there are negative lives). Then, by the same logic, we should have infinite utility, specifically Beth 2 utils. So there's no reason to try to help other people if this is true: there are already Beth 2 utils, and nothing you can do can make there be more utils, just like nothing God can do can make there be more people.
My guess is that you escape this counterintuitive result by denying that utils work like chairs and people do, but I'm not sure how exactly utils would work differently. Or is the solution to postulate negative lives?
All these hypothetical scenarios are self-licking ice cream cones for nerds. Similar to writing articles and papers about a hypothetical form of Chess with different rules that no one will ever play.
How many angels can dance on the head of a pin…
I like the ice cream. If you prefer cheesecake, there are probably plenty of people willing to sell it to you elsewhere.
Nice post. Thanks for integrating hyperlinks. I wonder if you could explain how you calculate the relative “weirdness” of various positions beyond pure intuitions (or, if it is just intuition, why that isn't a major flaw). Impact calculus, ya know.
Also, are these *really* arguments based on your own existence? It seems like you always introduce additional facts beyond existence (beyond even existence plus the hypothetical) in order for your intuitions to hold weight. Imagine a purely disembodied mind with no external experiences whatsoever. It is then informed that there are two theories: on theory 1, five people get created; on theory 2, only one person is created.
Now in the post you link, you say that “person A” should have 5x credence in theory 1. But this adds an additional fact! This person knows that they are “person A”. I feel like the halfer intuition gets much stronger if you remove that additional piece of evidence.
You just use intuitions.