28 Comments

> Presumptuous philosopher

Your defence here is isomorphic to how people usually defend the electoral college. You simply explain how SIA arrives at its conclusion. But just as explaining the way the electoral college works doesn't make the electoral college less unfair, explaining how SIA reasons in this situation doesn't make its reasoning in this situation less crazy.

> But the only bit of it that is controversial is H1=T1.

No. The other controversy is whether T1 and T2 are different outcomes of the experiment or the same one.

In general there are three different types of "anthropic probability problems".

1. A coin is tossed. On Heads n people are randomly selected from some set of possible people and created. On Tails N people are randomly selected from the set of possible people and created. You were created among such people.

Here SIA reasoning is correct. Your existence was not guaranteed by the conditions of the experiment, and so learning that you are created gives you actual evidence about the state of the coin.

2. A coin is tossed. On Heads a person is put into Room 1. On Tails a clone of this person is created, and then either the original is put in Room 1 and the clone in Room 2, or vice versa. You are in a Room and you are unsure whether you are the clone or the original.

Here SSA reasoning is correct. Your existence is guaranteed by the conditions of the experiment, so you do not learn anything from it. However, the room assignment on Tails is random, so learning that you are in Room 1 gives you actual evidence about the state of the coin (see the sketch below, after the third scenario).

3. A person is put into Room 1. A coin is tossed. On Tails a clone of the person is put into Room 2. You are in one of the Rooms, unsure whether you are a clone or the original.

Here both SIA and SSA are wrong. There is neither a chance of not existing in the experiment, nor of being in a different room than the one you are in. So if you learn that you are in Room 1, you do not update on the coin, similar to Double Halfing in Sleeping Beauty.
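
For what it's worth, scenario 2 can be sanity-checked with a rough simulation sketch (Python, purely illustrative). Note that the line which picks "you" uniformly at random among the people who exist is itself an anthropic assumption, so this is bookkeeping for the scenario as described, not an independent argument:

```python
import random

def trial():
    # Scenario 2: on Heads one person is put in Room 1; on Tails the original
    # and the clone are randomly assigned to Rooms 1 and 2.
    heads = random.random() < 0.5
    if heads:
        occupants = {"Room 1": "original"}
    else:
        rooms = ["Room 1", "Room 2"]
        random.shuffle(rooms)
        occupants = {rooms[0]: "original", rooms[1]: "clone"}
    # Disputed modelling step: treat "you" as a uniformly random existing person.
    your_room = random.choice(list(occupants))
    return heads, your_room

N = 200_000
results = [trial() for _ in range(N)]
room1 = [heads for heads, room in results if room == "Room 1"]
print("P(Heads | you are in Room 1) ~", sum(room1) / len(room1))  # ~2/3
```

In scenario 3 there is no random room assignment, so whether learning "I am in Room 1" updates anything depends entirely on the sampling rule you write into that disputed line, which is exactly my point.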

The fact that mainstream anthropic theories try to reason the same way in all of these completely different scenarios inevitably makes them crazy in the general case.

//You simply explain how SIA arrives at its conclusion. But just as explaining the way the electoral college works doesn't make the electoral college less unfair, explaining how SIA reasons in this situation doesn't make its reasoning in this situation less crazy.//

Sometimes explaining how a view arrives at the conclusion can make it less implausible. You can come to see the plausibility of the view by seeing that the reasoning which it uses to arrive at the conclusion is highly reasonable. Coming to see that the presumptuous philosopher result is simply what you get when you don't think the relative probability of you being one of two people is affected by the presence of other people makes it more reasonable.

//No. The other controversy is whether T1 and T2 are different outcomes of the experiment or the same one.//

No, what? There are three possible things that might be true. You might be the first person and the coin came up tails, you might be the second person and the coin came up tails, and you might be the first person and the coin came up heads. So long as they're equally likely, 2/3 in tails is right. This reasoning doesn't depend on whether you count them as different or the same outcomes of the experiment.
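
Spelling that out (granting, of course, the equal-likelihood claim that is precisely what's in dispute): P(H1) = P(T1) = P(T2) = 1/3, so P(tails) = P(T1) + P(T2) = 2/3, while P(heads) = P(H1) = 1/3.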

What in the world is 1 supposed to be? What anthropic problems involve this?

Re 2 and 3, they're obviously equivalent. If two clones are made on tails and one on heads, the rooms they're in should be irrelevant to probabilistic reasoning if you can't see your room number.

Would encourage you, if you truly have brilliantly revolutionized anthropics, to try to publish a paper on your solution.

> Sometimes explaining how a view arrives at the conclusion can make it less implausible. You can come to see the plausibility of the view by seeing that the reasoning which it uses to arrive at the conclusion is highly reasonable.

Then you should empathize with people who prove that the electoral college is fair by explaining how it works. It can work as a rhetorical trick, but I thought the point is finding the truth of the matter.

The presumptuous philosopher argument works as a kind of reductio ad absurdum:

1. Your theory claims that X->Y is a valid inference in general.

2. But if we apply this inference from X1, we get X1->Y1, and Y1 is absurd.

3. Therefore X->Y is not a valid inference in general and your theory is wrong.

Your (non-)answer:

1. Y1 may seem absurd, but see, it follows from X1 which is true

2. And X->Y is a valid inference in general

3. Therefore we have to accept that Y1 is also true and not absurd

Whether X->Y is a valid inference is exactly the premise being challenged; you can't use it to add credibility to the claim that Y1 is not absurd, otherwise you are counting the evidence twice.

What you should've done instead is openly say that you bite the bullet in this case, instead of pretending that you have a refutation. Otherwise we get an infinite loop of "one man's modus ponens is another man's modus tollens", which doesn't move the discussion forward in any way.

> There are three possible things that might be true.

The experiment as described has only two outcomes: Heads&NoClone; Tails&Clone

https://en.wikipedia.org/wiki/Experiment_(probability_theory)

> This reasoning doesn't depend on whether you count them as different or the same outcomes of the experiment.

If T1 and T2 are the same outcome, then P(H) + P(T1) + P(T2) doesn't have to equal 1.

Consider a simple coin toss. Suppose I define H as "coin is Heads", T1 as "coin is Tails in the first second after it landed", T2 as "coin is Tails in the second second after it landed", and so on up to Tn.

P(H) = P(T1) = P(T2) = ... = P(Tn)

but since T1, T2, ..., Tn are all the same outcome of the experiment, this doesn't mean that P(H) isn't 1/2.
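
Just to make the bookkeeping explicit, here's a quick simulation sketch of this toy example (Python, nothing anthropic going on yet; the Tk labels are my own):

```python
import random

n = 10           # how many "Tails in the k-th second" labels we define
N = 100_000      # simulated tosses

heads_count = 0
t_counts = [0] * n    # t_counts[k] counts the event Tk

for _ in range(N):
    if random.random() < 0.5:
        heads_count += 1
    else:
        # T1, ..., Tn are just different labels for the same Tails outcome,
        # so all of them occur on every Tails toss.
        for k in range(n):
            t_counts[k] += 1

print("P(H)  ~", heads_count / N)                      # ~0.5
print("P(Tk) ~", [round(c / N, 2) for c in t_counts])  # each ~0.5
# P(H) = P(T1) = ... = P(Tn), and yet P(H) is still 1/2, because the Tk
# are not mutually exclusive outcomes -- they all coincide.
```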

> Re 2 and 3, they're obviously equivalent. If two clones are made on tails and one on heads, the rooms they're in should be irrelevant to probabilistic reasoning if you can't see your room number.

The mechanism of assignment is relevant. It's the same principle as with only taking into account people you *could've been* and not every shrimp in the sea. If you couldn't possibly have been in Room 2, then you do not update on learning that you are in Room 1.

> Would encourage you, if you truly have brilliantly revolutionized anthropics, to try to publish a paper on your solution.

Sigh. I still want to believe that using probability theory lawfully isn't revolutionary at all, but yes, I suppose I should actually try to publish. Could you give me a tip on where to start? Do I write directly to a journal? Is there one you would recommend?

//Then you should empathize with people who prove that the electoral college is fair by explaining how it works. It can work as a rhetorical trick, but I thought the point is finding the truth of the matter.//

No, what? The fact that sometimes showing how a view gets a result vindicates the result doesn't mean it always does. If you were previously skeptical that, say, the derivative of x^3 is 3x^2 and then saw how one got that, your skepticism should evaporate. Whether the presumptuous philosopher is a reductio depends on how bad the result is, but if the way the view arrives at it serves to make it look plausible then it's a less forceful reductio.

I do bite the bullet but I have things to say about why it's not a bad bullet bite and is independently plausible.

//The mechanism of assignment is relevant. It's the same principle as with only taking into account people you *could've been* and not every shrimp in the sea. If you couldn't possibly have been in Room 2, then you do not update on learning that you are in Room 1.//

I still don't quite get your view. Is it that you take the fact that you exist in a scenario as a given and then look at the odds you'd have different properties given that you exist?

For the record, I also think my view is just being basically Bayesian about your existence--one attractive feature of it. And unlike your view, mine obeys conservation of evidence and updates in accordance with Bayes--you get an update by learning you're the first person.

This is a coherent explanation of your view. I find this view to be rather crazy (probability, on the margin, is _about_ counting slices!), but it's coherent.

Would be curious to hear what you make of the arguments I gave for the view.

Well, I see your arguments, and I believe that the answer you reject as "obviously" wrong is the right one in some cases, and in the others the premise already presumes some version of anthropics.

"...the basic idea is that if a theory predicts N times more people that you might currently be, it predicts your present existence N times as well."

I don't get this part where you say, "more people that you might currently be". That notion is so odd to me, and I'm not sure it's meaningful.

Incidentally, who has the best argument against SIA? I'd be interested in reading that. Can you refer me to some articles or Substack posts or whatever? I feel like I need to get the opposite perspective on this issue.

My friend Mark who often leaves comments on my posts about SIA has the best objections. All the published ones are just repeating the presumptuous philosopher argument.

//I don't get this part where you say, "more people that you might currently be". That notion is so odd to me, and I'm not sure it's meaningful.//

Why? Suppose that there are two people, Fred and Tom. I woke up with amnesia, not sure which of them I am. In this case, there are two people I might be. Nothing confusing about it!

The scenarios are different. If you are a clone waking up among many, you may suspect the scenario where many clones woke up. That is not the same as being a person among many and positing more persons existing. The latter suggests there's some chance of not existing at all.

There is a tremendous metaphysical assumption buried in there about individual "souls" (or something like them) waiting to exist. But supposing some other metaphysical assumption about reality were true--like a single conscious viewpoint playing through all the slices, no matter how many or how few--the number of slices would not be relevant.

"Note, it only makes sense to think a theory makes your existence likelier if the theory means there are more people you might currently be."

You already lost me there. If I wake up post-creation, I would know that I am specifically myself. From that I know that I cannot (and therefore also might not) be more than one person. What would it even mean to be more than one person? The simplest case would be that I am two persons. But since 2 != 1, that contradicts the implicit premise that I am one person. So... should I have to assume that I am not a person?

Can you explain why you think a self-locating probability exists at all? Why should I expect there to be some probability associated with my own perspective?

Well, if there are multiple ways you could be (maybe you're the first person, maybe the second), etc., then you'll have to assign credences to things.

I understand assigning credences to something external, but I don’t think that I buy that it means anything for the first person perspective. I looked for some writing that matches my thinking and found this article, which has more thought put into it.

https://www.lesswrong.com/posts/heSbtt29bv5KRoyZa/the-first-person-perspective-is-not-a-random-sample

'Self-locating probability' isn't quite right. It's not that there's some special kind of probability associated with your own perspective (however that might work) -- rather, one question Bayesians are interested in is how your credences should be revised in the light of self-locating (or 'de se') evidence, which is an issue that becomes especially salient in problems like Sleeping Beauty. (I say 'revised' advisedly, because on some views the only norms of rationality are synchronic, but even such 'time-slice' views will need some way to take self-location into account: see David Builes's 'Time-Slice Rationality and Self-Locating Belief'.)

There's some probability that you are the first person.

Finally, this is why I read your blog!!! I personally love the posts about the SIA! I've spent a ton of time trying to ponder this, and I have gotten myself very confused. But I think I disagree with you fundamentally. (Sorry for writing such a long comment!)

You wrote:

> However, the non-SIAer ends up concluding that H1 has probability .5, while T1 has probability 0—it's one of infinitely many equally probable options. This means that they should think that if they are the first person, it's infinitely likelier that the future fair unflipped coin will come up heads than that it will come up tails. This is nuts—surely you shouldn't think that if you're going to flip a coin later, and you're the first guy in a potentially long series, there's a 100% chance it will come up heads just because if it comes up tails a bunch of people get created. The odds the coin will come up heads shouldn't be affected by the presence of the other people.

I think this is incorrect(?) If you correctly believe you are the first person because you know the coin hasn't been flipped yet, then you have a feature which distinguishes you from every potential tail-clone in the future. Namely, you will observe the coin flip which creates them. Technically, at this point in time you know you could be T1 or H1, so there is no reason to believe you are one or the other. I'm happy to argue about this at greater length and in more detail, so just say the word if this is unconvincing!

Now there still is some strange stuff going on here. I think back to the betting argument you made a while ago. In addition, let's assume that each clone and the original see a coin flip which is indistinguishable between them, but only the coin flip the original sees impacts the outcome. Before observing the coin flip, it seems that the probability of the first flip being heads is equal to the joint probability that I am first and that I get heads. I think it is reasonable to treat these as independent, and under this assumption P(H1) = P(H* & 1) = .25. It's clear that if I see the coin come up tails, I should take any odds that the result of the first coin flip was tails. However, if the coin comes up heads, then the odds that the first coin flip resulted in heads are the odds that I am first. To me, it seems reasonable to treat this as 50-50. This seems perfectly satisfactory to me. The math works out. This means that, ex ante, P(T* & 1) = .25. This is also perfectly reasonable; it means that if you are the first person, there is an equal chance you will observe heads or tails, which seems right.

What is weird is that this seems to imply that if you don't know anything about your position, there is a 75 percent chance that tails is selected. I do not have a great intuition for this, except that it seems to fall in between non-SIA and SIA assumptions. Also weird is that this approach seems to reject the assumed uniform distribution of souls across possible beings. The intuition I have for this is that later beings' experiences are 50 percent less likely to happen, and so should have a lower probability.

I don't actually see why this argument is incorrect. It doesn't seem to fit nicely in the dichotomy you presented either. I'm not sure what I am doing wrong here, but it seems basically right to me. I'm willing to accept the weird stuff which appears in the model.

If you want a counterargument to your model rather than an alternative, allow me to make another presumptuous philosopher argument. Isn't SIA strong evidence for eternal return?

//Namely, you will observe the coin flip which creates them. Technically at this point in time you know you could be T1 or H1, so there is no reason to believe you are one or the other.//

We can imagine the coin is flipped after you're dead.

//To me, it seems reasonable to treat this as 50-50. This seems perfectly satisfactory to me. //

It's not reasonable because the view implies that if you're the first person, then it's infinitely likelier that a fair coin that hasn't been flipped yet will come up heads than tails. But if that's true, then if you learn you are the first person (suppose the first person will be told after a year of life, long before the coin is flipped), you should be infinitely certain that a fair coin that hasn't yet been flipped will come up heads.

The probability math is wrong.

"Before observing the coin flip it seems that the probability of first flip being heads is equal to the odds I am first and the odds I get heads. "

The odds that you're first are 2/3 for the reasons described in the post, and the odds of heads given that are 1/2.

Thanks for your reply! I was super tired when I wrote this, so I fully accept there is a good chance stuff could be wrong. I'm still pretty confused! That said, I do not fully understand your reply and I will present my disagreement and questions I have.

// We can imagine the coin is flipped after you're dead.

I agree the way I worded this makes my argument wrong after you update it. In that way it is not very robust. However, I'm not sure whether observation matters very much, as I was trying to get at with the next paragraph.

// But if that's true then if you learn you are the first person, then you should be infinitely certain that a fair unflipped coin will come up heads.

I believe this only follows if you think that you are equally likely to be any of the people who show up in the future. But that does not seem clear to me. An alternate theory could predict in this situation that there is a 50 percent chance you are first, and a fifty percent chance you are any other person. The reason is that there is 50 percent weighted probability that no other person exists, and 50 percent probability that infinitely many other people exist. This seems intuitive to me.

// The probability math is wrong.

This is really the only thing that I care about. If I understand your counterargument, then I think you are begging the question.

Consider the following scenario:

Someone you could potentially be is created, then a coin is flipped. If heads, that person is the only one created; if tails, infinitely many clones are created. Each observes a coin flip.

1. P(1st)= .5

2. P(Not 1st) = .5

3. P(H* (you observe heads) ) = .5

4. P(T* (you observe tails) ) = .5

5. P(H1 (probability 1st flip was heads) ) = .5

6. P(T1) = .5

7. P(H1) = P(H* & 1) because if the first flip is heads, then you will be the only one to observe heads and you will be first.

8. P(H* & 1) = P(H*)P(1) = .25 by independence because the result of the coin flip should be independent of when you appear.

9. P(H1 | T*) = 0, because if you observe tails, either you are first, in which case the first flip was tails and H1 is false, or you are not first, in which case tails must have been flipped anyway.

10. P(H1 | H*) = P(H* & 1)/P(H*) = P(H*)P(1)/P(H*) = P(1) = .5 by 7, 8, and 1 respectively.

So, having written it out, I'm pretty confident the math works. I think it would be helpful for me to see which numbered claim you disagree with and why. If I had to guess, we disagree on 1 and 2. And for what it's worth, if I thought the probability of being 1st was 0, then I would agree with SIA. But that seems to be precisely what is in question here. For the time being, I've offered an alternative schema which works both with the scenario and allows the non-SIA person to satisfactorily say there is only a 50 percent chance of getting heads.
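
In case it helps, here is a rough Monte-Carlo sketch of the model I have in mind (Python). To be clear about what is assumed rather than derived: claim 1, P(1st) = .5, is hardcoded (and is exactly what you reject), clones exist only if the first flip was tails, and the clones' indistinguishable dummy flips are modelled as independent fair flips. With those assumptions it reproduces claims 9 and 10, and the .25 figure from claims 7 and 8:

```python
import random

def trial():
    # Hardcoded assumption (claim 1): you are the first person with probability .5,
    # independently of everything else.
    first = random.random() < 0.5
    if first:
        h1 = random.random() < 0.5               # the flip that actually matters
        observes_heads = h1                      # the first person sees the real flip
    else:
        h1 = False                               # clones exist only if the first flip was tails
        observes_heads = random.random() < 0.5   # indistinguishable dummy flip
    return h1, observes_heads

N = 200_000
runs = [trial() for _ in range(N)]
h_star = [h1 for h1, obs in runs if obs]         # trials where you observe heads
t_star = [h1 for h1, obs in runs if not obs]     # trials where you observe tails

print("P(H1)      ~", sum(h1 for h1, _ in runs) / N)   # ~0.25 (claims 7-8)
print("P(H1 | H*) ~", sum(h_star) / len(h_star))       # ~0.5  (claim 10)
print("P(H1 | T*) ~", sum(t_star) / len(t_star))       # ~0.0  (claim 9)
```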

Again, sorry for writing a long comment and a long reply. I really appreciate your posts on anthropics because they give me an opportunity to clarify my thinking and challenge my own viewpoint. You have helped me to understand the SIA perspective significantly better!

//I believe this only follows if you think that you are equally likely to be any of the people who show up in the future. But that does not seem clear to me. An alternate theory could predict in this situation that there is a 50 percent chance you are first, and a fifty percent chance you are any other person. The reason is that there is 50 percent weighted probability that no other person exists, and 50 percent probability that infinitely many other people exist. This seems intuitive to me.//

Remember we're talking about the conditional probability that you're first if the coin comes up tails. So this is the odds that you're first given that there are infinite people. But if there are infinite people with your exact evidence, the odds you're any particular one are zero.

I reject 1--I think P(1st) is not .5 but zero. The reason for that is that P(1st)|tails=P(first)|heads.

P(1st)|tails is 0% of P(tails).

This is exactly where we disagree, and now you've illustrated to me why my proposed alternative is counterintuitive.

In my model, P(1st | T*) = .5 = P(1st| H*). If you observe the coin flip you have no information about your position. And this does entail that if tails is flipped, you are infinitely less likely to be 2nd or 3rd or nth than 1st. I agree this is really, uncomfortably weird. It seems strange that the chance you are first is dependent on the odds of the coin you decide to flip. But, SIA is really weird too!

I should say, I'm not actually super confident in the robustness of my example. I'm not sure how I would respond if the numbers were not infinite. And I worry that is doing a lot of work. When I applied the way I interpreted this model to the case where only one additional clone was created, I got that there was a 50 percent chance you would be second. I'm alright with this. When I considered flipping a fair coin each time a person is created, and if tails the procedure is repeated, I got P(1st | T*) = P(1st) = .5, P(2nd | T*) = P(2nd) = .25, P(3rd | T*) = P(3rd) = .125. But I don't know what happens if there are one original and four clones, and then if tails another infinite number of clones are created. I don't know what happens if the coin doesn't have even odds. I should work through these examples before I get too serious.

Want to just talk about this over discord at some point? My discord is omnizoid.

I'm interested! I do not know whether it will be worth your time. I'll reach out.

>If there are two people, John and Fred, why should your credence in you being them depend on the presence of other people? It seems the default view should be that the relevant probabilities aren’t affected.

I don't see why that's the default view! In general, this thesis doesn't seem to be more obvious than SIA. You go on to give an example which you think motivates the default-view claim:

>If I know that I’m Jack, and heads and tails will both result in me being created, why should I think either is likelier than the other?

But the same thing applies here. This is no more or less obvious to me than SIA, and is in a certain sense an identical claim to SIA. The non-SIA people will argue that one is likelier because it was surprising that you ended up discovering yourself to be Jack in worlds where there were lots of other people you could have ended up discovering yourself to be.

The situation here kind of reminds me of all the various ontological arguments that rely on premises which seem on inspection to be closely equivalent to, and thus at least as dubious as, the conclusion of God's existence. One will probably be able to find lots of creative ways to do this.

For example, another way of characterizing SIA is to posit that self-locating evidence alone can inform us about the outcome of past chance events but never give us probabilistic information about the outcome of future chance events relative to our priors. I don't see why that should be the default versus, say, the other way around. Most of our best physical theories seem to work well both backwards and forwards, so why not be skeptical by default of extreme temporal asymmetries in anthropic theories?

If I'm reading you correctly, this works out to be (equivalent to) the thirder rule in Briggs (2010): https://philpapers.org/rec/BRIPAV-2

Yes, if this is right one should third.

I’m not going to pretend I followed any of this. But:

Say you flip a fair coin. If it comes up heads I’m me, down to the smallest detail. If it comes up tails I could be anything else, with even the slightest deviation from Me counting as an alternate possibility - maybe I’ll still be Lasagna and a lawyer, but I’ll be a half inch shorter, or maybe I’ll be an Australian woman named Jill who teaches scuba diving. Every slight deviation from Me indicates Tails, and there are an infinite number of possible deviations.

Doesn’t that mean under SIA that it’s impossible that heads could have ever been flipped? And yet we know it was a fair coin.

No. SIA cares about the number of time slices you might currently be. You know you're not any of the time slices that will exist in the future, so they don't affect the SIA calculus--whether they're slices of you or of a woman named Jill. That's why SIA doesn't say, for instance, that if a coin will be flipped that creates a bunch of clones of you if it comes up heads, it will probably come up heads.
