This is a coherent explanation of your view. I find this view to be rather crazy (probability, on the margin, is _about_ counting slices!), but it's coherent.
I'd be curious to hear your thoughts on the arguments I gave for the view.
Well, I see your arguments, and I believe that the answer you reject as "obviously" wrong is the right one in some cases, and in the others the premise already presumes some version of anthropics.
> Presumptuous philosopher
Your defence here is isomorphic to how people usually defend the electoral college. You simply explain how SIA arrives at its conclusion. But just as explaining the way the electoral college works doesn't make the electoral college less unfair, explaining how SIA reasons in this situation doesn't make its reasoning in this situation less crazy.
> But the only bit of it that is controversial is H1=T1.
No. The other controversy is whether T1 and T2 are different outcomes of the experiment or the same one.
In general there are three different types of "anthropic probability problems".
1. A coin is tossed. On Heads, n people are randomly selected from some set of possible people and created. On Tails, N people are randomly selected from the same set and created. You are one of the people who were created.
Here SIA reasoning is correct. Your existence was not guaranteed by the conditions of the experiment, so learning that you were created gives you actual evidence about the state of the coin.
2. A coin is tossed. On Heads a person is put into Room 1. On Tails a clone of this person is created and then either the original is put in Room 1 and clone in Room 2 or vice versa. You are in a Room and you are unsure whether you are the clone or the original.
Here SSA reasoning is correct. Your existence is guaranteed by the conditions of the experiment, so you do not learn anything from it. However, the room assignment on Tails is random, so learning that you are in Room 1 gives you actual evidence about the state of the coin.
3. A person is put into Room 1. A coin is tossed. On Tails a clone of the person is put into Room 2. You are in one of the Rooms, unsure whether you are a clone or the original.
Here both SIA and SSA are wrong. There is neither a chance of not existing in the experiment nor of being in a different room than the one you are in. So if you learn that you are in Room 1, you do not update on the coin, similarly to Double Halfing in Sleeping Beauty.
The fact that mainstream anthropic theories try to reason the same way in all of these completely different scenarios inevitably makes them crazy in the general case.
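To make the contrast concrete, here is a minimal Bayes sketch of all three cases. This is just my gloss, not anything from the post, and M, n, and N are made-up illustrative numbers (a pool of M possible people, with n created on Heads and N on Tails):

```python
from fractions import Fraction

def p_heads(p_evidence_given_heads, p_evidence_given_tails, prior=Fraction(1, 2)):
    """Posterior P(Heads | evidence) for a fair coin, by Bayes' rule."""
    h = prior * p_evidence_given_heads
    t = (1 - prior) * p_evidence_given_tails
    return h / (h + t)

M, n, N = 100, 1, 10  # hypothetical pool size and head/tail creation counts

# Case 1: evidence = "I was created". A given possible person is created
# with chance n/M on Heads and N/M on Tails, so the update is real.
print(p_heads(Fraction(n, M), Fraction(N, M)))  # n/(n+N) = 1/11

# Case 2: evidence = "I am in Room 1". Guaranteed on Heads; a 50/50
# random assignment on Tails. Learning the room is real evidence.
print(p_heads(Fraction(1), Fraction(1, 2)))     # 2/3

# Case 3: existence and room are both fixed in advance, so (on this view)
# "I am in Room 1" carries no information about the coin: P(Heads) stays 1/2.
```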
Finally, this is why I read your blog!!! I personally love the posts about the SIA! I've spent a ton of time trying to ponder this, and I have gotten myself very confused. But I think I disagree with you fundamentally. (Sorry for writing such a long comment!)
You wrote:
> However, the non-SIAer ends up concluding that H1 has probability .5, while T1 has probability 0—it's one of infinitely many equally probable options. This means that they should think that if they are the first person, it's infinitely likelier that the future fair unflipped coin will come up heads than that it will come up tails. This is nuts—surely you shouldn't think that if you're going to flip a coin later, and you're the first guy in a potentially long series, there's a 100% chance it will come up heads just because if it comes up tails a bunch of people get created. The odds the coin will come up heads shouldn't be affected by the presence of the other people.
I think this is incorrect(?) If you correctly believe you are the first person because you know the coin hasn't been flipped yet, then you have a feature which distinguishes you from every potential tail-clone in the future: namely, you will observe the coin flip which creates them. Technically, at this point in time you know you could be T1 or H1, so there is no reason to believe you are one rather than the other. I'm happy to argue about this at greater length and in more detail, so just say the word if this is unconvincing!
Now there still is some strange stuff going on here. I think back to the betting argument you made a while ago. In addition, let's assume that the original and each clone see a coin flip which is indistinguishable between them, but only the coin flip the original sees impacts the outcome.

Before observing the coin flip, it seems that the probability of being first and seeing heads is the product of the odds I am first and the odds I see heads. I think it is reasonable to treat these as independent, and under this assumption P(h1) = P(h* & 1) = .25. It's clear that if I see the coin come up tails, I should take any odds that the result of the first coin flip was tails. However, if the coin comes up heads, then the odds that the first coin flip resulted in heads are just the odds that I am first. To me, it seems reasonable to treat this as 50-50. This seems perfectly satisfactory to me; the math works out. This means that ex ante, P(t* & 1) = .25. This also is perfectly reasonable: it means that if you are the first person, there is an equal chance you will observe heads or tails, which seems right.

What is weird is that this seems to imply that if you don't know anything about your position, there is a 75 percent chance that tails is selected. I do not have a great intuition for this, except that it seems to fall in between non-SIA and SIA assumptions. Also weird is that this approach seems to reject the assumed uniform distribution of souls across possible beings. The intuition I have for this is that later beings' experiences are 50 percent less likely to happen, and so should have a lower probability.
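If it helps, here's a quick Monte Carlo sketch of the model I have in mind, under my assumptions above (I'm the original with probability .5, independent of everything, and a clone's indistinguishable flip is an independent fair coin). It reproduces the .25, the 50-50, and the 75 percent figures:

```python
import random

def trial():
    """One run: am I the original, what flip do I see, what was the real flip?"""
    i_am_first = random.random() < 0.5     # assumed 50-50, independent of everything
    my_flip_heads = random.random() < 0.5  # the flip I see is fair either way
    if i_am_first:
        real_heads = my_flip_heads         # only the original's flip counts
    else:
        real_heads = False                 # clones only exist if the real flip was tails
    return i_am_first, my_flip_heads, real_heads

runs = [trial() for _ in range(1_000_000)]

# P(first & see heads) -- about .25
print(sum(f and h for f, h, _ in runs) / len(runs))

# P(real flip heads | I see heads) -- about .5, i.e. the odds I am first
heads_seen = [r for _, h, r in runs if h]
print(sum(heads_seen) / len(heads_seen))

# P(real flip tails | I see tails) -- exactly 1 in this model
tails_seen = [r for _, h, r in runs if not h]
print(sum(not r for r in tails_seen) / len(tails_seen))

# P(real flip tails), knowing nothing about my position -- about .75
print(sum(not r for _, _, r in runs) / len(runs))
```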
I don't actually see why this argument is incorrect. It doesn't seem to fit nicely in the dichotomy you presented either. I'm not sure what I am doing wrong here, but it seems basically right to me. I'm willing to accept the weird stuff which appears in the model.
If you want a counterargument to your model rather than an alternative, allow me to make another presumptuous philosopher argument. Isn't SIA strong evidence for eternal return?
"...the basic idea is that if a theory predicts N times more people that you might currently be, it predicts your present existence N times as well."
I don't get this part where you say, "more people that you might currently be". That notion is so odd to me, and I'm not sure it's meaningful.
Incidentally, who has the best argument against SIA? I'd be interested in reading that. Can you refer me to some articles or Substack posts or whatever? I feel like I need to get the opposite perspective on this issue.
My friend Mark, who often leaves comments on my posts about SIA, has the best objections. All the published ones just repeat the presumptuous philosopher argument.
//I don't get this part where you say, "more people that you might currently be". That notion is so odd to me, and I'm not sure it's meaningful.//
Why? Suppose that there are two people, Fred and Tom, and I wake up with amnesia, not sure which of them I am. In this case, there are two people I might be. Nothing confusing about it!
If I'm reading you correctly, this works out to be (equivalent to) the thirder rule in Briggs (2010): https://philpapers.org/rec/BRIPAV-2
I’m not going to pretend I followed any of this. But:
Say you flip a fair coin. If it comes up heads I’m me, down to the smallest detail. If it comes up tails I could be anything else, with even the slightest deviation from Me counting as an alternate possibility - maybe I’ll still be Lasagna and a lawyer, but I’ll be a half inch shorter, or maybe I’ll be an Australian woman named Jill who teaches scuba diving. Every slight deviation from Me indicates Tails, and there are an infinite number of possible deviations.
Doesn’t that mean under SIA that it’s impossible that heads could have ever been flipped? And yet we know it was a fair coin.
No. SIA cares about the number of time slices you might currently be. You know you're not any of the time slices that will exist in the future, so they don't affect the SIA calculus--whether they're slices of you or of a woman named Jill. That's why SIA doesn't say, for instance, that a coin that will create a bunch of clones of you if it comes up heads is thereby likely to come up heads.
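If it helps, here is a toy version of that calculus; the slice counts are made-up numbers, just to illustrate that future slices drop out:

```python
# SIA weights each hypothesis by how many observer-slices you *might
# currently be* under it; slices that would only exist later don't count.
prior = {"heads": 0.5, "tails": 0.5}
current_slices = {"heads": 1, "tails": 1}      # right now there's exactly one of you either way
future_slices = {"heads": 1, "tails": 10**9}   # tails would create a billion slices *later*

weights = {h: prior[h] * current_slices[h] for h in prior}  # future_slices plays no role
total = sum(weights.values())
posterior = {h: w / total for h, w in weights.items()}
print(posterior)  # {'heads': 0.5, 'tails': 0.5} -- the unflipped coin stays fair
```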