I don’t know about autism, but man, you certainly are obsessive. Did you perhaps receive a chain letter that said you would burn in hell for eternity if you did not harvest a thousand souls for the Lord?
I did not
I'll have to think more about the argument for SIA... but even assuming it holds, you say:
> Now imagine that this “experiment,” so to speak, is going to be run a bunch in the future, and the results will be probabilistically independent. In the future, repeatedly both universes and multiverses will be generated.
Here you make explicit the conditions where anthropic reasoning holds. I agree with those conditions.
But crucially, the "God" issue doesn't meet them! If universes-made-by-God and universes-not-made-by-God were generated probabilistically, then you could presumably use SIA to argue which one you're more likely to find yourself in.
But as far as I know, no-one believes that universes-made-by-God and universes-not-made-by-God are generated probabilistically. Theists think that all universes are generated by God, and non-theists think that none are.
Which means we're not in the conditions of the experiment, and SIA doesn't tell you anything about God.
Which is a relief, because it would be *massively weird* if some armchair reasoning gave you positive information about how reality itself came about.
I explain why this generalizes in section 5. Armchair reasoning gives you evidence about how reality came about all the time--for instance, the fact that I exist gives me evidence that reality is the sort of place that has people.
Section 5 of this article? That's where you go through the basic math of calculating Bayesian odds. It doesn't give any substantial reason why you can take arguments carefully constructed around probabilistic processes and still draw valid conclusions about the real world after substituting subjective credences for those processes.
Anthropic reasoning with SIA amounts to doing stats among possible observers, within a set of possible worlds, to figure out the odds of finding yourself in each branch.
I can (just barely) accept that much, when those are real modal branches, and those observers are observers you *could have been*, in a physically causal way. As in all your actual examples.
This completely breaks when you replace probabilities with epistemic credences. Probabilistic processes create observers in different modal branches of the real world. People's theories are just in their head, they don't actually create observers, in this branch or anywhere else.
Just because Pepe high on LSD comes up with a pet theory that implies some high transfinite numbers of observers, doesn't mean that we're highly likely to be in his imagined world.
I'm a big proponent of SIA, and I think this argument is very strong and very interesting. Nice work!
In trying to steelman the opposing argument, I'd probably go after premise 2 (which you talk about in section 3). Granted, it seems crazy that the probability of a coin flip that happened in the past depends on future events. But if you think in terms of a block universe where the future already exists in some sense, maybe it is less crazy. And I tend to think the doomsday argument already has a similar weird probability shift based on future events, so SSAers may feel okay biting the bullet here too. But just to be clear, I'm pro-SIA, and I personally would not even taste these bullets let alone bite them.
Yeah maybe that's the way to go. Still, it seems really bad. A block universe will make the past and future look more symmetrical, but the judgment also seems really bad in the past--and in the future--so establishing symmetry doesn't help much.
Most of your new arguments in favor of SIA are question-begging, and this one is no exception.
The reason you think that "in Repeat Flipping Unknown Rank, credence = 1/11 or ~1/11" is already due to pro-SIA intuitions. A non-SIA follower would not (necessarily) agree with this premise, and therefore the argument would not work. All the other premises are red herrings here.
I gave several arguments for this premise, and all non-SIA views in the literature agree with it (in the case of CC, only if the people are clones; with SSA, whether or not they are).
And none of them are new or particularly strong as far as I see. You kind of keep restating the same thing again and again without progressing the discourse further.
It's true that 10/11 of the people created would be created on Tails, so if we select a random person among them, the probability that this person was created on Tails is 10/11. The crux of the disagreement is whether you can be treated as a random person in such a setting. SIA claims yes; other approaches may disagree.
But as I said, every view in the literature agrees, and I gave several arguments for doing this. You said that this argument isn't new or particularly strong but then you don't address the arguments I give for the position. Who, pray tell, has made this exact argument before?
> every view in the literature agrees
As I show in other comments, you are wrong about it. See this comment in particular: https://benthams.substack.com/p/a-new-extremely-strong-argument-for/comment/164266308.
> Who, pray tell, has made this exact argument before?
I see similar mistakes by SIA followers all the time. The core inability to distinguish between setting with a random sampling and without it is the fundamental property of SIA, after all.
You haven't answered the second question. The argument is, in fact, original, even if other arguments give you a similar vibe.
I believe SSA also says it's ~= 1/11, if I'm understanding the setup properly, and converges to 1/11 exactly as the number of flips goes to infinity.
Not exactly. There is a possibility for reference class manipulations that can confuse SSA-followers into agreeing with this estimate - assuming that the reference class includes all the people created throughout all the flips - but the standard approach is to treat only one particular coin flip - the one in which you were actually created - and the people created in that coin flip as your reference class. Then according to SSA it's 1/2.
SSA assumes that "you" would've been created anyway, regardless of the outcome of the toss. SIA assumes that it's always possible that "you" wouldn't be created. And some people-creation experiments can satisfy SSA's premises, while others satisfy SIA's.
Suppose the algorithm is this: there are 10 embryos; on Heads, a random embryo is selected and incubated, while on Tails all embryos are incubated. This people-creating experiment seems to work according to SIA premises.
But suppose we are dealing with person-splitting instead. That is, we already have a person at the beginning of the experiment; on Heads nothing happens to the person, but on Tails the person is split into ten people. Here the SSA premise is correct.
The most obvious reference class here is "all people who ever exist," which in the thought experiment is all people created through all N coin flip experiments. That would be the standard approach, not looking at only one coin flip, your own. (If there are other people in distant galaxies who aren't created through this kind of process, then that would change SSA's verdict.) And on this standard approach, the number you get after doing the calculations goes to 1/11 as N goes to infinity.
Also, even if you restrict the reference class somewhat, SSAers will definitely think the reference class includes all the people who--for all you know--you might currently be.
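To make that calculation concrete, here is a minimal sketch (my own construction, not from the post) of SSA with the all-people reference class: each world keeps its prior weight, and within a world you are a uniform draw from the people who actually exist there. The heads-credence it computes does head toward 1/11 as N grows.

```python
# Minimal sketch of SSA with the "all people across N flips" reference class.
# Assumption (mine): worlds keep their prior probability, and you are a
# uniform random sample from the people who exist in that world.
from fractions import Fraction
from math import comb

def ssa_heads_credence(n_flips: int) -> Fraction:
    """P(the flip that created me was heads) under SSA."""
    total = Fraction(0)
    for h in range(n_flips + 1):                   # h = number of heads flips
        prior = Fraction(comb(n_flips, h), 2 ** n_flips)
        people = h + 10 * (n_flips - h)            # 1 per heads, 10 per tails
        total += prior * Fraction(h, people)       # chance I'm a heads-person
    return total

for n in (1, 2, 10, 100, 500):
    print(n, float(ssa_heads_credence(n)))         # 0.5, ..., -> 1/11 ~ 0.0909
```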
Am I wrong to have the impression that SIA can be regarded as SSA, but with a reference class that includes "all people who are in the same epistemic situation as me"? Or is there still an issue of possible people vs actual people here?
Which would normally be "people created in the same coin flip as me".
But you don't know which coin flip you were created in, so for all you know you might be any of the people.
> (If there are other people in distant galaxies who aren't created through this kind of process, then that would change SSA's verdict.)
Exactly. What I'm saying is that people created on a different coin toss should be treated as people in distant galaxies. Coin tosses are independent of one another; information about the i-th coin toss is irrelevant to the k-th coin toss for any distinct i and k. So people created on the i-th coin toss might as well be in a different galaxy.
The point of "reference class" is to capture my uncertainty about what kind of person I could've been.
So suppose all I know is that I was created in some experiment where 10/11 of the people are marked as "Tails" and 1/11 as "Heads". Then my reference class is "all people created in this experiment," and I naturally expect to be one of the "Tails" people with probability 10/11.
But if I know that I was created in a specific sub-part of the experiment - a particular coin toss, even if I don't know which one exactly - then only the other people created in the same coin toss matter. And so my "reference class" is different now. This is simply Bayesian conditionalization on the new information - all the talk about reference classes is just a more confusing way to capture the same idea.
Now there are possible experiment designs in which other coin tosses matter. For example, suppose there are 1,000 embryos and a hundred coin tosses. For each coin toss, on Tails 10 random embryos are selected to be incubated, and on Heads only 1 random embryo is selected to be incubated. But this is not the design that was presented in the post.
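For what it's worth, here is a quick simulation (my construction, assuming a fair coin) of that alternative design: fix one embryo as "you" and, conditional on being incubated, ask how often your toss was Tails.

```python
# Simulating the alternative design: 1000 embryos, 100 tosses; each toss
# incubates 10 random remaining embryos on Tails, 1 on Heads.
import random

def my_toss_was_tails(n_embryos=1000, n_tosses=100):
    """True/False for the toss that incubated embryo 0, or None if never."""
    pool = list(range(n_embryos))
    random.shuffle(pool)                       # popping = random selection
    for _ in range(n_tosses):
        tails = random.random() < 0.5
        picked = [pool.pop() for _ in range(10 if tails else 1)]
        if 0 in picked:
            return tails
    return None                                # embryo 0 never incubated

tails = incubated = 0
for _ in range(20_000):
    result = my_toss_was_tails()
    if result is not None:
        incubated += 1
        tails += result
print(tails / incubated)                       # ~10/11 ~ 0.909
```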
>Exactly. What I'm saying is that people created on a different coin toss should be treated as people in distant galaxies.
But SSA doesn't say you should ignore people in distant galaxies and treat them as outside your reference class, just because they might end up being causally isolated from your galaxy. On the standard construal of it, you probably shouldn't.
I don't really see what argument you're giving for ignoring people created (epistemically) independently from you vis-a-vis reference class. I realize that you know more than that your existence is the outcome of the experiment, namely that your existence is the outcome of *this subcomponent* of the experiment, but it's not obvious why you want to disregard people with this extra portion of your evidence from being in your reference class. If you try to formulate SSA with a general reference class of "people who have your exact evidence," you end up violating ordinary conditionalization rules.
> If you try to formulate SSA with a general reference class of "people who have your exact evidence," you end up violating ordinary conditionalization rules.
Personally, I prefer to get rid of the notion of "reference class" entirely and instead talk about probability experiments as a mathematical approximation of some causal process taking place in reality, and its conditionalization, which works the exact same way in anthropic and non-anthropic cases.
https://www.lesswrong.com/s/TtBARjJ7sjxDjgjow
But if you want to talk about it specifically in terms of "reference classes" then your reference class should be "people who you could've been" according to your knowledge about this causal process.
>you know more than that your existence is the outcome of the experiment, namely that your existence is the outcome of *this subcomponent* of the experiment
Is this even true? I mean, what knowledge over and above "I was created in the experiment" do I possess? After all, I don't know which subcomponent created me; but of course, the structure of the experiment logically entails that as soon as I know I was created in the experiment, I was created by some subcomponent. So these just seem to me to be completely equivalent statements (conditional on already knowing the structure of the experiment, which I am granting) and I don't know what further piece of evidence is being referred to.
This reasoning sounds good in general, but if you know you were created in a specific coin flip, but not which coin flip, isn't your class of "who you could have been" still everyone? Or am I missing something?
See this: https://benthams.substack.com/p/a-new-extremely-strong-argument-for/comment/163883020
I only skimmed this article, so my apologies if my question was already addressed somewhere above. Back in January of this year, you appeared on Liron Shapira's podcast. He asked you about your P(God). You answered, "Like in the 60s, like maybe like 64% or something." Is that still your view?
https://lironshapira.substack.com/p/god-vs-ai-doom-debate-with-benthams
Humble question from a layperson. I'm stuck on how we can reasonably predict or know how probability works at the stage where universes, including our own, are created. The coin flip works differently if we use some kind of quantum wave logic than if we use classical logic; but we have a lot of observations to be confident that a coin will follow classical logic (well enough), and that a quantum coin would follow some kind of wave logic. How do we establish the logic behind the probabilities in this argument? As far as I understand, we know close to nothing about the state in which the universe came into being, and the logic relevant to probability distributions is relative to scale. I'm left wondering what logical structure is applicable to the relevant event, and whether it is one we know or one not yet observed. Toss me a link/reference/keyword concept if this is something already explored extensively or addressed elsewhere.
I think the true answer is, we have no idea what logic applies at the creation of universes. But, by looking at toy examples with coin flips, we can start getting some experience with the ideas we will need to tackle that question.
To that end, I think exercises like this are valuable: they help clarify our thinking, point to places where we're confused, make it easier for us to find cruxes between different views... But any attempt to apply these ideas outside of toy situations should be taken with a massive grain of salt.
> reading a 78,294th thinkpiece about Sydney Sweeney
Shouldn’t it be “the” 78,294th thinkpiece?
After re-reading the sentence a few times, I realize I was wrong and that "a" is fine here. Sorry!
Your evidence gives you existence... Interesting! :)
Oopsies.
This is a very good argument... I think it fails, but it's not immediately obvious why.
I think the issue is that it's ultimately just a question of how you frame the question/betting, same as with the sleeping beauty problem.
To change the angle, let's suppose I am not a person who was created after the coin toss, but I will be shown one baby for each coin toss (even though if it was tails, 10 babies were created). In that case, since I will be shown one baby for each coin toss, and the coin was fair, I will be shown 50% heads-babies and 50% tails-babies, so my evidence says there's a 50% chance for heads or tails for each baby I'm shown.
On the other hand, if I'm shown *all* the babies, or a random selection, then they will be 10/11 tails-babies, and for each baby it will be a 10/11 chance its coin landed tails. If I'm placing my bet on each baby, then I should bet on tails, but if I'm placing my bet on each coin toss, then I should rate them as equally likely.
I think the first case, where I'm shown one baby for each coin toss, is more like the scenario we find ourselves in when we try to use ourselves as evidence. The one baby for each toss is ourselves - we only get one baby as evidence either way. And since we are *guaranteed* to have one baby as evidence in either case, the evidence is equally likely in both scenarios, and so no Bayesian update.
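A small simulation (mine, under the fair-coin assumption) makes the contrast between the two protocols vivid:

```python
# Contrast the two sampling protocols from the comment above.
import random

tosses = [random.random() < 0.5 for _ in range(100_000)]   # True = Tails

# (a) shown one baby per toss: the baby's coin is just that toss
one_per_toss = sum(tosses) / len(tosses)                   # ~0.5

# (b) shown a random baby from all babies: Tails tosses contribute 10 each
babies = [t for t in tosses for _ in range(10 if t else 1)]
random_baby = sum(babies) / len(babies)                    # ~10/11

print(one_per_toss, random_baby)
```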
Not really following: which premise do you disagree with?
I don't think it's like selecting a randomly existing baby, because you're likelier to exist if the coin comes up tails.
The third premise. Your credence in the coin landing heads in Repeat Flipping Unknown Rank should be 0.5. That's because the prior probability is 0.5, and the likelihood of the evidence ("me"/a person exists) is 1 for both hypotheses.
But actually, I've given it some more thought and I think you are right. I, in all my specificity, am more likely to exist if it comes up tails, and this fact is relevant evidence. Although I do think this is equivalent to selecting a random baby, in my second scenario, since that baby too is more likely to exist if it landed tails.
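That concession can be made precise with a toy Bayes computation (my sketch; the pool size N is an arbitrary illustration parameter and cancels out): if "I, in all my specificity" am one particular potential person out of N, Tails realizes 10 of them and Heads realizes 1.

```python
# Toy Bayes update on "this specific person exists" (illustration only).
from fractions import Fraction

N = 1000                             # hypothetical pool of specific potential people
prior = Fraction(1, 2)
p_exist_tails = Fraction(10, N)      # Tails creates 10 of the N
p_exist_heads = Fraction(1, N)       # Heads creates 1 of the N

posterior_tails = (prior * p_exist_tails) / (
    prior * p_exist_tails + prior * p_exist_heads
)
print(posterior_tails)               # 10/11, independent of N
```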
Cool!
Apologies for the up arrow emojis. I don't know if it's possible to use LaTeX in a Substack comment.
In the coin flip example, SIA is perfectly valid. After all, the Everett interpretation of quantum mechanics is the only interpretation that makes sense, so when we say that a coin flip has a 50% probability of heads, we are just saying that the coin lands heads in 50% of the universes in which the flip occurs. All probabilistic reasoning reduces to anthropic reasoning about which universe I happen to be in.
I'm not so convinced that the SIA is valid in examples like this one:
If the 10⬆️⬆️100th digit of pi is even then 10⬆️⬆️100 people get created. Otherwise one person gets created.
However, it would be quite surprising if changing the second sentence to 'Otherwise zero people get created' were to affect my subjective probability of the 10⬆️⬆️100th digit of pi being even, but changing the second sentence to 'Otherwise 10⬆️⬆️100 people get created' were not to affect it.
Now what about this one?
If the 10⬆️⬆️100th digit of pi is even, then 10⬆️⬆️100 blue-eyed people and one brown-eyed person get created. Otherwise one blue-eyed person and one brown-eyed person get created.
If I were to observe myself to be brown-eyed, I would be tempted to conclude that the 10⬆️⬆️100th digit of pi is more likely to be odd.
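Both readings of this case can be computed side by side. Here is a rough sketch (my own; N stands in for 10⬆️⬆️100, which is of course not actually representable) comparing the update with and without SIA's population weighting:

```python
# Brown-eyed update, with and without SIA weighting (N stands in for 10^^100).
from fractions import Fraction

N = 10**6
worlds = {"even": (N, 1), "odd": (1, 1)}     # (blue-eyed, brown-eyed) counts

def posterior_odd(sia: bool) -> Fraction:
    weights = {}
    for w, (blue, brown) in worlds.items():
        prior = Fraction(1, 2)
        if sia:
            prior *= blue + brown            # SIA: weight worlds by population
        weights[w] = prior * Fraction(brown, blue + brown)  # P(I'm brown-eyed)
    return weights["odd"] / sum(weights.values())

print(float(posterior_odd(sia=False)))       # ~1: brown eyes strongly favor odd
print(float(posterior_odd(sia=True)))        # 0.5: SIA's boost exactly cancels
```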
Ok, I'm seeing the appeal and spent the last couple hours pondering it. Thanks!
«Suppose that a coin is flipped that creates one person if heads and ten if tails. This coin creates you. However, after this coin is flipped, the same coin will be flipped over and over again. It will be flipped, let’s say, 1,000 times, and each time it will create one person if heads and ten people if tails. The question: what should your credence be in the coin that created you having come up tails?»
As an antediluvian child of the first flip, my birth rank is 1. Whether I share that rank with 9 other flip-siblings, I have no indication for or against. Hence the answer is 0.5. The 1,000 flips after that don't change my credence, as they are unrelated birth cohorts. Unless you want me to assume that it's not a fair coin, in which case I dunno. That gets a bit more complicated.
«Suppose that a coin is flipped a million times that creates one person if heads and ten if tails each time. You do not know your birth rank. What odds should you give to the coin having come up tails?»
Insufficient data. "Not knowing my birth rank" does not imply "not knowing the entire coinmen population". If I observe that |coinmen| = |coinflips|, I would know that I am a heads-child, since then all coinmen are heads-children. Vice versa with |coinmen| = 10*|coinflips|.
Otherwise c := |coinmen| ∈ (1 million, 10 million). I calculate the missed potential P := 10 million - |coinmen|. |missed Tailflips| = P/9 = |Headflips|. |Tailflips| = 1 million - |Headflips|.
|AverageFlip| = |Tailflips|*10 + |Headflips|
P(I ∈ Tailchildren) = |Tailflips|*10 / |AverageFlip|
I guess you're assuming |Tailflips| = |Headflips| = 1 million/2 = 500,000, making |AverageFlip| = 5,500,000 and P(I ∈ Tailchildren) = 500,000 * 10 / 5,500,000 = 10/11 exactly. Not approximately 10/11. But I don't know whether a 50/50 split is your assumption. Also, I'm getting mighty sick of my own notation.
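Under that 50/50 assumption the arithmetic checks out; here is a two-line verification (my sketch, the same notation translated into code):

```python
# Numeric check of the 50/50 case: a million flips, half Heads, half Tails.
head_flips = tail_flips = 1_000_000 // 2
average_flip = tail_flips * 10 + head_flips      # 5,500,000 coinmen in total
print(tail_flips * 10 / average_flip)            # 0.9090..., exactly 10/11
```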
I find it pretty funny that your arguments for going from unknown rank to known rank are pretty similar to halfer arguments in sleeping beauty: what do you actually learn, when you experience something you were going to experience anyway? Why are your probabilities for heads unreflective: you'll think 1/2 before and after the experiment, but not during?!
This is a feature I really enjoy in anthropic problems: the same basic structure of argument can be marshalled on either side. Here, I'm inclined to agree with the thirders in SB: if you learn your coin flip's rank, the people in other coin flips are no longer "people you could have been"; you've learned a piece of information that distinguishes you from them. It's true it's a silly and seemingly inconsequential piece of information, but since I think the best argument for thirding is a variation on fully non-indexical conditioning, I'm inclined to say with the thirders in SB: "if learning this info is irrelevant, then conditioning on it can't change anything, so you might as well condition on it".
Which is to say, I don't think you can dismiss the concern about learning potentially relevant information, and certainly if you do so I think it has repercussions for how you should evaluate the SIA argument for SB.
Now, it may turn out that properly conditionalizing on that information still yields an answer of 1/11; but I think one ought to conditionalize on it to make sure.
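One way to run that check (my sketch; note it bakes in a person-weighted counting assumption, which is exactly the contested step): across many replays of the experiment, collect everyone created by one fixed flip and see what share of them are Tails-people.

```python
# Does conditioning on your flip's rank shift the Tails share? Count people
# created by a single fixed flip across many replays (person-weighted).
import random

tails_people = heads_people = 0
for _ in range(200_000):                   # replays of "flip number k"
    if random.random() < 0.5:
        tails_people += 10                 # Tails creates ten people
    else:
        heads_people += 1                  # Heads creates one person
print(tails_people / (tails_people + heads_people))   # ~10/11
```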
Random question: do you think a modified version of this argument could still work if we accept SSA instead of SIA?
Let me outline it:
Instead of saying "the fact that 'I' exist is more likely if there are more people," say "the fact that 'some person' exists is more likely if there are more people." So you take the existence of a random observer instead of focusing on yourself. Now you can include your own existence in the background instead of counting it as evidence and the argument presumably still works.
What do ya think?
No, see section 4.7 https://benthams.substack.com/p/the-ultimate-guide-to-the-anthropic