Why'd you do my boy Graham Oppy like that? There are photos of him that don't look like a pixelated murderer :)
I think he looks cool there
Seconding that he looks like a psycho killer here.
I think you could test the robustness of the conclusion by attaching an uncertainty range to each of the ratios you state. It may be that the overall range is so wide that the final number is effectively meaningless.
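A crude way to see how fast this blows up is to multiply the interval endpoints. The specific ratios and (low, high) ranges below are invented placeholders, not the post's actual numbers; the point is only how intervals compound under multiplication:

```python
from math import prod

# Hypothetical likelihood ratios, each with a (low, high) uncertainty range.
# All numbers here are made up for illustration.
ratios = [(2, 20), (0.5, 5), (3, 30), (0.1, 1)]

low = prod(lo for lo, _ in ratios)
high = prod(hi for _, hi in ratios)

print(f"overall ratio somewhere in [{low:g}, {high:g}]")
print(f"the interval spans a factor of {high / low:g}")
```

With these made-up ranges the overall interval straddles 1 by orders of magnitude in both directions, i.e. the product wouldn't even settle which hypothesis comes out ahead.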
The probability of things existing and doing interesting stuff, such that life is created, is 100%, because in every universe without this property the question never comes up. This may sound like a silly nitpick, but you actually cannot assign probabilities to states that don't exist within your system; otherwise you get paradoxical results.
The envelope paradox is a simple illustration of this. Say there are two envelopes: one contains x dollars, the other double that. You blindly open one envelope and it contains $100. Now the other envelope may contain either $50 or $200, which, at 50/50 odds, yields an expected value of $125. This would mean it is always profitable to take the second envelope instead of the first, which is paradoxical, because you would get the same result if you had started with the second envelope (then the first one would look favorable). The problem is that we cannot assign probabilities to how much money is in the second envelope. If the second envelope contains $200, then there never was a $50 envelope at all. This is different from a coin toss: even if the coin lands on heads, tails is still included in the system of probabilities. The $50 envelope is outside the system; its probability is undefined.
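A quick simulation makes the point concrete: once the envelope pair is actually fixed, the naive "switching is worth 1.25x" argument evaporates. This is a sketch assuming a single fixed pair (x, 2x):

```python
import random

# Two-envelope simulation: one envelope holds x, the other 2x.
# Compare always keeping the first envelope vs always switching.
random.seed(0)
x = 100
trials = 100_000

keep_total = switch_total = 0
for _ in range(trials):
    envelopes = [x, 2 * x]
    random.shuffle(envelopes)
    first, second = envelopes
    keep_total += first
    switch_total += second

print(keep_total / trials, switch_total / trials)
# Both averages converge to 1.5 * x, so switching gains nothing.
# The "expected 1.25x from switching" reasoning silently assumed a
# 50/50 distribution over envelope *pairs* that was never part of
# the setup.
```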
See the firing squad case.
Like very many of the problems you pose for naturalism, the firing squad case also depends heavily on what you can imagine or conceive, and to this point I don't believe you have ever addressed why that is relevant. I can conceive that the second envelope contains either $50 or $200, but that is precisely what led me astray when I tried to attach probabilities to those cases. I have raised the point plenty of times that it is not possible to reason about the outside-observable properties of a system while being fully restricted to the inside of that system, which is exactly what happens when a psychophysical being tries to use its psychophysical capabilities to reason about psychophysicality (consciousness from the inside is just itself). It seems clear that you firmly disagree with this, but I would like to know why.
Well, that's false, as the firing squad case shows. This has been enough to convince basically all the philosophers who have thought about it that you don't just always assign a prior of 1 to your existence.
How did you do your equation? Plug and chug each successive step, or one big equation? I'd like to do this myself, but my Big-Brain Bayesianism is mostly LARP, and every time I've tried it's gone horribly wrong.
I too would like to see the calculation, but mostly because I'm worried about double counting. For example, "The stuff it does is interesting and valuable" is a subcategory of "it does stuff", and (perhaps) divine hiddenness is a subcategory of the problem of evil. If this isn't accounted for, the same evidence gets counted twice.
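The worry can be shown with toy numbers. If B is a subcategory of A (so observing B entails A), then the evidential force of "A and B" is just the force of B, and multiplying both Bayes factors overcounts. The probabilities below are invented purely for illustration:

```python
# Double-counting check with made-up probabilities.
# A = "it does stuff", B = "the stuff is interesting" (B implies A).
p_A = {"H1": 0.9, "H2": 0.3}          # P(A | hypothesis)
p_B_given_A = {"H1": 0.5, "H2": 0.1}  # P(B | A, hypothesis)

# Since B implies A: P(B | H) = P(A | H) * P(B | A, H).
p_B = {h: p_A[h] * p_B_given_A[h] for h in ("H1", "H2")}

correct = p_B["H1"] / p_B["H2"]            # the real Bayes factor for B
naive = (p_A["H1"] / p_A["H2"]) * correct  # stacking A's factor on top

print(correct, naive)
# The naive product overcounts by A's whole factor (~3x here),
# because A's evidential work is already baked into P(B | H).
```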
Also did you use distributions? I would be interested to see how big the tails are (my guess is, pretty big) and whether they're symmetrical (my guess is, probably not).
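One way to check both questions is to put a distribution on each ratio and sample the product. A sketch using lognormals, a common choice for positive multiplicative quantities; the (mu, sigma) parameters are invented placeholders, not the post's numbers:

```python
import math
import random
import statistics

random.seed(1)

# Model each likelihood ratio as lognormal: exp(Normal(mu, sigma)).
# These parameter values are made up for illustration.
ratio_params = [(1.0, 0.8), (0.0, 1.0), (1.5, 1.2), (-1.0, 0.6)]

samples = sorted(
    math.exp(sum(random.gauss(mu, sigma) for mu, sigma in ratio_params))
    for _ in range(50_000)
)

n = len(samples)
median = samples[n // 2]
p5, p95 = samples[int(0.05 * n)], samples[int(0.95 * n)]

print(f"median ~ {median:.2f}, 90% interval ~ [{p5:.2f}, {p95:.1f}]")
print(f"mean/median ~ {statistics.fmean(samples) / median:.1f}")
# The mean sits well above the median: the product of ratios is
# right-skewed with a heavy upper tail, so a single point estimate
# hides most of the action.
```

Even with these modest per-ratio spreads, the 90% interval of the product covers both sides of 1, and the asymmetry (mean far above median) is exactly the kind of fat, lopsided tail you'd want reported.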
I think evil would be a subset of hiddenness if they do collapse into one another
I'd be happy to see you revisit this in a few months and update us either that you think this reasoning stands or that you have revisions.