Discussion about this post

Daniel Greco:

"Now, this is hard. The numbers are a bit made up at times. But reasoning with made up numbers is often better than reasoning with no numbers at all. Human intuition isn’t good at figuring out probability, so it can often be improved by Bayesian analysis. Even though it often won’t be clear whether the odds ratio is 5:1 or 10:1, you can usually have a rough order of magnitude estimate."

I know you qualified this a lot, but I still think it dramatically understates the difficulties involved. Rough order of magnitude estimates cannot be taken for granted. There are plenty of questions where it would be a major scientific achievement to come up with well grounded rough order of magnitude estimates. E.g., what's the probability of life emerging on a habitable planet? If you could nail that down to within a few orders of magnitude, origins of life researchers would be thrilled. And that's given lots and lots of rich background information about what the universe is like. If I'm told to make analogous estimates (e.g., that there should exist matter) totally a priori--given no factual background assumptions at all--I think the honest answer is that nothing I come up with should be taken seriously at all.

What's the alternative? I think we're probably a lot better at making empirically informed posterior probability judgments, and then working backwards to think what priors we'd have to have for those to be sensible posteriors. But if that's the approach you take, it's going to be a lot harder to take totally a priori constraints on prior probabilities as premises, and use them in a dialectically effective manner to impose constraints on posterior probabilities.
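The backwards-reasoning move here can be made concrete with odds-form Bayes: given a posterior odds you're actually comfortable with and an estimated likelihood ratio for the evidence, the prior odds you must implicitly have held falls out by division. A minimal sketch, with purely illustrative numbers (the 4:1 and 10:1 figures are assumptions, not anyone's actual credences):

```python
# Odds-form Bayes: posterior_odds = prior_odds * likelihood_ratio.
# Working backwards: prior_odds = posterior_odds / likelihood_ratio.

def implied_prior_odds(posterior_odds, likelihood_ratio):
    """Prior odds implied by a given posterior odds and likelihood ratio."""
    return posterior_odds / likelihood_ratio

def odds_to_prob(odds):
    """Convert odds (in favor) to a probability."""
    return odds / (1 + odds)

# Illustrative: posterior odds of 4:1 after evidence with a 10:1 likelihood ratio.
prior = implied_prior_odds(4.0, 10.0)
print(prior)                # 0.4, i.e. prior odds of 2:5
print(odds_to_prob(prior))  # ~0.286 prior probability
```

The point of working in this direction is that the empirically informed posterior anchors the calculation, rather than a free-floating a priori prior.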

Jack Miller:

I have some pretty strong methodological worries here:

For one, you seem to be throwing in new ratios without updating the extent to which the data already included in the prior credence function confirm the hypothesis. But that's clearly not the right way to think about things. The problem of evil does directly disconfirm theism, but it also undercuts the evidential value of the various arguments in favor of theism — they aren't considerations we can completely bifurcate and reason through independently of one another.

We expect a benevolent god to fine-tune a *maximally good* world, not just any world at all. Likewise, the benevolent god theory predicts that god would create tons of *maximally good* lives, not a mixed bag with some pretty good lives and lots of horrible lives. When we learn evidence suggesting the world isn't maximally good and that he hasn't created an absurd number of maximally good lives, we also need to significantly lower the extent to which fine-tuning considerations confirm theism, because we realize we've observed a world that theism doesn't actually predict. The value of stuff like fine-tuned laws, lots of life, consciousness, etc. is just instrumental to value creation, and if the latter doesn't obtain, we shouldn't think the former alone confirms theism. In other words, the predicted datum is <fine-tuning + maximal goodness>, not two separate data of <fine-tuning> and <maximal goodness> such that the former by itself would be strongly confirming. Two quick points of clarification:

A] Of course, we might think that our world is close enough to maximal goodness (perhaps because your credence in the problem of evil isn’t super close to 1) that our observations do give us at least some reason to believe that <fine-tuning + maximal goodness> has obtained in our world. But that’s a VERY different way of reasoning than the way you’ve reasoned in this post, since you are treating them as independent data.

B] Maybe you don’t share my view that value requires phenomenal consciousness - e.g. maybe you think a cool waterfall is valuable even if there’s literally zero minds to perceive it or appreciate it, but this is something you’d need to substantively justify to get your arguments off the ground. I also think this is probably wrong - Uriah Kriegel has some good arguments about why we should reject that view.

Second, I think trying to reason about really abstract, complicated matters like the probability of theism with formal Bayesian mathematics probably hurts more than it helps. Assigning numbers is always pretty arbitrary, and the complexity of mathematical modeling means we are more likely to make mistakes when updating our prior credences than we otherwise would be. As they say, a Bayesian is someone who reasons normally but moans Bayes's name as they do it!

Finally, why think there’s an objective correct prior that we should all have? My impression is that most epistemic internalists tend to be subjective Bayesians, so I’m curious why you aren’t.

(As a side note, I think 4:1 is seriously underestimating the strength of arguments from evil, but that is a separate matter that I think would be really stupid to try to convince you of in a comment section here — I know you have a lot to say on this that you’ve written about elsewhere.)
