Discussion about this post

Mark

>One other way to see that it isn’t ad hoc is that these kinds of infinities cause problems almost across the board. There are many different paradoxes that arise from normalizable probability functions—but they all result from something else relevant growing faster than the probabilities drop off.

Indeed, the culprit is hypotheses involving random variables with infinite expectation. And that's bad, because in the real world such hypotheses are always lurking in the background of every decision, however non-salient they may be. But it's even worse than that: even if you rather bluntly choose to ignore any such hypothesis, your decisions with respect to the remaining better-behaved hypotheses won't be continuous in your priors unless total utility is finite, which effectively means you (as a bounded reasoner who inevitably works with approximations) should massively distrust all of your decision-theoretic calculations. To deal with this, you can either (1) abandon anything like utility maximization (which means abandoning fanaticism), or (2) go with bounded utilities.
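
A toy numerical sketch of that continuity point (my own illustration, not anything from the comment): the St. Petersburg-style lottery, the baseline value, the prior values eps, and the cap B below are all made-up parameters. With unbounded utility, an arbitrarily small prior on the infinite-expectation hypothesis eventually dominates the comparison; with utility capped at B, its contribution is bounded by roughly eps times log2(B).

```python
import math

def capped_lottery_eu(cap):
    """Expected utility of a St. Petersburg-style lottery (win 2**k utils with
    probability 2**-k) when utility is capped at `cap`.  Terms with 2**k <= cap
    contribute (2**-k) * (2**k) = 1 each; the remaining terms form a geometric
    tail, so the whole sum is finite (roughly log2(cap) + 1)."""
    k_max = int(math.log2(cap))    # last k whose payoff is still under the cap
    head = float(k_max)            # k_max terms, each contributing exactly 1
    tail = cap * 0.5 ** k_max      # sum over k > k_max of (2**-k) * cap
    return head + tail

def uncapped_lottery_partial_eu(n_terms):
    """Partial sum of the same lottery without a cap: every term is exactly 1,
    so the partial expected utility is n_terms and diverges as n_terms grows."""
    return float(n_terms)

baseline = 10.0    # a mundane option with a modest, finite expected utility
cap = 1e6          # hypothetical utility bound B

for eps in (1e-3, 1e-6, 1e-9):
    # Unbounded utilities: however tiny the prior eps on the lottery hypothesis,
    # enough terms of its divergent sum make it beat the baseline, so the
    # comparison flips under arbitrarily small perturbations of the prior.
    terms_needed = int(baseline / eps) + 1
    exotic_unbounded = eps * uncapped_lottery_partial_eu(terms_needed)
    # Bounded utilities: the lottery's expected utility is finite, so a prior
    # of eps can move the answer by at most about eps * (log2(cap) + 1).
    exotic_bounded = eps * capped_lottery_eu(cap)
    print(f"prior eps={eps:g}: unbounded contribution = {exotic_unbounded:.1f} "
          f"(> baseline {baseline}), bounded contribution = {exotic_bounded:.2e}")
```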

James Yamada

This piece resonated with an argument I’ve been developing about epistemic stakes: if certain discoveries about the fundamental nature of reality could radically change what counts as “good” or how we should live, then even tiny chances of making those discoveries might outweigh more certain but smaller goods. I’ve sketched it here if anyone's curious:

https://heatdeathandtaxes.substack.com/p/find_purposeexe
