>One other way to see that it isn’t ad hoc is that these kinds of infinities cause problems almost across the board. There are many different paradoxes that arise from normalizable probability functions—but they all result from something else relevant growing faster than the probabilities drop off.
Indeed, the culprit is hypotheses involving random variables with infinite expectation. And that's bad, because in the real world there are always such hypotheses lurking in the background for every decision, however non-saliently. But it's even worse than that, because even if you rather bluntly choose to ignore any such hypothesis, your decisions with respect to the remaining better-behaved hypotheses won't be continuous in your priors unless utility is bounded, which effectively means you (as a bounded reasoner who inevitably works with approximations) should massively distrust all of your decision-theoretic calculations. To deal with this, you can either (1) abandon anything like utility maximization (which means abandoning fanaticism), or (2) go with bounded utilities.
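A minimal sketch of that continuity point, with numbers invented purely for illustration (a single hypothetical exotic hypothesis competing against a sure thing): when utilities are unbounded, a prior shift of 2 * 10^-100 flips which action maximizes expected utility, whereas with utilities capped at 100 the same shift changes nothing.

```python
from fractions import Fraction

def expected_utilities(p_exotic, exotic_payoff):
    """Toy model: action B surely yields 1 util under the ordinary hypothesis;
    action A yields exotic_payoff utils if the exotic hypothesis (prior
    p_exotic) is true, and 0 utils otherwise."""
    return p_exotic * exotic_payoff, Fraction(1)

# Unbounded utilities: a hypothetical 10^100-util payoff is on the table.
N = Fraction(10) ** 100
tiny_shift = Fraction(2, 10 ** 100)

eu_a, eu_b = expected_utilities(Fraction(0), N)
print(eu_a > eu_b)   # False: with prior exactly 0, the sure thing B wins

eu_a, eu_b = expected_utilities(tiny_shift, N)
print(eu_a > eu_b)   # True: a 2e-100 shift in the prior flips the decision

# Bounded utilities, capped at 100 utils: the same shift moves expected
# utility by at most 2e-100 * 100 = 2e-98, so it cannot flip any decision
# whose expected-utility gap is bigger than that.
eu_a, eu_b = expected_utilities(tiny_shift, Fraction(100))
print(eu_a > eu_b)   # False: the decision is unchanged
```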
I had a section about how to apply probabilities in the real world. I agree it will be pretty messy.
The problem isn't just messiness; it's literally impossible without infinite computational power! On unbounded utility maximization, there's no reason to think that spending Graham's number of years computing the best action to take in the real world will get you any closer to the right answer than just guessing, even when you're arbitrarily excluding infinite-EV hypotheses!
Why?
You're hungry and deciding whether to get food from your kitchen or remain where you are. Let O be the ordinary hypothesis (with high probability) that you'll get the food you expect from the kitchen - say, one util's worth - and that nothing weird happens either way. But now consider exotic hypotheses (you can use your imagination to concoct these) X_1, X_2, ..., where X_k rewards you with k utils for going to the kitchen and zero otherwise; and also a different set of exotic hypotheses Y_1, Y_2, ..., which do the same thing for not going to the kitchen.
The X_i's and Y_i's are competing with each other and with O. The exotic X_i's and Y_i's also all have extremely small probabilities, which necessarily decay to zero as i goes to infinity (otherwise, the probabilities couldn't sum to 1). Nevertheless, even assuming they're well-behaved enough that the expected utility E[U(go to kitchen)] is finite, the exact value of that quantity depends very sensitively on (among other things) *how fast* P(X_i) and P(Y_i) decay to zero.
You can numerically change each of the P(X_i)'s and P(Y_i)'s by arbitrarily tiny absolute amounts (say, you tweak each one by a different quantity that's always less than one in a googolplex), while drastically affecting the decay rate. So in order to get a handle on expected utility, you need to get a handle on the decay rate of all the probabilities of all the exotic hypotheses in question as a function of their reward.
Realistically, given that there are far more exotic hypotheses to deal with than just the X_i's and Y_i's, of infinitely dizzying variety and character, there's no computationally finite way to do this! Some of them will even involve explicitly uncomputable things, like God promising you BB(30,000) utils for doing something.
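To put toy numbers on the decay-rate sensitivity (the figures below are my own stipulations, not anything from the post): nudging a handful of far-out hypotheses by 10^-120 each, an absolute change you could never hope to measure, barely touches the total probability mass but shifts the expected utility by millions of utils, swamping the one util at stake under O.

```python
from fractions import Fraction

# Toy version of the kitchen model: the exotic part of
# E[U(go to kitchen)] is sum_i i * P(X_i).
scale = Fraction(1, 10 ** 20)
baseline = {i: scale / 2 ** i for i in range(1, 200)}   # fast geometric decay
exotic_eu = sum(i * p for i, p in baseline.items())     # roughly 2e-20 utils

# Nudge 50 far-out hypotheses upward by 10^-120 each (make the nudge as
# small as you like; only the product of nudge and reward matters).
# Total added probability mass: 50 * 10^-120, which is utterly negligible.
nudge = Fraction(1, 10 ** 120)
perturbed = dict(baseline)
for i in range(10 ** 125, 10 ** 125 + 50):              # rewards of ~10^125 utils
    perturbed[i] = nudge
exotic_eu_perturbed = sum(i * p for i, p in perturbed.items())

print(float(exotic_eu))                          # ~2e-20: negligible
print(float(exotic_eu_perturbed - exotic_eu))    # 5,000,000 utils, dwarfing
                                                 # the 1 util at stake under O
```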
I don't see what's wrong with saying that, though we can't enumerate all the available options, it seems like your starving to death would be bad for your capacities, and thus bad for the odds of realizing arbitrarily valuable scenarios overall.
That shouldn’t matter. For one, the exotic hypotheses might outweigh the utility of anything I might realistically hope to do if I do or don’t starve to death. For another, the exotic hypotheses can affect the probability that I starve to death depending on whether I go to the kitchen or not (they’re competing with the ordinary hypothesis O!), thus they’re already incorporated in this kind of thinking.
This piece resonated with an argument I’ve been developing about epistemic stakes: if certain discoveries about the fundamental nature of reality could radically change what counts as “good” or how we should live, then even tiny chances of making those discoveries might outweigh more certain but smaller goods. I’ve sketched it here if anyone's curious:
https://heatdeathandtaxes.substack.com/p/find_purposeexe
I’m interested in how much your theoretical conclusions here have really “sunk in” to your mind at the level of practical reasoning.
Suppose a genie really did appear to you and offer to extend your life by a googolplex years with one-in-a-quadrillion probability, and otherwise kill you instantly.
Would you find it easy to accept the offer, do you think? Or do you still have animal instincts yet to be overcome by philosophy which you think might actually carry the day here, albeit incorrectly?
Definitely still have instincts, overall judgment unclear.
(I didn't finish reading to this point, for full disclosure.)
I’m a utilitarian in spirit, but doesn’t fanatical utilitarianism always falter on the requirement of quantifiability and pure additivity?
I would sooner allow an arbitrarily large number of people (7 billion, 100 billion, 300 billion, 10 to the 100th power, whatever) to each get pricked by a small needle than let one person get tortured to death.
That's not related to fanaticism. You could be a fanatic or not and hold that judgment or not (full disclosure, I don't think that judgment is defensible). Fanaticism is about risk, not about comparing guaranteed outcomes.
For more on why I reject your judgment, see this article (but replace shrimp torture with "prevent n people from getting dust specks in their eyes").
https://benthams.substack.com/p/the-staggeringly-strong-case-that
A high-quality, thorough post as usual. I'll set a reminder to go through this line by line with throwaway arguments when I wake up.
Most of this post is just reiterating that expected value is defined as the mean of a distribution, not the mode. By presenting thought experiments in which we enjoy complete dissociation from the consequences of the problem, the post essentially lets us pretend we can make the decision an infinite number of times, in which case clearly the mean is what we care about.
In the real world, as you point out in Section 4, we often only get to make decisions a handful of times, or perhaps even once. The more important the decision, the less frequently we tend to be able to make it, and the more the actual outcome dominates our life. In other words, the modal outcome becomes increasingly important and the tails decreasingly important.
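A quick simulation of the mean-versus-mode point (the gamble below is my own invention, purely for illustration): a one-in-a-million shot at two million utils has twice the expected value of a sure single util, but the outcome a one-shot decider almost certainly experiences is zero, and the sample average only creeps toward the mean over an enormous number of repetitions.

```python
import random

random.seed(0)

P_WIN, PAYOFF = 1e-6, 2_000_000   # mean = 2 per play, modal outcome = 0
SURE_THING = 1                    # mean = 1, modal outcome = 1

def play_gamble():
    """One play of the long-shot gamble."""
    return PAYOFF if random.random() < P_WIN else 0

# One-shot: the gamble has twice the expected value of the sure thing,
# but the outcome you almost certainly walk away with is 0.
print(play_gamble())

# Only with many repetitions does the realized average approach the mean of 2.
for n in (10 ** 3, 10 ** 5, 10 ** 7):
    average = sum(play_gamble() for _ in range(n)) / n
    print(n, average)
```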
Another commenter asked about the St. Petersburg paradox - indeed, this is what you have to solve to argue for expected value fanaticism in the real world, and you haven't addressed it at all. An EV fanatic like Sam Bankman-Fried says to never stop flipping the coin, maximizing EV at the cost of an ever-increasing chance of ruin, which is basically the same argument you are using here to say you would choose ever-decreasing odds of saving ever-increasing magnitudes of value.
>As an analogy, there’s a pretty popular—and similar—objection to average utilitarianism. Average utilitarianism says that one should maximize average utility. But this implies that when deciding to have a kid, your decision should be majorly affected by the number of happy people in ancient Egypt and distant galaxies (for if far-away people are very happy, even a happy child will lower the average).
This isn't a defect of the theory; it's a personal objection that one doesn't like one of the entailments of the theory. A real objection would be that utils aren't measurable or averageable, which would render the theory inert.
> But if some weird result follows from a specific and weird mathematical property of certain infinite gambles, then we shouldn’t generalize it to other sorts of gambles.
You can apply this principle to any objection by taking the mathematical property of the specific case and isolating it out as a "weird result."
It's like there's an appeal here that we can do weird metaphysics even though it's not applicable to real life, but we can't do weird infinity metaphysics because it's not applicable to real life. Unless we talk about the aleph-whatever number of people you should think exist given your existence. And somehow physics says an infinite universe is more plausible than not. These views, which you have previously expressed and probably presently jointly hold, do not cohere with one another.
The article is a beautiful reductio ad absurdum of its own premise. The entire argument hinges on taking a single, overly simplistic rule—expected utility maximization—as the ultimate criterion for rational decision-making. That's not a serious theory. Frankly, the issues the author grapples with are well-known, elementary problems in decision theory that show precisely why that single rule is flawed.
> The entire argument hinges on taking a single, overly simplistic rule—expected utility maximization—as the ultimate criterion for rational decision-making.
This article does the exact opposite of that, at rather insane length.
How exactly? The whole thing is an insanely long defence of fanaticism/expected utility maximisation...
Your comment claimed his argument hinges on taking expected utility maximization as a rule and then deriving fanaticism. His article is instead about how a whole bunch of completely different, much weaker and more plausible principles than expected utility maximization imply fanaticism.
I'm going to apologize that my life situation doesn't allow me the focus to read the whole piece with the attention it deserves — and thus I should not ideally be posting a comment! — but am I correct in thinking that, here, you're agreeing to something like Sam Bankman-Fried's famous (infamous!) answer to the St. Petersburg question: that he'd continue to flip the coin ad infinitum? (Insofar as I'm asking a question that doesn't even make sense in this context, I apologize.)
No I don't think that's right. If you keep flipping the coin forever you're guaranteed to get nothing, so you shouldn't do that!
That sounds like a very good question he asked, though! From an expected value maximization/fanaticism standpoint, when exactly should you stop in a martingale/St. Petersburg situation where you are repeatedly offered positive expected value bets, such as $2^n for every tails you get (and losing it all at the first heads)? It seems like fanaticism keeps you playing indefinitely, because, invoking only expected value (which is always positive), it cannot tell you the step at which you should walk away.
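For what it's worth, here is the arithmetic of the pre-committed "stop after k tails" strategies under one literal reading of that setup (fair coin, stake doubling on each tails), plus a variant with a 2.1x multiplier of my own stipulation so that each flip is strictly EV-positive: expected value is flat or growing in k, but the chance of walking away with anything collapses like (1/2)^k, which is the sense in which never stopping guarantees you nothing.

```python
from fractions import Fraction

def strategy_stats(k, multiplier=Fraction(2)):
    """Pre-commit to stopping after k tails, starting from a stake of 1.
    You walk away with multiplier**k iff you see k tails in a row
    (probability (1/2)**k); any earlier heads leaves you with 0."""
    p_win = Fraction(1, 2) ** k
    expected_value = p_win * multiplier ** k
    return p_win, expected_value

for k in (1, 5, 10, 50):
    p_fair, ev_fair = strategy_stats(k)                 # exact doubling
    _, ev_plus = strategy_stats(k, Fraction(21, 10))    # 2.1x per tails
    print(k, float(p_fair), float(ev_fair), float(ev_plus))

# With exact doubling, every stopping point has EV = 1, so each extra flip is
# EV-neutral. With any multiplier above 2, EV grows without bound in k, yet
# the chance of ending up with anything still shrinks like (1/2)^k, and the
# "never stop" policy yields 0 with probability 1.
```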