I think expected utility theory is sound. I’m ashamed to admit I just choose to selectively ignore it when tiny probabilities of great value suggest I do something really unintuitive. I do this because I’m unreasonable.
For example, I find Pascal’s Wager very compelling and think I should profess my belief in a god on the off chance he exists and cares about what I think. Why not? But I’d feel like such a phony, and would be so badly misrepresenting myself that I just can’t. So I admit, by not accepting god on spec, I’m simply being unreasonable.
But I think fanatics (people who say they adhere to tiny probability/great value EUT) also selectively ignore it. They just won’t admit it.
For instance, if you believe there’s even a slim chance that all people go to heaven when they die (a perfectly good place of infinite happiness), and you concede that life on earth is brutish and imperfect, then it is in humanity’s overwhelming best interest, from an expected value standpoint, to painlessly exterminate ourselves.
I admit this is a strong argument from an EUT perspective. But I choose to be unreasonable and reject it. I think most fanatics reject it, too (they’re still with us, after all), but they won’t admit to their unreasonableness in doing so.
If everyone goes to heaven when they die, isn’t it in humanity’s interest, from an expected value standpoint, to maximize the number of humans there are? Wouldn’t exterminating ourselves preclude bringing about many more infinitely good lives?
Good question. To maximize there, I think what you want is a machine that spits out newborns with remarkable efficiency, then immediately euthanizes them to ensure minimal earthly suffering and maximum divine enjoyment. Maybe AI can help us get there.
Lol liking this before reading it because the whole post could just be this title; it’s such a hilarious move people make.
Been waiting for someone to say this — it’s so jarring!
A man of the people.
Weird to be lecturing the LessWrong crowd on scope sensitivity and decision theory.
I'll say it: "I notice I am confused".
What's going on?
LessWrong users are unusually good at scope sensitivity and decision theory, which means 10% of us understand the basic concepts rather than the population-wide base rate of 1%.
People have no choice but to ignore low probabilities if they want to live a normal life. There's a chance that if I walk out the front door I'll get hit by a stray bullet or a meteorite or a 20-ton roll of steel that fell off a truck. The most probable of those is at the one-in-a-million level. That level of risk does not in the slightest deter me, and someone who would be deterred by it has a very good chance of being hopelessly neurotic.
1. The probability that I'll get hit by a stray bullet tomorrow morning is not remotely close to the probability that insects are conscious.
2. If I do the expected utility calculation on whether I should go outside tomorrow morning, I am pretty confident that the answer will be "yes". This scenario is not a puzzle for expected utility theory because expected utility theory agrees with intuition.
There was a line in the essay to the effect that people erred in ignoring small risks. Point 1 was a rejoinder to that position. The essay poses purely abstract quandaries and propositions that are fun, sort of, to ponder, but have only the slenderest of connections to the real physical world in which most of us reside.
Re: Insect consciousness. I think the jury is still out on whether a brain the size of the period at the end of this sentence can experience anything that might be called "consciousness," as in "I think therefore I am."
When I was a child, I was fairly certain they did, and announced one day at about the age of ten that I would trade my life to save a billion hypothetical insects. I have since retreated from that position.
The problem is that there is nothing that forces you to value a 1% probability at 1%. Plenty of people live rich lives ignoring 1% probabilities of ruin, and if worrying about that 1% would completely change their life in a more dramatic way than they would like, gambling that they will end up in the 99% is not unreasonable. It’s much easier to impugn this reasoning when it’s a repeated game, but if it’s a single unknown conjecture, it’s harder to do so.
As an example, the market values assets differently than their real-world probability distribution would dictate.
The reason the market values assets differently from EV (in dollars) is precisely because of the probability of ruin though. Assets which are less or even negatively correlated with the market are more expensive, precisely because the marginal value of a dollar grows as the value of your portfolio falls. The price of gold is a testament to how much people care about small probabilities of ruin.
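A quick numerical sketch of that mechanism (made-up numbers, with log utility standing in for any concave utility): two assets with identical expected dollar payoffs, where the one that pays off in crashes adds more expected utility precisely because its dollars arrive when wealth is low.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two equally likely market states: the portfolio doubles or halves.
market = rng.choice([2.0, 0.5], size=n)

# Two assets with identical expected dollar payoffs (1.1 each):
hedge = np.where(market < 1.0, 1.5, 0.7)  # pays off in crashes
beta = np.where(market < 1.0, 0.7, 1.5)   # pays off in booms

print(hedge.mean(), beta.mean())  # ~1.1 and ~1.1 in dollars

# With concave (log) utility over total wealth, the hedge adds more utility,
# because its dollars arrive when wealth, and hence marginal utility, is low.
wealth = 10.0 * market
print(np.mean(np.log(wealth + hedge) - np.log(wealth)))  # ~0.15
print(np.mean(np.log(wealth + beta) - np.log(wealth)))   # ~0.10
```

So an expected-utility maximizer will pay more for the hedge even though its expected dollar value is the same, which is exactly the premium on uncorrelated assets described above.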
That’s a fair point. Said another way, market implied probabilities tend to be higher for tail events than real world probabilities. But there is nothing forcing that to be the case. The market is in turn undervaluing the median event. How do we adjudicate who is fundamentally right, if an investor wants to value a tail risk event lower than its actual probability? By what standard is that person wrong? One possible standard is money - if it’s a repeated game, getting the probabilities wrong costs you money (possibly rationally, as is the case when you are willing to overpay to hedge tail events). But what if it’s not a repeated game? By what standard is it incorrect to use something other than objective probability (if there even can be said to be such a thing in these Pascal’s wager examples)?
I am reminded of the Manhattan Project. As they were preparing to test the implosion design, Enrico Fermi calculated that there was a low but nonzero probability that the explosion would set fire to the atmosphere, which led to a serious discussion. It was decided that the probability was low enough to be negligible.
The question isn't about relative magnitudes, I don't think; it's more about whether there's an adversarial dynamic. Straightforward EU maximization is a bad idea against an adversary: you will get got. You have to do something else.
In particular you have to think about how much power the adversary has to push around the behavior of a naive EU maximizer by selectively revealing information. Which is extremely easy if you can assign arbitrarily big utilities to arbitrary events.
In the Pascal's mugging there's obviously an adversary. So also in Pascal's wager (they want to convert you). Note adversary isn't necessarily evil or even bad faith (a good Christian may try to convince you via Pascal's wager).
So also in utilitarian arguments from effective altruists (they want your money + to grow the movement). Sometimes those arguments are right, of course, but people are suspicious that if their credence were smaller, the utilities would have been bigger.
That’s a misinterpretation of Pascal’s wager. Pascal meant it as a rebuttal of the then-current argument that people did not choose religion for rational motives, demonstrating that the reason behind the lack of acceptance of religion lies in people’s passions, not in reason.
Extremely funny that at the end you gave a cutoff (1/10000) below which you should go ahead and ignore the risk anyway!
That’s not what I said!
I figured you were making a joke at the end. Great post in any case.
Thanks!
It wasn't a joke, to be clear. I was just saying that even if you discount risks, discounting risks of like 1% is crazy, and I identified 1/10,000 as maybe the point where it's fine to discount if you are going to discount, which I don't think you should.
I’ve always thought of the problems in Pascal’s wager/mugging as stemming from the infinitude of the potential value. The infinitude makes it the case that no matter how low the probability, the wager/mugging is always worth it.
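Written out roughly, that’s the whole problem in one line:

$$\mathrm{EV} = p \cdot \infty + (1 - p)\cdot(-c) = \infty \quad \text{for any } p > 0 \text{ and any finite cost } c,$$

so no matter how far you drive the probability down, the calculation never flips.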
Seems that a lot of philosophical arguments that utilize infinite quantities in their mathematical or probabilistic reasoning get wonky really fast.
Related aside: In general I think we often do not take risk into account properly. If 1000 people take the wager/mugging, you can have the average payoff for the group be arbitrarily high, while nearly every individual just loses everything. Is it really rational for them all as individuals to take it? Or are we applying the wrong model? See ergodic vs non-ergodic scenarios/games.
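A small simulation of that aside (all numbers hypothetical): a bet whose per-person expected value is hugely positive, yet where almost every individual who takes it simply loses their stake.

```python
import numpy as np

rng = np.random.default_rng(0)
people = 1_000
stake = 100        # hypothetical amount each person hands over
win_prob = 1e-3    # hypothetical tiny chance of the payout materializing
jackpot = 1e9      # hypothetical enormous payout

# Ensemble (expected-value) view: the bet looks great.
ev_per_person = win_prob * (jackpot - stake) - (1 - win_prob) * stake
print("expected payoff per person:", ev_per_person)  # ~ +999,900

# Individual view: simulate what actually happens to each of the 1000 people.
won = rng.random(people) < win_prob
print("lost their stake:", int((~won).sum()), "out of", people)  # ~999 of 1000
```

Whether it is rational to average over the ensemble or over your own single trajectory is exactly the ergodic vs. non-ergodic question.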
Perhaps I'm only speaking for myself, but one of the messages I get from Pascal's wager/mugging is not only that the mugger and I are having a quantitative disagreement about whether the value at hand is 1 or 100 or 10 million, but that the thought experiments are pre-asserting qualitative equivalencies and context that make the experiment relevant, yet would have to be proven in order to make the experiment generalizable to others, or to me, or to real-world decisions.
Specifically, the assumption that the utilities being implicitly compared in the mugging (me losing 100 bucks of utility, the mugger gaining an infinite amount) are actually comparable. In addition to whatever level of suspicion I have about whether his utility is actually infinite, I have (very high) levels of suspicion about the mechanism by which the money could actually accomplish that, or any number of other things. If the mugger says "give me all 10 of your apples, so that I may have infinite oranges," then my rejection of the deal (or of Pascal's mugging as a useful type of thought experiment) is only partially about the numbers.
Same with the heaven thing: if you can set all the parameters for a thought experiment ahead of time you can get whatever results you want, within the thought experiment. I always assumed that was part of any Pascal-skepticism? Beyond the limited array handling, I mean....
Is voting a Pascal's mugging?
I think the “correct” decision in Pascal’s Mugging is to give him the wallet, and a lot of the objections to it amount to “that’s ridiculous, that can’t be correct.”
What if I said that I am a God hanging out online for fun, and if you ever give in to a Pascal's Mugging from someone else, I'll cause even more suffering than they will, more than inversely proportional to how much less you trust my claim?
Pascal's Mugging evokes an incoherency: a mere sequence of words can make you perform an arbitrary action, any action at all. The idea that you can pump up the outcome arbitrarily high without correspondingly draining the probability arbitrarily low means you just listen to whoever talks to you. I'm not saying I have a stronger argument than "obviously that can't be right", but it does seem pretty obvious that if a token argument that fits in a single sentence can convince you to do absolutely anything, then you are not operating well.
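One rough way to cash out "correspondingly draining the probability" (a sketch, not anything the mugger concedes): require the prior on a claim to shrink at least as fast as the payoff it names grows, so that

$$p(\text{claim of payoff } U) \le \frac{c}{U} \;\;\Longrightarrow\;\; p \cdot U \le c \quad \text{for every } U,$$

and then no sentence can buy unbounded influence just by naming a bigger number.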
So if I were to write a comment right now saying that I am secretly God and I will generate googolplex utility if you Venmo me $20, would you do it?
(I'm not going to actually write that comment because I would be lying and I don't think it's good to lie. But I want to know what would happen if I did write it.)
How do you know which mugger is the real one? Eventually you end up like the guy in The Mummy, so in the end you need to decide which low-probability negative infinity to ignore.
https://youtu.be/0mHoAKSRpgw?t=53 (note, we don't have the convenience of instant feedback like the guy had)
It's correct if you adopt heuristics as metaphysical principles. But you can avoid the problem and many other philosophical dilemmas by just not doing that.