(Note to readers of the blog: I’m at EA student summit in London. If you’re here, come say hi!)
1 Introduction
Money pump arguments purport to show that a position is wrong by demonstrating that it implies one should make an irrational pattern of trades. For example, suppose that someone values apples more than bananas, bananas more than oranges, and oranges more than apples. They have circular preferences!
A money pump argument can show their position to be irrational. You could take their money by offering to trade them a banana for their orange at the cost of a penny, then an apple for the banana, and then an orange for the apple. Now they're back where they started, and down a penny! If you repeat this sequence of trades, they'll endlessly lose money trading in circles. A rational set of preferences presumably wouldn't result in one spending infinite money moving in circles.
Now, there are money pumps that purport to show all sorts of conclusions in anthropics. For a while, I was wary of them. The core problem is that money pumps get massively complicated very quickly. I agreed with Joe Carlsmith that the right view on money pumps is highly non-obvious, and so they’re a bad way of deciding on an anthropic theory.
I no longer think that. I now think that the self-indication assumption (SIA) has far and away the best response to money pumps. There’s only one money pump SIA seems vulnerable to, and all alternatives are vulnerable to the same money pump and worse! Furthermore, SIA has a plausible reply to that money pump—other views will probably have to make the same move as SIA to avoid the parallel money pump. However, when they make such a move, they end up opening themselves up to other money pumps. Even if they can somehow avoid that, each of the non-SIA views is vulnerable to money pumps of their own.
For those who don't know, the self-indication assumption is the view that your existence is evidence for theories on which there are more candidates for being your present self. More precisely, a theory on which there are X times more candidates for being your present self makes your existence X times more likely.
This sounds pretty abstract, so let me give you some examples to illustrate it. The first is called the sleeping beauty problem. On Sunday, a fair coin is flipped. If it comes up heads, you're woken up just once, on Monday. If it comes up tails, you're woken up twice: once on Monday, after which your memory is erased, and again on Tuesday. So the basic idea is that if the coin comes up heads, you wake up once with no memory of any previous days, while if it comes up tails, you wake up twice with no memories of previous days.
Now, after waking up with no memories, what should your credence be in the coin having come up heads?
SIAers answer: 1/3. After all, if the coin comes up tails, there are twice as many wakeups. So there are twice as many candidates for being your present self. Thus, you should think tails is twice as likely as heads.
Or to give another example, suppose that a coin is flipped. If it comes up heads, one person is created. If it comes up tails, ten people are created. After being created by this process, SIA holds that you should think that it’s ten times likelier it came up tails than heads. This is because if the coin comes up tails, there are ten people that you might be.
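For concreteness, here's SIA's update rule as a quick Python sketch. The helper `sia_posterior` is my own illustrative name, not anything standard: it just weights each hypothesis' prior by the number of candidates for being your present self and renormalizes.

```python
def sia_posterior(priors, candidates):
    """SIA update: weight each hypothesis' prior by the number of
    candidates for being your present self, then renormalize."""
    weights = [p * n for p, n in zip(priors, candidates)]
    total = sum(weights)
    return [w / total for w in weights]

# Sleeping beauty: heads -> 1 waking, tails -> 2 wakings.
# The posterior on tails comes out to 2/3.
beauty = sia_posterior([0.5, 0.5], [1, 2])

# The coin-and-creation case: heads -> 1 person, tails -> 10 people.
# The posterior on tails comes out to 10/11.
ten_people = sia_posterior([0.5, 0.5], [1, 10])
```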
A slight clarification: SIA doesn’t have to hold that in some deep metaphysical sense it’s possible that you could have been other people. Presumably I couldn’t really have been you. It just holds that if there are more people that, for all you presently know, you might currently be, that makes your present existence likelier. For a longer defense of SIA, see section 3.
So now that we’ve explained what makes SIA distinct, let’s see the money pumps its opponents are vulnerable to.
2 Money pumps for single halfers
Let's return to the sleeping beauty problem, where you're woken up once if the coin came up heads and twice if it came up tails. SIA is basically the only view that holds that thirding is correct. If any view holds thirding is right, it must think that the existence of more candidates for being your present self makes the existence of your present self likelier. But then it will inevitably be SIA or something similar.
So anyways, in the sleeping beauty problem, imagine that on the first day, Beauty will be informed that it’s presently day one. After being informed that it’s day one, what odds should she give to the coin having come up heads?
There are two kinds of answers people give. Some people answer: 2/3. These people are called single-halfers. The rationale is that Beauty previously believed that heads and tails were equally likely. Now, if the coin had come up tails, it could presently be either day, while if it came up heads, it was guaranteed to be the first day—after all, there would only be one day.
Thus, Beauty previously believed that heads and tails were equally likely. Then she learned something that was twice as likely given heads than tails. Therefore, she should now think that heads is twice as likely.
The other view is called double-halfing. Double-halfers think that, because whether the coin came up heads or tails, Beauty would be guaranteed to be awake on the first day, she gets no evidence either way from being awake on the first day.
The money pump for single-halfers is pretty straightforward. On Sunday, before the coin has been flipped, offer Beauty a bet on tails at 51:49 stakes: she loses 49 cents if the coin comes up heads and wins 51 cents if it comes up tails. Thinking tails has probability 1/2, she takes the bet. Then, after she's informed that it's Monday, offer to cancel the bet for 2 cents. Because she now thinks there's a 2/3 chance the coin came up heads, the bet's expected value has turned negative, so she has every reason to pay to cancel. Either way, Beauty is now guaranteed to lose 2 cents. You could obviously scale the numbers to make her lose an arbitrarily large amount of money.
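To make the arithmetic concrete, here's a quick sketch in Python. I'm assuming one concrete set of stakes—a bet that wins 51 cents on tails and loses 49 cents on heads, with a 2-cent cancellation fee—though any similar numbers work:

```python
def expected_value(p_heads, payoff_heads, payoff_tails):
    """Expected value of a bet given a credence in heads."""
    return p_heads * payoff_heads + (1 - p_heads) * payoff_tails

# On Sunday, Beauty's credence in heads is 1/2, so the bet looks good.
sunday_ev = expected_value(1/2, -0.49, 0.51)   # +1 cent

# On Monday, the single-halfer's credence in heads jumps to 2/3,
# so the very same bet now looks bad...
monday_ev = expected_value(2/3, -0.49, 0.51)   # about -16 cents

# ...and she'll happily pay 2 cents to cancel, guaranteeing a loss.
loss_if_she_cancels = -0.02
```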
Thus, single-halfing is out on account of its implication that you should hemorrhage money for no reason!
3 Double-halfing
Double-halfing—which says Beauty should think heads and tails have equal probabilities even after learning it’s presently day 1—has a money-pump of its own.
Suppose that Beauty wakes up on Monday. After she is put to sleep Monday night, a coin is flipped. If it comes up tails, she is woken up again on Tuesday with no memories. Then, at the end of Tuesday, a second coin is flipped just for funsies—it has no effect on anything. After waking up, what should Beauty’s credence be in the following proposition: the coin that will be flipped at the end of today will come up heads.
Now, being a halfer, Beauty will think that there's a 1/2 chance that she only wakes up once. For her to only wake up once, the first coin has to come up heads, so she thinks there's a 1/2 chance that it's presently day one and the coin that will be flipped at the end of today will come up heads. But she also thinks there's only a 1/8 chance that it's currently day two and the coin that will be flipped at the end of today will come up heads. After all, that requires three events to occur, each with 1/2 probability: the first coin comes up tails, the present day is day two, and the second coin comes up heads.
Now, this is weird enough on its own. You should think the odds of a fair coin coming up heads are 1/2 if it hasn't been flipped yet and you have no special information about it.
But this also sets up a money pump. After all, once Beauty learns which day it presently is—whether day one or day two—double-halfers hold that she should think there's a 1/2 chance that the coin flipped at the end of that day will come up heads.
This makes for an extremely simple money pump! Offer Beauty a bet before she learns what day it is that pays out 46 cents if the coin that will be flipped at the end of today comes up heads, but costs 54 cents if it comes up tails. Naturally, Beauty thinks there’s a 5/8 chance that the coin will come up heads, so she takes the bet. Then, after she learns the present day—whether it’s day one or day two—she’ll be back to thinking that there’s a 1/2 chance of the coin coming up heads. So now offer to cancel the bet for one cent. Naturally, she takes the deal. Thus, Beauty is guaranteed to lose. Once again, by changing the numbers, you can make Beauty lose by an arbitrarily large amount.
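The arithmetic behind the 5/8 figure and the pump can be checked in a few lines (a sketch, using the payoffs from the text):

```python
# Day one and the coin flipped at the end of it lands heads: the halfer
# gives this probability 1/2.
p_day1_heads = 1/2
# Day two requires three 1/2-probability events: first coin tails, it's
# day two rather than day one, and the second coin lands heads.
p_day2_heads = 1/2 * 1/2 * 1/2
p_heads_today = p_day1_heads + p_day2_heads          # 5/8

# Before learning the day: the bet pays 46 cents on heads, costs 54 on
# tails. At 5/8 credence in heads it has positive expected value.
ev_before = p_heads_today * 0.46 - (1 - p_heads_today) * 0.54

# After learning the day, her credence drops back to 1/2, the bet turns
# negative, and she'll pay a cent to cancel: a guaranteed 1-cent loss.
ev_after = 1/2 * 0.46 - 1/2 * 0.54
```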
Now, there is one view that can appear to have an avenue of escape. According to a view called compartmentalized conditionalization, upon having some experience, you should treat the relevant evidence as simply being that the experience is had. So if, for instance, your life consists of staring at a red wall for five minutes, theories are likelier in proportion to the probability, conditional on their truth, that someone would have that exact experience of staring at a red wall for five minutes.
If you adopt compartmentalized conditionalization, you might be able to get out of this money pump. CC holds that if you’re created in this scenario, whether you should treat this as evidence for tails depends on whether you have the same experience across the two days you wake up. If your experience is the same across the two days, then CC holds that double-halfing is correct, while if it’s not, CC holds that thirding is right.
What about the money pump in the scenario where you have the same experience across the two days? Can't you still run the above money pump against that view? No, because CCers can hold that if there are two clones of you, there's no fact of the matter about which one you presently are. So if, for instance, someone makes a clone of you in California and another in Paris, there's no fact of the matter about which one you are. I think this position is insane, but it gets out of money pumps.
Fortunately, it leaves you open to another money pump. Suppose that a coin gets flipped. If it comes up heads, two people are created with identical experiences—both have the experience of being in the same color room. If it comes up tails, both people have the same experience for the first five minutes of their lives, but have different experiences after five minutes—perhaps then the lights come on and they see that they’re in different rooms.
After five minutes, CCers will think that tails is likelier than heads. After all, tails predicts more total experiences are had, so it's likelier that your present set of experiences would be had. But before five minutes, CCers will think heads and tails are equally likely. So, during the first five minutes, offer each person a bet on heads: they get 55 cents if the coin came up heads and lose 45 cents if it came up tails. Then, after five minutes, they think tails is twice as likely as heads. So, offer to cancel the earlier bet for five cents. CCers will take this, and thus predictably lose.
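Running the numbers from this pump (a sketch with the payoffs above):

```python
def ev(p_heads, win_heads=0.55, lose_tails=0.45):
    """Expected value of the bet on heads at a given credence."""
    return p_heads * win_heads - (1 - p_heads) * lose_tails

# During the first five minutes, heads and tails look equally likely,
# so the bet has positive expected value and the CCer takes it.
ev_early = ev(1/2)    # +5 cents

# After five minutes, tails looks twice as likely as heads, the bet
# turns sharply negative, and paying 5 cents to cancel looks good --
# locking in a guaranteed 5-cent loss.
ev_late = ev(1/3)     # about -12 cents
```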
Now, CCers can try to get out of this by saying that because the clones' actions are correlated, bets are twice as valuable when you have a clone. Thus, even after five minutes, you should bet on heads and tails at equal odds, because if the coin came up heads, your bet is twice as influential. But this doesn't make sense or help with anything. You only care about yourself, not your clone. The fact that your action helps your clone is irrelevant.

Second, this would mean that CCers should, before the flip, bet on heads at twice the odds they bet on tails. However, after five minutes, they no longer value heads twice as much as tails, but instead value them equally. So you can still set up a money pump where they pay to cancel their earlier bet.
The following chart shows the money pumps that inevitably lie in wait for the halfer. There is no escape. Halfers better start giving away their money now—ideally by purchasing a paid subscription—because we thirders are coming for it.
CC and most versions of double-halfing are also vulnerable to another money pump, but that one isn’t as general so I decided not to include it.
4 But what about Korzukhin-style money pumps?
The philosopher Theodore Korzukhin came up with a clever money pump against thirding. For a while I thought it was one of the best objections to thirding. I no longer find it persuasive. The scenario I’ll give is a bit different from Korzukhin’s but the basic idea is the same.
The basic idea: a coin is flipped. If it comes up heads, you wake up just on Monday. If it comes up tails, you wake up on Monday and Tuesday, each time with no memories of previous days. On Sunday, you're offered a bet on the result of the coinflip: you lose 5 dollars if it comes up tails and get 6 dollars if it comes up heads. Obviously you take the bet.
Then, each day after you wake up, there’s a button you can press labeled “cancel.” If you press the button each day you wake up, you cancel the bet you made on Sunday but you lose ten cents. So if the coin came up heads, you’ll only end up pressing it once to cancel the bet, while if it came up tails, you’ll end up cancelling the bet by pressing it twice.
After waking up, thirders hold that you should think at 2/3 odds that the coin came up tails. Your decisions to press the button are correlated—if you press it the first day, you’re guaranteed to also press it the second day. Thus, there’s a 2/3 chance that your action to press the button commits you to a plan that results in you saving money. It seems like thirders should, therefore, pay to cancel the bet. But this is a money pump; they pay to cancel a bet that they made and are guaranteed to lose money.
As I'll show, I think this money pump, far from being an objection to thirding, is a point in favor of thirding.
First of all, I think that thirders have a way out of this money pump. Suppose you have two actions that are correlated, such that either action individually guarantees that you’ll take the other action. In this case, I think each action should only be given credit for half of the value that you get from the sequence of actions. If there are two buttons where if you press them both you’ll get 4 dollars, even if the button-pressings are correlated, they should each only count for two dollars.
In this case, you can get out of the money pump, because pressing each button only gets you $2.50 with 2/3 probability and has a 1/3 chance of costing you $6. Cancelling the Sunday bet at a cost no longer pays!
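Here's that escape worked through in Python (a sketch, ignoring the ten-cent cancellation fee as the text does; the variable names are mine):

```python
p_tails = 2/3               # thirder credence after waking
saved_if_tails = 5.00       # cancelling avoids the $5 loss on tails
forgone_if_heads = 6.00     # cancelling forfeits the $6 win on heads

# Under tails there are two correlated presses; each press gets credit
# for only half the value of the plan it commits you to.
credit_per_press_tails = saved_if_tails / 2          # $2.50

ev_per_press = (p_tails * credit_per_press_tails
                - (1 - p_tails) * forgone_if_heads)
# ev_per_press comes out to -1/3 of a dollar: negative, so the thirder
# declines to cancel and escapes the pump.
```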
One should accept this even if they’re not a thirder. To see this, suppose that a coin is flipped on Sunday. If it comes up heads, you wake up in a red room on Monday, in a green room on Tuesday, and in a red room on Wednesday. You never have any memories of the previous days. If it comes up tails, you wake up in a red room on Monday, a green room on Tuesday, and then a green room on Wednesday. Once again, you never have memories of previous days.
Suppose that on Sunday, you’re offered a bet where you lose 49 cents if the coin comes up heads but get 51 cents if it comes up tails. You should obviously take the bet. Now suppose that after you wake up, there’s a button in every red room. The button is labeled “cancel.” If you press it each time you’re in a red room, you pay a cent and cancel the bet.
After waking up in a red room, you should obviously think at 2/3 odds that the coin came up heads. So it seems you should press the button—it's in your interest to cancel a bet that has a 2/3 chance of losing 49 cents and a 1/3 chance of winning 51 cents. Thus, alternatives to SIA, so long as they agree that waking up in a red room should lead you to think at 2/3 odds the coin came up heads, are vulnerable to money pumps unless they adopt the same strategy as SIA.
There’s another class of cases that draws out the problem even more clearly. It shows that this is actually an even bigger problem for non-SIA views than for SIA.
Suppose that you flip a coin. If it comes up heads, one person gets created. If it comes up tails, two people get created. But now suppose that you keep doing this once every million years for many trillions of years. Each time you flip a new coin and create one person if it comes up heads and two if it comes up tails.
Now, both SIA and alternatives will agree that upon being created, you should think it’s twice as likely the coin came up tails as that it came up heads. After all, ~2/3 of people get created by tails coin flips.
This means that there’s a case where non-SIA views imply the same results as SIA views. We can use this case to establish that they’re vulnerable to the same money pump as SIA.
To see this, let's imagine this same experiment with one twist: the two people who are created in each tails round, despite not being the same person, are always psychologically disposed to make the same bets. They each bet the same way even though they're different people. I'll call them behavioral clones to denote that they behave the same way but are not the same person.
Now, each person can press a button. If they and their behavioral clone both press the button, $4 is deposited into their shared account. Note: $4 in their shared account is as valuable to them as getting $4 individually. However, if they press the button and the coin came up heads, so they have no behavioral clone, then they lose $5.
It seems that halfers should press the button—at least if thirders should accept the Korzukhin reasoning. There's a 2/3 chance that the coin came up tails, and if it came up tails, pressing the button commits them to a plan that gets them $4 of value. But, of course, pressing is irrational—it results in them losing $5 half the time and getting $4 half the time.
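The gap between how the bet looks from the inside and how it performs per coin flip is easy to verify (a sketch with the figures from the text):

```python
# From the inside: anyone created thinks tails is twice as likely,
# so pressing the button looks like a good deal.
subjective_ev = 2/3 * 4.00 - 1/3 * 5.00      # +$1.00

# Per coin flip, though: half the time a lone person loses $5, and half
# the time the pair's shared account gains $4 -- a sure long-run loss.
per_flip_ev = 1/2 * (-5.00) + 1/2 * 4.00     # -$0.50
```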
To make things simple, we can imagine that they spend their money on ducks that they place on the Eiffel tower. The only thing they care about is how many ducks there are. Each duck costs a dollar. So half the time they lose the ability to put 5 ducks on the Eiffel tower and half the time they get 4 ducks on the Eiffel tower.
We can make this a guaranteed money pump by telling people their payouts, erasing their memories, and then allowing them to pay to cancel their bets. Everyone would, behind this veil of ignorance, have an interest in doing so! And because everyone knows the setup, this isn't an illegitimate money pump.
Thus, this money pump is as much of a problem for non-SIA views as for SIA. Non-SIA views have a structurally identical money pump to SIA views, so long as you iterate the situation.
However, things are actually worse for non-SIA views than SIA views. But explaining why is a bit tricky. Let’s first explain the simplest money pump for non-SIA views, and how they avoid it.
Suppose that you recreate the sleeping beauty problem. On Sunday, you offer people a bet where they have to pay $1.20 if the coin comes up heads but get $1.40 if it comes up tails. Then you offer people a bet each day they're awake on whether the coin came up heads or tails: if it came up heads, they get a dollar; if it came up tails, they pay 90 cents.
It would seem like halfers should take the daily bets. After all, they think there's a 1/2 chance the coin came up heads, and if it came up heads they get a dollar. But if they take all the bets—the obviously good one on Sunday and the one they're offered each day after waking up—then they're guaranteed to lose. If the coin came up heads, they get $1 on Monday but lose $1.20 from the Sunday bet, for an overall loss of 20 cents. If it came up tails, they get $1.40 on Sunday but lose 90 cents on both Monday and Tuesday—an overall loss of 40 cents. They can't win!
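Checking the payoffs (a sketch with the stakes from the text):

```python
# Sunday bet: pay $1.20 on heads, receive $1.40 on tails.
# Daily bet:  receive $1.00 on heads, pay $0.90 on tails, taken each waking.

net_if_heads = -1.20 + 1.00            # one waking: down 20 cents
net_if_tails = 1.40 - 0.90 - 0.90      # two wakings: down 40 cents
# Either way, the naive halfer loses money.
```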
However, most people agree that this money pump isn’t very compelling. Halfers have a pretty easy reply, along the following lines. Even though there’s only a 1/2 chance that the coin came up tails, if it came up tails, they’ll make the same bet twice! Thus, even though they only think there’s a 1/2 chance the coin came up tails, they should bet as if they think there’s a 2/3 chance it came up tails, because if they bet wrong when the coin came up tails, they lose twice as much.
The core lesson of this case: when your possible decisions might be correlated, halfers should bet on the scenarios where they're correlated at twice the odds their credence suggests. In the sleeping beauty problem, even though they only think there's a 1/2 chance that the coin came up tails, they should bet on tails at 2/3 odds.
But now let’s apply this core lesson to the earlier case, with a slight modification. A coin gets flipped. If it comes up heads, one person is created. If it comes up tails, two people are created, who each are disposed to bet the same way. The people also share a bank account and regard a dollar in the shared bank account as just as valuable as a dollar had by one of them. This scenario is repeated over and over again—so different coins keep being flipped and people keep being created.
Upon being created, these people should think at 2/3 odds that the coin came up tails—around 2/3 of people in total get created by tails coinflips. But they should also bet on tails at twice the odds they’d bet on heads at, because if the coin came up tails, their actions are correlated. So in total, it seems they should bet on tails at 80% odds.
Not only do they have the same money pumps as thirders, therefore—they have a worse one. Suppose you offer them a bet where they lose 75 cents if the coin came up heads but get 25 cents if it came up tails. It seems they should take the bet! After all, they bet as if there's an 80% chance of tails. But betting this way means that half the time one of them loses 75 cents and half the time they get 50 cents in their shared bank account. On average, every two flips cost them a quarter of a duck!
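Verifying the 80%-odds pump (a sketch; per person, the bet loses 75 cents on heads and wins 25 cents on tails):

```python
# Betting odds: 2/3 credence in tails, doubled weight for correlation,
# which normalizes to betting as if tails has probability 4/5.
betting_odds_tails = (2/3 * 2) / (2/3 * 2 + 1/3)     # 0.8

# From the inside, the bet looks profitable at those odds...
apparent_ev = betting_odds_tails * 0.25 - (1 - betting_odds_tails) * 0.75

# ...but per coin flip: heads means one person loses 75 cents, tails
# means two winners deposit 50 cents total in the shared account.
actual_ev = 1/2 * (-0.75) + 1/2 * 0.50   # -12.5 cents per flip,
                                         # a quarter-duck every two flips
```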
This problem is worse for halfers than thirders. In order to get out of the very simple money pump, halfers have to hold you should bet on tails at twice the odds you actually believe the coin came up tails at. But this results in absurdly betting on tails in the above case at 80% odds.
They have the same result as thirders in the case where you keep creating people with coin flips, as well as another, even worse result. This is thus much worse news for halfers—and non-SIAers—than thirders and SIAers.
Now, earlier I mentioned a way that thirders can get out of this puzzle. They can simply hold that if there are two actions that together get you one payout, then you should only treat each individual action as responsible for half the payout. Can halfers solve the problem by doing this?
Sadly not. If they do this, then they get screwed by the very basic halfer money pump—where you set up the sleeping beauty problem and offer them bets on each day. They can no longer hold that you should bet at twice the odds you actually believe the coin to have come up tails at, and thus no longer have any response to the very simple money pump.
Thus, the money pump that's commonly cited to undermine SIA is an own goal. Not only do SIAers have a plausible response—the money pump is a point in favor of SIA. This is because:

1. By iterating the scenario, you can give halfers the same money pump.

2. Halfers' other commitments result in even greater absurdity in this case—they end up betting on tails at 80% odds, which is much worse than betting at 2/3 odds.

3. If halfers try to copy the thirder response, they get screwed by other money pumps.
5 Conclusion
Here, I've argued that thirders and SIAers are not vulnerable to money pumps. The one serious money pump on the market doesn't work against SIA, and non-SIAers are just as vulnerable to it. Meanwhile, every non-SIA view gets wrecked by some money pump or another. SIA is the only view that doesn't hold you should pointlessly hemorrhage money. That's because it's the only true view.