> "it can sometimes be worth being the type of agent who acts irrationally."

Yeah, I was always frustrated by the LW conflation of rational choice and desirable dispositions. (I once tried explaining it to them, but they weren't very receptive: https://www.lesswrong.com/posts/mpzoBMkayfQnaiKZK/desirable-dispositions-and-rational-actions )

I fail to see how Newcomb's Problem is a real dilemma for decision theory at all. Rather, it's a proper logical paradox that serves as a reductio for one of its premises. It's basically this:

Let x be the payoff from two-boxing and y be the payoff from one-boxing.

P1: x = 1,000

P2: y = 1,000,000

P3: x = y + 1,000

C: 1,000,000 = 0 (P1 and P3 give y = 0, which contradicts P2)
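The inconsistency can be made concrete with a minimal sketch (hypothetical, not from the original comment): if the predictor is perfect, the contents of the opaque box depend on your choice, so P1 and P2 hold but the dominance premise P3 (x = y + 1,000) fails.

```python
def payoff(action):
    """Newcomb payoff with a *perfect* predictor.

    The predictor fills the opaque box with $1,000,000 iff it
    predicts one-boxing; the transparent box always holds $1,000.
    """
    opaque = 1_000_000 if action == "one-box" else 0
    transparent = 1_000
    if action == "one-box":
        return opaque
    return opaque + transparent  # two-boxing takes both boxes

y = payoff("one-box")   # P2: y = 1,000,000
x = payoff("two-box")   # P1: x = 1,000
# P3 (x = y + 1,000) fails: with a perfect predictor, the opaque
# box does not hold the same amount "either way".
print(x, y, x == y + 1_000)
```

Under this modelling choice the predictor's accuracy is what breaks the "same contents either way" assumption behind P3, which is exactly the reductio the comment describes.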

The only plausible resolutions seem to be either that such a predictor is (synthetic a priori) impossible, or that concepts like "choice", "decide" or "option" don't make sense in the face of such a predictor.

“After all, if there’s a demon who pays a billion dollars to everyone who follows CDT or EDT then FDTists will lose out. The fact you can imagine a scenario where people following one decision theory are worse off is totally irrelevant—the question is whether a decision theory provides a correct account of rationality.”

This seems flawed. What decision theory you follow isn't a fact of the world; it's a summary of your behaviour. So if following a given theory is penalized, you'd simply pretend not to follow it.

"Once the predictor has run their course"

What does that mean?

"But you only make decisions after you exist. Of course, your decisions influence whether or not you exist but they don’t happen until after you exist."

I think you are just fighting the hypothetical there. The hypothesis is that you can make a decision before you exist, because the predictor runs a simulation of you that makes that decision.

Heighn says so clearly:

"The point is that your decision procedure doesn't make the decision just once. Your decision procedure also makes it in the predictor's head, when she is contemplating whether or not to create you"

If you want to say explicitly that simulation is impossible, that's fine. If you want to say explicitly that the real you has free will, that's fine too, but that is implicitly saying that simulation is impossible: you can't predict a freely willed agent. Neither of these shows that FDT is wrong, though, just that you don't accept the terms of the puzzle.
