Ugh, reading the comments on that post is so depressing! It’s like they don’t have a concept of rationality distinct from what in fact gets a person more utility.
One thing I'd be curious about: do you agree that FDT is crazy, like creationism, for instance, in the sense of being indicative of just abject confusion rather than a mere divergence in intuitions?
Hmm, probably not. I think it's pretty interesting and well worth thinking about. But it's not entirely clear how to draw the line between "ultimately deeply misguided" and "indicative of abject confusion", so maybe "being worth thinking about" is ultimately compatible with being technically crazy in your sense? (Maybe Anselm's ontological argument would be an example of this: crazy in an illustrative way.) fwiw, I think FDT is much more credible than the ontological argument!
I remember a while back you said hedonism was crazy, though you fortunately changed your mind. I agree it's wrong in an illustrative sense--I think thinking hard about FDT takes away a lot of the motivation for EDT. I guess when I say crazy, I mean something like: there is no procedurally ideal person with merely divergent intuitions (intuitions that aren't bizarre, unjustified, or ad hoc--so this wouldn't include a person who has the intuition that pain is good) who would adopt the view. Instead, the view rests flatly on a mistake--though a slightly subtle one.
Oh, I never doubted that there could be procedurally ideal hedonists. I just thought it was *substantively* crazy (for which I think you could reasonably claim the same here).
Generally speaking, I think all kinds of philosophical views are procedurally defensible. So I'm hesitant to make claims about a view's being indefensible in that sense. It does seem to me that the view rests on a mistake. But I think a smart person who reflects carefully on the matter could reasonably come to a different conclusion.
Yeah, thinking more about this, I guess I don't have a great definition of crazy. But still, I think FDT is crazy in the way that flat-eartherism, logical positivism, and cultural relativism are, but deontology is not.
I fail to see how Newcomb's Problem is a real dilemma for decision theory at all. Rather, it's a proper logical paradox that serves as a reductio for one of its premises. It's basically this:
Let x be the payoff from two-boxing and y be the payoff from one-boxing.
P1: x = 1,000
P2: y = 1,000,000
P3: x = y + 1,000
C: 1,000,000 = 0
The only plausible resolutions seem to be either that such a predictor is (synthetic a priori) impossible, or that concepts like "choice", "decide" or "option" don't make sense in the face of such a predictor.
There doesn't seem to be any decision theory that tells you when a question just doesn't make sense...yet it is always a possibility. So much for decision theory.
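The inconsistency in the payoff argument above can be made concrete in a toy model. This is just a sketch under my own assumptions (a perfect predictor; the opaque box holds $1,000,000 iff one-boxing was predicted; the clear box always holds $1,000) -- the point it illustrates is that P1/P2 compare scenarios where the prediction tracks the choice, while P3's dominance reasoning holds the prediction fixed, so the premises quantify over different situations:

```python
# Toy model of Newcomb's Problem with a perfect predictor.
# Assumptions (mine, for illustration): the opaque box holds $1,000,000
# iff the predictor predicted one-boxing; the clear box always holds $1,000.

def payoff(choice, prediction):
    opaque = 1_000_000 if prediction == "one-box" else 0
    clear = 1_000
    return opaque + (clear if choice == "two-box" else 0)

# P1 and P2 compare payoffs where the prediction TRACKS the choice:
x = payoff("two-box", prediction="two-box")   # P1: x = 1,000
y = payoff("one-box", prediction="one-box")   # P2: y = 1,000,000

# P3 ("two-boxing always nets $1,000 more") only holds when the
# prediction is held FIXED while the choice varies:
for p in ("one-box", "two-box"):
    assert payoff("two-box", p) == payoff("one-box", p) + 1_000

# Conjoining all three as if they described one scenario is what
# yields 1,000,000 = 0; with a tracking predictor, P3 simply fails:
assert x == 1_000 and y == 1_000_000
assert x != y + 1_000
```

So on this toy reading, the "paradox" dissolves into an equivocation rather than a genuine contradiction, which fits the reductio diagnosis above.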
Directly rewarding irrationality is bound to create paradoxes as well.
“After all, if there’s a demon who pays a billion dollars to everyone who follows CDT or EDT then FDTists will lose out. The fact you can imagine a scenario where people following one decision theory are worse off is totally irrelevant—the question is whether a decision theory provides a correct account of rationality.”
This seems flawed. Which decision theory you follow isn't a fact of the world; it's a summary of behaviour. So if following one is penalized, you'd just pretend not to follow it.
No, it’s a fact of the world. It is a fact that some people think in their brain “hmm what action causes the most utility,” and others think “what action gives me evidence of lost utility after I take the action.”
"But you only make decisions after you exist. Of course, your decisions influence whether or not you exist but they don’t happen until after you exist."
I think you are just fighting the hypothetical there. The hypothesis is that you can make a decision before you exist, because the predictor runs a simulation of you that makes a decision.
Heighn says so clearly:
"The point is that your decision procedure doesn't make the decision just once. Your decision procedure also makes it in the predictor's head, when she is contemplating whether or not to create you"
If you want to say explicitly that simulation is impossible, that's fine... if you want to say explicitly that the real you has free will, that's fine too... but that's implicitly saying that simulation is impossible, since you can't predict a freely willed agent. Neither move shows that FDT is wrong, just that you don't accept the terms of the puzzle.
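Heighn's point -- that the same decision procedure "runs twice," once inside the predictor and once in the created agent -- can be sketched as a toy program. All the names and the trivial policy here are my own, purely for illustration:

```python
# Toy sketch: one decision procedure, evaluated twice -- first in the
# predictor's simulation (before the agent exists), then in the agent.

def decision_procedure(offer):
    # The agent's policy: accept any offer worth at least 1.
    return "accept" if offer >= 1 else "reject"

def predictor_creates_agent(offer):
    # The predictor simulates the decision before the agent exists...
    predicted = decision_procedure(offer)
    # ...and only instantiates the agent if the simulation accepts.
    return predicted == "accept"

if predictor_creates_agent(offer=5):
    # The "second" run, now made by the real agent: because it is the
    # same procedure on the same input, it necessarily matches.
    assert decision_procedure(5) == "accept"
```

On this picture, "your decision happens before you exist" just means the first evaluation of the procedure happens inside the predictor, which is only incoherent if you deny that such a simulation is possible.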
> "it can sometimes be worth being the type of agent who acts irrationally."
Yeah, I was always frustrated by the LW conflation of rational choice and desirable dispositions. (I once tried explaining it to them, but they weren't very receptive: https://www.lesswrong.com/posts/mpzoBMkayfQnaiKZK/desirable-dispositions-and-rational-actions )
But under FDT you could decide to stop following FDT--and that decision would itself still be FDT's output, even though you wouldn't think so.
Humans can believe things for reasons like that: "I guess I'll commit to being an EDTist."
But you would still believe FDT is the rational decision theory, and that belief is what could be punished.
Nah, feels like you're conjuring magic now. Not sure I buy it.
"Once the predictor has run their course"
What does that mean?