Newcomb's problem splits people 50/50 into two camps, but the interesting thing is that both sides think the answer is obvious, and both sides think the other side is being silly. When I created a video criticizing Veritasium's video This Paradox Splits Smart People 50/50, I received a ton of feedback, particularly from the two-box camp, and I simply could not convince anyone that they were wrong.
That led me to believe there must be some cognitive trap at play: someone must not be seeing something clearly. After a ton of debates, reading the literature, considering similar problems, discussing with LLMs, and just thinking deeply, I believe the core of the problem is recursive thinking.
Some people are fluent in recursivity, and for them certain kinds of problems are obvious, but not everyone thinks the same way.
My essay touches on Newcomb's problem, but the real focus is on why some people are predisposed to a certain choice. I contend that free will, determinism, and the sense of self all bear on Newcomb's problem, that fluency in recursivity predisposes certain views, and in particular that a proper understanding of embedded agency must predispose a particular (correct) choice.
I do not see how any of this is not obvious, but that's part of the problem: it's likely that my prior commitments are simply not the same as those of people who pick two boxes. Still, I would like to hear whether any two-boxer can point out a flaw in my reasoning.

Notes -
I can't be arsed to do it, but it seems pretty trivial to plug whatever parameters you like into some simulation code and let it run a few million times. In some cases you will take one box but the alien predicted wrong and you get nothing; sometimes you take two and he was wrong and you get $1M + $1K. So long as he is mostly correct, I don't see how the EV is not strongly driven by the cases where you pick the mystery box and get $1M -- no loss of free will required.
My sense tracks with that of @MathWizard. If you add some particular assumptions about the form of the problem, you can code it up, and likely, for a wide range of parameters, one-boxing has higher EV.
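A minimal sketch of the simulation described above, under assumptions of my own (a predictor that is independently correct with probability `p_correct`, and the standard $1M/$1K prizes); nothing here is specified in the thread beyond "code it up and run it":

```python
import random

def simulate_ev(one_box, p_correct=0.9, n=200_000, seed=0):
    """Monte Carlo expected value for Newcomb's problem.

    Assumed setup (mine, not the thread's): the predictor guesses your
    choice correctly with probability p_correct; the opaque box holds
    $1,000,000 iff the prediction was 'one-box'; the clear box always
    holds $1,000 and is collected only by two-boxers.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        prediction_correct = rng.random() < p_correct
        # Predictor said 'one-box' iff (correct guess) == (you one-box).
        predicted_one_box = prediction_correct == one_box
        total += 1_000_000 if predicted_one_box else 0
        if not one_box:
            total += 1_000  # two-boxers also take the clear box
    return total / n

# With a 90%-accurate predictor the analytic values are:
#   EV(one-box) = 0.9 * $1M         = $900,000
#   EV(two-box) = 0.1 * $1M + $1K   = $101,000
```

As the comment suggests, as long as the predictor is mostly correct, the rare misprediction cases are swamped by the $1M-when-predicted-right cases.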
I think the Wolpert/Benford criticism is also similar in type. (Again, not really having spent sufficient time with it.) That is, they construct two possible interpretations, and you could sit down and code either of them. It may even be the case that, for a wide range of parameters, EV still points to one-boxing in both versions. However, my understanding of their claim is that those two programs will be very different; even the strategy spaces are fundamentally different, on their account. And for a similarly wide range of parameters, the joint distributions will be contradictory. The point is not that the sign may be the same for this particular ratio of prizes; it's that there are just multiple contradictory ways to construct the problem.
Of course, someone could take the time and search out what ratio of prizes in the respective boxes produces maximum tension between the two interpretations, so that rather than having the two EV calcs mostly pointing in the same direction, we could maximize how often they conflict. That's kind of not the point of the critique, but I suppose it could be done if one found it necessary to really grok the difference between a well-posed and ill-posed problem. Though, like you put it, I probably can't be arsed to do it.
That said, I am almost motivated enough to try it (but it would probably have to wait a few weeks, and then I'll probably be bored with it). I certainly don't know that we can find parameters where the two possible games differ in sign. If this problem were actually relevant to my research interests, I would absolutely just do it, because it's one where I have a vague sense of, "Wouldn't it have to be amazingly coincidental if the values were different, but the signs were always the same?" And when I sniff at the possibility of an amazing coincidence like that, it's usually an indicator of a really interesting theoretical opportunity.
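The two constructions can also be written down directly, without simulation. I'm substituting the standard evidential-vs-causal split here, which may or may not match Wolpert and Benford's exact formalization; the function names and parameters are my own labels:

```python
def ev_evidential(p, big=1_000_000, small=1_000):
    """EV when your choice is treated as evidence about the prediction
    (predictor accuracy p). Returns (one_box_ev, two_box_ev)."""
    return p * big, (1 - p) * big + small

def ev_causal(q, big=1_000_000, small=1_000):
    """EV when the box contents are fixed before you choose (opaque box
    holds `big` with prior probability q, whatever you do). Two-boxing
    adds `small` either way. Returns (one_box_ev, two_box_ev)."""
    return q * big, q * big + small

# The causal construction recommends two-boxing for every q, while the
# evidential one recommends one-boxing whenever
#   p > 1/2 + small / (2 * big)   (about 0.5005 at these prizes),
# so the two constructions disagree over almost the whole parameter
# range: no finely tuned prize ratio is needed to make them conflict.
```

On this (assumed) formalization, at least, the "amazing coincidence" doesn't arise: the signs of the two recommendations differ for nearly all parameter values, which is consistent with the point that the tension is about how the problem is posed, not about the particular prize ratio.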