Newcomb's problem splits people roughly 50/50 into two camps, and the interesting thing is that both sides think the answer is obvious, and both sides think the other side is being silly. When I made a video criticizing Veritasium's video This Paradox Splits Smart People 50/50, I received a ton of feedback, particularly from the two-box camp, and I simply could not convince anyone that they were wrong.
That led me to believe there must be some cognitive trap at play: someone must not be seeing something clearly. After many debates, reading the literature, considering similar problems, discussing with LLMs, and just thinking deeply, I believe the core of the problem is recursive thinking.
Some people are fluent in recursion, and for them certain kinds of problems are obvious, but not everyone thinks that way.
My essay touches on Newcomb's problem, but the real focus is on why some people are predisposed to a certain choice. I contend that free will, determinism, and the sense of self all affect Newcomb's problem, that fluency in recursion predisposes certain views, and in particular that a proper understanding of embedded agency must predispose a particular (correct) choice.
I do not see how any of this is not obvious, but that is part of the problem: it is likely because my prior commitments differ from those of people who pick two boxes. Still, I would like to hear whether any two-boxer can point out a flaw in my reasoning.

Which side in the Newcomb debate is supposed to have the hangup about free will? Yudkowsky, for example, is a two-boxer, and I don't think he would perceive himself to have any psychological obstacles regarding free will in this case.
As others pointed out: Yudkowsky is a one-boxer. Can you provide a single two-boxer that doesn't believe in libertarian free will? I doubt there's any.
Where's Yud's two-box argument? I'm not sure what the very smart two-boxers believe, but one of the most common two-box explanations boils down to a disbelief that their actions can be predicted at all, because they imagine some kind of free-will break after the boxes are set.
https://www.lesswrong.com/w/newcomb-s-problem
https://plato.stanford.edu/entries/decision-causal/
I didn't notice the first link was Yud himself, but unless I'm reading the post wrong he seems like a one-boxer? Does he take a definitive side elsewhere?
https://www.lesswrong.com/posts/6ddcsdA2c2XpNpE5x/newcomb-s-problem-and-regret-of-rationality Yeah, I may have been wrong, my bad.
All good, I was really confused because it feels like being a two-boxer would have conflicted with everything he believes in.
The two-boxers. I don't know what Yudkowsky is thinking.