Newcomb's problem splits people roughly 50/50 into two camps, and the interesting thing is that both sides think the answer is obvious, and both sides think the other side is being silly. When I made a video criticizing Veritasium's video This Paradox Splits Smart People 50/50, I received a ton of feedback, particularly from the two-box camp, and I simply could not convince anyone that they were wrong.
That led me to believe there must be some cognitive trap at play: someone must not be seeing something clearly. After a ton of debates, reading the literature, considering similar problems, discussing with LLMs, and just thinking deeply, I believe the core of the problem is recursive thinking.
Some people are fluent in recursivity, and for them certain kinds of problems are obvious, but not everyone thinks the same way.
My essay touches on Newcomb's problem, but the real focus is on why some people are predisposed to a certain choice. I contend that free will, determinism, and the sense of self all bear on Newcomb's problem, that fluency in recursivity predisposes certain views, and in particular that a proper understanding of embedded agency must predispose a particular (correct) choice.
I do not see how any of this is not obvious, but that's part of the problem: it's likely because my prior commitments are not the same as those of people who pick two boxes. Still, I would like to hear whether any two-boxer can point out a flaw in my reasoning.

Notes -
Honestly, I agree with this framing, and I think it's a strong argument for two-boxing.
If I already lost (or won) the "get Omega to make a beneficial prediction" game, then all that remains is to two-box and collect the consolation prize (perhaps on top of the jackpot). My decision doesn't affect what's in the second box; only my personality at the time of Omega's prediction does, and that's a factor I can't influence, because the game from my perspective starts after that.
The question isn't "Omega will choose you for Newcomb's problem in one year, do you try to pre-commit to one-boxing just this once?" It's "you're sitting in a room, Omega has explained the rules to you, the box is already filled." If I one-box now, it won't improve my outcome (in fact it will reduce my payout by $1000 either way). Only already being, per Omega's judgment, the kind of person who would one-box will, and I can't change that retroactively.
If you choose to one-box after the decision period by reasoning it out, then you are in fact the kind of person who would one-box. If you say "fuck it, it's too late," then you're in fact the kind of person who would two-box. Thus it still hinges on your decision, though the concept of libertarian free will is questionable.
No, it doesn't, because my decision doesn't retroactively change what kind of person I am. The causality goes in the other direction.
Basically, depending on what kind of person I am, Omega offers me a different game.
Your choice reveals what kind of person you are, which Omega already knew. If you didn't know what you were going to choose ahead of time, that's a mark of your ignorance, not Omega's.
I.e. my choice doesn't change anything, it just "reveals" information already known to the relevant player Omega.
What I know or don't know ahead of time doesn't matter, because I'm not making a decision ahead of time.
You can also just model it as Omega knowing whether or not you're smart or lucky enough to come up with the right answer to get the $1m. If you pick the right answer, you get the $1m; if you don't, then you don't. It's a bit of a brain twister, but it works out.
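The payoff logic the thread keeps circling can be written down directly. A minimal sketch, assuming a perfect predictor and the standard amounts (a visible $1,000 box and an opaque box that Omega fills with $1,000,000 only if it predicts one-boxing); the function name and structure are my own illustration, not anything from the essay:

```python
# Newcomb payoff sketch: Omega fills the opaque box before you choose,
# based on a (here, perfect) prediction of your strategy.

def payoff(strategy: str) -> int:
    """Return the payout for 'one-box' or 'two-box' under a perfect predictor."""
    # The opaque box's contents are fixed by Omega's prediction,
    # which (by assumption) matches the strategy you actually play.
    opaque = 1_000_000 if strategy == "one-box" else 0
    visible = 1_000
    if strategy == "one-box":
        return opaque
    return opaque + visible

print(payoff("one-box"))   # 1000000
print(payoff("two-box"))   # 1000
```

Note that under a perfect predictor the $1,001,000 outcome the dominance argument points at is never reachable: the prediction and the choice can't come apart, which is exactly the "what kind of person Omega offered the game to" point above.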