Newcomb's problem splits people roughly 50/50 into two camps, but the interesting thing is that both sides think the answer is obvious, and both sides think the other side is being silly. When I made a video criticizing Veritasium's video "This Paradox Splits Smart People 50/50", I received a ton of feedback, particularly from the two-box camp, and I simply could not convince anyone that they were wrong.
That led me to believe there must be some cognitive trap at play: someone must not be seeing something clearly. After a ton of debates, reading the literature, considering similar problems, discussing with LLMs, and just thinking deeply, I believe the core of the problem is recursive thinking.
Some people are fluent in recursivity, and for them certain kinds of problems are obvious, but not everyone thinks the same way.
My essay touches on Newcomb's problem, but the real focus is on why some people are predisposed to a certain choice. I contend that free will, determinism, and the sense of self all bear on Newcomb's problem, that fluency in recursivity predisposes certain views, and in particular that a proper understanding of embedded agency must predispose one toward a particular (correct) choice.
I do not see how any of this is not obvious, but that's part of the problem: my prior commitments are likely not the same as those of people who pick two boxes. Still, I would like to hear whether any two-boxer can point out a flaw in my reasoning.

Notes -
If Omega's predictions have an independent probability p of being wrong, and the ratio of the big box B to the small box S is R, then EV(one-box) = (1−p)·B while EV(two-box) = S + p·B, so two-boxing is only worth it if p > (R−1)/(2R). For the original problem, where the big box is 1000 times larger, that threshold is 49.95%, meaning you should only two-box if Omega is essentially no better than a coin flip; two-boxing is almost always wrong. Which makes sense: one-boxing swings the probability of the big prize from p to 1−p, and that prize is a thousand times the sure thing, so grabbing the extra box is only worth it when the prediction carries almost no information.
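As a sanity check, the expected values under this model can be computed directly (a sketch; the error rate p and the box values are parameters of the stated assumption that Omega errs independently of your choice, not part of the original problem statement):

```python
# Expected-value check for Newcomb's problem with a fallible predictor.
# Model assumption: Omega errs with independent probability p either way.
# S = small (guaranteed) box, B = big box; R = B / S.

def ev_one_box(p, S, B):
    # You get B only if Omega correctly predicted one-boxing (prob 1 - p).
    return (1 - p) * B

def ev_two_box(p, S, B):
    # You always get S, plus B if Omega wrongly predicted one-boxing (prob p).
    return S + p * B

def two_box_threshold(S, B):
    # Solving S + p*B > (1 - p)*B for p gives p > (B - S) / (2 * B).
    return (B - S) / (2 * B)

S, B = 1_000, 1_000_000
print(two_box_threshold(S, B))    # 0.4995 — Omega barely better than chance
print(ev_one_box(0.1, S, B))      # 900000.0
print(ev_two_box(0.1, S, B))      # 101000.0 — one-boxing wins easily at p = 0.1
```

Even a predictor that is wrong 40% of the time still makes one-boxing the better bet by a wide margin.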
The problem is not so much that the setup is mean and unfair for not letting me two-box when I'm willing to risk my one box. The problem is that the prediction process is not specified to be random. It's not specified at all. You can't solve logical and mathematical problems that aren't well-specified. You can shrug and say, "I dunno, if I don't know what's going on, I guess one-boxing seems more likely to work out for me." If I make Mathwizard's Paradox V2 and say
"There are two boxes. The left box either has $0 or $10. The right box either has $0 or $10,000. You can pick only one box to open and keep. Which do you pick?"

You'd probably pick the right box, because you might as well, but this is not a logical deduction that must be the correct answer. Maybe the left box has money with higher probability, because I'm more willing to give up $10 than $10,000. If I ran this demonstration in real life in a classroom, the right box would be guaranteed to be empty, because there's no way I'm sacrificing that much for a demonstration. But if I haven't specified probabilities or anything within the problem, then you can only guess.

There is no unique solution because there is no unique problem; it's actually a broad class of problems that all satisfy the wording of the premises. Most models that satisfy the axioms of Newcomb's problem have one-boxing as the correct solution, but some have two-boxing as the correct solution. The notion that you are randomizing between all possible variants of an underspecified problem can't be mathematically resolved without putting a measure on that space, which is not generally how people solve logical problems, and which itself still involves semi-arbitrary choices that can yield different answers.
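The underspecification point can be made concrete: which box is "correct" depends entirely on the fill probabilities, which the wording leaves open. A sketch, where both models below are arbitrary illustrations consistent with the stated premises, not part of the problem:

```python
# Two candidate models consistent with the Mathwizard's Paradox V2 wording.
# The wording fixes the possible contents ($0 or $10 left, $0 or $10,000
# right) but says nothing about how likely each box is to be filled.

def best_box(p_left_filled, p_right_filled):
    # Compare expected values under the assumed fill probabilities.
    ev_left = p_left_filled * 10
    ev_right = p_right_filled * 10_000
    return "left" if ev_left > ev_right else "right"

# Model A: a generous host flips a fair coin for each box.
print(best_box(0.5, 0.5))   # right

# Model B: a stingy classroom demo — the $10,000 box is never filled.
print(best_box(0.9, 0.0))   # left
```

Both models satisfy every premise in the wording, yet they disagree on the answer, which is the sense in which the puzzle is a class of problems rather than a single problem.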