Newcomb's problem splits people roughly 50/50 into two camps, and the interesting thing is that both sides think the answer is obvious and that the other side is being silly. When I made a video criticizing Veritasium's video "This Paradox Splits Smart People 50/50", I received a ton of feedback, particularly from the two-box camp, and I simply could not convince anyone that they were wrong.
That led me to believe there must be some cognitive trap at play: someone must not be seeing something clearly. After many debates, reading the literature, considering similar problems, discussing with LLMs, and just thinking deeply, I believe the core of the problem is recursive thinking.
Some people are fluent in recursivity, and for them certain kinds of problems are obvious, but not everyone thinks that way.
My essay touches on Newcomb's problem, but the real focus is why some people are predisposed to a certain choice. I contend that free will, determinism, and the sense of self all bear on Newcomb's problem, that fluency in recursivity predisposes certain views, and in particular that a proper understanding of embedded agency must predispose one toward a particular (correct) choice.
I do not see how any of this is not obvious, but that's part of the problem: it's likely because my prior commitments differ from those of people who pick two boxes. I would like to hear whether any two-boxer can point out a flaw in my reasoning.

No, it doesn't. That's what two-boxers claim in order to fit the problem into their view of reality, but that commits the appeal-to-incredulity fallacy.
That's not Newcomb's problem: a 100% success rate is never specified; it's "almost certainly." That means close to 100%, not 100%.
You are saying it's not possible for Omega to have such accuracy unless the future affects the past, but you don't provide any justification for that. You are basically saying: "I don't see how X is possible, therefore X is not possible." That is not a valid argument; it's an appeal to incredulity.
That is precisely why I devised my sunscreen problem. Your argument is the same as saying: "I don't see how efficacy against skin cancer and eating ice cream could be causally related, therefore they are not causally related".
Just because you don't see how Omega could predict your choice almost certainly without backwards causality doesn't mean that it can't.
This is why I complain about it being underspecified. If Omega can be wrong, then the entire problem hinges on when/how/why it can be wrong. If it's possible for someone to get away with two-boxing and keep both boxes, and you can put yourself in that scenario, then you can win by two-boxing. If Omega attempts to minimize its failed-prediction rate, maybe you can employ a mixed strategy: flip a very slightly weighted coin, one-boxing with probability 50.01% and two-boxing with probability 49.99%. That causes Omega to predict you will one-box, so you always get the one box, and almost half the time you get a bonus on top.

Can it predict coin tosses before they're made? Can it predict radioactive decays? This is not mere psychology. I'm not saying it's impossible for someone to cold-read you and make educated guesses. If I read psychological profiles on people, I could guess that sneakier, greedier, more disagreeable people are more likely to two-box while straightforward, naive, or chill people are more likely to one-box, and probably get something like a 70-80% success rate. Is that what Omega is doing? Because then I'm just screwed: I overthink things and seem like a two-boxer, and if I bit the bullet and decided to one-box, I would end up getting nothing because it would falsely peg me as a two-boxer.
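The mixed-strategy idea above can be checked with a quick expected-value sketch. This assumes the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the visible one) and that Omega, minimizing its failure rate, predicts whichever action the coin makes more probable; both assumptions go beyond what the comment itself specifies.

```python
# Expected value of the slightly-weighted-coin strategy, assuming the
# standard payoffs: $1,000,000 in the opaque box, $1,000 in the visible box.
# Omega is assumed to minimize failed predictions, so it predicts whichever
# action the coin makes more likely.

OPAQUE = 1_000_000   # placed only if Omega predicts one-boxing
VISIBLE = 1_000      # always in the transparent box

def mixed_strategy_ev(p_one_box):
    """EV of one-boxing with probability p_one_box, two-boxing otherwise."""
    omega_predicts_one_box = p_one_box > 0.5
    opaque_contents = OPAQUE if omega_predicts_one_box else 0
    # One-boxing yields only the opaque box; two-boxing adds the visible box.
    return (p_one_box * opaque_contents
            + (1 - p_one_box) * (opaque_contents + VISIBLE))

print(mixed_strategy_ev(0.5001))  # ~1,000,499.9: beats pure one-boxing
print(mixed_strategy_ev(1.0))     # pure one-boxing: 1,000,000
```

Under these assumptions the 50.01% coin does strictly better than always one-boxing, which is exactly the loophole the comment is pointing at; whether Omega can predict the coin is the open question.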
Literally none of this is explained in the premise. The problem very much depends on information that is not present. If I give you "MathWizard's Paradox" and say:
"There are two boxes. The left box has some money. The right box has a different amount of money not equal to the left box. You only get one box, which one do you pick?"
This likewise is going to lead to disagreement (or would, if people cared and tried to argue about it). If I added a whole bunch of window dressing to disguise the obvious stupidity of this problem, a bunch of superficial characteristics that made it seem more interesting and less obvious, it wouldn't change the underlying symmetry and lack of information. I have, in my head, decided how much money is in each box. There is a correct answer. But I haven't told you enough information for you to deduce it, and there are infinite variations of this problem, half of which have the opposite correct answer.
It's not that I can't see a way for this to happen, it's that I can imagine a dozen hypothetical ways it could try to do this, and half of them let me two box anyway while half of them don't.
Let's put a number on it -- what prediction success rate would Omega need for you to consider taking both boxes? Depends how badly you need a thousand bucks, I guess?
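The number can actually be worked out. Assuming the standard payoffs ($1,000,000 opaque, $1,000 visible, the latter matching the "thousand bucks" above) and that Omega's accuracy p is the same whether you one-box or two-box, the break-even point is:

```python
# Break-even prediction accuracy, assuming the standard payoffs and a
# symmetric accuracy p (same for one-boxers and two-boxers).

M, K = 1_000_000, 1_000  # opaque-box prize, visible-box bonus

def ev_one_box(p):
    # Opaque box is full iff Omega (correctly) predicted one-boxing.
    return p * M

def ev_two_box(p):
    # Opaque box is full iff Omega (wrongly) predicted one-boxing.
    return (1 - p) * M + K

# Solve p*M = (1-p)*M + K  ->  p = (M + K) / (2*M)
break_even = (M + K) / (2 * M)
print(break_even)  # 0.5005
```

So under these assumptions even a coin-flip-plus-epsilon predictor (anything above 50.05% accuracy) makes one-boxing the higher-EV choice, which is why the dispute can't just be about how good Omega is at its job.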