Newcomb's problem splits people roughly 50/50 into two camps, and the interesting thing is that both sides think the answer is obvious, and both sides think the other side is being silly. When I made a video criticizing Veritasium's video "This Paradox Splits Smart People 50/50", I received a ton of feedback, particularly from the two-box camp, and I simply could not convince anyone that they were wrong.
That led me to believe there must be some cognitive trap at play: someone must not be seeing something clearly. After a ton of debates, reading the literature, considering similar problems, discussing with LLMs, and just thinking deeply, I believe the core of the problem is recursive thinking.
Some people are fluent in recursivity, and for them certain kinds of problems are obvious, but not everyone thinks the same way.
My essay touches on Newcomb's problem, but the real focus is on why some people are predisposed to a certain choice. I contend that free will, determinism, and the sense of self all affect Newcomb's problem, that fluency in recursivity predisposes certain views, and in particular that a proper understanding of embedded agency must predispose a particular (correct) choice.
I do not see how any of this is not obvious, but that's part of the problem: it is likely because my prior commitments are not the same as those of people who pick two boxes. I would like to hear whether any two-boxer can point out a flaw in my reasoning.

Notes -
In a clockwork universe it's of course all luck all the way down.
In a clockwork universe, is there such a thing as decision theory, or a subset thereof known as game theory? It would seem to me that, sure, one could have a mathematical theory of optimization, extremal values, or even min-max theory, but one could not view any such results as being prescriptive, i.e., "If you are trying to accomplish X, you should choose Y." Instead, it would simply observe, "You might by chance (or by deterministic integration of physical differential equations, or whatever) take action Y or Y', and it turns out that we can compute that Y is optimal for purpose X, while Y' is suboptimal."
That is, if one is an adherent to this conception of a clockwork universe, I think the way they would state their position on Newcomb's problem would be something more like, "You will either 1-box or 2-box, based on the movements of the clock. We can also compute from axioms regarding the clock's movements that 1-boxers will possess more money," and less like, "You're in this hypothetical situation where you need to think about the rational way to proceed optimally, and here is why you should choose to act in the following way." I think if such proponents presented their perspective in this way, it would be less amenable to criticism that their problem is ill-posed as a decision/game theory problem.
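The claim that "1-boxers will possess more money" can be sketched as a simple expected-value computation. The payoff amounts ($1,000,000 in the opaque box, $1,000 in the transparent one) are the standard illustrative values for Newcomb's problem, and the accuracy parameter `p` below is an assumption added here to show how the comparison depends on how reliable the predictor is:

```python
# Expected payoff in Newcomb's problem under a predictor with accuracy p.
# Box A (transparent) always holds $1,000; box B (opaque) holds $1,000,000
# only if the predictor foresaw a one-box choice.

def expected_payoff(choice: str, p: float) -> float:
    """Expected dollars for 'one-box' or 'two-box' against a predictor of accuracy p."""
    if choice == "one-box":
        # With probability p the predictor correctly foresaw one-boxing
        # and filled box B; otherwise box B is empty.
        return p * 1_000_000
    if choice == "two-box":
        # With probability p the predictor correctly foresaw two-boxing
        # and left box B empty; the $1,000 in box A is taken either way.
        return (1 - p) * 1_000_000 + 1_000
    raise ValueError(choice)

for p in (0.5, 0.9, 0.99, 1.0):
    one = expected_payoff("one-box", p)
    two = expected_payoff("two-box", p)
    print(f"p={p}: one-box=${one:,.0f}  two-box=${two:,.0f}")
```

Under these payoffs, one-boxing wins in expectation for any predictor accuracy above roughly 50.05%; a perfectly clockwork predictor (p = 1) is just the limiting case where one-boxers get $1,000,000 and two-boxers get $1,000.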
The axioms of decision/game theory seem to conflict with the axioms that appear here. One way to put it is that decision/game theory assumes a feasible action set, within the underlying dynamical system, with cardinality greater than one. Another way to put it is that decision/game theory is simply not applicable to clocks: the feasible action set of a clock has cardinality one. One does not ask how a clock should choose among non-identical actions, though one may observe whether a clock's deterministic actions are or are not optimal according to some metric.
Taking this alternative position would, I think, sidestep the criticism I relayed above from Wolpert/Benford, since what they were fundamentally trying to do was formalize the problem within decision/game theory, where players have feasible action sets with cardinality greater than one. They observed that if you do this, you run into contradictions without further specification. But if you give up on that, give up on saying it has anything to do with decision/game theory, and treat it as just an observation about clocks and optimality/suboptimality, then I think you do avoid the critique.