
Recursive thinking, Newcomb's problem, and free will

felipec.substack.com

Newcomb's problem splits people 50/50 into two camps, but the interesting thing is that both sides think the answer is obvious, and both sides think the other side is being silly. When I created a video criticizing Veritasium's video This Paradox Splits Smart People 50/50, I received a ton of feedback, particularly from the two-box camp, and I simply could not convince anyone of why they were wrong.

That led me to believe there must be some cognitive trap at play: someone must not be seeing something clearly. After a ton of debates, reading the literature, considering similar problems, discussing with LLMs, and just thinking deeply, I believe the core of the problem is recursive thinking.

Some people are fluent in recursivity, and for them certain kinds of problems are obvious, but not everyone thinks the same way.

My essay touches on Newcomb's problem, but the real focus is on why some people are predisposed to a certain choice. I contend that free will, determinism, and the sense of self all affect Newcomb's problem, that fluency in recursivity predisposes certain views, and in particular that a proper understanding of embedded agency must predispose a particular (correct) choice.

I do not see how any of this is not obvious, but that's part of the problem, because that's likely due to my prior commitments not being the same as those of people who pick two boxes. But I would like to hear if any two-boxer can point out a flaw in my reasoning.


The problem with Newcomb's problem is that it basically involves time travel, and generally underspecifies how that time travel works. Consider a similar problem:

Time 1: you discover box 1 with 1,000,000 points

Time 2: you discover box 2 with 1,000 points

Time 3: someone claiming to be a time traveler shows up and says that if you hand him box 2, he will multiply it by 1,000, and then go back in time to put it in box 1. Actually, he claims, that's where box 1 came from all along, and if you don't give him the 1,000 points your box 1 will disappear.

Assuming you are rational/selfish, whether you say yes or no very much depends on whether he's telling the truth. If the problem carefully specifies that he actually is a time traveler telling the truth, and time travel does work this way, then obviously you should give it to him (one box). If this happened in real life, I would not give him anything and would two box, because my prior on time travel existing is less than 1/1000 and he's most likely just a liar trying to con me. If the problem is not careful and is ambiguous about his truthfulness, then people's answers are going to depend on their trustfulness, their suspension of disbelief, or just their general willingness to buy time travel in a hypothetical logic puzzle.
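A quick expected-value check shows where that 1/1000 threshold comes from. This is just a sketch under my reading of the payoffs above (the hypothetical leaves some details open), using the numbers from the scenario:

```python
# Expected points in the time-traveler variant, as a function of the
# prior p that he is a genuine, truthful time traveler.
# Payoffs (my reading of the scenario, stated as an assumption):
#   give box 2:  truthful -> box 1 survives (1,000,000); liar -> keep
#                box 1 but lose box 2, also 1,000,000 total
#   refuse:      truthful -> box 1 vanishes, leaving 1,000; liar ->
#                keep both boxes, 1,001,000

def expected_points(p: float) -> dict:
    give = 1_000_000                          # same total either way
    refuse = p * 1_000 + (1 - p) * 1_001_000
    return {"give": give, "refuse": refuse}

for p in (0.0001, 0.001, 0.01):
    ev = expected_points(p)
    print(f"p={p}: give={ev['give']:,.0f}  refuse={ev['refuse']:,.0f}")
```

The break-even prior is exactly 1,000 / 1,000,000 = 1/1000: below it, refusing (two-boxing) wins; above it, handing over box 2 wins.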

Actual Newcomb's problem is basically the same as this, in that decisions you make in the future affect things in the past, and the being filling the boxes has to have time travel powers in order to guarantee a 100% success rate (though not all versions of the problem specify this precisely; maybe it just has a 99% success rate, or a vague but high success rate). The reason people so confidently disagree is that in any well-specified version of the problem the answer is obvious, but in any vague under-specification it's ambiguous which well-specified version people will round it to. This is the exact same reason the Monty Hall problem is controversial. It's not merely that the answer is counter-intuitive, it's that the problem's specifications are very sensitive and people keep leaving ambiguous important details that they shouldn't.
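To make "in any well-specified version the answer is obvious" concrete, here's a small sketch assuming the usual $1,000,000 / $1,000 payoffs and collapsing the predictor's power into a single accuracy parameter p (my parameterization, not something the problem itself specifies):

```python
# Expected dollars as a function of predictor accuracy p, i.e. the
# probability that the prediction matches your actual choice.

def expected_value(p: float) -> dict:
    one_box = p * 1_000_000                    # right prediction -> $1m
    two_box = p * 1_000 + (1 - p) * 1_001_000  # wrong prediction -> $1m + $1k
    return {"one_box": one_box, "two_box": two_box}

for p in (0.5, 0.5005, 0.99, 1.0):
    ev = expected_value(p)
    print(f"p={p}: one-box={ev['one_box']:,.0f}  two-box={ev['two_box']:,.0f}")
```

One-boxing overtakes two-boxing once p exceeds 1,001/2,000 = 0.5005, so a 99% success rate and a 100% success rate point to the same answer; the versions really diverge only when it's ambiguous whether the prediction is correlated with your choice at all.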

> The problem with Newcomb's problem is that it basically involves time travel, and generally underspecifies how that time travel works. Consider a similar problem:

Not time travel, just perfect prediction. If you're actually a perfect predictor then you can in essence see the future. If you had a perfect model of physics and the initial conditions then you could predict a coin flip with 100% accuracy. The kind of reasoning a human does when presented with the boxes is no different, unless you are proposing some spooky non-material stuff in the reasoning. The formulation I'm familiar with is perfect prediction, in which case there are four theoretical cases:

  1. You one box and Omega correctly predicted you would one box, thus you get $1m.

  2. You one box and Omega incorrectly predicted you would two box, so you get zero. This is impossible by construction; Omega cannot predict wrongly.

  3. You two box and Omega incorrectly predicted you would one box, so you get $1m + $1k. This is impossible by construction; Omega cannot be wrong.

  4. You two box and Omega correctly predicted you would two box. You get $1k.

There are only two actually possible options under the given constraints, and you get to choose which of them is the case. This is not a paradox unless predicting future events is impossible.
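Spelled out in code, a direct transcription of the four cases above, filtering out the two that violate the perfect-prediction constraint:

```python
# The four theoretical (choice, prediction, payoff) cases from above.
cases = [
    ("one-box", "one-box", 1_000_000),   # case 1
    ("one-box", "two-box", 0),           # case 2: impossible by construction
    ("two-box", "one-box", 1_001_000),   # case 3: impossible by construction
    ("two-box", "two-box", 1_000),       # case 4
]

# A perfect predictor is never wrong, so only matching cases survive.
possible = [(choice, payoff) for choice, predicted, payoff in cases
            if choice == predicted]
print(possible)  # [('one-box', 1000000), ('two-box', 1000)]
```

With the impossible rows removed, the "choice" is between $1m and $1k, which is why this reads as obvious rather than paradoxical.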

Your whole reasoning relies on there being something intrinsically impossible about predicting your decisions, even as you lay out the reasoning for them. Is it so hard to imagine that someone could read you well enough to know which outcome you'll ultimately reach?
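One toy way to see that nothing spooky is required: if the decision procedure is deterministic, the predictor can simply run a copy of it before filling the boxes. The names and structure here are hypothetical, purely for illustration:

```python
# A deterministic decision procedure; any pure function would do.
def agent(opaque_box: int, clear_box: int = 1_000) -> str:
    return "one-box"

# The predictor "reads" the agent by simulating it, then fills the
# opaque box accordingly.
def fill_opaque_box(decision_procedure) -> int:
    prediction = decision_procedure(opaque_box=0)  # dry-run the agent
    return 1_000_000 if prediction == "one-box" else 0

opaque = fill_opaque_box(agent)
choice = agent(opaque)
payoff = opaque if choice == "one-box" else opaque + 1_000
print(choice, payoff)  # -> one-box 1000000
```

The simulation is trivially a perfect predictor here, and no time travel is involved: the prediction precedes the choice, but both are outputs of the same function.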

> Not time travel, just perfect prediction

Corporate wants you to find the difference between these two pictures, and they are the same picture. Either they KNOW the future or they don't, and the problem as typically presented gives us no reason to believe that they do.