ControlsFreak
Wolpert and Benford argue that the problem is ill-posed for almost any error rate, so it's not clear that stuffing in a particular number actually helps resolve the problem. I haven't spent all that much time with this problem yet, so I'm not going to commit to saying that I think they're right about this, but it jibes with my intuition.
Generally speaking, in order to have a well-posed game, one must be very formal and precise about many details, particularly order of operations, allowable policy spaces, information sets, and the treatment of estimators. I've become more annoyed by estimators in various problems over time, even apart from the relatively minimal thinking I've done on Newcomb's problem. One of the greatest sources of my criticisms in reviews of submitted papers (or even when my collaborators come to me with a problem set-up and/or proposed solution) is a failure to take sufficient care with estimators.
I do think that Wolpert/Benford at least succeed in arguing that there are at least two possible formalizations that are sufficiently well-posed. I think it's probably on someone else to either bite the bullet and say they are clearly choosing one form or the other... or to provide a sufficient alternative formalization that makes the details more clear.
An aside on Yudkowsky, relevant to the discussion below and to my thinking on these sorts of problems generally. I wouldn't be surprised if he has/had something in mind like what he did to the prisoners' dilemma problem, with the business about source codes and such. There could be a way to try to resolve Newcomb's problem in a similar fashion, but my perspective is that it would still be proposing a very specific formalization... and one that is not at all just a clear instantiation of the initial problem statement. I might go so far as to say that in the prisoners' dilemma case, he just proposed a different problem, with different policy spaces. Interesting in its own right, sure. Probably correct for that particular formalization of that particular version of the problem, sure. But also kind of just a different problem. In general, even minor tweaks to these aspects of the formulation can result in different games.
Similarly for Newcomb's problem, unless one takes the step of clearly laying out in a formal way exactly what they're going to specify for the domain of the problem (and then, I guess, argue that this is like, 'the one true interpretation of the original problem' or something), then I'm probably going to lean toward just thinking that the original problem is so informally stated as to be ill-posed.
Against Talking About Anthropics/Possible Worlds/etc in the Sleeping Beauty Problem
I get it. Anthropics is an interesting topic. Possible-worlds talk has a long and rich philosophical history. I get why people might want to expose more people to that stuff, kinda squint at the Sleeping Beauty problem, then think that it's close enough to spread the gospel.
But that's confusing people.
It's confusing them on what is otherwise a very simple math problem.
For those who haven't seen my last entry, I made some minor modifications, primarily adding a second person, so we have both Alice and Bob undergoing simultaneous experiments. The simplest version is that they each undergo approximately the same experiment, with the same coin, but mirrored: the implications of heads for one person are the implications of tails for the other. I also had some computer communication between them for some instructive purposes, but that's not even necessary here.1
Let's follow Alice and Bob a bit further. Suppose after their one/two awakenings, they're put back to sleep, memory again wiped. They're both finally awoken on Wednesday. "No more questions," the doctors say. "We took the liberty of interpreting your answers as wagers. We have your home address. We'll compute your payout and mail you a check with your results, revealing to you how the coin actually came out, how you answered the questions (because you won't remember), and what your payout is. Expect it to take 4-8 weeks."
Alice and Bob leave their respective rooms. They run into each other in the lobby.
Wait
Can Alice and Bob run into each other in the lobby? Aren't they, like, in different possible worlds or something? No, silly. That's confusing people. They're in the same world. They've been in the same hallway all along, separated by only a paper-thin wall.
Ok, so they run into each other in the lobby. They hit it off, decide to go out to a pub and grab a pint together. Naturally, the conversation turns to the strange experiment they each went through. Neither one is going to know how the coin flip actually went or what subsequently happened for another 4-8 weeks.
They begin to debate. How should they best guess what their results might have been? What if they'd like to wager against one another about the results? Should they have significantly different estimates of what they're going to see in their results? Should Alice think that there's a 1/3 chance that they're going to learn that it was heads, while Bob should think that there's a 1/3 chance that they're going to learn that it was tails? Did they truly "update" their probabilities during the course of the experiment?
No. Of course not. If either of them thought that, you could take their money. They should both think that it was 1/2 either heads or tails. This is because they didn't "update" some probability estimate. They didn't enter weird different possible worlds in which the physical Alice and Bob could never meet again.
Instead, Alice and Bob are both capable of having a perfectly reasonable conversation. "Yeah, of course I think the probability of the coin flip was 1/2. It's just because of the weird observation function of the experiment that I computed that there was a different probability for what I was likely to observe." "Yeah, me too, but my observation function was the opposite, so I computed that I was likely to observe the opposite. But obviously, at the same time, the probability of the coin flip was 1/2."
They're just different probabilities with different meanings. You can just compute them from the observation functions.
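A quick simulation makes the two quantities concrete. This is a sketch under my own simplified framing of the setup (heads means one awakening, tails means two, and we count per-awakening observations): both numbers come out of the same process, they just answer different questions.

```python
import random

def simulate(trials=100_000, seed=0):
    """Monte Carlo sketch: flip a fair coin; heads -> one awakening,
    tails -> two awakenings. Return two different probabilities."""
    rng = random.Random(seed)
    heads_flips = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        awakenings = 1 if heads else 2
        if heads:
            heads_flips += 1
            heads_awakenings += 1
        total_awakenings += awakenings
    # Probability that the coin itself came up heads: ~1/2.
    p_flip_heads = heads_flips / trials
    # Probability that a given awakening is a heads-awakening: ~1/3.
    p_obs_heads = heads_awakenings / total_awakenings
    return p_flip_heads, p_obs_heads
```

Bob's observation function is the mirror image: swap heads and tails and his per-awakening number for heads is ~2/3, while his answer about the flip itself is still 1/2.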
1 - That time, I was trying to get people to figure out that they could have one individual's brain retaining multiple different probabilities, with multiple different meanings. I guess this time, I'll just try having multiple different minds meeting.
Perhaps part of it is that married women who changed their name want to vote too.
As someone whose wife came from a country where women don't tend to change their names (and who can thus attest to a significantly higher-than-normal level of grief over the wife changing her name), I'd say getting US documentation sufficient for voting is probably the easiest part of a married woman changing her name.

My sense tracks with that of @MathWizard. If you add some particular assumptions about the form of the problem, you can code it up, and likely, for a wide range of parameters, 1-boxing is higher EV.
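To be concrete about "you can code it up": here is one minimal formalization (my own illustrative choice of structure and prize values, not anyone's canonical model), in which the predictor matches the agent's actual choice with some fixed accuracy and box B holds the big prize iff one-boxing was predicted.

```python
def newcomb_ev(accuracy, big=1_000_000, small=1_000):
    """Expected value of each policy under one particular formalization:
    the predictor guesses the agent's actual choice with probability
    `accuracy`, and box B holds `big` iff one-boxing was predicted."""
    # One-box: you get `big` exactly when the predictor got you right.
    ev_one = accuracy * big
    # Two-box: `small` always, plus `big` when the predictor got you wrong.
    ev_two = accuracy * small + (1 - accuracy) * (big + small)
    return ev_one, ev_two
```

Under this particular construction, one-boxing has higher EV whenever `accuracy > (big + small) / (2 * big)`, which is barely above 1/2 for these prize values, so 1-boxing does win over a wide parameter range.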
I think the criticism of Wolpert/Benford is also similar in type. (Again, not really having spent sufficient time with it.) That is, they construct two possible interpretations, either of which you could just sit down and code. It may even be the case that, for a wide range of parameters, EV still points to 1-boxing in both versions. However, my understanding of their claim is that those two programs will be very different; even the strategy spaces are fundamentally different. And for a similarly wide range of parameters, the joint distributions will be contradictory. The point is not that the sign may be the same for this particular ratio of prizes; it's that there are just multiple contradictory ways to construct it.
Of course, someone could take the time and search out what ratio of prizes in the respective boxes produces maximum tension between the two interpretations, so that rather than having the two EV calcs mostly pointing in the same direction, we could maximize how often they conflict. That's kind of not the point of the critique, but I suppose it could be done if one found it necessary to really grok the difference between a well-posed and an ill-posed problem. Though, like you put it, I probably can't be arsed to do it.
That said, I am almost motivated enough to try it (but it would probably have to wait a few weeks, and then, I'll probably be bored with it). I certainly don't know that we can for sure find parameters where the two possible games differ in terms of sign. If this problem was actually relevant to my research interests, I would absolutely just do it, because it's one where I have a vague sense of, "Wouldn't it have to be amazingly coincidental if the values were different, but the signs were always the same?" And when I sniff at the possibility that there could be an amazing coincidence like that, it's usually an indicator of a really interesting theoretical opportunity.
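To gesture at what such a check might look like, here is a stylized pair of games of my own construction (not Wolpert and Benford's actual formalizations): in interpretation A, the prediction is statistically tied to the choice; in interpretation B, the box contents are already fixed and independent of the choice made now.

```python
def ev_correlated(one_box, accuracy, big, small):
    # Interpretation A (illustrative): the prediction tracks the
    # actual choice with probability `accuracy`.
    if one_box:
        return accuracy * big
    return accuracy * small + (1 - accuracy) * (big + small)

def ev_fixed(one_box, p_pred_one, big, small):
    # Interpretation B (illustrative): box B's contents are already set
    # (big prize with probability `p_pred_one`) and are independent of
    # the choice made now.
    contents_b = p_pred_one * big
    return contents_b if one_box else contents_b + small

def recommendations(accuracy, p_pred_one, big, small):
    """True = one-boxing recommended, under each interpretation."""
    rec_a = ev_correlated(True, accuracy, big, small) > \
            ev_correlated(False, accuracy, big, small)
    rec_b = ev_fixed(True, p_pred_one, big, small) > \
            ev_fixed(False, p_pred_one, big, small)
    return rec_a, rec_b
```

Under B, two-boxing dominates by exactly `small` for every parameter setting, while under A one-boxing wins once `accuracy` clears `(big + small) / (2 * big)`; so in this toy pair, the signs disagree over most of the parameter space rather than only at some exotic prize ratio.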