Monday, October 27, 2008

Newcomb’s Paradox with a Perfect Predictor

I don't know much about decision theory, so the following may contain naive misunderstandings, elementary errors, and badly drawn tables. But the whole point of blogs is to allow people to talk about stuff that they are completely ignorant of and make an arse of themselves in public, right? So...

I'm sure I've heard sensible people say that even if the predictor in a Newcomb's Paradox-type situation is a perfect predictor, you should still two-box. I can't see it at all, but I'm probably missing something. So, here's an explanation of why you should one-box in such a situation (or, at least, in the kind of situation described below). What am I missing?

Set-up

In front of you are two boxes. One is transparent and contains $1,000. The other is opaque and contains either nothing, or $1,000,000 (you don’t know which). You’re faced with the following choice: take the opaque box or take both boxes. You get to keep whatever is in the box or boxes you choose to take. Yesterday, a Perfect Predictor (PP) predicted whether you would choose to ‘one-box’ (take the opaque box only) or ‘two-box’ (take both boxes). If she predicted that you’d one-box, the PP put $1,000,000 into the opaque box. If the PP predicted that you’d two-box, she put nothing in the opaque box. What should you do?

Some stipulations/clarifications

You know everything stated above. You don’t know what prediction the PP made, and you don’t know the contents of the opaque box.
All you value is getting as much money as possible.
The PP is perfect – she can’t be wrong.
Your choice has no causal effect on the contents of the boxes: after the PP set up the boxes (yesterday, before your decision), their contents do not change.
There is no backwards causation.
Your choice (whatever it is) does not feature in the explanation of why the PP predicted what she did.
Your choice is free. In other words, in some important sense, you do have a choice.
You are perfectly rational.

A reason for two-boxing

One reason why you might think that you should take two boxes is as follows. At the moment at which you must make your choice, the contents of the opaque box are fixed: whatever choice you make will not affect what is in that box. Imagine that the PP predicted you will one-box. It follows that there is now $1,000,000 in the opaque box. In this situation, if you one-box, you get $1,000,000, and if you two-box you get $1,001,000. You only value getting as much money as possible, so, if the PP predicted that you’d one-box, you should two-box. Now imagine that the PP predicted that you’d two-box, so the opaque box is empty. In these circumstances, if you one-box then you’ll get nothing, but if you two-box you’ll get $1,000. As you only value maximising the money you get, if the PP predicted you’d two-box, you should two-box.

So we have it that whatever the PP predicted, you should two-box. It follows that you should two-box. The possible situations can be set out in a table like the one below.

                        Action
Prediction      One-box               Two-box
One-box         Profit: $1,000,000    Profit: $1,001,000
Two-box         Profit: $0            Profit: $1,000

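If it helps, the two-boxer’s dominance reasoning can be sketched in a few lines of Python. (The dictionary and function names below are my own encoding of the set-up, nothing more official than that.)

```python
# Payoff table from the set-up, keyed by (prediction, action).
PAYOFF = {
    ("one-box", "one-box"): 1_000_000,
    ("one-box", "two-box"): 1_001_000,
    ("two-box", "one-box"): 0,
    ("two-box", "two-box"): 1_000,
}

def dominant_action():
    """The two-boxer's reasoning: for each possible prediction,
    find the action with the higher payoff. If the same action
    wins in every row, that action dominates."""
    best = set()
    for prediction in ("one-box", "two-box"):
        best.add(max(("one-box", "two-box"),
                     key=lambda a: PAYOFF[(prediction, a)]))
    return best

print(dominant_action())  # {'two-box'}: two-boxing wins whatever was predicted
```

Whichever row you look at, two-boxing is $1,000 better, which is exactly the dominance argument above.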

A reason for one-boxing

One reason why you might think that you should one-box is as follows. The PP is a perfect predictor. Thus, whatever you do, the PP will have predicted it. So if you one-box, the PP already predicted that you’d one-box and put $1,000,000 in the opaque box. If you one-box, you get $1,000,000. And if you two-box, the PP already predicted that you’d two-box, and put nothing in the opaque box. If you two-box, you get $1,000. Since you only care about maximising the amount of cash you get, you should one-box.
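The one-boxer’s reasoning can be sketched the same way (again, the names are mine): if the predictor is perfect, only the cells where prediction and action agree are live, and we compare payoffs across those cells alone.

```python
# Payoff table from the set-up, keyed by (prediction, action).
PAYOFF = {
    ("one-box", "one-box"): 1_000_000,
    ("one-box", "two-box"): 1_001_000,
    ("two-box", "one-box"): 0,
    ("two-box", "two-box"): 1_000,
}

def best_action_with_perfect_predictor():
    """A perfect predictor rules out every (prediction, action)
    pair where the two differ, so only the diagonal cells remain."""
    possible = {a: PAYOFF[(a, a)] for a in ("one-box", "two-box")}
    return max(possible, key=possible.get)

print(best_action_with_perfect_predictor())  # one-box
```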

Contradiction!

So it seems that you should two-box (because two-boxing is better – according to your values – than one-boxing) and also that you should one-box (because one-boxing is better – according to your values – than two-boxing). But this is a contradiction: bad. One way to remove the whiff of paradox would be if we could reject either the two-boxer’s or the one-boxer’s reasoning.

Can we reject the two-boxer’s reasoning?

I think so. As mentioned in the reason for one-boxing, the PP is perfect. It follows that two of the squares in the diagram are impossible. It cannot be that you choose one box and that the PP predicted you’d two-box. And nor can it be that you choose two boxes and the PP predicted you’d one-box. These possibilities are ruled out ex hypothesi.

The table of possibilities should look like this:

                        Action
Prediction      One-box               Two-box
One-box         Profit: $1,000,000    Impossible
Two-box         Impossible            Profit: $1,000



Victory for the one-boxer? Not if there’s also a flaw in their reasoning.

Can we reject the one-boxer’s reasoning?

The one-boxer seems to ignore the fact that, when they come to choose, the contents of the opaque box are fixed. And what they choose will not change whatever it is that is in the box (if there’s $1,000,000 it won’t disappear if they two-box, and if there’s nothing, no $1,000,000 will mysteriously materialise if they one-box). This is true, but I don’t think it harms the one-boxer’s argument, since it is also true that, whatever is chosen, the PP predicted that that choice would be made, and filled the opaque box appropriately.

To put things another way, once the PP makes her prediction, your choice is determined in the following sense: given that PP predicts that you will φ, then it follows that you will φ. The prediction needn’t cause you to φ, but nevertheless, that the prediction that you will φ has been made by the PP entails that you will φ. (That the forecast of rain tomorrow is true today presumably doesn’t cause it to rain tomorrow, but that the forecast is true does entail that it will rain tomorrow.)

Perhaps it’s not possible for it to be determined that you will φ before you φ and also true that you freely φ. If so, then the situation described here is not a possible one, and so we don’t need to worry about whether one-boxers or two-boxers are correct. But if this scenario is possible, then I can’t see anything wrong with the one-boxer’s reasoning.

Conclusion

In this sort of situation (if it’s coherent), you should be a one-boxer.
