Friday, November 28, 2008

The aim of credences

"Beliefs are to truth as degrees of belief are to _____"

How do we fill in the blank? Certainly not with 'degrees of truth'. If I have a .8 credence that 'Crime and Punishment' is on the bookshelf, it is not the case that I'm representing some kind of vague state of affairs.

A much more plausible thing to say here is that degrees of belief represent the chance or likelihood that the world is thus and so. The spirit of this proposal is that there is some objective standard of chance or likelihood that our credences are attempting to latch on to. Now while it is entirely plausible that our world is chancy, I do not think that this is an adequate characterisation of what our degrees of belief are aiming at. We have credences about a whole host of things, most of which are not chancy, or not chancy in the right way. There is no question of chance with respect to whether 'Crime and Punishment' is on the bookshelf. It either is or it isn't. If I were to flip an indeterministic fair coin, then there is a 50% chance that the coin will land heads, and in that respect my credence of .5 that the coin will land heads accurately represents the chance that it will. However, if I have already flipped the coin but have not yet seen the result, then chance no longer comes into it: the coin has either landed heads or it hasn't, and yet I still have a .5 credence that it has landed heads. If I have a .5 credence that it has landed heads, and in fact it has landed heads, then on the suggestion that credences represent chances I have misrepresented the world. I consider this an unhappy consequence. In fact, what we ought to say about the coin before it is flipped is that we have a .5 credence that it will land heads and a credence of 1 (all other things being equal) that there is a 50% chance that the coin will land heads.

There are two good-making features of doxastic attitudes. One is truth, and the other is justification. We want an epistemic theory that respects both of these aims. The problem is that it is very difficult to respect both of these aims at the same time. Suppose I have a high level of justification for the proposition that p. What doxastic attitude should I adopt with respect to p? Certainly I should not have a credence of 1 in p. While I have a high level of justification for p, it is not that high. On the other hand, if I set my credence to something less than 1, and in fact p turns out to be the case, then I have deprived myself of being right about p. I have missed out on something good, as it were. So we have a dilemma. Set your credence too high and you fail to meet the norm of justification. Set your credence too low (anything less than 1) and you fail to meet the norm of truth. Certainly you have something going for you if you don't set your credence to 1; p might have turned out to be false. If it had turned out false, you would have avoided doing something bad (we can suppose that believing falsely is exactly as bad as believing truly is good). So never setting your credences to 1 or 0, and never believing anything (if that is a distinct notion), allows you to avoid ever believing something false. However, you also prevent yourself from believing anything true.

Let's make the following simplifying though implausible assumption: if you believe something true then you score 1 point, and if you believe something false you lose a point. You're trying to score as many points as you can. In this case, if you have a high credence that p, you would be better off believing that p than not believing it. You're quite confident that you will score more points this way. But isn't it wrong to believe that p if you're not absolutely confident that p? Haven't you failed to meet the norm of justification? Well yes, you have (I say), but that may be a reasonable price to pay. You can't adequately meet both norms at once, but if all you tried to do was meet the justification norm, then you would never score any points on the truth norm. Attempting to meet both norms at once requires a trade-off. (I don't have any plausible way of measuring the trade-off at this stage.)
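(For concreteness, here is that toy calculation as a small Python sketch. The scoring rule is just the one stated above; the function names, and treating 'not believing' as scoring 0, are my own way of setting it up.)

```python
# A sketch of the toy scoring rule above: +1 for believing a truth, -1 for
# believing a falsehood, and (by assumption) 0 if you suspend judgement.

def expected_score_of_believing(credence: float) -> float:
    """Expected points from believing p, given your credence that p is true."""
    return credence * 1 + (1 - credence) * (-1)  # i.e. 2 * credence - 1

def better_to_believe(credence: float) -> bool:
    """Believing beats suspending judgement (score 0) just when credence > 0.5."""
    return expected_score_of_believing(credence) > 0

print(expected_score_of_believing(0.8))  # about 0.6, versus 0 for suspending judgement
print(better_to_believe(0.8))            # True
```

On this crude measure, a high credence that p makes believing that p the better bet, even though the justification norm is not fully met.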

"Beliefs are to truth as degrees of belief are to _____"

There is no way to fill in the blank, for there is no parallel job description for degrees of belief. Degrees of belief are not in the business of representing the world in the way that beliefs are. Degrees of belief aim at representing your level of justification, while beliefs aim at truth. It is (relatively) easy to say what degree of belief you ought to have in p, for any p: it is whatever level of justification you have for the truth of p. Of course it's very difficult to say what justification is, how we get it, and how much of it we have, but once we have answers to these questions it is straightforward to say what level of credence one should have. It is considerably harder to say what you ought to believe given your credences. That requires some kind of trade-off. That requires saying what the value of truth is and what the value of justification is. It seems to me that the value of justification is entirely derived from the value of truth. If that thought is right, and if, as seems obvious, we should rarely have degrees of belief of 1, then proponents of replacing belief talk with talk of degrees of belief are offering us an epistemology bereft of aim and value.

(Thanks to Alan Hájek for arousing my thoughts on this issue).

Monday, October 27, 2008

Newcomb’s Paradox with a Perfect Predictor

I don't know much about decision theory, so the following may contain naive misunderstandings, elementary errors, and badly drawn tables. But the whole point of blogs is to allow people to talk about stuff that they are completely ignorant of and make an arse of themselves in public, right? So...

I'm sure I've heard sensible people say that even if the predictor in a Newcomb's Paradox-type situation is a perfect predictor, you should still two-box. I can't see it at all, but I'm probably missing something. So, here's an explanation of why you should one-box in such a situation (or, at least, in the kind of situation described below). What am I missing?

Set-up

In front of you are two boxes. One is transparent and contains $1,000. The other is opaque and contains either nothing, or $1,000,000 (you don’t know which). You’re faced with the following choice: take the opaque box or take both boxes. You get to keep whatever is in the box or boxes you choose to take. Yesterday, a Perfect Predictor (PP) predicted whether you would choose to ‘one-box’ (take the opaque box only) or ‘two-box’ (take both boxes). If she predicted that you’d one-box, the PP put $1,000,000 into the opaque box. If the PP predicted that you’d two-box, she put nothing in the opaque box. What should you do?

Some stipulations/clarifications

You know everything stated above. You don’t know what prediction the PP made, and you don’t know the contents of the opaque box.
All you value is getting as much money as possible.
The PP is perfect – she can’t be wrong.
Your choice has no causal effect on the contents of the boxes: after the PP set up the boxes (yesterday, before your decision), their contents do not change.
There is no backwards causation.
Your choice (whatever it is) does not feature in the explanation of why the PP predicted what she did.
Your choice is free. In other words, in some important sense, you do have a choice.
You are perfectly rational.

A reason for two-boxing

One reason why you might think that you should take two boxes is as follows. At the moment at which you must make your choice, the contents of the opaque box are fixed: whatever choice you make will not affect what is in that box. Imagine that the PP predicted you will one-box. It follows that there is now $1,000,000 in the opaque box. In this situation, if you one-box, you get $1,000,000, and if you two-box you get $1,001,000. You only value getting as much money as possible, so, if the PP predicted that you’d one-box, you should two-box. Now imagine that the PP predicted that you’d two-box, so the opaque box is empty. In these circumstances, if you one-box then you’ll get nothing, but if you two-box you’ll get $1,000. As you only value maximising the money you get, if the PP predicted you’d two-box, you should two-box.

So we have it that whatever the PP predicted, you should two-box. It follows that you should two-box. The possible situations can be set out in a table like the one below.

                          Action
Prediction      One-box               Two-box
One-box         Profit: $1,000,000    Profit: $1,001,000
Two-box         Profit: $0            Profit: $1,000

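(Here is the same dominance reasoning as a small Python sketch, purely as a restatement of the table above; the dictionary and the labels are my own encoding.)

```python
# The payoff table above, keyed by (prediction, action).
payoff = {
    ("one-box", "one-box"): 1_000_000,
    ("one-box", "two-box"): 1_001_000,
    ("two-box", "one-box"): 0,
    ("two-box", "two-box"): 1_000,
}

# Whichever prediction the PP made, two-boxing pays $1,000 more than one-boxing.
for prediction in ("one-box", "two-box"):
    one = payoff[(prediction, "one-box")]
    two = payoff[(prediction, "two-box")]
    assert two > one
    print(f"Prediction {prediction}: one-box ${one:,}, two-box ${two:,}")
```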
A reason for one-boxing

One reason why you might think that you should one-box is as follows. The PP is a perfect predictor. Thus, whatever you do, the PP will have predicted it. So if you one-box, the PP already predicted that you’d one-box and put $1,000,000 in the opaque box. If you one-box, you get $1,000,000. And if you two-box, the PP already predicted that you’d two-box, and put nothing in the opaque box. If you two-box, you get $1,000. Since you only care about maximising the amount of cash you get, you should one-box.

Contradiction!

So it seems that you should two-box (because two-boxing is better – according to your values – than one-boxing) and also that you should one-box (because one-boxing is better – according to your values – than two-boxing). But this is a contradiction: bad. One way to remove the whiff of paradox would be if we could reject either the two-boxer’s or the one-boxer’s reasoning.

Can we reject the two-boxer’s reasoning?

I think so. As mentioned in the reason for one-boxing, the PP is perfect. It follows that two of the squares in the diagram are impossible. It cannot be that you choose one box and that the PP predicted you’d two-box. And nor can it be that you choose two boxes and the PP predicted you’d one-box. These possibilities are ruled out ex hypothesi.

The table of possibilities should look like this:

                          Action
Prediction      One-box               Two-box
One-box         Profit: $1,000,000    Impossible
Two-box         Impossible            Profit: $1,000

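(And here is the one-boxer's point in the same style: a sketch, with the same home-made encoding, in which the off-diagonal cells are simply discarded because a perfect predictor cannot be wrong.)

```python
# The full payoff table, keyed by (prediction, action).
payoff = {
    ("one-box", "one-box"): 1_000_000,
    ("one-box", "two-box"): 1_001_000,  # impossible: the PP would have predicted wrongly
    ("two-box", "one-box"): 0,          # impossible: the PP would have predicted wrongly
    ("two-box", "two-box"): 1_000,
}

# With a perfect predictor, only the cells where prediction and action agree remain.
possible = {action: profit
            for (prediction, action), profit in payoff.items()
            if prediction == action}

print(possible)                         # {'one-box': 1000000, 'two-box': 1000}
print(max(possible, key=possible.get))  # 'one-box'
```

Of the outcomes that remain, one-boxing pays $1,000,000 and two-boxing pays $1,000, which is just the one-boxer's conclusion.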
Victory for the one-boxer? Not if there’s also a flaw in their reasoning.

Can we reject the one-boxer’s reasoning?

The one-boxer seems to ignore the fact that, when they come to choose, the contents of the opaque box are fixed. And what they choose will not change whatever it is that is in the box (if there’s $1,000,000 it won’t disappear if they two-box, and if there’s nothing, no $1,000,000 will mysteriously materialise if they one-box). This is true, but I don’t think it harms the one-boxer’s argument, since it is also true that, whatever is chosen, the PP predicted that that choice would be made, and filled the opaque box appropriately.

To put things another way, once the PP makes her prediction, your choice is determined in the following sense: given that the PP predicts that you will φ, it follows that you will φ. The prediction needn’t cause you to φ, but nevertheless, the fact that the PP has predicted that you will φ entails that you will φ. (That the forecast of rain tomorrow is true today presumably doesn’t cause it to rain tomorrow, but that the forecast is true does entail that it will rain tomorrow.)

Perhaps it’s not possible for it to be the case, before you φ, that you will φ, and also true that you freely φ. If so, then the situation described here is not a possible one, and so we don’t need to worry about whether one-boxers or two-boxers are correct. But if this scenario is possible, then I can’t see anything wrong with the one-boxer’s reasoning.

Conclusion

In these sorts of situations (if they're coherent), you should be a one-boxer.

Thursday, October 9, 2008

This blog is not intended for the purposes of spam! It is intended for serious philosophical work. The only reason no serious philosophical work has appeared yet is that Alan gave us a crate of wine that we are now under an obligation to consume before any serious work can continue.