
Saturday, June 18, 2011

The Virtues of False Belief - A Puzzle

Stephen Stich


Sorry, it's been a slow week on the blogging front, and this is likely to continue on into next week. Here's something to tide you over until I get back to more frequent posting. It's essentially a paradox -- or, at least, I think it might be a paradox -- and I'd like your thoughts on it.

What I want to do is to look at a passage from Stephen Stich on the virtues of false beliefs. It's actually taken from an article by Michael Bishop and J.D. Trout on their epistemological theory Strategic Reliabilism (SR). The passage contains the paradox, but some background might be appropriate before quoting it, so here are the essentials: SR recommends that we adopt belief-forming strategies that are robustly reliable and efficient. In defending this, Bishop and Trout appeal to the Aristotelian principle that, in the long run, good reasoning tends to produce good outcomes. Stich criticises this appeal because it reduces SR to a crude form of pragmatism. In other words, it would allow us to pick epistemic strategies that produce false beliefs simply because those beliefs will lead to better outcomes.

Here's what Stich says on this point (from p. 1059 of Bishop and Trout):

[I]n some very significant situations, having false beliefs leads to better outcomes than having true beliefs. Though examples are legion, perhaps the best known comes from the work of Shelley Taylor and her colleagues who have shown that 'positive illusions' and 'unrealistic optimism' in patients with HIV leads to both better psychological coping and slower progression of the infection. To put the matter simply, if you have false beliefs you live longer and have a higher quality of life.

Now I don't know much about Shelley Taylor and her work on HIV patients, but something about this passage struck me as being odd. If optimistic beliefs about one's future disease progression lead to better outcomes, then in what sense are those beliefs false? Surely, those who were optimistic were right to be so since they did, as a matter of fact, live longer and have a better quality of life?

On the face of it, Stich's claims about false beliefs seem paradoxical. At least, they seem that way to me. But I wonder if more could be said. The way I see it, there are two interpretations of "false belief" at work here, and each can be used to support a different conclusion. As follows:


  • (1) A belief about one's future disease progression is false if it is based on a faulty inference from existing evidence concerning patients with the same disease.
  • (2) The patients in Taylor's HIV study drew faulty inferences from existing evidence about patients with the same disease.
  • (3) Therefore the patients in Taylor's HIV study had false beliefs.


Or:


  • (1*) A belief about one's future disease progression is false if, as a matter of fact, it does not correspond with the actual progression of one's disease.
  • (2*) The optimistic beliefs of the patients in Taylor's HIV study did correspond with the actual progression of their disease.
  • (3*) Therefore, the patients in Taylor's HIV study did not have false beliefs.


One problem with the second argument is that I don't know if (2*) is true because I don't know the content of their beliefs. It may be that their beliefs were completely out of line with reality but that they still did better than those with more pessimistic beliefs.

Anyway, here are my questions to you: Is there something paradoxical about Stich's false belief example? Does it simply highlight the difference between different epistemological theories? Or am I just being incredibly naive about the whole thing? Answers on a postcard (or in the comments section).

9 comments:

  1. This comment has been removed by the author.

  2. Since 'positive illusions' and 'unrealistic optimism' might include the belief in spiritual, experimental and homeopathic treatments, or in sudden remission of the disease, I don't think Stich's passage need be paradoxical. As for this..

    "SR recommends that we adopt belief-forming strategies that are robustly reliable and efficient. In defending this, Bishop and Trout appeal to the Aristotelian principle that, in the long run, good reasoning tends to produce good outcome."

    .. I can't see how it is possible to adopt a belief (or belief in the products of a belief-forming strategy) for pragmatic reasons. This is what seems paradoxical to me: that I should simultaneously understand a proposition as true whilst taking as my grounds for its truth some other propositions which I realize do not conduce to the truth of the proposition I believe.

  3. Here's my preliminary two cents.

    If the patient's belief is simply a vague "I will live longer and have a better quality of life than the average patient with HIV" and this comes to pass, then it seems to me that he did not hold a false belief. Stich's argument for their having a false belief would therefore have to rest on the first of your interpretations. But that argument seems obviously a case of the genetic fallacy; in fact, it seems like a textbook example of it to me. If this is what he means, I think he should rather express his hang-up with SR as allowing obviously unwarranted beliefs to be warranted. Though this could possibly raise the criticism of circular reasoning. But still, no paradox on this reading.

    But the terms "positive illusion" and "unrealistic optimism" suggest to me that Stich has in mind beliefs that express more than the vague hope my patient expressed in the above paragraph. Indeed, the terms "illusion" and "unrealistic" usually have falsity built into their definitions. Its hard to see how something unrealistic can happen, or how we can have an illusion of something actually occuring. But the beneficial results the patient experiences do not make these false beliefs true, so on this reading of Stitch there is no paradox.

    P.S. Love the game theory posts! Really appreciate them! :)

  4. Isn't a useful belief different from a true belief?
    Isn't SR concerned with the latter?

  5. TaiChi,
    could you please be more specific? I don't understand what you mean by "whilst taking as my grounds for its truth some other propositions which I realize do not conduce to the truth of the proposition I believe".

    From what I understand, one adopts SR if one wants to reason better about significant matters. Even if this is a pragmatic reason, the outcome is that you can reason better about the evidence available for the significant matter you are interested in. I take it this is conducive to the truth of the belief formed upon evaluating the evidence.

  6. The "paradox" seems very natural to me. Let us imagine that the average sufferer of disease X lives an average of T years, but that their state of mind matters at least a little (say, because stress makes you die sooner, and people get stressed by being told they will die soon).

    Specifically, if an optimist thinks "I'll live at least 2T years" and hence isn't stressed, they will live an average of more than T years (but less than 2T). Conversely a pessimist thinking "I'll die in T/2 years" will go on to die before T years (but more than T/2), on average. So although both are wrong about how long they will live, they would both be correct about whether they will live more or less than the average.

    In this example we might have a quantitatively false belief (the number of years) but a qualitatively true belief (more or less than the average). There is a causal pathway from over-exaggeration of one's beliefs to a change in the outcome. There is a paradox because the less accurate the prediction becomes, the better the outcome: the more you overestimate your lifetime, the longer it will be. This is of course assuming that stress due to fear of death is the only factor at play.
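
    To make the arithmetic explicit, here is a minimal Python sketch of that toy model. The actual_lifespan function, the 0.5 adjustment factor, and the value of T are all invented purely for illustration; nothing here comes from Taylor's study.

        # Purely illustrative toy model: lifespan depends on the believed
        # lifespan because optimism reduces stress, and stress shortens life.
        # All numbers are made up for the sake of the example.

        def actual_lifespan(believed_years, average_years):
            # Toy outcome: you gain (or lose) half the gap between your
            # belief and the average. The 0.5 factor is arbitrary.
            return average_years + 0.5 * (believed_years - average_years)

        T = 10.0  # hypothetical average survival with disease X, in years

        for label, belief in [("pessimist", T / 2), ("realist", T), ("optimist", 2 * T)]:
            outcome = actual_lifespan(belief, T)
            print(f"{label}: believes {belief:.1f} yrs, lives {outcome:.1f} yrs; "
                  f"exact belief true: {belief == outcome}; "
                  f"right about whether they beat the average: {(belief > T) == (outcome > T)}")

    On these made-up numbers the pessimist's and optimist's exact beliefs both come out false, yet each is right about which side of the average they end up on, and the bigger the overestimate, the longer the life.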

    This would also relate to how a doctor should advise patients about their life expectancy. Is it better to tell people they will live much longer than T, so that they actually live a little longer than T, even though your prediction will look bad? Or is it better to tell them they will live less than T, so that they outlive your prediction, even though they will live a shorter time overall? Etc.

  7. el ninio,

    What I mean is that I don't believe there are such things as practical reasons for belief, and so one cannot believe in the propositions endorsed by SR for the pragmatic reason that Bishop and Trout offer.

    Why do I believe that? Well, suppose I am a billionaire and offer you a cool $1m to believe that the universe contains an odd number of stars. (Let's also suppose I have some way of ascertaining that your belief is genuine.) On any account of practical reasons, my offering you $1m to believe that there is an odd number of stars in the universe is a practical reason to believe that there are an odd number of stars in the universe. But can you do it? Is it actually possible for you to voluntaristically adopt a belief in this way, even though it would be in your interest to do so? I doubt it. But if I am right that you can't believe in the universe's having an odd number of stars for the reason that you will receive $1m, then the $1m isn't after all a reason for you to believe it, for a reason to believe something is at least a potential explanation of someone's believing the thing it is a reason for.
    As in the above case, so too elsewhere: I think the point generalizes. Practical reasons are not the sorts of things that can be reasons for belief, because our beliefs are not something over which we have voluntaristic control, something that can be adopted at will. We can of course act with the aim of bringing ourselves into a state of belief, and I'm sure that we can have practical reasons for that, but the success of this procedure will rely in the end on the acquisition of theoretical (i.e. truth-conducive) reasons for belief, without which we cannot sincerely believe a proposition as true or likely true.

    "From what I understand one adopts SR if one want's to reason better about significant matters. Even if this is a pragmatic reason, the outcome is that you can reason better about the evidence available for the significant matter you are interested in. I take it this is conducive to the truth of the belief formed upon evaluating the evidence."

    So, one should adopt SR because one will reason better, and better reasoning is truth-conducive. Sounds good to me. But that is not the argument which Bishop and Trout give, and it is not the argument which bothers me.

  8. My impression of Bishop and Trout was that they were offering an incentive to use a strategy that is likely to result in true beliefs, not that they were saying this justifies SR or provides a normative reason for using it. IMO, it's impossible to come up with a reason to use a truth-apt strategy that everyone will find appealing. In the end, those who want to know what is true will use truth-apt strategies, those who do not will not, and there's nothing we can say to convince them otherwise.

  9. I don't think Stich's objection is incoherent, but I don't think it's that useful. By this I mean: there's something strange about the way our (the cog sci/philosophy) community focuses on input, on the beliefs we form about the world, but much less on their effect on output (decision-making and action), and the conflict Stich sees here exposes the imbalanced way we think about beliefs. If I can paraphrase your critique of Stich's objection, you're saying that the effect the beliefs have matters more than whether they are a complete and consistent description of the world as it is up until this point (the point at which the patient is being asked what they think their prognosis is). I agree that the OUTCOME of a belief is what matters, but to take a strong position that outcomes matter more and that good outcomes can come from false beliefs (and bad reasoning) opens the way to a very slippery slope, and I think that would be Stich's larger concern.
