I’ve recently been covering evolutionary debunking arguments (see index of posts here). These are a sub-class of the more general naturalistic or causal debunking arguments. Such arguments suggest that if a belief (X) is produced by a causal process (P), and if that process is not reliable or truth-tracking with respect to X, then X is unjustified or unreliably formed. Evolutionary debunking arguments focus specifically on the causal process of evolution.
Debunking arguments of this sort can be deployed to cover a range of beliefs. Joshua Thurow’s recent article:
“Does Cognitive Science Show Belief in God to be Irrational? The Epistemic Consequences of the Cognitive Science of Religion” (2011), International Journal for Philosophy of Religion
is concerned with the impact of evolutionary explanations of our religious belief-forming faculties on the rationality of religious belief (specifically, belief in God).
I’m going to summarise Thurow’s argument over the next few posts. Since a large portion of what Thurow says in introducing his argument has been covered elsewhere in the series on debunking arguments, I’m going to try and cut straight to the chase. In this part I’ll outline his simple version of the debunking argument. In the next part, we’ll consider an objection to this version and develop a stronger one.
The only thing I will say now, in the interests of setting the scene, is that although Thurow acknowledges that there are three basic types of theories in the cognitive science of religion (adaptationist, by-product, and exaptationist), his analysis is limited to by-product theories. But he thinks his arguments can, pardon the pun, be exapted to cover the other theories as well.
The by-product theory suggests that our religious beliefs are the by-products of cognitive faculties that have evolved for other purposes. The best-known and most widely-cited example is the faculty for agency detection. This faculty is what allows us to attribute events and states of affairs to the actions and intentions of other agents. It produces belief in supernatural agents because it is hyperactive (hence it is sometimes called the hyperactive agency detection device, or HADD).
1. Thurow’s First Version of the Argument
One of the nice things about Thurow’s article is that although he ultimately rejects the use of debunking arguments in challenging religious beliefs, he does try to give a decent formulation of the argument. He does this in two parts. First, he develops a relatively simple version of the argument. He then finds this to be deficient and formulates a stronger version. We’ll look at the simpler version first. It runs like this:
- (1) If theory T is true, then religious beliefs are produced and sustained by process P (in this case as a by-product of other cognitive faculties or “Pbp”).
- (2) Process P is unreliable and does not make use of good evidence.
- (3) If the process by which a belief is formed and sustained is unreliable and does not make use of good evidence then that belief is unjustified.
- (4) Therefore, religious belief is unjustified.
The structure of this argument should be familiar to anyone who has read the other entries on debunking arguments. One nice feature is that premise (3) is agnostic as to whether epistemic internalism or externalism is to be preferred.
The key to the argument is premise (2). Focusing on the by-product theory (or process Pbp), we must ask: what grounds do we have for thinking that this process produces unreliable and evidentially deficient beliefs? The answer, according to Thurow, comes from the following argument (this is my interpretation of his reasoning):
- (5) If the by-product theory is true, then even {if there were no God or gods, we would still believe in their existence}.
- (6) If a process would produce a belief in X even if X did not exist/occur, then that process is unreliable or uses poor evidence.
- (7) Therefore, process Pbp is unreliable and does not make use of good evidence.
The First Argument (Slightly Cleaned-up)
There are two key premises here: premise (5), which is a counterfactual claim about the nature of the by-product process of belief formation, and premise (6), which proposes a principle for testing the reliability of a belief-forming process.
Premise (5) would usually be defended by reference to the various kinds of experiment that cognitive scientists perform on the HADD. These experiments reveal that the HADD produces a belief in the existence of agents even when no agents are actually present. That said, premise (5) can still be challenged by some forms of theism. Premise (6) can also be challenged on the grounds that it is not a good test. We’ll consider both of these objections briefly.
2. The Anselmian Objection
The first objection comes from the Anselmian theist. According to them, God cannot fail to exist because God is a necessary being. This means that the first part of the counterfactual proposed in premise (5) (the antecedent within the squiggly brackets, i.e. “if there were no God or gods”) is impossible.
Why is this an issue? Well, it is traditionally supposed that counterfactuals with impossible antecedents (so-called counterpossibles) have trivial truth values. Hence they pose no significant challenge to the views under attack. This point is often marshalled in defence of Divine Command Theories of morality (for example, the classic Euthyphro-inspired objection “what if God commanded something terrible...?” is said to have an impossible antecedent and hence not to be a genuine objection to the theory).
There is a by-now standard response to this kind of objection. It is to point out that many disputes in philosophy turn upon the acceptance or rejection of necessary propositions, and that there doesn’t appear to be anything suspect or trivial about these disputes.
There is another kind of objection that the theist can make. It is to argue that we depend necessarily on God for our existence (and for our belief-forming faculties), and so once again the counterfactual required by Thurow’s argument has an impossible antecedent.
This objection can be met in the same way as the previous one, but there is also another kind of response that is particular to it. It is to point out that the reliability of a belief-forming faculty is distinct from its dependence on something else, and that it is therefore right and proper to investigate the former by imagining scenarios in which the latter does not hold.
Thurow illustrates this with an example. Imagine Jones, an ordinary man who, due to wishful thinking, believes that there is a beer in his fridge. Now suppose that there really is a beer in his fridge and that it is pressing down on a button. The button is linked to an explosive device such that, if Jones removes the beer from the fridge, he will instantly be blown up.
In this scenario, two facts appear to be true: (i) Jones’s existence depends upon the actual presence of a beer in the fridge; and (ii) Jones’s belief that there is a beer in the fridge is formed by an unreliable process (wishful thinking). This suggests that the reliability of a belief that P is not guaranteed by a dependency relationship between P and the believer in P. According to Thurow, it follows that even if there is some kind of dependency relation between religious believers and the object of their beliefs, it is still right and proper to question the processes through which they form such beliefs using counterfactuals of the kind outlined in his argument.
3. The Reliability Test
The other potential bone of contention with Thurow’s initial defence of the debunking argument is the actual test it proposes for reliability, which was:
- (6) If a process would produce a belief in X even if X did not exist/occur, then that process is unreliable.
The problem is that this test seems not to apply to certain kinds of inductive (Bayesian) inferences. Here’s an example from the philosopher Jonathan Vogel:
Two policemen confront an armed mugger who is standing some distance away. One is a rookie and one is a veteran. The rookie attempts to disarm the mugger by firing a bullet down the barrel of the mugger’s gun. The chances of pulling this off are virtually nil. The veteran knows what the rookie is trying to do. When it comes to the actual firing of the shot, the veteran can’t see the outcome. However, based on his years of experience, and his knowledge of the chances of success, he believes (correctly as it turns out) that the rookie probably missed.
I suspect most people will think that the veteran’s belief, in this kind of scenario, was reasonable. He is making a plausible inference about the likely success of the rookie’s shot based on his background knowledge acquired after years of experience.
The problem for Thurow is that the veteran’s belief would be impugned if we were to use his proposed reliability test. After all, the veteran would have made the same kind of inference even if the rookie’s shot had been successful.
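To make the counterexample concrete, here is a minimal sketch of the veteran’s belief-forming process. The numbers are purely illustrative assumptions of mine, not anything in Vogel or Thurow; the point is just that the actual outcome never enters the inference.

```python
# Minimal sketch of the veteran's inference (all numbers are illustrative assumptions).
# The veteran cannot see the result, so his belief depends only on the base rate
# of success, not on what actually happened.

P_SUCCESS = 0.001  # assumed near-zero chance of firing a bullet down the gun barrel

def veterans_belief(shot_actually_succeeded: bool) -> str:
    # The actual outcome never figures in the inference; only background knowledge does.
    return "rookie probably missed" if P_SUCCESS < 0.5 else "rookie probably hit"

print(veterans_belief(shot_actually_succeeded=False))  # "rookie probably missed"
print(veterans_belief(shot_actually_succeeded=True))   # still "rookie probably missed"

# Because the same belief would be produced even if the shot succeeded, the process
# fails test (6), yet the belief itself looks perfectly reasonable.
```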
Thurow concedes that this is a counterexample to his proposed reliability test, but he is undeterred for two reasons. First, the counterexample only covers beliefs formed through inductive inference. The kinds of religious belief we are interested in here are basic or non-inferential in nature. Second, even if those weren’t the relevant kinds of belief, he reckons an alternative test would still yield the same result because there is a significant disanalogy between the veteran’s belief in Vogel’s case and religious belief formed using, say, the HADD. What is this disanalogy? It is that the veteran has a good inductive argument for his belief, and the believer using the HADD does not. After all, inferring divine agency from strange events does not, without further supporting argument, warrant the belief in question.
4. Where to Next?
So far so good for the proponent of the debunking argument. The crucial premise of the original version can be defended using another argument and the premises of this additional argument can in turn be defended from two objections. We might think we’re home and dry by now. But this is not the case. As Thurow points out, there is a persuasive reason for abandoning his proposed reliability test. We’ll see what that is in part two.
The veteran analogy seems really bad to me.
For example, I flip a coin. I believe, for reasons similar to the veteran, that there is a 50% chance the coin has come up heads. However, I have not looked at the coin.
If the veteran example is a good objection to (6), then I would not be justified in that belief. After all, I would believe that the chance that the coin came up heads is 50% even if the coin is in fact tails.
But that's just poor use of language in discussing probability. My beliefs about the probability that a trial event will happen in a particular way are not beliefs about the actual result of a given trial.
To say that the veteran is wrong, you'd have to say that he was wrong about the actual thing he believed: not the result of the shot, but the probability that the result of the shot would come out a particular way. And that's not posited, nor is it likely.
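A minimal sketch of the distinction being drawn here (purely illustrative, not from Thurow's paper): the belief formed before looking is about the chance of heads on a fair flip, not about the actual result of this particular flip.

```python
# Illustrative sketch: a belief about the probability of heads is a claim about
# the trial (a fair coin lands heads half the time), not about one flip's outcome.
import random

def belief_about_chance_of_heads() -> float:
    # Formed without looking at the coin; it concerns the trial, not the result.
    return 0.5

actual_result = random.choice(["heads", "tails"])  # what the coin in fact did

# The belief "P(heads) = 0.5" is not falsified by the coin landing tails; it
# would only be wrong if the coin were biased. So a single outcome is the wrong
# thing to check the belief against.
print(f"actual result: {actual_result}; believed chance of heads: "
      f"{belief_about_chance_of_heads()}")
```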
I think that's a good point. The example does seem to misunderstand the nature of probabilistic estimates in these kinds of cases.
The relevant analogy might be where the veteran can see the outcome, knows that his visual perception is sometimes faulty (maybe 1% of the time) and reaches a conclusion about what he sees by incorporating some background evidence.
That might still be a problem for Thurow's reliability test. I'm writing a bit off the cuff right now so I haven't really thought it through.
Inferring agency from strange events has a lousy track record. Science is full of explanations for hitherto strange events that were attributed to "god". OTOH, the veteran probably has a very good track record of assuming that even another vet could not shoot down the barrel of another gun, especially when they are being targeted by it. He also has some reason from experiential data for supposing that the event is, at least, possible. The theist has no probabilistic data of this nature. There is no known success when religious claims of agency have been rigorously subjected to experiment.
ReplyDeleteSo here's what I'm thinking:
Thurow's reliability test is way too stringent. If you believe, as I do, that most beliefs have an error rate associated with them (even if that rate is low), then the process through which you form your beliefs will fail his test. After all, it will produce beliefs in X even when X does not exist.
I think Thurow tries to avoid this by limiting his argument to religious beliefs held on non-inferential, basic grounds. But even this seems problematic. As far as I know, even Plantinga accepts that our ability to recognise what is and what is not a basic belief is itself prone to error. This is why he accepts that basic beliefs are open to being defeated (how sincere this acceptance is might be challenged).
If this is right, then a reliability test will need to specify some error rate threshold: above that threshold the belief-forming process is unreliable, below that threshold it is reliable. What should the threshold be?
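One way to make the threshold idea concrete, with entirely stipulated numbers (the error rates and the 5% cut-off below are my own assumptions for illustration, not anything Thurow proposes):

```python
# Illustrative contrast between a strict test like (6), on which any chance of
# error disqualifies a process, and a threshold test that tolerates a small,
# stipulated error rate. All numbers are made up for the example.

def reliable_strict(error_rate: float) -> bool:
    # (6)-style test: the process must never produce the belief when X is false.
    return error_rate == 0.0

def reliable_threshold(error_rate: float, threshold: float = 0.05) -> bool:
    # Threshold test: error is tolerated up to the stipulated cut-off.
    return error_rate <= threshold

processes = {
    "ordinary perception": 0.01,  # assumed small but non-zero error rate
    "wishful thinking": 0.60,     # assumed high error rate
}

for name, rate in processes.items():
    print(f"{name}: strict={reliable_strict(rate)}, threshold={reliable_threshold(rate)}")

# Ordinary perception fails the strict test but passes the threshold test;
# wishful thinking fails both. The open question is where the cut-off belongs.
```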