These are some notes about design arguments for the existence of God. They are based on my readings of Benjamin Jantzen’s excellent book An Introduction to Design Arguments, which was published by Cambridge University Press back in 2014.
1. Likelihood Versions of the Design Argument
Design arguments for the existence of God are popular and persistent. They all share a common form. They start with evidence drawn from the real world — the remarkable way in which a stick insect resembles a stick; the echolocation of bats; the fact that the planet earth exists in the habitable zone; the fine tuning of the physical constants for the production of life in the universe; or the collection of all such examples — and then argue that this evidence points to the existence of a designer, i.e. God.
This basic common form has been developed in numerous ways over the course of human history. Most recently, it has been common to present design arguments using the formal trappings of probability theory and, quite often, this involves the use of likelihood comparisons. ‘Likelihood’ here must be understood in its formal sense. In everyday language, the term ‘likely’ is synonymous with ‘probable’. In its formal sense, its meaning is subtly different: a likelihood is a measure of how probable some piece of evidence is given the truth of some particular theory.
Let’s use an example. Suppose you have a jar filled with one hundred beans. You are told that one of three hypotheses about that jar of beans is true, but not which one. The three hypotheses are:
H1: The jar only contains black beans.
H2: The jar contains 50 black beans and 50 green beans.
H3: The jar contains 25 black beans and 75 green beans.
Suppose you draw a bean from the jar. It is green. This is now some evidence (E) that you can use to rank the likelihood of the different hypotheses. How likely is it that you would draw a green bean if H1 were true? Answer: zero. H1 says that all the beans are black. If you draw a green bean, you immediately disconfirm H1. What about H2 and H3? There, the situation is slightly different. Both of those hypotheses allow for the existence of green beans. Nevertheless, E is more expected on H3 than it is on H2. That is to say, E is more likely on H3 than it is on H2. In formal notation, the picture looks like this:
Pr (E|H2) = 0.50
Pr (E|H3) = 0.75
Therefore, Pr (E|H2) < Pr (E|H3)
Notice that this doesn’t tell us anything about the probability of the respective hypotheses. Likelihood is a measure of the probability of E|H and not a measure of the probability of H|E (the so-called ‘posterior probability’ of a hypothesis). This is pretty important because there are cases in which the posterior probability of a hypothesis and the likelihood it confers on the evidence are radically divergent. Based on the above example, we conclude that H3 is the more likely theory: it confers the greatest probability on the observed evidence. But suppose we were also told that 90 percent of all jars contain a 50-50 mix of black and green beans, whereas only 5 percent contain the 25-75 mix. If that were true, H2 would be the more probable hypothesis, even if we did draw a green bean from the jar. (You can do the formal calculation using Bayes’ Theorem if you like). The only cases in which likelihood arguments tell us anything about the posterior probability of a theory are cases in which all the available hypotheses are equally probable prior to observing the evidence (i.e. when the ‘principle of indifference’ can be applied to the hypotheses).
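The formal calculation mentioned above can be sketched in a few lines of Python. The likelihoods come from the example; the priors of 0.90 for H2 and 0.05 for H3 are from the text, while assigning the remaining 0.05 to H1 is my own assumption, since the text leaves H1’s prior unstated.

```python
# Bayes' Theorem for the bean-jar example: posterior ∝ prior × likelihood.
priors = {"H1": 0.05, "H2": 0.90, "H3": 0.05}  # H1's prior is an assumed filler value
likelihoods = {"H1": 0.0, "H2": 0.50, "H3": 0.75}  # Pr(green bean | H)

# Total probability of the evidence, Pr(E).
evidence = sum(priors[h] * likelihoods[h] for h in priors)

# Posterior probability of each hypothesis, Pr(H|E).
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}

for h, p in posteriors.items():
    print(h, round(p, 3))
# H2 comes out at roughly 0.923 and H3 at roughly 0.077: H2 is far more
# probable than H3, even though H3 confers the higher likelihood on E.
```

The point of the sketch is just to make the divergence vivid: the hypothesis with the highest likelihood (H3) ends up with a much lower posterior probability than H2 once the unequal priors are factored in.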
This hasn’t deterred some theists from defending likelihood versions of the design argument. The reason for this is that they think that when it comes to comparing certain hypotheses we are in a situation in which the principle of indifference can be applied. More particularly, they think that when it comes to explaining evidence of design in the world, the leading available theories (theism and naturalism) both have equal prior probabilities and hence the fact that the evidence of design is more likely on theism than it is on naturalism gives some succour to the theist. In other words, they think the following argument holds:
- Notation: E = Remarkable adaptiveness of life in the universe; T = hypothesis of theistic design; and N = hypothesis of naturalistic causation.
- (1) Prior probabilities of T and N are equal.
- (2) Pr (E|T) >> Pr (E|N) [probability of E given theism is much higher than the probability of E given naturalism]
- (3) Therefore, Pr (T|E) >> Pr (N|E) [theism has a much higher posterior probability than naturalism]
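The logic of steps (1)–(3) can be checked with a quick numerical sketch. The likelihood values below are purely illustrative placeholders of my own (the argument itself assigns no numbers); the point is that when the priors are equal, the ratio of the posteriors collapses to the ratio of the likelihoods.

```python
# Premise (1): equal priors for theism (T) and naturalism (N).
prior_T = prior_N = 0.5

# Premise (2): Pr(E|T) >> Pr(E|N). These numbers are hypothetical.
lik_T, lik_N = 0.9, 0.001

# Bayes' Theorem for each hypothesis.
evidence = prior_T * lik_T + prior_N * lik_N
post_T = prior_T * lik_T / evidence
post_N = prior_N * lik_N / evidence

# With equal priors, the posterior ratio equals the likelihood ratio,
# so the conclusion (3) follows: Pr(T|E) >> Pr(N|E).
print(post_T / post_N)  # same as lik_T / lik_N
```

This is why premise (1), the applicability of the principle of indifference, is doing so much work: without it, a large likelihood ratio guarantees nothing about the posteriors, as the bean-jar example showed.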
Is this argument any good?
2. The Inverse Gambler’s Fallacy
There are many things we could challenge about the likelihood argument. An obvious one is its underspecification of the relevant explanatory hypotheses. Consider N. How exactly does naturalistic causation explain the adaptiveness of life? One answer is simply to say that it explains it through chance. The naturalistic view is that the universe churns through different arrangements of matter and energy, and through sheer luck it occasionally stumbles on arrangements of matter and energy that take on the adaptive properties of life. If your understanding of N is that it only explains E in terms of pure chance, then the likelihood argument may well be effective (though see the objection discussed in the next section).
But no one thinks that naturalism explains adaptiveness in terms of pure chance: the universe doesn’t constantly rearrange itself in completely random ways. Even before the time of Darwin, there were versions of naturalism that went beyond pure chance as an explanation. David Hume, in his famous Dialogues Concerning Natural Religion, argued that design could be explained in Epicurean terms. The idea here is that although the universe does churn through different arrangements of matter and energy, some of those arrangements are more dynamically stable than others. They tend to persist, replicate and adapt. Those are the arrangements to which we attribute the properties of life and adaptiveness. Jantzen fleshes out this Humean/Epicurean hypothesis in the following manner (2014, 180):
- N1: The traits of organisms (and the universe as a whole) are the product of a process involving chance, the laws by which atoms blindly interact with one another, and a great deal of time — after a very long time, the universe eventually stumbled across a configuration that is dynamically stable.
If this is your understanding of naturalism, then the likelihood argument is cast into more doubt. It is at least plausible that the probability of E|N is much closer to the probability of E|T (particularly if the universe has been around for long enough).
Elliott Sober disputes this Humean argument. He says that proponents of it overstate the likelihood of E because they commit something called the Inverse Gambler’s Fallacy. The regular Gambler’s Fallacy arises from the tendency to assume that if a particular random outcome occurs several times in a row it is less likely to happen in the future. Thus, if you flip a coin ten times and get heads on each occasion, you would commit the Gambler’s Fallacy if you assumed that you were more likely to get tails on the next flip. Although the numbers of heads and tails tend to be roughly equal over the very long term, the probability of the next coin flip being tails is the same as it is for every other coin flip, i.e. 0.5. Thus, the regular Gambler’s Fallacy is the tendency to overstate the likelihood of an event (a tails) given a previous set of evidence.
The Inverse Gambler’s Fallacy is, as you might expect, the reverse. It’s the tendency to overstate the likelihood of a particular event given a limited set of evidence. Jantzen explains the concept with a simple example. Imagine you have just wandered into a casino and you see somebody roll a double-six on a pair of dice. That’s your evidence (call it E1). There are two hypotheses that could explain that observation:
- H4: This is the first roll of the evening.
- H5: There have been many rolls of the dice that evening.
Although the probability of any particular roll of the dice being a double-six is 1/36, if there were lots of rolls in the course of one evening you would expect to see a double-six at some stage (indeed, given enough rolls the probability of eventually seeing a double-six would start to approach 1). Thus, you could argue that:
- Pr (E1|H4) << Pr (E1|H5)
And hence that H5 is the more likely explanation. But this, according to Sober, is a fallacy. You have overstated the likelihood of the observation you made. The reason for this is that E1 is ambiguously stated. It could mean ‘a double-six was rolled at some point in the evening’ or it could mean ‘a double-six was rolled on this particular occasion’. If it means the former, then H5 is indeed more likely than H4. But if it means the latter, then the likelihoods of H4 and H5 are equal. For any particular throw, they each confer an equal likelihood on E1, i.e. 1/36.
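The arithmetic behind the ambiguity can be made explicit. On the ‘some point in the evening’ reading, the probability of seeing at least one double-six in n independent rolls of a fair pair of dice is 1 − (35/36)^n, which grows towards 1 as n increases; on the ‘this particular occasion’ reading, the probability is 1/36 no matter how many rolls there have been. A minimal sketch:

```python
def p_at_least_one_double_six(n):
    """Probability of at least one double-six in n rolls of a fair pair of dice."""
    # The complement of 'at least one double-six' is 'no double-six on any roll',
    # which has probability (35/36)**n for independent fair rolls.
    return 1 - (35 / 36) ** n

# 'Some point in the evening' reading: grows with the number of rolls.
print(p_at_least_one_double_six(1))    # 1/36, about 0.028
print(p_at_least_one_double_six(100))  # about 0.94
print(p_at_least_one_double_six(500))  # very close to 1

# 'This particular occasion' reading: fixed at 1/36 under both H4 and H5.
```

The fallacy, on Sober’s diagnosis, is sliding from the first reading to the second: many rolls make *some* double-six near certain, but they do nothing to raise the likelihood of *this* roll being a double-six.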
How does this apply to the Humean argument? The answer, according to Sober, is that the Humean explanation is like H5. The Humean idea is that given enough time and enough rolls of the galactic dice, we will eventually see arrangements of matter and energy that have the properties of life and adaptiveness. This could well be true, but for any particular arrangement of matter and energy — e.g. the functional adaptation of the eye for receiving and processing light signals — the Humean explanation does not confer that much likelihood on the outcome. Hence, the person who assigns a high value to Pr (E|N1) is committing the Inverse Gambler’s Fallacy.
There are, however, three problems with this criticism. The first is that the evidence of design that is relevant to the likelihood argument is general, not specific. Theists are appealing to the general presence of adaptiveness in the universe over the course of history, not just specific individual instances. The Humean explanation takes this into account. So the Humean argument does not really involve anything analogous to the Inverse Gambler’s Fallacy. Second, if the focus were on specific instances of adaptiveness, the theistic explanation would be in just as much trouble as the Humean one. After all, the generic hypothesis of theism doesn’t explain why God would have chosen to design particular functions and adaptations into animals. You need a much more specific hypothesis for that, and providing one runs into all sorts of trouble (more on this below). Third, the Humean explanation obviously does not exhaust all the possible naturalistic explanations of adaptiveness. The most scientifically credible explanation — Darwinian natural selection — confers a much higher likelihood on adaptiveness than the simple Humean explanation. If we were to compare the likelihood of E given Darwinian natural selection to the likelihood of E given theism, the comparative likelihoods would be much harder to disentangle, and would arguably lean in favour of naturalism.
3. The Problem of Auxiliary Hypotheses
There are other problems with likelihood arguments. Sober’s favourite criticism of them focuses on the role of auxiliary hypotheses in their computation. His point is subtle and its significance is often missed. The idea is that whenever we make a claim concerning the likelihood of one hypothesis relative to another, we usually leave a great deal unsaid (implicit) that helps us in making that comparison. When I gave the example of the dice being rolled in the previous section, I assumed a number of things to be true: I assumed that dice rolls are statistically independent; I assumed that there are usually many dice rolls in any given evening of play; I assumed that the dice in question were fair. It was only because of these assumptions that I was able to say, with reasonable confidence, that the probability of any particular roll resulting in a double six was 1/36 or that the probability of observing a double-six at some point in the evening was reasonably high.
All of these assumptions are auxiliary hypotheses and they are needed if we are going to make sensible likelihood comparisons. In everyday scenarios, the presence of auxiliary hypotheses in a likelihood calculation is not a major cause for concern. We share common experience of the world and so rightfully take a lot for granted. Things are rather different when it comes to explaining the origins of adaptiveness in the universe as a whole. When we reach this level of explanatory generality, there is less and less that we can assume uncontroversially. This means that it is very difficult to compute sensible likelihoods for general explanations of adaptiveness.
This is a particular problem with theism. In order for the general hypothesis of theism to confer plausible likelihoods on the presence of adaptiveness, we would need to add a number of auxiliary hypotheses concerning the intentions and goals of the designer. For example, when looking at the human eye (or any collection of examples of adaptiveness), we would have to be able to say that God has goals X, Y and Z and these explain why the eye (or the collection) has the features it does. Some theists might be willing to speculate about the intentions and goals of God, but doing so gets them into trouble, especially when it comes to explaining away instances of natural evil. They would have to state the intentions and goals that justify God in creating parasites that incubate in and destroy the functionality of the eye (to give but one example). In light of the problem of evil, many theists are unwilling to speculate in too much detail about divine intentions. They resign themselves to the view that God’s intentions are unknowable or beyond our ken. But in doing this, they undercut the likelihood argument.
Note, however, that the problem with auxiliary hypotheses is not just a problem for the theist. It is also a problem for the naturalist. In order for the naturalist to compute plausible likelihoods, they have to add more detail to explain why the adaptiveness we see has the features it has. There are various ways of doing this, e.g. by making assumptions about natural laws, historical conditions on earth, and so on. They would all have to get added into the mix to make a reasonable likelihood comparison. The problem then, as Jantzen puts it, is that ‘Sober’s objection is not really about picking auxiliary assumptions but rather identifying allowable hypotheses. But [the likelihood principle] tells us nothing about what counts as an acceptable hypothesis. Nor does the principle of Indifference. So it seems we have to either entertain them all or risk begging the question in favour of one or another conclusion’ (Jantzen 2014, 184).
The net result is that it is very difficult to come up with a plausible likelihood argument for design.