## Monday, April 26, 2010

### Update

Sorry for the lack of posts. Got a backlog of work to contend with at the moment. Probably won't have any new stuff til next week.

In the meantime, I suggest browsing the back catalogue.

## Monday, April 19, 2010

### Causal Models (Part 1): Constructing a Causal Model

This post is part of my series on Steve Sloman's book Causal Models. For an index, see here.

Over the next few posts, I will be going through Chapter 4 of Sloman's book. In this chapter, Sloman introduces the causal model framework that is currently in vogue with computer scientists and statisticians. Sloman's presentation of this is based on the more comprehensive version provided by Judea Pearl.

The causal model framework provides an abstract language for representing causal systems. It is a graphical probabilistic model: it allows us to model a causal system even when we are ignorant or uncertain about the likelihood of events. In doing so, it relies on Bayesian network theory.

In this part I will simply sketch the main components of this modeling framework. Fuller consideration of the implications will have to wait.

1. The Three Parts to a Causal Model
Scientific modeling is all about representation, i.e. about depicting one state of affairs or event in terms of something else. For example, the Hodgkin-Huxley model of the action potential (discussed here) represents the flow of electrical current across the membrane of a neuron in a set of mathematical equations.

A causal model has three main components. First, there is the causal system that you want to represent. Second, there is a set of probability distributions that represent this causal system. And finally, there is the graph that represents both the causal system and the associated probability distribution.

The basic schematic for all causal models is illustrated below. The arrows indicate what is represented by what.

It is difficult to make sense of this in the abstract, so let's consider an example. Fire is a causal system. It includes oxygen, an energy source, and sparks, all of which contribute to produce the entity or event we call "fire".

Fire can be represented by a set of probability distributions. First, there is the marginal probability of the fire occurring, i.e. Pr(Fire) in the absence of other conditions. We can assume that this probability is low. Second, there is the conditional probability of the fire, i.e. the probability of the fire given certain conditions. So, for example, the probability of fire given the presence of oxygen, sparks and an energy source is high; the probability of the fire given sparks, an energy source but no oxygen is low; and so on.

Fire can also be represented as a causal graph. This is a simple box and arrow diagram showing the causal relations between oxygen, sparks, energy sources and fire. In this instance, the three conditions jointly contribute to the production of fire.

This gives us the following model.

2. Independence
It is possible to derive conditional probabilities for virtually everything. For example, I could work out the probability of my laptop exploding given the presence of a full moon. I would probably find that the probability of my laptop exploding is unchanged by the presence of the full moon. In other words, the marginal probability of the exploding laptop is equal to the conditional probability. This implies that the events are independent.

Independence is one of the most important pieces of information we can have when constructing causal models. It allows us to make the graph and the probability distributions much simpler.
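The independence check described above can be run directly on a joint distribution: the variables are independent just in case the conditional probability equals the marginal. Here is a minimal Python sketch using the laptop/full-moon example; the joint probabilities (and the variable labels) are invented purely for illustration.

```python
# Joint distribution over two binary variables. The numbers are
# invented so that the two variables come out (almost exactly) independent.
joint = {
    ("explodes", "full_moon"): 0.000001,
    ("explodes", "no_moon"):   0.000029,
    ("fine",     "full_moon"): 0.033332,
    ("fine",     "no_moon"):   0.966638,
}

def marginal(laptop):
    """Pr(laptop outcome), summing over the moon variable."""
    return sum(p for (l, _), p in joint.items() if l == laptop)

def conditional(laptop, moon):
    """Pr(laptop outcome | moon), by the ratio definition."""
    pr_moon = sum(p for (_, m), p in joint.items() if m == moon)
    return joint[(laptop, moon)] / pr_moon

# Independence: the conditional probability equals the marginal
# (up to rounding), so learning about the moon tells us nothing.
print(abs(conditional("explodes", "full_moon") - marginal("explodes")) < 1e-6)
# → True
```

Spotting independencies like this is what lets a modeler drop arrows from the graph and rows from the probability tables.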

3. Structural Equations
Even relatively simple causal systems, like the fire system outlined above, can have complex sets of probability distributions associated with them.

For instance, when I first introduced the notion of conditional probability in relation to the fire-system I only listed a couple of examples. I should have listed the probability of fire given all possible states of the three conditions (oxygen, sparks and energy sources). This would be as follows:

• Pr(Fire | sparks, oxygen, energy source) = High
• Pr(Fire | sparks, oxygen, no energy source) = 0
• Pr(Fire | sparks, no oxygen, energy source) = 0
• Pr(Fire | sparks, no oxygen, no energy source) = 0
• Pr(Fire | no sparks, oxygen, energy source) = very low
• Pr(Fire | no sparks, oxygen, no energy source) = 0
• Pr(Fire | no sparks, no oxygen, energy source) = 0
• Pr(Fire | no sparks, no oxygen, no energy source) = 0

Even this is a simplification. It assumes that the conditions come in just two states "present" or "absent". In reality, they could assume a range of values.
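The table above can be written down directly as a lookup structure. The following Python sketch does just that; the numeric values are illustrative stand-ins for the qualitative labels "high" and "very low".

```python
# The conditional probability table Pr(Fire | sparks, oxygen, energy source).
# Numeric values are invented stand-ins for "high", "very low", etc.
cpt = {
    # (sparks, oxygen, energy_source): Pr(fire)
    (True,  True,  True):  0.9,   # high
    (True,  True,  False): 0.0,
    (True,  False, True):  0.0,
    (True,  False, False): 0.0,
    (False, True,  True):  0.01,  # very low
    (False, True,  False): 0.0,
    (False, False, True):  0.0,
    (False, False, False): 0.0,
}

def pr_fire(sparks, oxygen, energy_source):
    return cpt[(sparks, oxygen, energy_source)]

print(pr_fire(True, True, True))  # → 0.9
```

Note that with n binary conditions the table needs 2**n entries, which is exactly the explosion in complexity discussed below.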

If this level of complexity is present in a relatively simple example like fire, imagine the amplification of complexity when modeling a complex causal system like cancer. This would involve many variables (lifestyle factors, genetic factors) with many possible values.

To overcome this complexity, modelers use structural equations. These represent the functional relationships between the elements of the causal mechanism (illustrated by the graph) in a single equation instead of a list of probability distributions. The structural equation for the fire-system is the following:
• Fire = f(spark, oxygen, energy source)
The f denotes a conjunction, i.e. all three conditions must be present for fire to occur. This version of the equation does not include probabilities. To do so would simply require the inclusion of an additional variable called "error" or "noise". This would represent randomness and thereby make the equation probabilistic.
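The structural equation can be sketched as a function. The deterministic version below treats f as a conjunction, as in the text; the probabilistic version adds a noise variable (the 0.9 threshold is an invented placeholder, not anything from Sloman or Pearl).

```python
import random

def fire(spark, oxygen, energy_source):
    # f as a conjunction: fire occurs iff all three conditions hold.
    return spark and oxygen and energy_source

def fire_noisy(spark, oxygen, energy_source, rng=random.random):
    # Adding an "error"/"noise" variable makes the equation probabilistic:
    # even with every condition present, fire occurs only most of the time.
    return fire(spark, oxygen, energy_source) and rng() < 0.9

print(fire(True, True, True))   # → True
print(fire(True, False, True))  # → False
```

One function replaces the eight-row table: the functional form carries the same information far more compactly.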

That's it for now. In the next part, we will tease apart the probabilistic nature of the causal modeling framework.

## Sunday, April 18, 2010

### Potential Theistic Explanations: Optimality and its Discontents

This post is part of my discussion of Chapter 5 of Gregory Dawes's book Theism and Explanation that began here and, much to everyone's surprise, continued here.

Dawes's basic goal is to show that there are no good in principle objections to theistic explanations. They can be genuine intentional explanations. It just so happens that they aren't very good explanations.

Chapter 5 of Dawes's book deals with some of the in principle objections. Since the argument is that divine explanations are types of intentional explanation, the proponent must posit a specific divine intention as the explanation of a given state of affairs.

The theological sceptic thinks this is untenable: we cannot know the mind of God. So we cannot offer divine intentional explanations. We saw this last time when looking at Elliott Sober's objection to intelligent design.

Dawes responded to Sober by claiming that we can put some constraints on theistic explanations. We do so by employing the rationality and optimality principles. The optimality principle states that God, because of his divine nature, would always choose the most optimal means to an end.

Dawes thinks this helps to constrain potential theistic explanations. If the theist wants to claim God (G) is the best explanation for something (X), then they have a double burden: (i) they must posit a specific divine goal that would require X; and (ii) they must show that X is the most optimal means to achieving the divine goal.

This is potentially devastating for the theist since it seems obvious to many that the world is imbued with sub-optimality. This would seem to imply that God could not be a good explanation for what we observe.

Thus, some will be inclined to object to this optimality principle. Dawes considers four such objections.

1. God is not Obliged to act Optimally
The first objection derives from certain assumptions about God's agency. It is argued that since God is omnipotent and perfectly free, he is under no obligation to act in an optimal way.

Dawes argues that this objection is misplaced. The optimality principle places a constraint on potential theistic explanations; it does not place a constraint on God. It is an epistemological claim; not a metaphysical or theological claim.

If theism wants to enter the explanatory market, then it has to play by the rules. It has to offer itself up for rational scrutiny along with other explanations. If the response to that scrutiny is that God can do whatever he likes, then theism is inscrutable and cannot be an explanatory thesis.

2. There is no Optimal Action
The second objection begins with some parallels between the idea of the best possible world and the optimal realisation of a divine intention. Surely if we are to claim that X is optimal, we are implying that X is a feature of the best possible world?

The problem with this parallel, according to the objectors, is that the concept of the best possible world is incoherent. Two reasons are offered for this (i) there is no single scale of value and (ii) value is potentially infinite.

These are problems with which utilitarians have long contended. For instance, classical hedonic utilitarians argued that conscious pleasure was the sole measure of value. "Piffle!" replied John Stuart Mill. There is a range of higher pleasures that are not commensurable with the lower pleasures. But if there is no single scale of value, then we cannot establish which is the best possible world.

Likewise, if value comes in units (e.g. utiles) then it is something that you can repeatedly add to (like an infinite set). And if it is infinite, there is no best possible world.

Dawes agrees that these are forceful criticisms but identifies three possible responses.

First, this objection may simply prove the incoherence of theism. After all, the optimality principle seems plausible: if God is omniscient, omnipotent and omnibenevolent, then it seems right to expect him to act optimally. So maybe the problem is not with the acceptability of the optimality principle, but with the very idea of God. Perhaps to speak of perfect goodness is to land ourselves in a conceptual muddle.

Second, Dawes thinks it is possible to reject the parallel between the optimal realisation and the best possible world. The idea here is that optimal realisation is only concerned with specific features of the actual world and not with general features of all possible worlds.

Third, it may be that a comparative judgement is all that is required. In other words, even if we cannot talk about a best possible world, we can talk about a better world. This line of thought is attributed to William Rowe. If Rowe is right, then comparing the merit of different realisations of a divine plan should be a doddle.

3. We Cannot Make Such Judgements
The third objection to the optimality principle stems from modal scepticism. This is something I alluded to in the first post on Dawes's book. The idea is that in proposing an intentional explanation, we assume knowledge of the options that were available to the intentional agent.

So in explaining why you chose chocolate ice-cream, I can imagine the options that were open to you and make certain guesses about why you chose as you did. The problem is that we can't do this when considering God as an intentional agent. We have no idea what options were available to him.

This type of modal scepticism is promoted by Peter van Inwagen, who uses it in responding to the problem of evil. (He rejects the idea that God explains anything; we do not come to know of God's existence through evidence and observation.)

Dawes has a couple of responses to this. First, he argues that a complete modal scepticism is unwarranted. We may not be able to comment on all the options available to God, but we may be able to make some decent comparative assessments.

Second, modal scepticism has devastating implications for the doctrine of divine omnipotence. Omnipotence is usually defined in terms of being able to do what is logically possible. But modal scepticism implies that we cannot even know what is logically possible. Hence we cannot appeal to divine omnipotence.

4. Intelligent Design is not Optimal Design
The final objection comes from the intelligent design theorist William Dembski. The official ID-position is that the designer is intelligent, not necessarily divine. So ID is not committed to optimality. Of course, Dembski is a theist, but he thinks that sub-optimality arguments must be dealt with from a theological perspective not a scientific one.

Suppose the theist is challenged by an atheist claiming that the wasteful suffering in the natural world provides evidence against the existence of a theistic designer. Dembski would respond by trying to reconcile God's nature with what we observe. In other words, by constructing a theodicy. This would still not affect our ability to infer design simpliciter.

Dawes thinks that this underestimates the problem. Sure, it is possible to reconcile God's existence with wasteful suffering, but this only works on the presupposition of God. It is not possible to infer the existence of God from a sub-optimal natural world.

In other words, unless we specify the divine intention and adopt the optimality constraint, we must concede that theism is not in the explanatory game.

### Potential Theistic Explanations: Sober Scepticism

(Series Index)

This post continues the discussion of Chapter 5 of Gregory Dawes's book Theism and Explanation that began here.

As noted last time, Dawes is trying to argue that theistic explanations cannot be a priori wiped from the explanatory menu. They can be genuine intentional explanations: they can explain events and states of affairs by relating them to a set of beliefs, desires and intentions.

It may turn out that these intentional explanations aren't any good in practice (Dawes makes this case), but they are, nonetheless, worthy of consideration.

Chapter 5 deals with some in principle objections to theistic explanations. These in principle objections come in two forms: theological scepticism and modal scepticism.

In this entry, we will take a look at one variety of theological scepticism that is attributable to philosopher of science Elliott Sober. We will also look at Dawes's response to Sober's scepticism.

1. Sober Scepticism Described
Sober's theological scepticism begins with his consideration of the design argument. Sober analyses this argument in accordance with what he calls the "likelihood principle". This uses concepts from confirmation theory.

The likelihood principle is a way of testing the strength of a potential explanation by comparing its probability with the probability of an alternative explanation. As follows:
• Observation O supports hypothesis H1 more than it supports hypothesis H2 if and only if Pr(O|H1) > Pr(O|H2)
Restating this last part in plain English we get: "if and only if the probability of O given H1 is greater than the probability of O given H2". So we are concerned with the entailment relationship between a hypothesis and an observation: how likely does the hypothesis render the observation?
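At bottom, the likelihood principle is just a comparison of two conditional probabilities. The Python sketch below makes this explicit; the numbers are hypothetical placeholders, not an attempt to assign real likelihoods to any hypothesis.

```python
def supports_more(pr_o_given_h1, pr_o_given_h2):
    """Likelihood principle: O supports H1 over H2 iff Pr(O|H1) > Pr(O|H2)."""
    return pr_o_given_h1 > pr_o_given_h2

# Hypothetical placeholder likelihoods for two competing hypotheses.
pr_o_given_h1 = 0.8
pr_o_given_h2 = 0.001

print(supports_more(pr_o_given_h1, pr_o_given_h2))  # → True
```

Sober's worry, discussed below, is that for the design hypothesis the left-hand term cannot be assigned any value at all without independent assumptions about the designer.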

Looking at the arguments of intelligent design theorists,* Sober points out that they try to compare the likelihood of two hypotheses, design (D) and chance (C), given certain observed features of the natural world (O).

They are aware of Darwinian explanations but think these are insufficient. So the contest is between chance and design. And, of course, they think this is no contest at all: design wins hands down. In other words:
• Pr(O|D) > Pr(O|C)
Sober thinks they are wrong to jump to this conclusion, not because chance is a good explanation, but because the design hypothesis is incapable of yielding any probability judgements whatsoever.

His reasoning is as follows. He concedes that we often do make inferences to design (or, more appropriately, intention) in the sciences. For example, archaeologists do it when they discover artifacts. But the reason they can do this is that they already know something about the beliefs, desires and intentions of human beings. Using this background knowledge, they can specify how human beings might be expected to act and make guesses about the artifacts they might be inclined to create.

There is no analogous background knowledge when it comes to God. The theist can of course look at the structure of the vertebrate eye and exclaim "how clever of God to include a blindspot! A permanent reminder of our limitations and his transcendence." But to do so merely begs the question: is the vertebrate eye actually attributable to a divine intention?

We can postulate divine goal-ability pairs til the cows come home, but this is ad-hocery, pure and simple. It has no explanatory merit.

Hence, Sober concludes, putative theistic explanations are D.O.A.

2. Rationality and Optimality
Dawes actually thinks Sober is correct in some of his criticisms. It is surely unacceptable to dream up ad-hoc divine intentions on the basis of what we observe. We need to impose some independent constraints upon theistic explanations.

Where Dawes differs from Sober is in thinking that there are two plausible independent constraints on theistic explanations. The first of these comes from looking at intentional explanations as a whole; the second from looking at the specific nature of the divine agent.

a. Rationality
The first constraint comes from the "rationality principle". Whenever we use an intentional explanation we must assume that the agent is acting rationally. This means that we assume their actions flow from their beliefs, desires and intentions (BDIs).

To be more precise, we assume that the action they have chosen to perform is: (a) consistent with their BDI-complex; (b) efficacious, i.e. likely to attain their goal; and (c) efficient, i.e. requires the least expenditure of time and effort (given their beliefs).

To illustrate what this might mean in practice, Dawes uses Gould's example of the panda's thumb. The panda has five regular digits (like all mammals) and one strange bony protuberance that it uses to strip bamboo.

Gould famously argued that the panda's thumb, and other functional oddities like it, were good arguments for the truth of evolution. Why? Because they were clearly cobbled-together solutions to selective pressures, necessitated by the panda's convoluted ancestry. They were not the kind of solutions you would expect from an intelligent designer.

Sober disagrees with Gould's argument. Gould, he argues, is presuming we know what God would be inclined to do if he built pandas. We cannot make that presumption.

Dawes thinks Gould was right to argue as he did because in doing so he employed the rationality constraint. If we posit a particular goal such as "creating a panda that can strip leaves from bamboo", then we are surely right to point out that the means chosen was inefficient.

There are problems with Gould's approach. Foremost among them is that he never actually specifies what the relevant divine intention is. Why would God want to create a panda in the first place?

This is a weakness in theistic explanations: posited divine intentions will always be open to challenge. But this is the price you pay if you want to offer theistic explanations. You cannot offer vague generalisations about a divine "plan"; you have to identify a specific intention.

b. Optimality
The second constraint on theistic explanations arises from the nature of the divine agent. God is no ordinary rational agent. He is omniscient, omnipotent and morally perfect. Consequently, he would adopt the most optimal means for achieving his goals.

To make this more explicit, Dawes lists the following qualities of divine agency, qualities that are missing for ordinary agents:

1. God cannot act on false beliefs. You might open the fridge on the mistaken belief that the last slice of chocolate cake still resides within. Unbeknownst to you, I have taken it. God, being omniscient, cannot act on false beliefs like this (although, note, there are theologies in which he may lack knowledge of what free agents will do).
2. God has unlimited logically possible options open to him. You might be limited in your choices by time and physical capabilities; God faces none of those restrictions.
3. God would not suffer from weakness of will. You might resolve to give up alcohol and then find yourself falling off the wagon. This could not happen to God.
4. God could directly will whatever he likes. Just as you can raise your arm merely by thinking about it, God could create the world.

Taken together, Dawes argues that these qualities imply optimality.

The optimality principle gives us another constraint with which proposed theistic explanations can be assessed. To withstand scrutiny, the proposed theistic explanation will have to fend off sub-optimality arguments. These are arguments that show how the means chosen to achieve some divine goal are wasteful and inefficient.

In the next entry we will look at the objections to this optimality principle.

* Note: I hope to be covering Sober's arguments in more depth on this blog at a later time.

## Saturday, April 17, 2010

### Potential Theistic Explanations: Introduction

Gregory Dawes's book Theism and Explanation is one of the more careful treatments of the question: can God be an explanation of anything?

I have no intention of going through the entire book, but I was recently reading through Chapter 5 on "Potential Theistic Explanations" and thought it merited a blog series (why waste all that effort reading it, right?).

1. The Story so Far...
In Chapter 5, Dawes deals with the position of theological scepticism. This is the claim that God cannot be an explanation of anything because we do not know enough about him. Interestingly, this position is defended by atheists, such as Elliott Sober, and by theists, such as Peter van Inwagen.

You might wonder why theists defend this position, but the answer is straightforward: they think we come to know of God's existence in a manner distinct from how we come to know the truth of ordinary explanatory hypotheses. They may also be conscious of the fact that bringing God into the explanatory arena is not necessarily a boon to theism.

To get the most out of the discussion of Chapter 5, I need to offer a quick background sketch of what Dawes tries to establish in the preceding chapters. I am not going to cover everything he says, just the essentials. Nor am I going to defend any of the claims presented in this sketch. Dawes does this, but for present purposes they will need to be taken as given.

The first important point relates to the purpose of Dawes's book. He is trying to argue that there are no in principle objections to theistic explanations; that theistic explanations cannot be wiped from the explanatory menu.

There may, however, be good de facto objections to theistic explanations. Indeed, the final chapter of Dawes's book presents several of these de facto objections.

Having established the purpose of the book, Dawes proceeds in chapters 2, 3 and 4 to the general topic of explanation and the specific topic of theistic explanation. He makes two important claims. First, that any purported explanation must satisfy the requirements of Peirce's schema for abductive inference. This schema is the following:
• The surprising fact E has been observed.
• H, if true, would entail E.
• Therefore, there is reason to suspect H is true.
As he points out at length elsewhere, it does not take much to satisfy this schema. The real test of an explanation is how well it measures up against a list of explanatory virtues. I have discussed this previously.

The second important claim is that theistic explanations are a brand of intentional explanation. What does this mean? Well, an intentional explanation is one that explains something in terms of the beliefs, desires and intentions of rational agents.

So I explain your opening of the fridge door, in terms of (a) your intention to open the door which arises from (b) your desire to retrieve the milk and (c) your belief that the milk is in the fridge.

To add some formalistic dressing to this relatively simple idea, Dawes presents the following practical syllogism. All intentional explanations must fit with this syllogism:

• There exists a rational agent A with intended goal G.
• A has beliefs B1, B2, ..., Bn relating to the attainment of G.
• If B1, B2, ..., Bn were true, E would be the best way of achieving G.
• Rational agents always choose the best way of achieving their goals.
• Therefore A will do E.

This practical syllogism incorporates what Dawes calls the "rationality principle". I will talk about this in more detail later.

2. Theological Scepticism
Now that we have some appreciation for the backdrop to Chapter 5, we can proceed to its actual contents. In this chapter, Dawes tries to counter some in principle objections to theistic explanations. As mentioned at the outset, these objections come in the shape of theological scepticism.

Dawes distinguishes between two varieties of theological scepticism. Remember, the idea is that God could be an explanation of certain states of affairs because they are attributable to a divine intention. This is analogous to how we explain the behaviour of other intentional agents.

The first type of theological sceptic thinks that the analogy does not hold water. The divine agent is wholly distinct from the human and animal agents we have to contend with on a daily basis. We have no idea what God's beliefs and desires really are, so we have no juice to put into the divine explanatory engine.

The second type of sceptic focuses more on our inability to make modal judgements about God. Modality is, very roughly, a bit of jargon for propositions that are qualified by terms such as "possible", "necessary", "contingent" etc. It is most often brought up when discussing possible worlds. I mentioned this in one of my posts about causation.

Applying this to the present debate, we cannot have a theistic explanation because we simply do not know what options (possible worlds) were open to God when trying to implement his intentions. Are there innumerable options or is he restricted in some way?

Dawes responds to these criticisms by saying that we can make certain assumptions about God and so can actually place certain constraints on a theistic explanation. Two of these constraints are central to this chapter: (i) the rationality principle and (ii) the optimality principle.

We will take a look at these constraints and take a more detailed look at the arguments of the sceptics in future entries.

## Friday, April 16, 2010

It's been awhile but I am finally writing about explanations again.

In this post I'll examine two basic explanatory qualities: breadth and depth. It is often said that a good explanation should have breadth and depth, but what does this mean? And is it really a good thing?

1. Explanations vs. Arguments
Before addressing the nature of these two concepts, let's backtrack for a moment and consider the differences between explanations and arguments.

An argument is an attempt to demonstrate or establish the truth of a particular proposition. It works from a set of premises to a conclusion. For example, let's say that you and I disagree about whether the best way to tackle a recession is through increased government expenditure (in the form of a "stimulus package") or cuts in government spending.

Let's say I'm in favour of the stimulus package. I want to get you to agree with the proposition "The stimulus package is the way to get the country out of recession". How can I do this? Well, I might present the following argument:

• (1) The recession has been caused by a collapse in consumer confidence. People are afraid to spend money because they worry about further deepening of our economic woes.
• (2) The collapse in confidence creates a vicious cycle: less money is being spent which reduces output in the economy, and the reduction in output adds to people's lack of confidence.
• (3) To break this vicious cycle, we need to do something to restore consumer confidence.
• (4) A stimulus package, by pumping money into the economy and encouraging spending, will increase output and increase confidence.
• (5) Therefore, the stimulus package is the way to get the country out of recession.

Now this may be a good or a bad argument. It does not matter. What does matter is that the argument is about establishing the truth of the concluding proposition.

This is to be contrasted with an explanation. An explanation begins with the truth of a proposition and then tries to identify the factors that account for the truth of that proposition.

Let's imagine that you and I are arguing once more about the economic woes of our country. Except this time our argument takes place a few years later and the country has indeed exited recession. We both agree on this fact. We now want to know: what accounts for this?

Suppose that in the interim period the government has in fact passed a stimulus package. I think that the stimulus package is the obvious explanation of our increasing prosperity. How could I convince you of this? Well, I would in fact present the exact same argument as I did when we were in recession. You will challenge this by presenting alternative explanations and we will assess these explanations in terms of their explanatory virtues.

In sum, an explanation works from an accepted proposition to a set of premises that would, if true, entail that proposition.

That is, roughly, the formal distinction between an explanation and an argument. In practice, the distinction counts for little. For example, although an explanation may be introduced in order to explain a particular proposition, the strength of the explanation may be established because it makes successful predictions about other propositions.

The predictions work a little bit like arguments, i.e. we accept the truthfulness of the explanation and try to see what would follow (deductively) from its truth.

2. Breadth and Depth
All of the above was an extended introduction to what I really wanted to talk about: the concepts of breadth and depth. I am going to use the relationship between the work of three giants of the scientific revolution to illustrate these concepts. They are Isaac Newton, Johannes Kepler and Galileo Galilei.

First, let's look at the concept of depth. Kepler, using the data gathered and compiled by Tycho Brahe, noticed something odd about the motions of the planets. He explained these oddities in terms of three laws of planetary motion. These laws suggested that the planets followed elliptical orbits around the sun.

Some years later, the deeply unpleasant Isaac Newton came along and showed how Kepler's elliptical orbits were themselves accounted for by his law of gravitational attraction. Newton's laws were deeper than Kepler's.

You can think of depth in terms of the unending string of "why" questions:
• "Why is X true?"
• "Because of Y. "
• "Why is Y true?"
• "Because of Z."
• "Why is Z true?"
• "Stop asking these silly questions."
The deeper the explanation, the more of these questions it can answer.

To look at the concept of breadth we need only add to the picture Galileo's laws concerning falling bodies here on earth. Again, Galileo took the behaviour of falling objects as his data and developed a set of laws that accounted for this behaviour. And again, Newton came along and showed how Galileo's laws were accounted for by his law of gravitation.

Thus, Newton's laws have breadth as well as depth: they explain both the motions of the planets and the motions of falling bodies.

Breadth is a function of how much data is covered by an explanation. It is linked to depth. As an explanation becomes deeper and more abstract, it covers more and more facts.

We can represent the breadth and depth of Newton's laws schematically as follows.

Although breadth and depth are counted as explanatory virtues, some caution is warranted. An explanation can be broad and deep and still be trivial; explaining everything and nothing.

We need more criteria.

## Thursday, April 15, 2010

### The Psychology of Norms (Part 4): Research Questions

This post is part of my series How Society Works. For an index, see here.

I am currently working my way through an article by Sripada and Stich entitled "A Framework for the Psychology of Norms". No prizes for guessing what it is about.

In the first two parts I reviewed some of the data about normative behaviour. In Part 3, I presented Sripada and Stich's model for explaining the psychology of norms, and introduced some of the questions that should guide future research in this area.

In this final part I continue to look at these future research questions, focusing in particular on the role of the emotions, explicit reasoning and cognitive biases in normative psychology.

1. The Emotions
Philosophical iconoclast David Hume once argued that the emotions had a significant role to play in normative judgement. Sripada and Stich think there is good evidence to suggest that the emotions play a part in generating punitive motivations.

Indeed, research in this area suggests that three phenomena are closely linked: (i) norm-violation; (ii) the experience of emotions such as contempt and disgust; and (iii) the desire to punish the elicitor of the emotion. (Sripada and Stich review some studies by Jonathan Haidt, Joshua Greene and others in support of this).

There is also some speculation to the effect that emotions play an important role in generating compliance motivations. However, Sripada and Stich note a lack of compelling evidence to support this conjecture.

These speculations about the role of the emotions necessitate some additions to the box-and-arrow model presented in Part 3. The arrows with the dotted lines indicate hypothetical links; the solid lines indicate links for which there is good evidence.

2. Explicit Reasoning
An important question about normative psychology concerns the role of explicit reasoning in normative judgement. The classic Kohlbergian position maintains that people pass through a number of stages in moral development. The later stages of this development involve detached moral reasoning.

In this detached moral reasoning stage, Kohlberg stresses the importance of "ideal perspective-taking". This refers to our ability to abstract away from personal circumstances to discover general normative principles. This is the type of thing that Rawls was trying to achieve with his original position and the veil of ignorance.

The actual role that detached moral reasoning plays in normative judgement and behaviour is unclear. Sripada and Stich think it is likely that detached reasoning is separate from the mechanism they have been outlining to this point. They argue that this would explain why rational awareness and revision of moral principles is often superficial and ineffective.

Studies by Jonathan Haidt support this contention. Using a technique called moral dumbfounding, Haidt presents subjects with scenarios that elicit strong moral disapproval even though the subjects cannot articulate any principle the scenarios violate.

This suggests yet another revision to the model under discussion.

3. Biases and Constraints
The final set of questions for future research relates to the role of biases and constraints in normative acquisition. There is plenty of evidence to suggest that biases feature in other psychological processes, but does this carry over to the acquisition of norms?

Sripada and Stich recommend that we begin with the Pac-man Hypothesis. This hypothesis maintains that people can acquire any and all types of norm. We then consider all the ways in which the Pac-man Hypothesis could be wrong.

The first way in which it could be wrong is if at least some norms are innate. This might be true if there were some norms that were shared by all cultures. However, this does not appear to be true: norms do cluster around common themes, but there is wide variation and there are some exceptions.

The second way in which it could be wrong is if moral judgement is constrained by a set of innate principles and parameters. This is exactly what Marc Hauser argues.

The third and final way in which it could be wrong is if some norms are more cognitively attractive or if certain situations are more conducive to moral learning. For example, proponents of gene-culture coevolution, such as Boyd and Richerson, argue that we more readily acquire norms from certain individuals due to a suite of biases:
• Prestige Bias: we emulate those who are more prestigious.
• Age bias: we emulate those who are slightly older.
• Gender bias: we emulate those who are of the same gender.
• Conformity bias: we try to fit in.
There is some evidence for age and gender biases, and lots of evidence for prestige and conformity biases.

That brings us to the end of Sripada and Stich's article.

## Wednesday, April 14, 2010

### The Psychology of Norms (Part 3): Psychological Architecture

This post is part of my series on How Society Works. For an index, see here.

I am currently working my way through an article by Sripada and Stich entitled "A Framework for the Psychology of Norms". The goal of the paper is to provide a framework for research into the cognitive underpinnings of normative behaviour.

In part 1 we reviewed some social level facts about norms. In part 2 we reviewed some individual level facts about norms. In this part we will look at Sripada and Stich's proposed model for the psychological architecture that supports these facts.

The model presented here is described by the authors as a "first pass". They add elements to it later in the article as they consider some open questions for future research. I'll introduce some of those questions at the end of this post.

1. The Psychological Model
The authors argue for a psychological model with two major mechanisms: (a) a norm-acquisition mechanism; and (b) a norm-implementation mechanism.

The norm-acquisition mechanism helps us to pick up on external behavioural cues in our cultural environment. From these cues it infers that a particular set of norms is in existence. The acquisition-mechanism starts to work at an early age and is involuntary in nature.

The implementation-mechanism maintains a database of norms and generates a set of intrinsic motivations to comply with those norms. It may also play a role in detecting norm-violation. More on this later.

The basic model is illustrated in box-and-arrow fashion below.

The authors argue that this model helps to explain the data reviewed earlier in the article, makes substantive claims about innateness, and provides a framework within which future research questions can be pursued.

2. Some Open Questions
In the remainder of the article, Sripada and Stich review some of these questions. I will look at the first three sets of questions here, leaving the remainder for Part 4 of this series.

a. Morality and Normative Psychology
One big set of questions relates to how moral norms are differentiated, if at all, from other norms. There is some evidence suggesting that people process and interpret moral norms in a distinctive way. This suggests that moral norms might constitute a distinct subset within the norm-database or even have their own unique, uncontaminated psychological system.

The authors speculate that since this question overlaps with metaphysics and semantics (metaethics) it is unlikely to be resolved any time soon.

b. Proximal Cues
The next set of questions relates to the proximal cues that bring about norm acquisition. It could be that norms are acquired in response to displays of punishment, but that seems unlikely given that children seem to acquire norms without exposure to such displays.

The psychologist James Blair suggested that norms are acquired when a parent's "sad faces" are paired with specific actions by a child. This suggestion was thoroughly criticised by the philosopher Shaun Nichols.

It could also be that norms are at least partially acquired in response to verbal instructions.

c. Norm-Storage
The third set of questions relates to the storage of norms in the database. This touches on some long-standing debates in the philosophy of mind.

The classic position, associated with the work of Jerry Fodor, is that norms (like other mental concepts) are stored in sentence-like structures in the brain.

There are, however, a number of alternatives to this. According to exemplar theory, a cluster of cases that exemplify a norm are stored. When confronted with a normative decision, a person will search their database of exemplars and use similarity judgements to figure out what to do in the present context.

A question arises as to whether or not the entire database of norms is searched whenever a decision is made. This is unlikely. Recent cognitive and emotional history is apt to make certain exemplars more readily available to decision-making. Stich is himself a fan of this account.
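
Exemplar theory lends itself to a simple computational sketch. The following Python toy is my own illustration, not anything proposed by Sripada and Stich: stored cases are represented as feature sets paired with a normative status, and a new situation is judged by finding its most similar stored exemplar.

```python
# Toy exemplar-based normative judgement (illustrative only).
# Each exemplar pairs a set of situation features with a normative status.
exemplars = [
    ({"taking", "property", "no permission"}, "forbidden"),
    ({"sharing", "food", "with group"},       "required"),
    ({"greeting", "stranger"},                "permitted"),
]

def jaccard(a, b):
    """Similarity as the overlap between two feature sets."""
    return len(a & b) / len(a | b)

def judge(situation):
    """Return the status of the most similar stored exemplar."""
    best = max(exemplars, key=lambda ex: jaccard(situation, ex[0]))
    return best[1]

print(judge({"taking", "food", "no permission"}))  # "forbidden"
```

A fuller model might weight exemplars by recency or emotional salience, which is roughly the availability-based refinement Stich favours.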

Okay, that's it for Part 3. In Part 4 we will look at some additional research questions on the role of the emotions, explicit reasoning and cognitive biases in normative behaviour.

## Tuesday, April 13, 2010

### The Psychology of Norms (Part 2): Individual-Level Facts

This post is part of my series on How Society Works. For an index, see here.

I am currently looking at an article by Sripada and Stich entitled "A Framework for the Psychology of Norms". The article does exactly what it says on the tin: it provides a framework for investigating the psychology of norms.

In Part 1, I covered the preliminary account of norms and listed some social-level facts about them. To review, norms are principles and rules determining appropriate conduct; they are a cultural universal; they cluster around common themes but have variable content; and there are almost always exceptions to the common themes.

In this part, we will look at some individual-level facts about norms. In other words, we will look at how it is that we become norm-following agents.

1. Norm Acquisition
The most obvious and most evidentially well-supported fact is that people, of all cultures and heritages, seem to acquire norms in a reliable and predictable fashion. Indeed, acquisition occurs relatively early in life. Several studies suggest that children have knowledge of normative rules between the ages of 3 and 5.

A major cross-cultural study by Henrich et al* focused on norms of cooperation and fairness. It was found that while these norms varied in their content, that content was relatively fixed in people's minds by the age of nine.

2. Motivational Effects
It is very clear that the acquisition of norms has a powerful effect on people's motivations. Classic economic rationality would suggest that people are only motivated to follow norms if there is some clear benefit to themselves. We would call this instrumental rationality.

Several lines of evidence suggest that people follow norms for intrinsic reasons. In other words, people are disposed to follow norms even when there is no obvious personal benefit from doing so. Despite this, it would be wrong to say that instrumental rationality is never a factor: human motivation is complex and it is possible that people act for both intrinsic and instrumental reasons at roughly the same time.

One crucial feature of norms is that they tend to encourage people to take an impartial view of their actions. By abstracting away from personal circumstances, norms try to force upon us unselfish modes of reasoning. David Hume was fond of making this point.

There are several lines of evidence supporting the intrinsic motivation hypothesis. Here is a sampling:
• Anthropology and sociology suggest that people internalise norms, i.e. they display a highly reliable lifelong pattern of compliance that is not dependent on overt coercion.
• Robert Frank, an economist, argues that several everyday behaviours, such as tipping at restaurants and returning lost property, are not plausible on the hypothesis of instrumental rationality.
• Daniel Batson's studies of helping behaviour suggest that people are motivated to secure the happiness of others as an end in itself and not merely as a means to their own happiness.
• Experimental economics has found that people follow fairness norms in one-off prisoners' dilemma-style cases. This is true even when they are told that the encounter will be anonymous.

3. Punishment
Perhaps the most convincing evidence for people's willingness to follow norms irrespective of the impact on personal or societal welfare comes from studies of the motivation to punish.

It is found that people are innate retributivists. They have a strong non-consequentialist desire to punish people who violate norms.

There are some complexities to take note of. First, motivations to punish do not always translate into behaviours; they can be suppressed or overridden by other concerns. Second, not every norm has a punishment associated with it.

The evidence supporting the intrinsic motivation to punish is multifarious. Here is a sample:
• Anthropological and sociological literature suggests that punitive emotions and punitive sanctions are common to all societies.
• Experimental economics has found that people will punish norm-violators even when it is costly to do so. For example, in public goods games (where the norm would be to pay into a common investment fund) people are willing to spend extra money to punish those who do not pay into the common investment fund.
• Other psychology experiments have found that mere observers are willing to punish people for norm-violation. This seems clearly inconsistent with a purely self-regarding account of norm-compliance.
The results from experimental economics have been widely replicated, which suggests that the findings are robust.

Finally, it is worth noting that developmental psychologists have found that children systematically exhibit punitive attitudes towards those who violate rules without being taught to exhibit these attitudes. This might lend some support to those who see moral learning as analogous to language learning.

To conclude, human beings seem to have some innate cognitive structures that predispose them to the acquisition of norms. Once they acquire these norms, they seem to follow them in a predominantly intrinsic manner.

In the next part we will look at the hypothetical cognitive structures that might be responsible for all of this.

* Henrich, Boyd, Bowles, Camerer, Fehr and Gintis, Foundations of Human Sociality (Oxford University Press, 2001).

### The Psychology of Norms (Part 1): Social-Level Facts

This post is part of my series on How Society Works. For an index, see here.

I am going to kick things off by looking at the following article:
Chandra Sekhar Sripada & Stephen Stich "A Framework for the Psychology of Norms" in Carruthers, Laurence and Stich The Innate Mind (Vol. 2) Culture and Cognition (OUP, 2007).
Let's get straight to it.

1. Why do we need a framework?
Sripada and Stich begin their article by noting the importance of norms to the study of human sociality. Norms make social life possible, and they are frequently mentioned in the psychological literature. Nonetheless, there has been little systematic attention paid to norms in cognitive science. The goal of this article is to provide a systematic framework for the future investigation of normative systems.

The article is divided into five main sections. The first section offers a preliminary account of what a norm is; the second section sets out some social-level facts about norms; the third section sets out some individual-level facts about norms; the fourth section sketches the psychological framework that the authors promised; and the fifth section highlights some key questions for future research.

In this post we will cover sections 1 and 2 of the article.

2. What is a Norm?
A norm, according to Sripada and Stich, is a rule or principle that specifies which actions are required, permitted or forbidden. According to this definition, a norm does not owe its existence to any particular legal or social institution. Norms can and often do exist without institutional support.

Part of the reason for this has to do with relatively fixed psychological traits that we all seem to share. The picture is roughly the following:
• People pursue norms as ultimate ends, not merely as instrumental ends (although they can do this as well).
• Norm violation automatically engenders punitive attitudes like anger, condemnation and blame. These attitudes sometimes, but not always, lead to punitive behaviour.
These psychological traits seem to make normative systems self-sustaining. Think "invisible hand of the market" and you are on the right track.

3. Social-Level Facts

With the preliminary account of norms under their belts, Sripada and Stich proceed to identify some social-level facts about norms. There are three of them.

The first fact is that norms are a cultural universal. Norms, and sanctions for violating them, are found in all societies and they govern practically all activities within a society. This suggests that there might be an innate basis for the acquisition and implementation of norms.

Although norms are a cultural universal, they display variable content. In other words, different acts are permitted or outlawed to different degrees, in different societies.

The variability is not indefinite. Indeed, certain types of norm pop up over and over again in the ethnographic literature. Sripada and Stich suggest the following as exemplars of this trend:
• Outlawing of incest and other restrictions on sexual activity.
• Outlawing of physical harm and killing.
• Some type of sharing (or equality) norm.
But within these general categories there is considerable variation.

Take the example of incest norms. Sripada and Stich note that every society has incest taboos of some sort but that these norms vary in terms of the sexual activities and types of family relation to which they apply.

Most societies have what is known as a core incest norm: all sexual intercourse between members of the nuclear family is forbidden. But societies also vary in how they extend that core norm. For example, in some tribal societies all marriages within the tribe are outlawed. The idea of variable content is illustrated below (note: the diagram is based on absolutely no data and is intended for illustrative purposes only).

This brings us to the final social-level fact about norms. Although norms cluster around general themes, there are usually exceptions to these general themes. Sticking with the example of incest, there is good evidence to suggest that brother-sister marriages have been tolerated in different times and places, e.g. in Egypt during the Roman period.

That's it for social-level facts. In Part 2 we'll cover individual-level facts about norms.

### How Does Society Work? (Index)

I am interested in how society works: what holds it together and what pulls it apart? But I am not interested in this topic from a classic sociological perspective. I have not read Durkheim, Weber or Parsons and have no immediate intention to do so. Nor do I have any great interest in macroeconomic theory and the likes.

Instead, my perspective on this topic is influenced by some contemporary research in philosophy, psychology and game theory. I'm going to share some of the material that has influenced my thinking in this area.

Here's an index. It will grow as I work my way through my back catalogue.

1. A Framework for the Psychology of Norms by Sripada and Stich

2. Gene-culture Coevolution and the Evolution of Social Institutions by Boyd and Richerson

3. A Framework for the Unification of Behavioral Science by Herbert Gintis

4. Natural Justice (Game Theory and the Social Contract) by Ken Binmore

5. Enforcing Norms

## Monday, April 12, 2010

### What is a Cause? (Part 2) Crossing the Desert

This post is part of my series on Steve Sloman's book Causal Models. For an index, see here.

I am currently working my way through Chapter 3 of Sloman's book which offers a basic introduction to the concept of causation. At the close of Part 1, causation was defined in terms of counterfactual dependence. In this part we will cover some of the problems with this definition.

1. Crossing the Desert
The problems facing the counterfactual definition are well illustrated by a famous thought experiment. I have encountered many versions of this but here I will stick with Sloman's version.

A Sheik, with a well-stocked harem, is setting out on a journey across the desert. Obviously, deserts are not always the most congenial of environments, so he needs to take some precautions. In particular, he needs to ensure he has plenty of fresh water in his water canteen.

Unbeknownst to him, all is not well in the harem. His wife and one of his mistresses are independently plotting his demise. The wife poisons the water in his canteen, while the mistress punctures the canteen so that the water slowly leaks out.

The Sheik sets out on the journey. After a few miles he feels parched. He unscrews the cap on his canteen and finds, much to his displeasure, that it is empty. He soon dies of dehydration.

Question: who caused the Sheik's death, the wife or the mistress?

2. The Mistress...Duh
The answer seems obvious to most: the mistress clearly caused the death of the Sheik. After all, he dies from dehydration not poisoning.

But this answer poses certain problems for our counterfactual definition of causation. The counterfactual definition envisages a "but for" relationship between cause and effect. Applying this to the case at hand, this entails that we must be able to say "but for the actions of the mistress the Sheik would not have died".

This statement is clearly untrue when applied to the scenario above. If the mistress had not punctured the canteen, the Sheik would still have died.

Our definition must be expanded to cover scenarios of this sort. The expansion must allow our definition to be sensitive to what actually happens while at the same time retaining the "possible worlds" aspect of causation.

This is a difficult task. Are there any solutions?

3. Mackie and the INUS
As a matter of fact there are. Perhaps the most famous attempt to deal with problem cases of this sort is that of JL Mackie. Sloman gives a quick summary of Mackie's definition, although he recommends reading the original.

It should also be noted that Sloman's book is not really about these philosophical puzzles so his discussion of Mackie is an aside.

Mackie argued that a "cause" is really only one element in a larger entity, namely a "sufficient set". The sufficient set consists of all the conditions that led to an effect. In the case of the Sheik's dehydration, the sufficient set would include: the biological needs of the human body; the physical environment of the desert; the Sheik's intention to cross the desert; and the mistress's actions.

The sufficient set is not, by itself, necessary for producing an effect. This is obvious in the example given: we know that the sufficient set just described was not necessary for bringing about the Sheik's death; he could also have died from poisoning.

Mackie argued that a "cause" is actually an INUS, which is "An Insufficient but Necessary element of an Unnecessary but Sufficient set". Quite a tongue-twister, I'm sure you'll agree.

What singles out the mistress's actions as the true cause of the Sheik's death is that they are an INUS: the puncturing of the canteen is, by itself, insufficient (I) to cause death, but it is a necessary (N) element of a set of conditions that, while unnecessary (U) since the Sheik could have died in another way, was sufficient (S) to bring about his death.

The poisoned water was not an INUS because the sufficient set that would have involved the Sheik drinking the water did not obtain.
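
Mackie's idea can be made concrete with a small Python sketch. The condition names and the particular sufficient sets below are my own illustrative choices, not Mackie's or Sloman's: a condition counts as an INUS cause only if it belongs to a multi-condition sufficient set that actually obtained.

```python
# Each sufficient set is a collection of conditions that, taken together,
# would guarantee the effect (here: the Sheik's death).
dehydration_set = {"biological needs", "desert environment",
                   "journey intention", "mistress punctures canteen"}
poisoning_set = {"biological needs", "journey intention",
                 "Sheik drinks water", "wife poisons water"}
sufficient_sets = [dehydration_set, poisoning_set]

# What actually happened: the canteen leaked, so the Sheik never drank.
obtained = {"biological needs", "desert environment", "journey intention",
            "mistress punctures canteen", "wife poisons water"}

def is_inus(condition, sufficient_sets, obtained):
    """A condition is an INUS cause when it is a necessary element (N)
    of at least one sufficient set (S) that actually obtained; the
    condition alone is insufficient (I) because the set has other
    members, and the set is unnecessary (U) because rival sufficient
    sets exist."""
    return any(condition in s and s <= obtained and len(s) > 1
               for s in sufficient_sets)

print(is_inus("mistress punctures canteen", sufficient_sets, obtained))  # True
print(is_inus("wife poisons water", sufficient_sets, obtained))          # False
```

The wife's poisoning fails the test for exactly the reason given above: its sufficient set requires the Sheik to drink the water, and that condition never obtained.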

4. Causal Graphs
Sloman's approach to causation involves the use of causal graphs. These are simple "box-and-arrow" diagrams where the boxes represent events and the arrows represent causal relations. He defines a cause as anything that can be represented as an arrow in a causal graph.

If that sounds philosophically suspicious (defining cause in terms of causal arrows?) that's because it is. Sloman acknowledges this but argues it is okay because the framework can explain how human beings use causal knowledge.

The causal graph approach actually suggests a simple answer to the Crossing the Desert thought experiment. There is a potential causal pathway leading from the wife's actions to the Sheik's death; there is also a potential causal pathway leading from the mistress's actions to the Sheik's death. However, the mistress's causal pathway interrupts or displaces the wife's causal pathway. This is illustrated below.
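
The pre-emption of one pathway by another can be sketched in code. Here is a minimal Python toy (the event names are my own illustrative choices) that represents the box-and-arrow graph as an adjacency list and searches for a causal pathway, skipping any events that have been pre-empted:

```python
from collections import deque

# Boxes are events; arrows are causal relations.
graph = {
    "wife poisons water":          ["Sheik drinks poisoned water"],
    "Sheik drinks poisoned water": ["death"],
    "mistress punctures canteen":  ["canteen empties"],
    "canteen empties":             ["death"],
}

def reaches(graph, start, target, removed=frozenset()):
    """Breadth-first search for a causal pathway from start to target,
    ignoring events that have been pre-empted (removed)."""
    queue, seen = deque([start]), {start}
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen and nxt not in removed:
                seen.add(nxt)
                queue.append(nxt)
    return False

# The empty canteen pre-empts the drinking event, cutting the wife's pathway.
preempted = frozenset({"Sheik drinks poisoned water"})
print(reaches(graph, "wife poisons water", "death", preempted))      # False
print(reaches(graph, "mistress punctures canteen", "death", preempted))  # True
```

Without the pre-emption, both pathways reach the death node, which is why the counterfactual test on its own cannot distinguish the two plotters.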

This approach to causation is developed to a much higher degree of sophistication in Chapter 4 of Sloman's book. We will be looking at that in due course.

5. Other Types of Invariant
Sloman closes chapter 3 by looking at how broad our theory of causation should be. He notes that a theory that explains everything explains nothing. There is then a fear that the theory of causal modeling outlined in his book could be explanatorily empty.

Sloman tries to head off this criticism by looking at the concept of invariance. In Chapter 2, Sloman had argued that causation is a type of invariance. In Chapter 3 he notes that it is not the only type of invariance. And since the theory he develops does not cover all types of invariance it is not, prima facie, explanatorily empty.

Other types of invariance include part-whole relations and class-subclass relationships. These types of invariance are covered by set theory, which deals with a variety of logical relationships but does not deal with the logic of causal intervention (as we shall see).

Likewise, probability theory covers other types of invariance (frequencies etc.) and overlaps considerably with causation theory. However, they are not equivalent. This is because probabilities can be applied to correlations and, as we saw in Part 1, correlation is not causation.

In the next entry in this series I will look at Chapter 4 of Sloman's book.

### What is a Cause? (Part 1) The Counterfactual Approach

This post is part of my series on Steve Sloman's book Causal Models. For an index, see here.

Over the next two posts I will run through the concepts introduced and discussed in Chapter 3 of the book. The chapter offers a very basic counterfactual definition of causation and then mentions some problems with this definition.

To some extent the chapter is little more than a gentle warm-up, preparing the reader for the more detailed model of causation that follows in Chapters 4 and 5. That said, it offers a reasonably succinct introduction to some classic issues in the philosophy of causation.

Part One will set out the main features of the counterfactual definition; Part Two will cover the problem cases.

1. Event-Event Causation
Sloman begins by noting that the most common vocabulary of causation is that of event-event causation. In other words, in talking about causation most people are talking about how one event or state of affairs (smoking cigarettes) leads to another event or state of affairs (addiction or cancer).

There are more exotic forms of causation discussed in the literature. Perhaps most famously there is the notion of agent-causation. The idea here is that agents (or persons or souls if you like) cause events in a unique way. This idea is popular among some in the free will debate.

My own feeling is that there is no coherent concept of agent-causation, but one would need to swim through a sea of philosophical verbiage to make the point. There is no point in doing that here.

2. Experiments: Identifying Causes
After that brief introduction, Sloman moves on to consider how we identify causation. He does so by noting the first law of psychology: correlation is not causation.

A correlational study merely identifies when two variables happen to go together. In this sense, a correlational study is merely descriptive: it has no deeper explanatory significance.

For example, a correlational study might find that those possessing two X chromosomes are more likely to have long hair. Does this mean that having two X chromosomes causes long-hairedness, or vice versa? Of course not: the fact that long hair and two X chromosomes go together is likely the result of some third factor (cultural norms).

To work out whether one event causes another we need to do more than describe events; we need to perform an experiment.

The simplest form of experiment deals with two variables: an independent variable (IV) and the dependent variable (DV). We will have some hunch that there is a causal link between the IV and the DV. The goal of the experiment is to see whether this hunch is correct.

In the experiment, the value of the IV is manipulated and the change in the value of the DV is recorded. If the manipulation of the IV consistently results in a variation of the DV, we can infer a causal link, although we should always be cautious in making such inferences.

To give an example, suppose we wish to know whether punishing our children (e.g. taking away pocket money and/or grounding them) changes their behaviour. To do this we need to vary the amount of punishment and record the resulting changes (if any) in the behaviour.

Of course, good experimental design is more complicated than this example suggests. As Sloman notes, a good experiment must do two things:
1. It must use a manipulation technique that is precise. In other words, it must make sure that other potential causes are not being manipulated at the same time.
2. It must use the right statistical tools to detect the effect. In particular, it must ensure that the difference in the value of the DV is not due to random chance.
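
The second requirement can be illustrated with a small simulation. The numbers below are fabricated for illustration, and the permutation test is my own choice of statistical tool, not one Sloman specifies: it estimates how often a difference in the DV at least as large as the observed one would arise by random relabelling of the two groups alone.

```python
import random

random.seed(0)

# Simulated misbehaviour scores (illustrative only) for children under
# low and high levels of punishment: the IV is punishment level, the
# DV is the misbehaviour score.
low_punishment  = [random.gauss(10, 2) for _ in range(30)]
high_punishment = [random.gauss(8, 2) for _ in range(30)]

def permutation_test(a, b, trials=10_000):
    """Estimate the probability that the observed difference in group
    means would arise from random group assignment alone."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    extreme = 0
    for _ in range(trials):
        random.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            extreme += 1
    return extreme / trials

p = permutation_test(low_punishment, high_punishment)
print(f"p ≈ {p:.4f}")  # a small p suggests the difference is not random chance
```

A low p-value addresses requirement 2; requirement 1 (manipulating only the IV) is a matter of experimental design and cannot be fixed by statistics after the fact.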

3. Counterfactual Dependence
One important feature of the experimental method is its ability to compare two or more possible worlds. In one world the values of the IV and the DV are at one level, and in another world they are at a different level.

This is crucial because the most popular definition of causation is counterfactual in nature. So to infer causation, it is not enough to just say that one event follows or precedes another. You must also be able to say that, in another world, if the first event had not occurred then neither would the second. This is known as counterfactual dependence and is illustrated (poorly) below.

To sum up: a causal statement is not a mere description of the actual world; it is a statement about two (or more) possible worlds simultaneously.
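
The two-worlds idea can be captured in a few lines of Python. This is a toy sketch, with the events and worlds my own illustrative choices: a world is just the set of events that occur in it, and counterfactual dependence is a comparison between the actual world and the nearest world lacking the cause.

```python
def occurs(world, event):
    """A world is modelled as the set of events that occur in it."""
    return event in world

def counterfactually_depends(effect, cause, actual_world, counterfactual_world):
    """Effect counterfactually depends on cause when both occur in the
    actual world, and neither occurs in the nearest cause-free world."""
    return (occurs(actual_world, cause) and occurs(actual_world, effect)
            and not occurs(counterfactual_world, cause)
            and not occurs(counterfactual_world, effect))

actual = {"smoking", "cancer"}
never_smoked = set()  # the nearest world in which the person never smoked

print(counterfactually_depends("cancer", "smoking", actual, never_smoked))  # True
```

As the next post shows, this simple definition breaks down in pre-emption cases like the desert scenario, where the effect would have occurred through a different pathway anyway.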

That's enough for now. In the next post we will deal with one classic problem facing the counterfactual definition of causation.

### Causal Models by Steve Sloman (Index)

Some exciting work has been done in the philosophy of causation over the past 20 or so years, most of it coming from cognitive and computer scientists. They have developed some sophisticated tools for constructing causal models.

The apotheosis of this effort is probably manifested in the work of Judea Pearl. However, Pearl's work is not for the faint-hearted.

Steve Sloman's book Causal Models: How People Think About the World and Its Alternatives offers a more manageable introduction to some of the key ideas. It is also of interest to me because it covers some of the psychological research on how people reason about causality.

I've decided to go through Sloman's discussion of the basic elements of causal modeling. The discussion is found in Chapters 3, 4 and 5 of his book.

Chapter 3: What is a Cause?

Chapter 4: Causal Models

Chapter 5: Observation v. Action

### Expanding my Repertoire

I've decided to expand the purview of this blog somewhat. To justify taking the time out to write it, I need to include more of the material that I actually research.

So you can expect more posts on topics in the philosophy and psychology of normative behaviour, causation and game theory. This will also include some material on the psychology of religion. If that interests you, stay tuned.

## Thursday, April 8, 2010

### Must Goodness be Independent of God? (Part 3): Alston and the Divine Metre Stick

This post is part of my series on Wes Morriston's discussions of theistic morality. For an index, see here.

I am currently taking a look at an article by Wes entitled "Must there be a Standard of Moral Goodness Apart from God?". This article looks at the attempts of William Lane Craig and William Alston to rescue divine command morality from the horns of the Euthyphro dilemma.

Part one introduced the whole Euthyphro debate. Part two covered Craig's solution to the dilemma. It was found to be lacking. In this part we will consider Alston's solution. It will also be found to be lacking.

1. The Supervenience Relationship
Alston's solution to the Euthyphro relies on the concept of supervenience. This is something that I have covered before on this blog. It is probably best explained here with an analogy.

Consider a painting. It has different sets of properties. Its pigmental properties are determined by the precise distribution of pigment across the canvas; its aesthetic properties depend on the pigmental properties, though not necessarily in a causal or logical manner. They are said to supervene on the pigmental properties: there can be no change in the aesthetic properties without some change in the pigmental properties.

Alston envisages something like this when it comes to God and the Good. For him, God's commands are an expression of His perfect goodness.* And his perfect goodness is something that supervenes upon his attributes.

According to Alston's picture, God's existence is not merely incidental to the existence of morality. His existence is necessary for the supervenience relationship to hold. No God, no morals.

This is illustrated below.

To this point, Alston's solution sounds suspiciously similar to Craig's. Both seem to be singling out a set of properties as the basis for morality and in doing so their positions seem to reduce to Moral Platonism. Alston tries to avoid this interpretation with an analogy.

2. The Metre Stick Analogy
Alston suggests that there are two distinct kinds of predicate: Platonistic and particularistic. The former predicates are defined in terms of a set of necessary and sufficient conditions; the latter in terms of similarity to a paradigm.

An example of a Platonistic predicate would be triangle. A triangle has a precise geometrical definition. Indeed, the definition is so precise that no existing object, no matter how triangular it appears to be, is actually a triangle.

By way of contrast, an example of a particularistic predicate would be metre. This is a unit of measurement that is defined in terms of similarity to a paradigmatic object. According to popular conception, this object is a platinum-iridium bar in Paris (Morriston points out that this is no longer true).

So now we reach the punchline: the property of goodness is to God as the property of being a metre is to the platinum-iridium bar. Consequently, to say something is good is to highlight its similarity to God.

In this way, Alston hopes to escape Platonism. But does he escape the Euthyphro dilemma?

3. An Unfortunate Analogy?
Recall that the other horn of the Euthyphro is represented by the Ockhamist. According to the Ockhamist, God's commands are good for no other reason than that they are God's. This is thought to make morality arbitrary.

Is Alston's position simply a disguised form of Ockhamism? It would seem so. He claims that God plays the part of moral metre-stick, but what is it about God that justifies choosing him for that role?

Alston has two responses to this.

i. The Significance of Particularism
First, he argues that the objection fails to appreciate the full importance of the Platonist/particularist distinction. If one is going to charge his account with arbitrariness, one may as well ask: what singles out the platinum-iridium bar for the special status of metre-hood? The answer is nothing. Particularist properties simply don't work that way.

But this is, of course, a bizarre answer. The metre is an arbitrarily-chosen unit of measurement. We only use it because we need some agreed-upon unit of measurement. To pick God as the moral metre-stick simply because we need some standard of morality would make a mockery of moral thought.

Alston is aware of this, so he sometimes falls back on the idea that God's "maximality" is what makes him a non-arbitrary moral metre-stick. But this seems to be a reversion to Platonism: it is now maximal love, maximal kindness etc. that are doing all the work.

ii. The Arbitrariness of Platonism
Alston's second response comes in the shape of a riposte to the Platonist. He argues that his invocation of God as the supreme moral principle is no worse than the Platonist's invocation of some abstract property. Explanations have to come to an end somewhere and for Alston that somewhere is God.

This is all well and good. It may well be that explanatory stopping points are always metaphysically arbitrary. But that leaves us with a pragmatic question: is God more useful than a Platonic standard?

It would seem unlikely that he is (these observations are largely my own, although somewhat influenced by what Morriston says).

First of all, we frequently use Platonic properties - e.g. trianglehood - for pragmatic reasons without labouring under the misapprehension that triangles actually exist in the physical world.

We could do something similar in morality by appealing to an ideal observer and using him/her when making moral calculations. This observer is somewhat analogous to God, but no one thinks he/she actually exists. In fact, many secular theories of morality already do this.

Second, we may actually end up in a worse position if we appeal to God. After all, to make the appeal worthwhile, we would need epistemic access to unambiguous divine commands. Such access is sorely lacking.

4. Divine Sovereignty
I'm skipping a large part of Morriston's discussion on the truth conditions of particularistic predicates. I feel that it is too technical for a blog entry. Instead, I will close out this series by summarising what he has to say about divine sovereignty.

One reason that theists cling to divine command theory is that they want morality to be "up to God"; that is, to preserve divine sovereignty. This is what Alston and Craig's solutions to the Euthyphro were designed to do.

Morriston thinks that even if Alston and Craig's theories were successful, they would not preserve the notion of divine sovereignty.

Think about it: if God's nature is responsible for the existence of moral properties (in a particularist or Platonic way), then he would still not be able to exercise control over what those moral properties are, since his nature is not something he chooses.

Craig and Alston admit as much: their answer to the Ockhamist horn of the Euthyphro is that God's commands are not arbitrary because they are constrained by his nature.

This is similar to someone who argues that logical and mathematical truths owe their existence to God's existence. Even if they are right, it would seem odd to say that God literally wills the validity of modus ponens.

If this is right, then a divine command theory of morality would always be superfluous.

* It should be noted that Alston does not defend a divine command theory. He offers his solution to those who might wish to defend such a theory.