Sunday, October 31, 2010

The End of Moral Realism? (Part 2)



This post is part of my series on Steven Ross's article "The End of Moral Realism?". Part one is available here.

The purpose of Ross's article is to make the case for metaethical constructivism. He does so in a roundabout kind of way, by first suggesting that traditional metaethical debates are too heavily focused on moral ontology and not focused enough on moral justification, and then suggesting that traditional metaethical theories such as realism and projectivism are ill-equipped to answer the justification-question.

Last time out, we considered Ross's case against different forms of moral realism. In this post, we will look at his case against the different forms of projectivism.


1. What is Projectivism?
Unlike the realist, the projectivist maintains that moral properties and moral facts are not to be found "out there" in a world that is independent of our senses and rational interests. Instead, moral judgments are seen as projections of our internal mental lives.

The basic tenets of projectivism can be applied to a number of disciplines, from constitutional interpretation and the rules of chess, to artistic criticism and etiquette. All these disciplines involve some sort of projection of our internal mental representations onto the external world (i.e. adding something to brute fact reality).

So we must ask: what is distinctive about moral projectivism? Well, the popular forms of moral projectivism envisage the act of moral adjudication as equivalent to the act of projecting principles or life plans onto the external world. So, for example, saying "willful murder is vicious" would, in a projectivist worldview, be reinterpreted as a statement or expression about the general principles and plans according to which one lives one's life.

Versions of moral projectivism can be found in the work of R.M. Hare, Allan Gibbard and Simon Blackburn.


2. The Problem with Projectivism
Ross is willing to agree with projectivism to a certain extent, particularly in its characterisation of the ontology of moral values. He thinks it is right to say that moral values are dependent upon (or projections of) internal mental representations. In this regard, projectivism seems to have the upper hand over realism.

Where projectivism fails is in its account of moral justification. Ross argues that, as was the case with the realists, the projectivists are incapable of saying why certain sorts of projection or internal mental representation deserve a certain sort of consideration or weight. For example, most people would say that the autonomy of their fellow human beings is an important consideration, something to be given due weight in practical reasoning. Projectivists are unable to explain why this should be the case.

To draw out this point, Ross asks us to imagine that our evaluative responses are arrayed along a spectrum, with simple visceral responses to stimuli at one end and full-blown moral judgments at the other.


Now, when it comes to the visceral responses there is no need to talk about whether they are justified or acceptable. They are simply products of our contingent biological and personal histories. But when we move along to consider artistic and aesthetic judgments, not to mention moral judgments, the idea of justification and criticism of judgment begins to get a foothold.

If someone judges, say, the later works of Henry James to be psychologically complex and appreciates them for this quality, we think they are justified in their responses: they are seeing something important about James's work. They are seeing a quality that deserves a certain kind of consideration. But it's not clear how the mere idea of projectivism can account for that phenomenon.

When it comes to a projectivist metaethics, its proponents need to tell us why, when projecting principles or life plans, certain kinds of quality or property, such as autonomy, deserve to be included. Ross thinks they can't do this.

Indeed, he notes two trends in the history of projectivist thought. The first, associated with Hare and existentialists like Sartre, accepted that projectivism could not explain why particular qualities deserved consideration. They embraced the fact that we had total freedom to project whatever values we might like onto the world.

The second trend, associated with Blackburn and Gibbard, tries to retreat from the existentialist void and claim that certain values will, of course, be included. They might, for example, appeal to evolution and say that this ensures that parental love, say, will always be among the values that are projected onto the world. Ross thinks that this appeal can do little to stem the tide of moral scepticism because evolution also seems to have endowed us with plenty of unwelcome dispositions and preferences.

So, Ross concludes that projectivists have no account of justification. At this point, having pinpointed the failings of realism and projectivism, Ross proceeds to consider the constructivist alternative. We will look at that next time out.

Saturday, October 30, 2010

The End of Moral Realism? (Part 1)


"The End of Moral Realism?" is an article by Steven Ross which argues that constructivism, an ethical theory which I have covered before, offers us the hope for a coherent metaethics. Since this is a topic I am interested in, I will run through the arguments in Ross's paper. (Note: Springer seem to have open access to their journals right now so you can get the article for free)

Ross's central contention is that most metaethical debates revolve around the correct nature of moral ontology, i.e. the kinds of things to which moral terms like good and bad refer. Ross thinks this focus is unfortunate because the question of justification -- i.e. determining the truth conditions of moral statements -- is far more important to morality than that of ontology. Once we get the right picture of justification, the problem of ontology disappears.

Ross's article tries to show how two classic metaethical theories, realism and projectivism, utterly fail to account for justification, and how constructivism can account for justification. He notes at the outset that one should not expect neat and decisive results in moral theory because the questions being pursued are difficult. Nonetheless, he still thinks constructivism is better than rival metaethical theories.

In this post I will look at Ross's arguments against moral realism. Next time I will consider the arguments against projectivism. And after that, I will summarise his case in favour of constructivism.


1. A Taxonomy of Realisms
Put simply, moral realism is the view that there are mind-independent moral facts, i.e. facts that do not depend on us for their existence. There are different varieties of moral realism, and in the interests of conceptual clarity, Ross proposes the following taxonomy.

First, there are the non-natural, Platonic realists. They maintain that moral facts exist, but are not reducible or equivalent to natural facts. They are more akin to abstract metaphysical properties like mathematical truths (although there is plenty of disagreement about the correct form of mathematical ontology as well).

Second, there are the naturalist realists. Again, they maintain that moral facts exist, but they differ from the Platonists by maintaining that there is a direct equivalency (or supervenience relationship) between natural facts and moral facts. Ross suggests we call this type-type natural realism in order to distinguish it from a third version of realism....

This third version is something Ross calls trivial token natural realism (see here on the type-token distinction for more). Token natural realists simply maintain that moral terms like "good", "bad", "right" and "wrong" are appropriately applied to natural facts. But they do not think moral goodness (etc) is constituted by natural facts. In other words, this is a theory about the application of moral terms, not about the truth conditions of those terms.


2. The Problem with Realism
Ross thinks that nobody could take issue with trivial token natural realism. After all, even Platonists would say that one can meaningfully apply the moral term "bad" to a natural set of facts such as the murder of one human being by another. However, that's the problem: it's trivially true.

The real metaethical weight is being carried by the other two versions of realism but Ross thinks that neither can do the necessary work. Why is this? Well there are well-rehearsed ontological problems with both views. Platonists face the problem of adhering to a (potentially) extravagant and metaphysically indulgent ontology. Naturalists face G.E. Moore's open question argument.

Ross notes these ontological worries, but doesn't push them. He thinks the more fatal problem with realism, of both varieties, is that it has no proper account of moral justification. In other words, it cannot tell you why you are right to say that murder is wrong, or that charity is good.

Why not? Ross says the problem is that both views turn justification into a species of fact detection: it is simply by recognising or becoming aware of the existence of moral and immoral states of affairs that we become justified in saying they are good/bad/right/wrong.

When asked what it is that allows for such justification, realists are usually at a loss. So much so that they will tend to flirt with alternative metaethical theories and mutate their realism into its trivial form. For example, a realist might say we are justified in calling non-consensual sexual intercourse (i.e. rape) morally wrong because it harms certain interests or violates autonomy. In doing so, they switch to an anti-realist conception of morality (i.e. one that thinks morality is dependent on the mental states of conscious, intentional agents) with just a trivial form of realism left over.

So realism, for Ross, fails to account for moral justification. Perhaps projectivism will fare better. We will see next time out.

Rational Persuasiveness and Religious Arguments (Part 2)



This post is part of a short series on Jennifer Faust's article "Can Religious Arguments Persuade?".

In the previous post, we discussed Faust's account of rational persuasion. We saw that when assessing the persuasiveness of an argument {P1...Pn/C} we need to keep in mind the subjective probabilities and antecedent beliefs of the person to whom the argument is addressed.

Specifically, we saw that if we wish to persuade someone through argument we need to ensure that (i) they attach positive probabilities to the premises of the argument; (ii) that the premises raise the probability of the conclusion; (iii) that the premises are more acceptable to the person than the conclusion; and (iv) that the conclusion does not clash with some stronger antecedent belief.

We further distinguished between peripheral antecedent beliefs and core antecedent beliefs. The former have low subjective probabilities and low epistemic costs associated with them; the latter have high subjective probabilities and high epistemic costs associated with them. It follows that persuasive arguments are more likely to work on peripheral beliefs.


1. The Nature of Religious Beliefs
The question we now have to ask is how this account of persuasion affects religious arguments. The first thing to do is to determine the locus of religious beliefs: do they lie at the core or at the periphery? As noted last time, Faust thinks they probably lie at the core.

I think this is broadly correct. I suspect most theists would say that their belief in God is foundational; that it shapes how they understand and interpret the world; that it is the filter through which all evidence and argument must pass. This means that giving up theism would have very high epistemic costs associated with it, which is why most arguments against the existence of God are likely to fail to persuade. I recommend listening to my podcast on Antony Flew's paper "Theology and Falsification" with this in mind.

Two words of caution about this characterisation of religious beliefs. First, what is true for religion is also likely to be true for a belief in naturalism or some other worldview. Second, it is unlikely that all religious beliefs lie at the core. For example, I doubt that the doctrine of transubstantiation is a serious deal-breaker for most Christians. Indeed, I know plenty of Catholics who would be happy to give it up (most of the time they don't even think about it).

The second point is important to bear in mind if one wishes to persuade or argue someone out of their beliefs. I would imagine that most success could be had by carefully planning one's strategy so as to begin by taking out beliefs at the periphery, which may slowly erode the confidence with which the beliefs at the core are held.



2. Begging the Doxastic Question
Having located religious beliefs at the core, Faust proceeds to identify a basic flaw that seems to be shared by most arguments in the philosophy of religion. She calls this flaw "begging the doxastic question" and it is to be distinguished from the classic logical error known as "begging the question" or petitio principii.

An argument begs the question (i.e. is guilty of petitio principii) whenever it explicitly or implicitly assumes what it tries to prove. Faust cites the following example coming from George W. Bush (I'd imagine the statement was primarily made for rhetorical effect):
The reason I keep insisting that there was a relationship between Iraq... and al-Qaida is because there was a relationship between Iraq and al-Qaida.
This seems to be a straightforward example of the premise being equivalent to the conclusion, i.e. P&Q, therefore P&Q. There is no formal deductive error here because there is no real formal deduction taking place. Other times, arguments that beg the question can be more difficult to spot.

Anyway, Bush's error needs to be contrasted with the problem of begging the doxastic question. Faust argues that this takes place whenever the assignment of some positive degree of probability to one (or more) of the premises is conditional upon the pre-existing acceptance of the conclusion. So this is not straightforwardly circular reasoning. Faust cites the following example as being a classic instance of this phenomenon:
  • (1) Republican lawmakers routinely devalue public welfare programs, education funding etc.
  • (2) One ought to vote for candidates who value public welfare programs, education funding etc.
  • (3) Therefore, one should vote for the Democrats.
Now this isn't even a complete argument since there is a hidden factual premise (namely, one stating that Democrats are more likely to value those things) but in any event Faust says it begs the doxastic question because one's assignment of positive probability to (2) is likely to be conditional on one's acceptance of (3) and not the other way round.


3. Do Religious Arguments Beg the Doxastic Question?
Finally, we come to the crux of the matter: do the types of arguments that are tossed back-and-forth in the philosophy of religion beg the doxastic question? Faust thinks they do and she gives a few examples of which I'll mention just two.

First, there is Anselm's ontological argument. On its face it does not seem to presuppose any beliefs because it is based purely on a conceptual analysis of value, perfection and so on. Presumably, the conceptual analysis should be acceptable to all. However, as Faust points out, assigning a positive probability to Anselm's premises concerning maximal perfection and value will depend on one's pre-existing acceptance of a universal, objective scale of value with a greatest being at the top. Only theists are likely to accept that idea.

Second, there is the cosmological argument (in all its forms). This argument points to some abstract property of physical (or metaphysical) reality such as time, causality and so on; says that these properties need some explanation; and jumps from this need to the existence of God. The problem here is that accepting God as an explanation will depend on one's pre-existing acceptance of (i) the inadequacy of scientific explanations; and (ii) the plausibility of non-natural forms of explanation. And, of course, theists are the ones who are most likely to accept those premises.

I'm not sure how accurate this representation of religious argumentation is. I have a feeling that carefully-formulated versions of these arguments need not always beg the doxastic question. Nevertheless, I can concede that, at least sometimes, they do and that this can make arguments and dialogues in this area quite frustrating.

Friday, October 29, 2010

Rational Persuasiveness and Religious Arguments (Part 1)


Jennifer Faust's article "Can Religious Arguments Persuade?" has received some exposure on the internet already. A YouTube user by the name of ProfMTH has a two-part video series on it, which you can watch here.

Despite this pre-existing coverage, I've decided to do two quick blog posts on her article. I do so because, at its heart, it contains an interesting and useful account of rational persuasion. She applies this account specifically to the topic of religious argumentation, but it clearly has a much broader application. Indeed, I think her account could be an important part of everyone's philosophical toolkit. It can help in understanding how we respond to argumentation and in formulating strategies when trying to persuade others.

In this post, I will outline Faust's account of rational persuasion. In the next post, I will see how she applies it to religious argumentation.


1. Persuasion and the Naive Account
I am sure that everybody reading this has had the opportunity to engage in linguistic exchanges with their fellow human beings. These exchanges can be called dialogues. Dialogues come in several distinct forms. The argumentation theorist Douglas Walton distinguishes six forms of dialogue:

  • Persuasive: The participants have conflicting opinions, and each participant tries to persuade the others to accept their opinions.
  • Inquiry: The participants lack evidence relating to the proof or disproof of certain hypotheses and ideas. The goal of the dialogue is to find that evidence.
  • Negotiation: The interests of the participants are misaligned and they try to reach some resolution that settles this misalignment.
  • Information-Seeking: The participants lack information which is shared and acquired through dialogue.
  • Deliberation: The participants face some dilemma or practical choice. The dialogue helps them to decide the best course of action.
  • Eristic: The participants are in some deep personal or emotional conflict and the dialogue merely serves to deepen that conflict.

Although awareness of these six types of dialogue is not essential to understanding Faust's account of rational persuasion (she doesn't mention them at all), I think it will be useful to keep them in mind, primarily because an awareness of the type of dialogue one is engaged in can help one to formulate an effective argumentative strategy: some styles of argument are more appropriate in certain types of dialogue.

Anyway, arguments play an important part in all of these dialogues. Arguments are used to change opinions, prove hypotheses, resolve dilemmas and settle negotiations. An argument can be defined as a set of premises leading to a conclusion or set of conclusions {P1...Pn/C}. In order to be effective, an argument will need to be rationally persuasive. This means that the person to whom the argument is directed must be obligated to accept the conclusion. (Note: persuasive arguments are distinct from persuasive dialogues).

The naive, traditional account of rational persuasion -- the one to which Faust objects -- is that an argument is persuasive whenever it (i) is valid and (ii) has true premises. In other words, whenever the conclusion actually follows from the premises and those premises are true.

Faust thinks that this account is naive because it fails to acknowledge the role of antecedently held beliefs. So she proposes an alternative.


2. Faust's Account of Rational Persuasion
In developing her alternative, Faust first asks us to bear in mind the subject S to whom the argument is being directed. She then formulates the following three conditions of persuasion:

  • (1) The subject S must attach some positive degree of subjective probability to each of the premises (P1...Pn) of the argument. This would appear to stand to reason. After all, how can one accept a conclusion if one does not have any confidence in the premises?
  • (2) S must recognise the logical strength of the argument. In other words, they must attach a greater degree of probability to the conclusion after being confronted with the argument than they would attach to the conclusion by itself. In formal terms, Pr(C|P1...Pn) > Pr(C).
  • (3) The premises must be more acceptable to S than the conclusion. This really just fleshes out (2): one is unlikely to accept the logical strength of an argument if one does not have confidence in its premises.

This account improves upon the naive one by acknowledging the impact of subjective probabilities and antecedent beliefs on the persuasiveness of an argument.
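To make these conditions a little more concrete, here is a minimal sketch in Python. It is my own illustration, not something Faust provides: the function name and the probability figures are hypothetical, and the point is simply that persuasiveness depends on S's subjective probabilities, not on the argument in isolation.

```python
# A rough sketch of Faust's three conditions for rational persuasion.
# The function and the probability figures are hypothetical illustrations.

def is_rationally_persuasive(premise_probs, prob_c_given_premises, prob_c):
    """Check the conditions for a subject S facing an argument {P1...Pn/C}."""
    # (1) S attaches some positive subjective probability to every premise.
    positive_premises = all(p > 0 for p in premise_probs)
    # (2) S recognises the argument's logical strength: Pr(C|P1...Pn) > Pr(C).
    raises_conclusion = prob_c_given_premises > prob_c
    # (3) The premises are more acceptable to S than the conclusion itself.
    premises_more_acceptable = min(premise_probs) > prob_c
    return positive_premises and raises_conclusion and premises_more_acceptable

# A subject who finds the premises plausible but is initially doubtful about
# the conclusion can still be rationally persuaded...
print(is_rationally_persuasive([0.8, 0.7], 0.9, 0.2))   # True

# ...whereas a subject who flatly rejects one of the premises cannot be.
print(is_rationally_persuasive([0.0, 0.7], 0.9, 0.2))   # False
```

Even when all three conditions are met, the barriers discussed in the next section (non-epistemic mental states and contradictory core beliefs) can still block persuasion, which is why the conditions are necessary rather than sufficient.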




3. Barriers to Persuasion
However, this account is still incomplete because the three conditions, while necessary for persuasion, are not sufficient. There are two additional factors to bear in mind when assessing the persuasiveness of an argument. They are (i) non-epistemic mental states; and (ii) contradictory antecedent beliefs.

Non-epistemic mental states are things like desires, hopes and fears. We can easily imagine how such states might impede one's acceptance of an otherwise persuasive argument. For example, a patient who has recently received a diagnosis of cancer, based on some medical tests to which they assign a high degree of subjective probability, might nevertheless fail to be persuaded of their diagnosis due to some deep emotional reluctance to come to terms with their circumstances.

Although non-epistemic mental states are important, it is safe to say that they do not really affect the account of rational persuasion that is being developed. After all, when considering the reluctant cancer patient we would not say that their failure to accept the diagnosis is rational; rather, it is an irrational reaction to an otherwise persuasive argument.

Contradictory antecedent beliefs are beliefs that are inconsistent with the conclusion of the argument. Such beliefs may lead one to completely disregard the argument or lessen the impact of the argument on one's overall web of belief. The precise nature of the impact will depend on how strong the antecedent beliefs actually are.

Faust suggests that we can distinguish between core beliefs and peripheral beliefs. Core beliefs are the ones to which we attach a very high degree of subjective probability and which have high epistemic costs associated with giving them up. Peripheral beliefs have much lower probabilities and costs associated with them.

Faust goes on to argue, obviously enough, that beliefs at the periphery are more likely to undergo readjustment or revision in the face of a persuasive argument. Conversely, beliefs at the core are less likely to undergo readjustment or revision.



Now the obvious question is: which kinds of beliefs are found at the core? The most obvious candidates are beliefs that form the basis of one's overall worldview, such as a belief in the existence of God. Indeed, it is the fact that such beliefs lie at the core that makes a persuasive religious argument so difficult to formulate and so difficult to accept. We will consider this issue in part 2.

Tuesday, October 26, 2010

The End of Skeptical Theism? (Part 11) - Summing Up



(Series Index)

I had originally planned to complete my series The End of Skeptical Theism? during September. It ended up taking me a lot longer than expected -- mainly because of the intervention of "real" life. Fortunately, it has now been completed. My previous post on skeptical theism (ST) and its implications for Plantinga's externalist epistemology was the last substantive entry in the series.

In the interests of wrapping things up appropriately, I thought I might summarise some of the main take-home points. There are three that spring readily to mind.


1. Rowe's Evidential Argument is Stronger Than You Might Think
As mentioned throughout this series, ST was originally conceived as a response to William Rowe's evidential problem of evil. As a result, understanding Rowe's argument is a necessary first step towards understanding ST. When I wrote the first entry on Rowe's argument, one thing that struck me was how strong Rowe's challenge to theism actually is.

Rowe argues that God, being omnipotent and omnibenevolent, could not allow for the existence of evil unless it was logically necessary in order to achieve some overriding good. Thus, the existence of evils (E1...En) for which we cannot locate a logically necessary good (call these "gratuitous evils") provides some evidential disconfirmation of God's existence.

The "logically necessary"-condition is what makes this a strong challenge to theists. Because of it, they cannot simply point to the existence of some causally connected greater good and claim that that permits the existence of evil. After all, that good could (possibly) have been brought about without the need for some intervening evil. Furthermore, the "logically necessary"- condition is justified by appeal to God's omnipotence and so seems legitimate.

Although Rowe's challenge is a strong one, it is important, when presenting it, not to limit yourself to one or two examples of gratuitous evils. As noted in an earlier entry, it is the abundance of such evils that makes certain ST-responses implausible.


2. Skeptical Theism has Three Basic Forms
As a response to Rowe's argument, ST maintains that we are not warranted or justified in assuming that, just because an evil seems gratuitous to us, it is, as a matter of fact, gratuitous. "Seeming so" does not imply "actually so". Rowe's inference is impermissible.

Throughout the series we have looked at three basic forms of ST, each one associated with a different theorist:

  • (1) Representativeness of the Sample: According to this form of ST, Rowe cannot make the necessary inference because there is no good reason to think that the sample of goods and evils which is available to him is representative of the totality of good and evil. This form is associated with Michael Bergmann.
  • (2) Low-Seeability: According to this form of ST, there are certain kinds of things that we simply cannot expect to see (or otherwise perceive) due to (a) their nature and (b) the epistemic context in which we find ourselves. The prime example being, of course, God's reasons for action: they are derived from his unlimited knowledge, and we are like mere children relative to him. This form is associated with Stephen Wykstra.
  • (3) Multiple Cognitive Limitations: According to this form of ST, human cognition faces a number of serious limitations which undercut our ability to make inferences of the sort demanded by Rowe. These limitations arise from our poor understanding of what is logically and metaphysically possible, as well as our inability to combine and analyse large amounts of data. This form is associated with William Alston.

3. Skeptical Theism has Several Unwelcome Implications
The problem with the three forms of ST outlined above is that the principles they invoke to justify their skepticism -- i.e. representativeness of sample, low-seeability and cognitive limitation -- seem to apply to domains beyond those invoked by Rowe's argument. As a result, ST undercuts a large swathe of human knowledge, including knowledge that a theist would like to retain.

The following examples of this were pinpointed in this series:

  • ST damages moral reasoning by endorsing partial or complete skepticism about the states of affairs that we usually think to be morally commendable.
  • Just as ST undermines inferences made from supposedly bad states of affairs to the non-existence of God, so too does it undermine inferences made from supposedly good states of affairs to the existence of God. As a result, arguments from design or arguments from miracles are no longer justifiable.
  • ST undermines arguments based on Biblical revelation or personal experience because these arguments rely on the assumption that we are able to reliably identify direct and true communications from a perfectly good being. We can't due to {insert preferred form of ST here}.
  • ST undermines Alvin Plantinga's externalist religious epistemology because it provides at least one reason for thinking that God may wish to conceal certain forms of knowledge from us.

As a result of these unwelcome implications, I think ST cannot be consistently embraced by the committed theist. To rescue ST from the Room 101 of philosophy, its proponents need to show how the principles to which they appeal only apply to the specific claims made by Rowe and not to these other domains. At present, it is difficult to see how this could be done.

Monday, October 25, 2010

The End of Skeptical Theism? (Part 10) - Theism's Cognitive Blindspot


This post is part of my series The End of Skeptical Theism? For an index, see here.

I am currently working my way through an article by Paul Jude Naquin entitled "Theism's Pyrrhic Victory". The article looks at the implications of skeptical theism (ST) for Alvin Plantinga's religious epistemology.

As we saw at the end of the previous entry, Plantinga's externalist epistemology can allow one to have warranted properly basic beliefs even when one lacks the evidential justification for those beliefs. However, this can only be allowed if one's general worldview permits the existence of cognitive faculties that satisfy the four conditions of proper function.

Plantinga argues that, according to the naturalist's worldview, our cognitive faculties are the product of an undirected process of evolution by natural selection. The problem is that there is no reason to think (and maybe good reason to think the contrary) that evolution would produce reliable, truth-oriented cognitive faculties. This is because there is no necessary overlap between beliefs that are good for survival and beliefs that accurately represent the state of reality.

The net result is that a naturalist must believe that the probability of having reliable, truth-oriented cognitive faculties, given the truth of naturalism, is either low or inscrutable. Thus, it is irrational to be a naturalist.

Plantinga contrasts the unwelcome predicament of the naturalist with that of the theist. According to Plantinga, the theist has every reason to believe that God would design their cognitive faculties so as to be reliable and truth-oriented. What's more, they have every reason to believe that God would create a special cognitive faculty that would give them direct epistemic access to the truth of his existence and the truth of any specific religious doctrines.

This is where Naquin thinks Plantinga goes wrong. If one accepts the ST-response to the problem of evil, one has a reason to think that God would not design such cognitive faculties.


1. Why Do We have Unreliable Cognitive Faculties?
The difficulties for Plantinga begin, unsurprisingly, with the responses to Rowe's evidential problem of evil. I covered Rowe's argument in the first entry in this series. Stated more briefly, Rowe maintains that the existence of evils for which we cannot locate a logically necessary greater good ("gratuitous evils") should undermine the confidence of our belief in the existence of God.

Theists of all stripes can respond to Rowe's argument by inverting it and denying the existence of gratuitous evils. In other words, they can say: "because God exists, all purported evils must have some logically necessary justification". Granting them this inversion of Rowe's argument, Naquin thinks they would have reason to accept the following argument:

  • (1.1) There exists an omnipotent, omniscient, omnibenevolent being, which we call God.
  • (1.2) An omnipotent, omniscient, being would be capable of creating humans so that their cognitive and perceptual faculties are 100% reliable.
  • (1.3) An omnibenevolent being would wish to make human cognitive and perceptual faculties 100% reliable, unless that being could not do so without sacrificing some overriding good.
  • (1.4) Human cognitive faculties are not 100% reliable.
  • (1.5) Therefore, God could not make humans with 100% reliable cognitive faculties without sacrificing some overriding good.


The logical structure of this argument is outlined in the diagram. Premise (1.2) is derived from (1.1) on the grounds that the creation of beings with perfectly reliable cognitive faculties is logically possible, and logical possibility is the threshold for omnipotence. 

Premise (1.3) is derived from (1.1) on the basis of an optimality assumption. The idea is that God, being perfect, would design anything he creates to be maximally efficient at achieving its specified purpose (which is "knowledge acquisition" in the case of cognitive faculties), unless there was some overriding moral reason to the contrary. 

Premise (1.4) seems to be an incontrovertible fact (indeed sub-optimality of all sorts seems to be common in the natural world). 

Thus, for a theist, the conclusion would seem to be unavoidable.


2. The ST-Response to the Problem of Impaired Cognition
The argument just outlined demands the existence of some overriding good that justifies the existence of sub-optimal cognitive faculties. This makes it structurally identical to the problem of evil, which means it can be responded to in the same way.

So it could be responded to by working out what the overriding good actually is. In other words, by constructing a theodicy. The alternative, ST-response, as we have seen throughout this series, is to argue that human cognitive limitations are such that we cannot expect to know what God's moral reasons for permitting sub-optimality actually are. Thus, we must be skeptical.

As we have seen, there are some good grounds for thinking the ST-response is a plausible derivation from the basic idea of theism. After all, God is supposed to be an omniscient, transcendental and "wholly other" kind of being. This would seem to guarantee that we could not suppose or claim to know his mind. The problem, for Plantinga, is that this implies that we have reason to think that God conceals at least some truths from our knowledge.

Indeed, the ST-response makes the following argument possible:

  • (1.1) There exists an omnipotent, omniscient, omnibenevolent being, which we call God.
  • (2.1) If God is omniscient and "wholly other" then humans cannot know that God intends for their cognition and perception to be generally reliable.
  • (2.2) If humans cannot know that God intends for their cognition and perception to be generally reliable, then they cannot know that their cognitive and perceptual faculties are in fact generally reliable.
  • (2.3) Therefore, humans cannot know that their cognitive and perceptual faculties are in fact generally reliable.

This argument would seem to seriously undermine Plantinga's claim that the theist can have warranted properly basic beliefs about theism and any doctrinal extensions thereof. In fact, it would seem to provide a Humean defeater for those beliefs: the reflective theist can no longer have confidence in the existence of a direct epistemic link to God because they have found that God could have reason for deceiving us.


3. Possible Plantinga-esque Responses
Is there any way for the proponent of a Plantingan epistemology to cope with this defeater? Naquin, obviously, doesn't think so. First off, there is no reason for thinking that cognitive reliability and truth-directedness are somehow built into theism. This much has become apparent in examining the justification behind the ST-response to the problem of evil.

Additionally, there is no reason to think that revelation, Biblical or otherwise, is self-evidencing or self-justifying. If we have reason to think that God can prevent us from having perfectly reliable cognition when it comes to making moral or scientific judgments (which is what the skeptical theist must claim), then why think our judgments about the contents of revelation would be any more reliable?

Why should you think that God is really revealing the truth to you when you read the Bible or look at the stars when you already accept that he may have (moral) reasons for limiting our epistemic access to truth?

In the end then, a Plantingan religious epistemology would appear to be incompatible with a ST-response to the problem of evil. This is just one more way in which ST fails to provide any comfort to the theist. 

Friday, October 15, 2010

Morals by Agreement (Part 4): Bargaining and Impartiality


This post is part of my series on David Gauthier's Morals by Agreement. For an index, see here.

In the two previous entries we have looked at Gauthier's proposed solution to the bargaining problem, namely: minimax relative concession (MRC). According to this solution, when rational players have to reach some agreement on how to distribute a cooperative surplus, they should initially claim as much of the surplus as they possibly can, and then agree on the distribution that minimises the maximum relative concession they have to make from this initial claim.

If that summary is in any way confusing, I advise you to go back and read parts two and three. If it's not confusing, we can proceed to consider the moral implications of this theory.


1. What is it that Gauthier wants to do again?
As mentioned in part one, the goal of Gauthier's book is to find the deep connection between morality and rationality. Knowing when we have found the deep connection will depend on how we understand the terms "rationality" and "morality".

For Gauthier, "rationality" means what it means to economists, decision theorists, and game theorists: people should do what they most want to do. In slightly more formal terms, this means that rational agents should choose actions that maximise their utility. (Utility is simply the worth of an action or outcome to an agent -- the utility scale is calculated on a strictly individualistic basis). So, if there is such a thing as morality, it will have to be compatible with the utility-maximising conception of rationality.

But what is morality? For Gauthier, the distinctive feature of moral behaviour is its impartial nature. In other words, a moral act or a moral outcome is notable in that it treats agents in an equal manner (no favour or bias is shown to particular agents).

It follows that if Gauthier's project of finding the deep connection between morality and rationality is to succeed, he must show how impartiality is possible for a utility-maximising agent. The theory of minimax relative concession is part (but only part!) of his attempt to do this.

In the remainder of this entry, three issues will be addressed: (i) why the MRC-solution is something to which utility-maximising agents can agree; (ii) why the MRC-solution allows us to realise impartial outcomes; and (iii) why the MRC-solution is only part of the complete picture.


2. Why is the MRC-solution Rational?
The bargaining process arises when there is some value to be obtained from cooperation that goes over and above what can be obtained from independent action. As such, there is the opportunity for every agent to increase their utility by cooperating. The problems arise when deciding how exactly the cooperative surplus should be distributed.

Gauthier identifies the following four conditions of rational bargaining (remember: a concession is an offer by a prospective cooperator for less than their initial claim; a concession point is the outcome that would result from a given set of concessions, one from each cooperator):

  • (1) Rational Claim: every player should claim the cooperative surplus that yields them the maximum utility, with the sole caveat being that they cannot claim the surplus if they would not be party to the cooperative interaction required to create it. In other words, they can't claim or demand something from the other rational players in order to secure cooperation, if they themselves wouldn't agree to that claim or demand.
  • (2) Concession Point: Given claims satisfying condition (1), every player must suppose that there is a feasible concession point that every rational player is willing to entertain (since they want the benefit of the cooperative surplus, but they know they can't each get their maximum claim, they must suppose there is some concession point that they can agree upon).
  • (3) Willingness to Concede: Each player must be willing to entertain a concession in relation to a feasible concession point, if its relative magnitude is no greater than that of the greatest concession that he supposes some other rational player is willing to entertain.
  • (4) Limits of Concession: No person is willing to entertain a concession in relation to a concession point if he is not required to do so by conditions (2) and (3).

I think conditions (1) and (2) are relatively straightforward. (1) is simply the application of the utility maximising model of rationality to the specific context of a bargaining problem; and (2) merely draws out from this the fact that if cooperation is to be possible at all, there must be a concession point that all can agree upon. Otherwise, there would be no point to cooperation.

Condition (3) highlights the equal rationality of the players. Since each player is seeking to maximise their utility (and, correspondingly, to minimise their concessions), no player can expect another player to make a concession unless they would be willing to make a similar concession.

Condition (4) is saying that no utility maximising player will be willing to entertain a concession unless: (i) there is a feasible concession point that all could agree upon and (ii) he/she is not being asked to make unnecessary, or unnecessarily large, concessions.

These conditions of rational bargaining -- which are compatible with the utility-maximising conception of rationality -- combine to show that the MRC-solution is one on which rational actors can be expected to agree. How so? Well, conditions (2), (3) and (4) imply that every rational agent should be willing to entertain a concession point up to and including (but no greater than) the minimax relative concession. At the same time, if the proposed outcome is not the MRC-point, it would mean that some player is being asked to concede more than they can be expected to concede. Consequently, no rational player is going to agree to an outcome that is different from the MRC-point.


3. Why is the MRC-solution Impartial?
Gauthier's argument for the impartiality of the MRC-solution is slightly more complicated and, as it's late enough as I'm writing this, I'm going to skimp on the details and give a pretty cursory summary. Basically, Gauthier argues that a solution is impartial if it gives the same relative treatment to people, whenever similar treatment is possible.

There are two cases to consider. The first is where the surplus produced by cooperation is a fully transferable good (i.e. can be easily transferred between the parties). In this case, equal relative shares should be distributed between the parties as they contribute equally to the creation of the surplus.*

The second case is that of the non-fully-transferable good. In this case, it will not be possible to equally distribute the entire surplus, only the transferable portion can be so distributed.

The MRC-solution covers both scenarios: when it is possible to fully transfer the good, an equal relative share will coincide with the MRC; likewise, in the non-transferable case, the MRC-solution will allow for equal relative shares of the transferable portion with the non-transferable portion going to whoever accrues it. It is the best that any rational player can expect to obtain, and it also affords them equal relative treatment.


4. What more needs to be done?
Although the MRC-solution is a significant step along the road to showing the deep connection between rationality and morality, it does not bring us all the way to our desired destination. Two additional things need to be done.

First, the MRC-solution only shows the impartiality of the agreement reached through rational bargaining. It does not show the rationality of complying with that agreement in the long run. Gauthier's theory of constrained maximisation tries to deal with this problem.

Second, the impartiality of the MRC-solution is relative to the initial bargaining position (IBP) of the parties. If this IBP is seriously partial or unequal, it will be reflected in the final outcome. Hence, some restrictions may need to be placed on what counts as an IBP. Gauthier tries to specify those restrictions in a later chapter.

I will be looking into the idea of constrained maximisation in due course. I have no intention of looking at the stuff on the IBP.



* Gauthier has an argument covering cases where the initial contribution seems to be unequal. In those cases he shows how the MRC-solution leads to a distribution which is equal, but proportionate to the contribution. I won't explain that here, it occurs on pp. 140-141 of the book.

Thursday, October 14, 2010

Morals by Agreement (Part 3): Some Examples of Gauthier's Bargaining Solution



This post is part of my series on David Gauthier's Morals by Agreement. For an index, see here.

I am currently looking at Gauthier's proposed solution to the bargaining problem, something he calls minimax relative concession. The previous post outlined how to approach the bargaining problem and how to discover the MRC-solution. We can summarise it as follows:

  • (1) Define the outcome space, i.e. all the feasible solutions to the bargaining game, and, if possible, draw it on an x-y axis.
  • (2) Locate the initial bargaining position (IBP), i.e. the outcomes the parties could achieve without reaching agreement.
  • (3) Locate the claim point, i.e. the point representing the maximum that each player can demand. This will usually be a point outside the outcome space, as players will initially demand more than can be distributed between them.
  • (4) Let the players make concessions from this claim point and then compare the relative magnitude of those concessions.
  • (5) The solution will be the point at which the maximum relative concession is as small as it is possible for it to be, hence the minimax relative concession.

In this post, we will look at two numerical examples of this solution-concept in action. These are discussed by Gauthier in Chapter 5.
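Before turning to those examples, here is a minimal sketch of the procedure in the list above. This is my own illustration rather than anything in Gauthier's text, and the frontier, IBP and claims are made-up numbers: the idea is simply to scan candidate points on the optimal boundary and keep the one whose largest relative concession is smallest.

```python
# Minimal sketch of minimax relative concession (MRC) for a two-player game.
# The frontier, IBP and claims below are made-up numbers for illustration.

def relative_concession(claim, outcome, ibp):
    """A player's concession as a fraction of the most they could concede."""
    return (claim - outcome) / (claim - ibp)

def mrc_point(frontier, ibp, claims):
    """The frontier point whose largest relative concession is smallest."""
    admissible = [pt for pt in frontier if all(x >= b for x, b in zip(pt, ibp))]
    return min(admissible,
               key=lambda pt: max(relative_concession(c, x, b)
                                  for c, x, b in zip(claims, pt, ibp)))

# A hypothetical linear optimal boundary x + y = 1, sampled finely.
frontier = [(i / 1000, 1 - i / 1000) for i in range(1001)]
ibp = (0.2, 0.1)       # what each player could get without any agreement
claims = (0.9, 0.8)    # each player's maximal admissible initial claim

print(mrc_point(frontier, ibp, claims))   # roughly (0.55, 0.45)
```

On a linear frontier like this one, the brute-force scan lands on the same point you would get by drawing a straight line from the IBP to the claim point and seeing where it crosses the boundary, which is the method used in the examples below.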


1. Jane and Brian go to a Party
Jane has been invited to a party by Anne. She would really like to go but is worried that Brian might be there. She doesn't like him and would prefer not to go if he would be there. Brian has also been invited to the party, but he doesn't want to go unless Jane is going too.

Based on these preferences, we can draw up the following payoff table (or outcome matrix) for this game. The figures are interval measures of the utility each player derives from the four possible outcomes. They are measured by asking the players to consider lotteries over the different outcomes. For example, Jane's 2/3 payoff in the bottom left quadrant represents her indifference between that outcome and a lottery with a 2/3 chance of achieving her preferred outcome (top right) and a 1/3 chance of achieving her least favoured outcome (top left).


I won't get into it here, but it turns out that there is no pure strategy equilibrium in this game ("pure strategy" = definitely staying at home, or definitely going). There is, however, a mixed strategy equilibrium ("mixed strategy" = choosing the options with a certain probability).

Consider: if Jane chooses to go to the party with a probability of 1/4 (or 0.25) and to stay at home with a probability of 3/4 (or 0.75), then Brian's expected utilities from his possible choices will be:
  • Stay at home:       [(1/4 x 0) + (3/4 x 1/2)] = 3/8
  • Go to the party:    [(1/4 x 1) + (3/4 x 1/6)] = 3/8
Since the expected utilities from each option are the same, either response is utility maximising. A similar argument can be made for Jane's expected utilities if Brian chooses to stay at home with a probability of 1/2 and goes to the party with a probability of 1/2.

Consequently, the outcome resulting from the choice of these two mixed strategies is in equilibrium: each is a utility-maximising response to the other.
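The payoff table itself appears only in the original diagram, so the matrix below is my reconstruction from the description above. Jane's 1/3 payoff for the both-stay-home outcome is inferred rather than quoted: it is the value that makes Brian's 50/50 mix leave her indifferent, with an equilibrium payoff of 1/2. With that caveat, a few lines of Python confirm the equilibrium:

```python
from fractions import Fraction as F

# Reconstructed payoff matrix, keyed by (Jane's action, Brian's action) and
# giving (Jane's utility, Brian's utility). All entries except Jane's 1/3
# come from the text; the 1/3 is inferred as described above.
payoffs = {
    ("go", "go"):     (F(0), F(1)),        # both at the party
    ("go", "stay"):   (F(1), F(0)),        # Jane at the party alone
    ("stay", "go"):   (F(2, 3), F(1, 6)),  # Brian at the party alone
    ("stay", "stay"): (F(1, 3), F(1, 2)),  # both stay at home
}

p_jane_go, p_brian_go = F(1, 4), F(1, 2)   # the mixed strategies from the text

def expected(player, action):
    """Expected utility of 'action' for one player against the other's mix."""
    if player == "jane":
        mix = {"go": p_brian_go, "stay": 1 - p_brian_go}
        return sum(prob * payoffs[(action, other)][0] for other, prob in mix.items())
    mix = {"go": p_jane_go, "stay": 1 - p_jane_go}
    return sum(prob * payoffs[(other, action)][1] for other, prob in mix.items())

# Each pure action is a utility-maximising reply to the other player's mix,
# and the equilibrium payoffs give the IBP (1/2, 3/8) used below.
print(expected("jane", "go"), expected("jane", "stay"))     # 1/2 1/2
print(expected("brian", "go"), expected("brian", "stay"))   # 3/8 3/8
```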


2. Jane and Brian Negotiate a Party-going Agreement
The analysis to this point has been straightforward game theory. Now we are going to look at the same outcome space through the lens of bargaining theory and try to locate the MRC solution. 

The first thing we need to do is draw the outcome space and locate the initial bargaining position. In this case, the IBP will be the outcome that the parties could expect to obtain without an agreement. That will be the pair of outcomes associated with the mixed strategy equilibrium that has just been described, i.e. (1/2, 3/8).


Once we have drawn the outcome space and located the IBP, we can define the range of admissible outcomes that rational players would agree upon. These will occur on the optimal boundary (the line between (0, 1) and (1, 0)) between the points (1/2, 1/2) and (5/8, 3/8). Each party will initially try to claim as much as is possible. This means Brian will demand 1/2 and Jane will demand 5/8. This is illustrated below.

Obviously, the claim point is not an admissible outcome so the parties need to make some concessions. If we draw a straight line connecting the claim point to the IBP, then every point along that line will represent an equal relative concession from the players. This line intersects the optimal boundary at the point (9/16, 7/16). At this point, the relative concession for each player is 1/2 (I leave the math to the reader). This is the MRC-solution, because any outcome which gave more to Jane would force a greater relative concession from Brian and vice versa.
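For anyone who wants the math spelled out, here is a quick check of those figures. It is a sketch of my own, with Jane on the x-axis as in the diagrams:

```python
from fractions import Fraction as F

# IBP and claim point for (Jane, Brian), as quoted in the text.
ibp = (F(1, 2), F(3, 8))
claim = (F(5, 8), F(1, 2))

# The straight line from the IBP to the claim point meets the optimal
# boundary x + y = 1 where the coordinates sum to one.
dx, dy = claim[0] - ibp[0], claim[1] - ibp[1]
t = (1 - ibp[0] - ibp[1]) / (dx + dy)
mrc = (ibp[0] + t * dx, ibp[1] + t * dy)
print(mrc[0], mrc[1])   # 9/16 7/16

# Relative concession = (claim - outcome) / (claim - IBP) for each player.
for c, x, b in zip(claim, mrc, ibp):
    print((c - x) / (c - b))   # 1/2 for each player
```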


What does this solution mean in practice? Well, according to Gauthier, it means that Jane should be allowed to go to the party, and Brian should be allowed to play a mixed strategy with 7/16 probability of going to the party and 9/16 probability of staying at home.


3. Ernest and Adelaide Make a Deal
A second example works with monetary payoffs instead of utilities and allows us to explore the difference between relative concessions and absolute concessions.

Suppose that Ernest and Adelaide have the opportunity to co-operate in a mutually beneficial way, provided they can agree how to share their potential gains. Adelaide would receive a maximum net benefit of $500 from the joint venture, provided she receives all the gains after covering Ernest's costs. On the other hand, Ernest could only obtain a maximum net benefit of $50, provided he receives all the gains after covering Adelaide's costs. In this case we assume that neither can obtain anything without cooperation and so the IBP is (0, 0). We assume the possible outcomes lie along the curve in the following diagram.


Each party will initially claim as much as is possible for them to claim, i.e. the maximum net benefit. Obviously, this would not be desirable for the other party as they would then receive no gain from the joint enterprise. Concessions will have to be made by both sides.

Again, we follow the familiar method and draw a straight line connecting the claim point to the IBP. This line will intersect the optimal boundary of the outcome space at the point (353, 35). This amounts to an equal relative concession from each party of approximately 0.3. This is illustrated below.


Now the legitimate question arises: what about the absolute magnitudes of the gains and the concessions? Should they change how we think about the solution? After all, wouldn't Ernest be entitled to complain that he is not gaining anywhere near as much as Adelaide?

Here, we run into some interesting possibilities. Although Ernest could indeed make the complaint just outlined, Adelaide could also complain that, in the final agreement, Ernest is conceding far less than she is ($15 compared with $147). So, in some sense, the greater gain is offset by the greater loss. 

Gauthier points out that this kind of absolute comparison is only possible in a few cases (where utilities map directly onto monetary outcomes). And in those cases, if we are tempted by some principle of equal gain, we should always bear in mind the principle of equal loss (as we just did). What makes MRC an acceptable solution to the bargaining problem is its ability to automatically balance relative loss and gain.
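The exact shape of the frontier is only given in Gauthier's diagram, but the relative and absolute concessions at the agreed point can be checked directly from the figures quoted above. This is a quick sketch of my own; the small gap between 0.294 and 0.3 is just rounding in the quoted point.

```python
# Claims, IBP and the agreed point for (Adelaide, Ernest), from the text.
claims = (500, 50)      # the maximum net benefit each could demand
ibp = (0, 0)            # neither gains anything without cooperation
outcome = (353, 35)     # where the IBP-claim line meets the frontier

for name, c, x, b in zip(("Adelaide", "Ernest"), claims, outcome, ibp):
    absolute = c - x                 # dollars conceded from the initial claim
    relative = absolute / (c - b)    # fraction of the maximum possible concession
    print(name, absolute, round(relative, 3))
# Adelaide 147 0.294
# Ernest 15 0.3
```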

That's it for now. In the next post, we will try to relate the MRC-solution to the broader issues in moral and political philosophy that Gauthier is trying to address.

Morals by Agreement (Index)


This post serves as an index to my series on David Gauthier's book Morals by Agreement.

Index
1. Gauthier's Approach

2. Minimax Relative Concession

3. Some Examples of MRC

4. Bargaining and Impartiality

Wednesday, October 13, 2010

Morals by Agreement (Part 2): Minimax Relative Concession



This post is part of my series on David Gauthier's Morals by Agreement. The first part is available here.

Gauthier's book tries to show the deep connection between rationality and morality. "Rationality", for Gauthier, means what it means to economists, decision theorists and game theorists. But to show the deep connection between it and morality, he is not afraid to reformulate certain key parts of the traditional theory.

In particular, he tries to show (i) how rational cooperation is possible through the concept of constrained maximisation (CM) and (ii) how rational agents would agree to cooperate through the bargaining solution known as minimax relative concession (MRC).

I promised I would address both of those concepts in this series, but I have some difficulty knowing where to begin. My personal feeling is that it makes more sense to discuss CM first, and MRC second. However, Gauthier does things the other way around, and, at the end of the day, who am I to argue? MRC it is.

I reckon it will be best to spread the discussion of MRC out over a few posts. So this post will sketch out the theory in somewhat formal terms; the next post will look at some worked examples; and another post will consider the moral implications of the theory.

My discussion is based on Chapter 5 of Morals by Agreement. It is quite long, but straightforward and (I hope) informative.


1. The Outcome Space
I'm not going to say anything about the moral and social importance of cooperation and bargaining since I've discussed it before. Instead, I'm going to cut straight to the chase and describe the formal concepts needed to understand Gauthier's theory.

First, allow me to introduce you to something we are going to call the outcome space. It is depicted in the diagram below. You may recognise it from a previous post, where it depicted the payoffs (or utilities) that two players attached to particular outcomes in the Meeting Game. On this occasion, it is meant to stand in for the outcome space in any bargaining game.

The Outcome Space and the Optimal Boundary


The X-axis represents the payoffs for Player 1 and the Y-axis represents the payoffs for player 2. The area enclosed by the blue line represents the space of possible outcomes. Every point within that space is an outcome on which the players can agree. However, the blue line itself represents the efficient (or optimal -- Gauthier prefers to say "optimal") boundary or frontier of this space. Every point along this line would constitute an optimal agreement.

There are obviously bargaining games involving more than two players (n > 2). The outcome spaces for such games are not easily represented in visual terms. One must rely on the math. Fortunately, we are going to stick with the two-person example.

Defining and representing the outcome space is the first thing to do whenever you are modeling a bargaining problem. Once it has been defined, you can start adding some complications to your model. This is what we are going to do next.


2. The Initial Bargaining Position
The first complication we are going to add to the representation is the inclusion of the initial bargaining position (IBP). This has been referred to in previous posts as either the disagreement point or the Best Alternative to Negotiated Agreement (BATNA).

Actually, I need to qualify that. It's not quite right to say that these two terms are equivalent to the IBP because, in a later chapter, Gauthier defines what counts as an IBP in a slightly different manner. I'm not going to get into that here. If you're interested, read the book, or ask me about it in the comments section.

Anyway, the IBP represents what the parties bring to the negotiation table. It is the outcome they can achieve without reaching any agreement. This might mean different things in different contexts. The important point is that it changes how we think about the bargaining process and the outcome space. No longer are all points in the outcome space possible agreements. Instead, only those points that lead to a gain over the IBP are possible. After all, the players aren't (voluntarily) going to agree to something that makes them worse off.

The diagram below has added an IBP to the outcome space. The dotted lines carve out the segment of the outcome space that is now in play. We can narrow that down even further: only those points on the optimal boundary, between the dotted lines, are outcomes that rational players would agree upon.

Initial Bargaining Position
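As a rough sketch of what the dotted lines are doing (again with invented numbers, including the IBP itself), the following snippet filters the optimal boundary down to the outcomes that leave neither player worse off than their IBP:

  # Invented points on the optimal boundary and an invented IBP.
  optimal_boundary = [(2, 6), (4, 5), (6, 3), (7, 1)]
  ibp = (3, 2)  # (Player 1's no-agreement payoff, Player 2's no-agreement payoff)

  # Only outcomes that make neither player worse off than the IBP are live options.
  candidate_agreements = [(x, y) for (x, y) in optimal_boundary
                          if x >= ibp[0] and y >= ibp[1]]
  print(candidate_agreements)  # [(4, 5), (6, 3)]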


3. The Claim Point
Now that we have defined the outcome space and narrowed down the range of possible agreements, we can get into the meat of the bargaining process itself. This begins with each player making a claim to their preferred outcome.

Working with the utility-maximising conception of rationality, we can say that rational bargainers will initially claim the maximum they can. This maximum will be the point along the optimal boundary, between the dotted lines, that represents the most utility for that player. This is depicted in the diagram below for both players.

Initial Claims


Now there is an obvious problem. If each player demands the maximum for themselves, we will end up with a pair of initial claims that lies outside the space of possible outcomes. This pair of claims is called the claim point and it is illustrated in the following diagram.

The Claim Point
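To illustrate with the invented numbers from the earlier sketches: each player's initial claim is the candidate outcome that is best for them, and pairing the two maxima typically yields a point that is not itself feasible.

  # Invented candidate outcomes on the optimal boundary that improve on the IBP.
  candidate_agreements = [(4, 5), (6, 3)]

  claim_p1 = max(x for (x, y) in candidate_agreements)  # Player 1 claims 6
  claim_p2 = max(y for (x, y) in candidate_agreements)  # Player 2 claims 5

  claim_point = (claim_p1, claim_p2)
  print(claim_point)                          # (6, 5)
  print(claim_point in candidate_agreements)  # False: not a feasible outcome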


4. Making Concessions
Since the claim point is not a possible outcome in the bargaining game, the players will have to make concessions. Anybody who has haggled with a seller at a market is familiar with this process. You initially offer the seller far less than they are willing to accept; they initially demand far more than you are willing to pay; and you both start making concessions until you arrive at an agreed price.

In terms of our diagram, the concessions will be points below the claim point that one player thinks the other might accept. These will be called concession points.

The question before us is: what kinds of concessions would it be rational for players to make and agree upon? This is where Gauthier's theory of rational bargaining starts to get interesting.

One of the problems with determining the rational concessions is that we have to find some way to make comparisons between the concessions made by the players. This is a difficulty since, as discussed before, the utility scales for each player are somewhat arbitrary and so you don't know whether you are comparing like with like.

The easiest analogy here is to imagine comparing temperatures on two different scales (Fahrenheit and centigrade). You can only do this if you have some function that converts a measurement of temperature on one scale into a measurement on the other. This would then allow for like-with-like comparison.
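For what it's worth, the analogy can be put directly into code: the conversion function is what licenses the like-with-like comparison.

  # The analogy in code: a conversion function lets us compare readings
  # taken on two different temperature scales like with like.
  def fahrenheit_to_celsius(f):
      return (f - 32) * 5 / 9

  reading_a = 25.0                          # degrees Celsius
  reading_b = fahrenheit_to_celsius(98.6)   # 37.0 degrees Celsius
  print(reading_b > reading_a)              # True: now a like-with-like comparison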

How can this be accomplished in the case of utility scales? Well, we could just assume that the players' utilities are being measured in the same units. This is essentially what John Harsanyi does and it might be a reasonable assumption under certain conditions (Harsanyi said it would be when players have been exposed to the same information). An alternative proposal, from Ken Binmore, is to come up with a social index that allows you to say how much the utils on one person's scale are worth in terms of the utils on another person's scale. I looked at this before.

Gauthier's solution is neater. He says that instead of comparing the absolute magnitude of the concessions made by the players, we should compare the relative magnitude of the concessions.

This might require a little explanation. The absolute magnitude is simply the difference between the outcome that would be obtained at the claim point and the outcome that would be obtained at the concession point. The relative magnitude is the ratio of the absolute magnitude (just described) to the difference between the outcome at the claim point and the outcome at the IBP.

Take an abstract example: Suppose Player 1's outcome at the IBP is U*; his outcome at the claim point is U1; and his outcome at the concession point is U2. Then, the relative magnitude of his concession will be:

  • [(U1 - U2) / (U1 - U*)]

This will be a number between 0 and 1. It will be 1 if the concession point is, for that player, the same as the IBP; it will be 0 if the concession point is, for that player, the same as the claim point; and it will be a fraction (or decimal) if the concession point is somewhere in between.


The advantage with using ratios like this for comparison is that they are pure numbers, not tied to any particular scale. As a result, you don't need to worry about whether you are comparing like with like.
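A minimal sketch of the calculation (the variable names are mine, not Gauthier's):

  def relative_concession(u_ibp, u_claim, u_concession):
      """Relative magnitude of a concession: (claim - concession) / (claim - IBP).
      Returns 0 if the player concedes nothing, 1 if they fall back to the IBP."""
      return (u_claim - u_concession) / (u_claim - u_ibp)

  # Invented numbers: a player whose IBP is worth 2, who claims 10, and who is
  # contemplating settling for 6, has made half of the maximum possible concession.
  print(relative_concession(u_ibp=2, u_claim=10, u_concession=6))  # 0.5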


5. Minimax Relative Concession
Now that we have a method for comparing the concessions of the bargainers, we can proceed to identify the agreement that they would reach. According to Gauthier, the agreement would be one in which the maximum relative concession is minimised. Hence, the theory is called minimax relative concession.

In most cases, the MRC-solution requires an equal relative concession from each player. In other words, both players end up making the same relative concession (though their concessions might be quite different in absolute terms). There is an easy graphical representation of this if we go back to the earlier diagrams.

Consider once more the claim point in these diagrams. This point represents the maximum that each player could get from the deal (at the expense of the other player). Now draw a straight line connecting the claim point to the IBP. Every point along that line will represent an outcome requiring equal relative concessions. If that line intersects the optimal boundary of the outcome space, we have the MRC for this bargaining game.

Optimal Solution with Equal Relative Concessions
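As a toy illustration of the rule itself (the proper worked examples come in the next post), here is how the minimax calculation might look over a handful of invented candidate outcomes, re-using the numbers from the earlier sketches plus one extra point on the boundary:

  # Invented setup: candidate outcomes on the optimal boundary, the IBP,
  # and the claim point built from each player's maximum claim.
  candidates = [(4, 5), (5, 4), (6, 3)]
  ibp = (3, 2)
  claim_point = (6, 5)

  def relative_concession(u_ibp, u_claim, u_outcome):
      return (u_claim - u_outcome) / (u_claim - u_ibp)

  def max_relative_concession(outcome):
      """The larger of the two players' relative concessions at this outcome."""
      rc1 = relative_concession(ibp[0], claim_point[0], outcome[0])
      rc2 = relative_concession(ibp[1], claim_point[1], outcome[1])
      return max(rc1, rc2)

  # MRC selects the candidate that minimises the maximum relative concession.
  mrc_outcome = min(candidates, key=max_relative_concession)
  print(mrc_outcome, max_relative_concession(mrc_outcome))  # (5, 4) 0.333...

At the selected outcome each player concedes exactly one third of the distance between their claim and their IBP, which is just the equal-relative-concession line meeting the optimal boundary.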

That's it; that's the theory of minimax relative concession. In the next post, we will look at some numerical examples.

Tuesday, October 12, 2010

Morals by Agreement (Part 1): Gauthier's Approach


I have recently been pursuing some questions arising from the intersection between bargaining theory and moral and political philosophy.

In a previous post, I argued that many of the most popular uses of bargaining theory in moral and political philosophy are restricted in form. That is to say: they do not attempt to provide a complete answer to the core metaethical questions concerning the ontological basis of moral truths. Instead, they attempt to show how one set of moral judgments can be derived from another set of rational and moral intuitions and principles. This is certainly a valuable endeavour, but it is obviously incomplete.

In this series, I want to look at David Gauthier's unrestricted use of bargaining theory. This will be based on his book Morals by Agreement. This first post will look at the basic methods and concepts that lie behind Gauthier's thesis.


1. A Metaethical Inquiry?
Gauthier's stated goal is to find out where morality comes from. This is a quintessentially metaethical goal: it does not focus on normative ethical questions such as "Should I eat meat?"; it focuses on questions about the ontological significance of normative statements such as "This state of affairs is good/bad".

To pursue his goal, Gauthier follows the platitude-to-state-of-affairs methodology that I have discussed before. This methodology begins with a set of platitudes about moral truths (derived from the agreed-upon semantics of moral terms) and checks to see whether any actual state of affairs would satisfy those moral platitudes. In Gauthier's case, the relevant moral platitude is the impartial nature of moral oughts: he thinks that the distinctive feature of moral prescriptions is that they are not tethered to the beliefs and desires of any particular agent.

One might think that focusing on this single moral platitude could lead to an impoverished account of morality. I am inclined to agree, but I am willing to take Gauthier's point that impartiality is central to most people's understanding of what a successful moral theory would contain.


2. Rationality and Morality
In pursuing his metaethical goal, Gauthier tries to show the "deep" connection between rationality and morality. To be precise, he tries to show how moral prescriptions are simply a proper subset of rational prescriptions. This is what makes Gauthier's account different from the restricted approach I was considering earlier. Those accounts tried to combine sets of moral and rational prescriptions without questioning the relationship between them.

Rationality is as good a place as any to locate the foundations of morality because a theory of rationality has a sort of inbuilt normativity to it. After all, a theory of rationality will specify the kinds of things that motivate an agent and provide them with reasons for action. Furthermore, the theory will have some connection with empirical reality, since it attempts to capture the process of practical reasoning embodied in actually existing agents.

Gauthier notes that there are two ways to develop the deep connection between rationality and morality:

  • The Kantian Approach: this works from a universal or transcendentalist account of practical reason and shows how this account gives rise to impartial moral prescriptions. My series on Alan Gewirth's Principle of Generic Consistency will be exploring this approach.
  • The Social Science Approach: this works from the account of rationality that has been developed in the social sciences (economics, decision theory, and game theory) and shows how impartial moral prescriptions can be derived from it. This is Gauthier's preferred approach.

Although some have argued that Gauthier is seriously confused in his understanding of rationality, there are advantages to his approach over the Kantian one. Chief among them is the fact that the social science account of rationality has been formalised in (often painstaking) mathematical terms, and has some empirical support and tractability (although this is certainly questionable).


3. Rational Choice and Cooperation
Cooperation and coordination are essential to society. Indeed, they are the glue that binds society together. In addressing the possibility of impartial moral prescriptions, Gauthier zones in on a particular type of cooperative problem that befalls society, namely: the Prisoners' Dilemma (PD).

Many will be familiar with this problem from the famous story told about two prisoners who are held in separate cells, and who are each told they can avoid jail-time if they rat out the other guy. If only one of them rats, he walks free while his silent partner receives the longest sentence of all; if they both rat each other out, they both get a lengthy jail sentence; and if they both stay silent, they get a short jail sentence.

Although this story is memorable, it is important to realise that the PD is a general form of cooperative problem, not something that only applies to the specific set of circumstances in the story. To make this point, the diagram below describes a PD that has arisen in professional cycling. I took this from an article by Michael Shermer that appeared some time back in Scientific American.



The important features of the PD, for Gauthier's purposes, are the following:

  • There is some gain to be made by opting for mutual cooperation over mutual defection. In other words, there is a cooperative surplus that the agents can obtain if they work together that they would not be able to obtain if they worked independently.
  • There is some gain to be made by opting for individual defection over mutual cooperation. In other words, one agent can obtain even more if he defects while the other agent cooperates. Furthermore, the agent who is on the receiving end of this unilateral defection receives even less than they would have received through mutual defection.

One may wonder: why does Gauthier focus on the PD? There are, after all, other types of cooperative problems that do not have these features and that can play an equally important role in social life.

As it happens, I think there is a good reason for Gauthier's focus on the PD. Because it has the two features just described, the PD is the ultimate testing ground for a rationalistic account of impartial moral prescriptions. Why? Because the standard analysis of the PD is that rational players, who seek to maximise their utility, should defect: defection yields a better payoff than cooperation no matter what the other player does. So rational players end up at mutual defection, even though mutual cooperation would leave them both better off.
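To make the standard analysis concrete, here is a small sketch with an invented payoff matrix (higher numbers are better):

  # An invented two-player PD: payoffs[(my_move, their_move)] gives
  # (my payoff, their payoff), with higher numbers better.
  payoffs = {
      ("cooperate", "cooperate"): (3, 3),
      ("cooperate", "defect"):    (0, 5),
      ("defect",    "cooperate"): (5, 0),
      ("defect",    "defect"):    (1, 1),
  }

  # Defection dominates: whatever the other player does, defecting pays more.
  for their_move in ("cooperate", "defect"):
      my_cooperate = payoffs[("cooperate", their_move)][0]
      my_defect = payoffs[("defect", their_move)][0]
      print(their_move, my_defect > my_cooperate)  # True in both cases

  # So two utility-maximisers end up at (1, 1), even though (3, 3) was available.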


4. Gauthier's Solutions
So in order to succeed, Gauthier must show how the standard decision-theoretic analysis of the PD is wrong and how mutual cooperation is, in fact, the rational strategy. He must then go on to show that the actual distribution of the cooperative surplus is impartial in form.

Gauthier tries to do this by presenting two key revisions of rational choice theory and bargaining theory. They are:
  • (a) Constrained maximisation: This is what allows rational actors to opt for cooperation over defection, even in PDs.
  • (b) Minimax Relative Concession: This is Gauthier's contribution to bargaining theory. It describes the type of distribution that fully rational actors could be expected to agree upon.

I'll cover both of these concepts in later entries.