Tuesday, September 1, 2015

A Rawlsian Approach to Intoxicated Consent to Sex?

Should we choose standards of consent from behind a veil of ignorance?


People are often mildly to severely intoxicated when they have sex. This creates a problem. If someone signals consent to sex whilst voluntarily intoxicated, should that consent be treated as morally/legally valid? I have been very slowly working my way through Alan Wertheimer’s excellent paper on this topic (cleverly entitled ‘Intoxicated Consent to Sexual Relations’). So slow has been my progression that I have actually written three previous posts examining the complex web of moral claims associated with it. But in doing so I have yet to share Wertheimer’s own view. Today, I finally make up for this deficit.

A brief review of previous entries is in order. First, recall that throughout this series the focus is on the heterosexual case involving a man (who may or may not be intoxicated) who has sex with an intoxicated woman. The reason for this focus is that this is probably the most common scenario from a legal perspective and the one that reveals the tensions between traditional liberal legal theories and certain feminist theories. One of the ways in which these tensions are revealed is in the relationship between personal responsibility and consent. It is widely accepted that voluntary intoxication does not absolve one of responsibility for one’s actions. This widespread agreement was utilised by Heidi Hurd in her argument that intoxicated consent should be valid. Otherwise, she says, we end up in the odd position where an intoxicated man is responsible for raping an intoxicated woman, but she herself is not responsible for signaling consent. Conversely, there are those who argue that the kind of victim-blaming that goes on in such sexual offence cases is perverse. Susan Estrich makes this case by arguing that just as we would not hold someone responsible for being assaulted if they walk down a dark alleyway at night, so too should we not hold a woman responsible just because she was intoxicated at the time of a sexual assault.

Both Hurd’s and Estrich’s arguments were examined in a previous entry. Both were found to be lacking. Hurd’s argument was problematic because it assumed that the kinds of mental capacities involved in making ascriptions of responsibility were the same as those involved in assessing the validity of consent. This is not the case: there is good reason to suppose that higher mental capacities (ones that are more likely to be impaired by even mild degrees of intoxication) are required for valid consent. Likewise, Estrich’s arguments were found to be lacking because her analogies involved cases where people were clearly the victims of crime. The difficulty in the intoxicated consent case is that if the signaled consent is valid, no crime has taken place. So you really have to determine the validity of the consent before you can appeal to these moral equivalencies.

The upshot of all this is that there is no straightforward relationship between claims about intoxicated responsibility and intoxicated consent. There are more complex moral variables at play. Wertheimer’s goal is to reveal these variables and see whether they can help us to answer our opening question: is intoxicated consent valid? As we shall see, Wertheimer’s answer to this question involves a quasi-Rawlsian approach to setting the standards for sexual consent.


1. Intoxicated Consent in non-Sexual Cases
A useful window into the complex variables at play is to look at intoxicated consent in non-sexual cases. Wertheimer starts with the following:

Major Surgery: ‘Consider consent to a medical procedure. It seems entirely reasonable that a patient’s voluntary intoxicated consent to a major surgery should not be treated as valid if B’s intoxication is or should be evident to the physician, even if the physician has provided all the relevant information. A physician cannot say, “She was drunk when she came in to sign the consent form. She’s responsible for her intoxication, not me. End of story.”’ (Wertheimer 2001, 389)

This sounds reasonable. If someone walked into a doctor’s surgery after a few drinks and tried to consent to having her leg amputated, a doctor would surely be obliged to tell her to come back at another time. But what does this intuition reveal about the relationship between intoxication and consent? Wertheimer thinks it reveals that principles of consent are sensitive to at least three sorts of considerations:

Relative Costs: The principles of consent are sensitive to the ‘costs of the process of obtaining consent relative to just what is at stake’. In other words, the higher the potential costs, the more rigorous we should be in ensuring that the consent is valid. We would worry about the validity of intoxicated consent to having one’s leg amputated, but we would probably not worry about intoxicated consent to the use of a tongue depressor. There is less at stake in the latter case.

Possible Errors: The principles of consent are sensitive to the two kinds of error that might arise: (i) false positives, i.e. assuming that someone has consented when really they have not; and (ii) false negatives, i.e. assuming that someone has not consented when they really have. To put it another way, the standards for consent have an impact on both positive autonomy (i.e. on the ability to get what we want) and negative autonomy (i.e. on the ability to avoid what we do not want). We need to be sensitive to those impacts when setting the appropriate standards.

Feasibility: The principles of consent are sensitive to both the possibility and feasibility of obtaining high quality consent. The medical context is instructive here again. If you have an elderly patient suffering from dementia, then it may simply be impossible or infeasible to get high quality consent to medical treatment (i.e. we may always be unsure whether their signals convey their higher-order preferences). But treatment may be necessary for their well-being so we may be satisfied with a less-than-ideal standard of consent. Contrariwise, in the case of the intoxicated patient looking to have their leg amputated, higher quality consent is feasible if we simply wait until their intoxication has ended. Consequently, we should be less satisfied with low quality consent in that case.
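These three considerations interact in a broadly decision-theoretic way, and it may help to see that interaction in miniature. The following is a toy sketch of my own, with invented numbers; it is not Wertheimer’s formalism, just one way of modelling how the expected moral cost of a consent standard depends on the error rates it produces and on what is at stake:

```python
# Toy decision-theoretic sketch (my own illustration, not Wertheimer's
# formalism): the expected moral cost of a consent standard depends on the
# error rates it produces and on what is at stake.

def expected_cost(p_false_pos: float, p_false_neg: float,
                  harm_unwanted: float, cost_blocked: float) -> float:
    """Expected cost of a consent standard.

    p_false_pos:   chance of treating a non-consenter as consenting
                   (negative autonomy violated).
    p_false_neg:   chance of treating a consenter as non-consenting
                   (positive autonomy frustrated).
    harm_unwanted: badness of the activity proceeding unwanted.
    cost_blocked:  badness of a wanted activity being blocked.
    """
    return p_false_pos * harm_unwanted + p_false_neg * cost_blocked

lax = dict(p_false_pos=0.20, p_false_neg=0.02)     # errs toward false positives
strict = dict(p_false_pos=0.02, p_false_neg=0.20)  # errs toward false negatives

# Low stakes (the tongue depressor): the two standards barely differ.
print(round(expected_cost(**lax, harm_unwanted=1, cost_blocked=1), 4))     # 0.22
print(round(expected_cost(**strict, harm_unwanted=1, cost_blocked=1), 4))  # 0.22

# High stakes (the amputation): waiting for sober consent is cheap relative
# to the harm of an unwanted procedure, so the strict standard wins easily.
print(round(expected_cost(**lax, harm_unwanted=100, cost_blocked=1), 4))    # 20.02
print(round(expected_cost(**strict, harm_unwanted=100, cost_blocked=1), 4)) # 2.2
```

On these (made-up) numbers, the choice of standard is a matter of indifference when little is at stake, and decisive when much is. That is just the ‘relative costs’ consideration made explicit.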


Wertheimer considers how these three factors impact upon our moral judgments in several other cases; I won’t mention them all here. One that is worth mentioning — because it highlights tensions between certain feminist theories and liberal principles of consent — is the standard of consent deemed appropriate when seeking an abortion. Many feminists are in favour of allowing women ready access to abortion. In favouring this, they often oppose or resist high standards of consent to abortion. For instance, they will oppose the setting of age restrictions, the requirement that women be lectured about the development of the foetus, the stipulation of waiting periods to avoid hasty decisions, and so on. Why do they oppose these things? Wertheimer argues that it is because, first, there are no natural defaults when it comes to setting standards of consent, and second, because they see how these restrictions form a coordinated attack on women’s positive autonomy (i.e. their desire to access services they want to access). When the standards are too high, positive autonomy is undermined (because the system errs on the side of too many false negatives).

The conclusion to be drawn from all this is that, when it comes to intoxicated consent to sex, we need to factor in the three considerations mentioned above and examine the consequences of setting high/low standards of consent.


2. So how should we view intoxicated consent?
When we do so, what might our conclusion be? Analogies aren’t always helpful when it comes to better understanding the ethics of sexual interactions. Some people insist that there is something unique and special about those interactions that cannot be fully captured by analogical reasoning. But analogical reasoning is often all we have in ethical cases. In this vein, Wertheimer pursues one last analogy before considering intoxicated consent to sex. The analogy is with the case of intoxicated gambling.

The legal position is usually that gamblers bear the moral and financial burden associated with intoxicated gambling. In other words, if you go into a casino, consume copious amounts of alcohol, and gamble away a significant amount of money, then you usually suffer the consequences (it does, of course, depend on whether gambling is legal in the relevant jurisdiction). Is this the right approach to take? Maybe, but it may well depend on how much the gambler stakes on their bets. If they gamble away a few hundred or thousand dollars, we might hold them to it; but if they gamble away their house or all their earthly possessions, we might view it differently. Again, the quality of the consent required would vary as a function of what the costs are likely to be.

Why might we take this attitude toward intoxicated gambling? Here’s where Wertheimer makes his main contribution. He says that one way to work out the right standard of consent is to adopt an ex ante test. In other words, ask the would-be intoxicated gamblers, prior to the fact (i.e. before they are intoxicated and before they know whether they have won or lost on their gambles), what standard of consent they would like to apply to their intoxicated gambling. In proposing this question, Wertheimer is advocating a methodology that is somewhat akin to Rawls’s famous methodology for deriving principles of distributive justice. Rawls argued that in order to settle on a just distribution of social goods, we should imagine would-be citizens negotiating on the relevant principles behind a veil of ignorance (i.e. without knowing where they will end up in society). Wertheimer is adopting a similar veil of ignorance test for his would-be gamblers.

Wertheimer’s ex ante test: When deciding on the appropriate set of consent principles for any intoxicated activity, we should ask the would-be participants which set of principles they would prefer to govern that activity before the fact (i.e. before they have actually engaged in that activity whilst intoxicated).

What are the results of this test? A full analysis of the gambling case would require a longer paper but we can make some suggestions. One is that would-be gamblers might favour a relatively low standard of consent (at least when the stakes are low). Why is that? Because they probably find the combination of alcohol consumption and gambling to be pleasurable. Hence, they might be inclined to favour a set of consent principles that allows them to engage in that combination of activities (up to a certain level of potential loss). In this sense, they tweak the precise mix of consent principles so as to favour their positive autonomy, and err slightly on the side of more false positives than negatives.
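One rough way to picture the ex ante test is as an expected-utility choice made behind the veil. Again, this is a toy model of my own with invented numbers, not anything taken from Wertheimer’s paper:

```python
# Sketch of the ex ante test as an expected-utility choice made before
# drinking (toy numbers of my own; nothing here comes from Wertheimer).

def ex_ante_value(standard: str, stakes: float) -> float:
    """Value a would-be gambler assigns, in advance, to a consent standard."""
    pleasure = 5.0           # assumed value of combining drinking and gambling
    p_regretted_loss = 0.3   # assumed chance of a loss they would later regret
    if standard == "liberal":
        # Intoxicated bets are binding: the activity is available, but so
        # is the downside of regretted losses.
        return pleasure - p_regretted_loss * stakes
    # Strict: intoxicated bets are void, so the activity is unavailable.
    return 0.0

for stakes in (10.0, 100.0):
    preferred = max(("liberal", "strict"), key=lambda s: ex_ante_value(s, stakes))
    print(f"stakes={stakes}: prefer {preferred}")
# stakes=10:  5 - 0.3*10  =  2.0 > 0, so the liberal standard wins
# stakes=100: 5 - 0.3*100 = -25  < 0, so the strict standard wins
```

The design choice here mirrors the suggestion in the text: the same chooser, behind the same veil, will favour lax standards for low-stakes intoxicated gambling and strict standards once the stakes rise to house-losing levels.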

How about intoxicated consent to sex? Again, the procedure is the same: you ask women ex ante which mix of consent principles they would favour for intoxicated sexual encounters. They could favour a strict approach — i.e. no consent signal provided whilst intoxicated is valid — or a more liberal approach — where this comes in various degrees. When choosing the standard, they will need to pay attention to the level of harm involved relative to the cost of obtaining high quality consent, the feasibility of obtaining high quality consent, and the type of sexual autonomy that ought to be favoured.

Can we say anything more concrete? This is one of the more frustrating aspects of Wertheimer’s article. After his lengthy analysis, he still doesn’t have a preferred policy proposal. But he does say three interesting things. First, he says that there are reasons to think that positive sexual autonomy might favour the validity of at least some instances of intoxicated consent. Indeed, it might be that the combination of alcohol consumption and sexual activity is highly valued:

It’s not just that some women may wish to engage in sex and drinking simultaneously. Rather, drinking to the point of at least moderate intoxication may be crucial to what some regard as a desirable sexual and social experience. We do well to remember that a woman may choose to become (moderately or even severely) intoxicated precisely because she wants to suspend, curtail, or weaken some of her stable psychological traits. 
(Wertheimer 2001, 395)

It’s always dangerous when a man purports to say anything about what we would ‘do well to remember’ when it comes to women’s sexual preferences. But this does seem intuitively right to me. I think moderate intoxication is part and parcel of many positive social and sexual interactions, and that people often desire the intoxicated state because of its disinhibiting effects. That said, Wertheimer’s second key point is that this potential value needs to be balanced against the emotional and physical harms of an intoxicated sexual encounter. Here, he thinks we need to know much more about the effects of such encounters, and what the potential harms of erring on the side of false positives would be. The tricky question of regret also enters the fray:

The validity of a woman’s intoxicated consent to sexual relations is not a function of her actual ex post regret or satisfaction with respect to a given sexual encounter. The point of B’s sexual consent is always ex ante: it renders it permissible for A to have sexual relations with her. But the principles of consent that establish when we should regard a woman’s consent token as valid may take account of the ex ante disvalue of her ex post regret. If the evidence suggests that women are, in fact, likely to severely regret sexual relations to which they have given intoxicated consent, that is some reason to regard intoxicated consent as invalid. 
(Wertheimer 2001, 395-6)

This brings us to Wertheimer’s third key observation, which is that the harm of any such sexual encounter is likely to vary depending on the prior relationship between the two individuals. This is problematic insofar as it seems to allow for past sexual history to influence our moral assessment of the relevant consent standards (which, as anyone who has studied the history of rape laws will know, is highly contested). Nevertheless, it is part of Wertheimer’s view that consent standards may vary relative to the potential marginal harm of a sexual encounter. And the potential marginal harm from a first time intoxicated sexual encounter is likely to be higher than the potential marginal harm arising from an encounter between two long-term partners. He uses the following example to illustrate his approach:

Suppose that a married couple hosts a New Year’s Eve party, gets roaring drunk, falls into bed, and has sex. It would be crazy to think that the husband does something seriously wrong here simply because his wife consents while quite intoxicated, unless the wife had previously indicated that she does not want to have her intoxicated consent taken seriously… Why do I think this view would be crazy? Because (in part) there is no ‘non-autonomy based’ physical or psychological harm to a marginal sexual interaction with a person with whom one frequently has sexual relations as contrasted with the case where a woman might have avoided sexual relations with that person altogether. 
(Wertheimer 2001, 396)


3. Conclusion
To briefly sum up, there is no simple rule when it comes to intoxication and sexual consent. The consistency thesis, which holds that the same standard should apply to sexual consent as applies to responsibility, is unattractive because it assumes the capacities for consent are equivalent to the capacities for responsibility. The impermissibility thesis, which holds that intoxicated consent should never be deemed valid, is unattractive both because the analogies used to support it are unhelpful and because of its potential impact on positive sexual autonomy.

Instead, the standard for consent should vary as a function of three variables: (i) the relative costs of procuring consent vis-à-vis the potential harms of the activity being consented to; (ii) the preference for false positives over false negatives (i.e. the value placed on positive autonomy relative to negative autonomy); and (iii) the feasibility and/or possibility of procuring high quality consent. In figuring out how these variables work in the case of intoxicated sexual consent, we should adopt an ex ante test. This means we should ask the would-be participants which standard of consent they would prefer prior to engaging in the intoxicated variant of the activity.

In doing so, we will probably learn that: (a) there is some value in allowing for some degree of intoxicated consent (from the perspective of positive sexual autonomy); (b) this value must be balanced against the potential harms of intoxicated sexual activity (including the likely ex post regret); and (c) the appropriate standard is likely to vary depending on the potential marginal harm of the sexual encounter (where this is likely to be lower in the case of long-term partners than in the case of new ones).

Sunday, August 30, 2015

Beginning to Exist and the Kalam Cosmological Argument




The Kalam Cosmological Argument (KCA) opens with the following premise:

(1) Whatever begins to exist has a cause of its existence

From there, the argument continues with the observation that the universe began to exist and ends with the conclusion that God must be the cause of the universe’s existence. But premise (1) is the motivating principle. William Lane Craig defends it thusly:

Premise (1) seems obviously true — at the least, more so than its negation. First and foremost, it’s rooted in the metaphysical intuition that something cannot come into being from nothing. To suggest that things could just pop into being uncaused out of nothing is to quit doing serious metaphysics and to resort to magic. Second, if things really could come into being uncaused out of nothing, then it becomes inexplicable why just anything and everything do not come into existence uncaused from nothing. Finally, the first premise is constantly confirmed in our experience. Atheists who are scientific naturalists thus have the strongest of motivations to accept it. 
(Craig 2008, 111-112)

There’s a certain appealing bombasticism to this line of reasoning, but I’m not so sure that premise (1) is obviously true. As others before me have pointed out, it might be pretty compelling when considering how events occur within the universe (i.e. in the realm of space and time). It is much less compelling when we are trying to consider how the universe itself came into being. When we reach that point, our everyday metaphysical intuitions could go out the window. Furthermore, if the problem is that things cannot come into being out of nothing, then it’s not clear why God is excluded from the premise’s ambit.

The problem is neatly illustrated in a paper by Christopher Bobier entitled ‘God, Time and the Kalam Cosmological Argument’. In this paper, Bobier argues that premise (1) is flawed because there is no sensible definition of the phrase ‘begins to exist’ that retains premise (1)’s intuitively compelling nature and allows Craig to reach his desired conclusion, viz. that God is the cause of the universe. I’m going to go through Bobier’s argument in the remainder of this post.


1. The General Structure of Bobier’s Critique
Bobier’s argument begins with some observations about Craig’s conception of God. Craig argues that God is the transcendent personal cause of the universe. He also believes that once the universe exists God is temporally bound to it (i.e. that once spacetime begins, God becomes a temporal being capable of intervening in events as they unfold). This is essential given that Craig is a Christian and believes that God has actually intervened in human history.

This means that, in order for the KCA to work, Craig must invoke a very particular understanding of God’s causal relationship to space and time. Bobier refers to this as CGT (‘Craig’s View of God and Time’):

CGT: In order for the Kalam to work and for Craig to retain his preferred conception of God, God must be:
A. Timeless and unchanging before the moment of creation
B. Temporal from the moment of creation, but eternal in the sense that He exists without end.
C. Causally prior to the universe but not temporally prior.

Taking this conception of God on board, Bobier’s argument is structured as a dilemma (or perhaps more properly a paradox). He thinks that there is no way for Craig to affirm both the KCA and the CGT without also forcing the conclusion that God must have had a cause of his existence (which, of course, defeats the whole purpose of the argument):

Bobier’s Dilemma/Paradox: If you affirm both the KCA and CGT, then you must conclude that God had a cause of his existence and hence that either the KCA or the CGT is false.

The problem stems from the phrase ‘begins to exist’. If God is atemporal before the moment of creation, but then becomes temporal at that moment, then it seems like he must ‘begin to exist’. And if he begins to exist he must, according to premise (1), have a cause of his existence. Therefore, God (as conceived by Craig) doesn’t really solve the problem that the KCA sets out to solve.

Craig is a smart guy so he is aware of this problem. He tries to address it by coming up with more precise definitions of the term ‘begins to exist’, ones that rule out the possibility of God beginning to exist. But he must be careful when doing so. Complex and arcane definitions of ‘begins to exist’ will undermine the intuitive obviousness of premise (1). Can Craig perform the required balancing act? Bobier doesn’t think so.


2. Some initial attempts at definition
To see why, we need to consider the various ways in which the concept of 'begins to exist' can be fleshed out. Here’s a first pass at it:

BTE: X begins to exist at T1 iff there is a time immediately prior to T1 at which X did not exist.

This probably best captures our everyday definition of ‘begins to exist’, i.e. the one we take for granted in our observations of events in the universe. It has also found favour with some theists (notably Swinburne). But it won’t work for the proponent of the KCA. The problem is that BTE assumes a pre-existing temporal order: it tells us that X begins to exist whenever there are prior temporal moments at which X failed to exist. The proponent of the KCA is focusing on the beginning of the temporal order itself. Any suitable definition of ‘begins to exist’ will have to take this into consideration. It will have to leave open the possibility that nothing at all (no time, no space) was in existence prior to T1. Craig tries to do this with the following definition:

BTE1: X begins to exist at T1 iff (i) X exists at T1 and (ii) there is no time immediately prior to T1 at which X exists.

The second condition here seems to do the trick. It is sufficiently liberal in its terminology to allow the possibility that the temporal order itself began at T1. But there is something fishy about it too. It is not clear why it needs to include the phrase ‘no time immediately prior’. Indeed, doing so would appear to create a difficulty since one could imagine discontinuous universes being created, destroyed and recreated again. Suppose God created the universe at T1, then destroyed it at T2 (eliminating all of spacetime), only to create the exact same universe at T3. Under BTE1 this means the universe would begin to exist again at T3. But according to Bobier this is not consistent with our intuitions. We would be more likely to say that the universe came back into existence at T3, not that it began to exist.

So he proposes we drop the ‘immediately’ qualifier:

BTE2: X begins to exist at T1 iff (i) X exists at T1 and (ii) there is no time prior to T1 at which X exists.

But this is still no good because it brings us back to the original problem. If we strictly apply this definition, then it seems that (on CGT) God begins to exist at T1. Remember, according to CGT, God is timeless and unchanging prior to T1. He becomes temporal at that moment. This means he exists at that moment and there was no time prior to that moment at which he existed. Therefore, he must begin to exist at T1.
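It may help to set the competing definitions side by side. The following regimentation is my own (it is not Bobier’s or Craig’s notation), writing E(x,t) for ‘x exists at t’ and B(x,t) for ‘x begins to exist at t’, with existence at T1 left implicit in the prose version of BTE:

```latex
% My own regimentation of the three definitions, with E(x,t) for
% "x exists at t" and B(x,t) for "x begins to exist at t".
\begin{align*}
\textsc{bte:}\quad  & B(x,t_1) \leftrightarrow E(x,t_1) \wedge \exists t_0\,[\,t_0 \text{ is immediately prior to } t_1 \wedge \neg E(x,t_0)\,]\\
\textsc{bte1:}\quad & B(x,t_1) \leftrightarrow E(x,t_1) \wedge \neg\exists t_0\,[\,t_0 \text{ is immediately prior to } t_1 \wedge E(x,t_0)\,]\\
\textsc{bte2:}\quad & B(x,t_1) \leftrightarrow E(x,t_1) \wedge \neg\exists t_0\,[\,t_0 < t_1 \wedge E(x,t_0)\,]
\end{align*}
% On CGT, God exists at the first moment t_1, and there simply are no times
% t_0 < t_1. Both conjuncts of BTE2's right-hand side are therefore
% satisfied, and B(God, t_1) follows: Bobier's unwelcome result.
```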


3. More sophisticated definitions
Craig tries to address this problem by adding a third condition to the definition of ‘begins to exist’:

BTE3: X begins to exist at T1 iff (i) X exists at T1, (ii) there is no time prior to T1 at which X exists, and (iii) the actual world contains no state of affairs involving X’s timeless existence.

Is this any better? Well, one thing that strikes me about it is that it seems terribly ad hoc. What started out as a simple concept in an intuitively compelling principle has morphed into something with multiple conditions, the last of which looks explicitly designed to avoid the unwelcome conclusion that God begins to exist at the moment of creation. Of course, it tries not to be too ad hoc by insisting that it applies to any ‘X’ (i.e. not just God). But in doing so it opens the floodgates to all manner of things that may not begin to exist at T1 and hence may not require causes of their existence.

The argument for this is somewhat complicated. Condition (iii) refers to states of affairs in the actual world. States of affairs are ways in which the world is represented as being. They come in a few different varieties. Some states of affairs are actual, i.e. they really exist in our actual world. Some states of affairs are possible but not actual, i.e. they could exist in our actual world but do not. Other states of affairs are metaphysically or logically impossible. All states of affairs that are actual are possible; but not all possible states of affairs are actual.

Craig, along with others, defines facts in terms of obtaining states of affairs. In other words, he thinks that any state of affairs (set down in a declarative sentence) which is true of our actual world is an obtaining state of affairs and is hence a fact about our world. This has the, perhaps surprising, consequence that certain modal statements (statements about what could or could not have been) are facts about our actual world. For instance, the statement ‘2+2 could not equal 5’ is a fact about the world in which we live. Another example, given by Bobier, is the statement ‘Kobe Bryant could have been seven feet tall’. This is a fact, according to Bobier, because there is no reason to think it could not have been true in our world. The notion that modal facts are facts about our world is somewhat controversial in philosophy, but it is defended on the grounds that if we don’t allow modal statements to represent facts about our world, much of what we presume to know about ourselves and others would be false. For example, the statement ‘I am alive but I could have died in that car accident last week’ seems like a fact about my present state of existence. If I cut out the ‘could’ bit, much of what I presume to know about my personal history falls by the wayside.

Why is this important? Well, taking onboard all these distinctions, we see that there are two ways in which to interpret condition (iii):

Interpretation 1: The actual world contains no possible states of affairs involving X’s timeless existence.
Interpretation 2: The actual world contains no actual states of affairs involving X’s timeless existence.

Craig couldn’t intend interpretation 1. It is too liberal, as it allows merely possible (non-obtaining) states of affairs involving X’s timeless existence to block X from beginning to exist. Bobier illustrates this with an unusual example. He asks us to imagine a universe in which a single physical object exists without change or temporal distinctions. Say the physical object is Bobier’s basketball. Such a universe is metaphysically possible and can be represented by the state-of-affairs statement ‘My basketball exists timelessly’. This has the odd consequence that Bobier’s basketball fails Craig’s definitional test. It does not begin to exist, since the actual world contains possible states of affairs involving its timeless existence. The same could be true for all manner of possible states of affairs. (Some might resist this example on the grounds that a basketball qua physical object must exist in time, but as Bobier points out, there is no reason to think this represents a genuine metaphysical truth: there is no reason to suppose that physical objects must exist in time. For my part, I think metaphysical reasoning of this sort is dodgy.)

That leaves interpretation 2. But this has its own problems. The chief difficulty is working out the best way of stating how it is that God actually exists timelessly and so does not begin to exist at T1. One possibility is that the following statement captures the idea:

God prior to the universe exists timelessly

But this is inelegant because ‘prior to’ looks like a temporal relation. Another possibility (favoured by Craig) is:

God, without the universe, exists timelessly 

This looks a bit better, but once you remember that statements about what could have been can themselves be obtaining states of affairs, you begin to see how lots of things could exist timelessly in the same sort of way. According to BTE3, all these things would then not begin to exist at T1 and would escape the logic of the Kalam. The best example is the universe itself. Take the universe as a whole, i.e. not just the events and objects within it. The universe as a whole could have existed four-dimensionally (indeed, many physicists think that it actually does). A four-dimensional universe is a static block: the passage of time within it is merely illusory. Hence the entire static block exists timelessly. Craig wants to argue that our universe is not actually four-dimensional because temporal change is real, not illusory; but, again, our universe could have been four-dimensional, and so the statement ‘Our universe, without change, exists timelessly’ is an obtaining state of affairs. This means that our universe, according to BTE3, does not begin to exist at T1, which is exactly the conclusion Craig wishes to avoid.

 Craig himself now rejects BTE3. He offers one further possible definition:

BTE4: X begins to exist at T1 iff (i) X exists at T1; (ii) T1 is either the first time at which X exists or is separated from any time T* < T1 at which X existed by a nondegenerate temporal interval; and (iii) X’s existing at T1 is a tensed fact.

This definition removes talk about states of affairs and replaces it with talk about tensed facts. Furthermore, Craig himself notes that the third condition is intended to avoid the possibility that X exists statically or timelessly (Craig 2002, 99). The problem is that in his zeal to replace talk of states of affairs with talk of tensed facts, Craig seems to be ending up with a circular definition. He himself tells us that (iii) is supposed to describe X’s act of ‘temporal becoming’ (Craig 2002, 99). But what is an act of temporal becoming if not a statement of X’s ‘beginning to exist’? In other words, isn’t the third condition now simply stating that in order for X to begin to exist at T1 it must begin to exist at T1? We have acquired no genuine insight into what it means for something to begin to exist.
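Put schematically (again, my own rendering, not Craig’s notation), the worry is that the third condition reintroduces the definiendum:

```latex
% Schematic rendering of the circularity worry (my notation, not Craig's):
\[
  B(x,t_1) \;\leftrightarrow\; E(x,t_1)\,\wedge\,\mathrm{First}(x,t_1)\,\wedge\,\mathrm{Tensed}(E(x,t_1))
\]
% If Tensed(E(x,t_1)) is glossed as x's "temporal becoming" at t_1, i.e. as
% x's beginning to exist at t_1, then the definiens contains B(x,t_1) itself:
\[
  B(x,t_1) \;\leftrightarrow\; E(x,t_1)\,\wedge\,\mathrm{First}(x,t_1)\,\wedge\,B(x,t_1)
\]
% and the definition presupposes the very notion it was meant to explain.
```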

So it seems then that no attempt to define the concept of 'begins to exist' is entirely satisfactory.



Friday, August 28, 2015

'Proper Good Innit Bruv': A Philosophical Look at Writing Styles



A Pernicious Influence? See Geoffrey Pullum's take


I have a dilemma. Every year I teach students how to write. I teach them how to come up with a thesis for their essay; how to research it; how to make their arguments persuasive; how to arrange those arguments into a coherent structure; and, ultimately, how to form the sentences that convey those arguments. In teaching the last of these skills, I am forced to confront the thorny topic of writing styles. It is then that my dilemma arises.

Style guides commonly present students with lists of prescriptive rules. Following these rules is thought to promote good writing. You are probably familiar with such rules. I’m talking about things like ‘don’t end a sentence with a preposition’, ‘don’t split infinitives’, ‘adopt the active voice’, ‘learn how to use apostrophes, semi-colons, colons (etc.) properly’, ‘don’t use words that aren’t really words, such as “irregardless” or “amazeballs”’, ‘the word “hopefully” should not be used as a sentence adverb’, ‘don’t use “presently” to refer to the present moment; only use it to refer to something that will happen in the near future’, and so on. Hopefully, you get the idea; presently, we’ll see the problem. The problem is that I am an extreme non-prescriptivist when it comes to language usage. I don’t believe there is a ‘rule’ against using split infinitives, ending sentences with prepositions, or any of the other commonly listed offences. I fully embrace the non-prescriptivist view of language promoted by the likes of David Crystal, Steven Pinker, Geoffrey K. Pullum, and Oliver Kamm. I think the rules just alluded to are really nothing more than (fussy) aesthetic preferences, and that the English language consists in a number of overlapping and equally valid styles.

And yet, when it comes to grading student essays, I often find my inner prescriptivist creeping to the surface. I don’t like it if students use idioms such as ‘It don’t seem no good’ or ‘it was proper good’. I rail against students who misspell words, put punctuation marks in the wrong place, adopt colloquial or slang terms, and generally fail to adhere to the conventions of Standard English. But am I entitled to do this? If there is no hard and fast set of rules to be followed, if English really consists in a number of equally valid styles, how can I complain when my students don’t conform to my preferences? This is my dilemma.

I was recently grappling with this dilemma and it occurred to me that there are some interesting philosophical issues at play. I decided it was possible to justify my quasi-prescriptivist attitude, but to do so I first needed to isolate and understand the competing metaphysical and ethical views of language that underlay my dilemma. Once I did this, I could better explain why a certain amount of prescriptivism is justified. I’m going to try to share the fruits of this analysis in this blogpost.

I am hoping that this doesn’t come across as a rant. But there is a danger that it will since I do find some of the classic rules to be absurd and my frustration with them may well show through.


1. The Metaphysics of Language
Metaphysics is my first port of call. I think the debate about language usage is best understood as a war between two competing metaphysical views of language. That is to say, two competing views of what language is (throughout this discussion I focus on the English language but I presume the same can be said for most other languages), where these in turn dictate particular ethics of language usage (i.e. sets of views on how we ought to speak and write).

The proponents of the two competing views can be given names. I’ll call them sticklers and pragmatists:

Sticklers: Have a legislative or Platonic view of language. They think language consists in rules relating to semantics, grammar and spelling that are either set down by appropriate authorities (the legislative view) or intrinsic to the language itself (the Platonic view). This dictates a deontological approach to the ethics of usage: you simply must follow the rules in order to speak or write properly. This is sometimes accompanied by a consequentialist ethic, which is largely focused on conservative values such as preserving a dominant national identity and preventing the pollution of the language by ethnic groups or the lower classes (to be clear: I don’t wish to tar all sticklers with this conservative ethos — it is just that it is sometimes present).

Pragmatists: Have a conventional and evolutionary view of language. They think language consists in a set of constantly shifting and evolving conventions governing semantics, grammar and spelling. This dictates a consequentialist approach to the ethics of usage. This ethic takes different forms, some focusing on achieving a communicative aim and others more political in nature (such as resisting the conservative ethos and celebrating linguistic diversity). Pragmatists can be pure act-consequentialists — that is to say: they can decide which conventions to follow based solely on what is best in a particular communicative act; or they can be more like rule-consequentialists — that is to say: they can follow a set of default conventions because doing so leads to the best outcome on average.

Although I am here imagining that sticklers and pragmatists fall into two distinct ‘camps’, the reality is likely to be more complex. It is more likely that the labels ‘stickler’ and ‘pragmatist’ define a spectrum of possible views. This spectrum filters into the teaching of styles. In the diagram below, I illustrate a spectrum of possible learning outcomes for the teaching of writing styles. The spectrum ranges from ‘Stickler Heaven’ at one end to ‘Pragmatist’s Anarchy’ at the other. I don’t want my students to end up in Stickler Heaven, but I don’t want them to end up in Pragmatist’s Anarchy either. I need to stake out some middle ground (Pragmatist’s Paradise) and explain why they should join me there.



2. Why the Sticklers are Wrong
As a first step toward staking out that middle ground, I need to explain why the stickler’s approach to language is wrong. I do so with two arguments. The first tries to illustrate why the legislative/Platonic conception of language is false (and, contrariwise, why the conventional and evolutionary view is correct). The second tries to argue that adopting the deontic ethic has unwelcome consequences. Of course, if you have fully imbibed the stickler’s deontic Kool Aid, you may be unswayed by such consequentialist reasoning, but I doubt many people will have fully imbibed the Kool Aid. In ethical debates, people often resort to consequentialist reasoning when following a deontic rule would lead to a horrendous outcome. And while I do not promise horrendous outcomes, I think the outcomes to which I appeal will be sufficient to persuade most people that the deontic ethic is inappropriate.

Let’s focus on the first argument: why the legislative/Platonic view of language is wrong. To some, this will simply be obvious: English is not governed by a legislative authority, and the rules of language are not like other Platonic entities (say, the rules of mathematics). We don’t discover eternal truths about sentence structure and word meaning; the truths, such as they are, are clearly the result of contingent, messy, cultural evolution.* This can be easily demonstrated by focusing on the history of some of the sticklers’ favourite so-called rules. These histories illustrate how what sticklers take to be ironclad rules are in fact the products of historical accident. I’ll give a few examples. An excellent source for the historical evolution of usage rules is David Crystal’s book The Fight for English:


Orthographic Conventions: Orthography refers to how words appear on the printed page. Remember, language began as a spoken medium. Words were conveyed through phonemes (i.e. sound structures) not through written symbols. Many words share the same phonemes (i.e. they are pronounced in the same way), even if they have distinct meanings. Listeners are usually able to tell the intended meaning from the context, or by simply asking the speaker follow-up questions. Things were different once writing took hold. Conventions had to be adopted so that different meanings could be discriminated. But these conventions emerged gradually and messily. One classic illustration of this concerns the use of the apostrophe. Conventions emerged in which the apostrophe was used to signal an abbreviation (as in ‘don’t’) or possession (as in ‘greengrocer’s’). But these conventions clashed in some cases, most famously in the distinction between it’s and its. The former is an abbreviation of ‘it is’ (or ‘it has’) whereas the latter is a possessive form of ‘it’. There is no logic to this distinction. It is a purely arbitrary compromise that emerged because of the awkward evolutionary history of the apostrophe. In this sense it is akin to biological evolutionary accidents like the laryngeal nerve of the giraffe. I could list numerous other examples of orthographic evolution but I won’t. Just read any book from the 1700s or 1800s and you’ll see how orthographic conventions have changed over the course of relatively recent history.

The Split Infinitive Rule: This is a famous stickler preoccupation. The belief is that it is somehow wrong to say things like ‘to faithfully uphold’ or ‘to boldly go’ because in these cases an adverb (faithfully/boldly) is being used to break up the infinitive form of a verb (to uphold/to go). Crystal notes that this rule only seems to have entered English grammar books in the 19th century and was an example of Latin reasoning (i.e. the belief that English should copy the conventions of Latin), which has been popular at various times over the history of the English language. In other words, it originated in the 1800s as a particular manifestation of a recurrent cultural fad. For some bizarre reason, fealty to the rule lingers and, as Pinker argues, may even have been responsible for Chief Justice John Roberts’s bungled administration of the presidential oath of office to Barack Obama back in 2009. This is bizarre because, as many have pointed out, the English language doesn’t really have an infinitive form of the verb. Instead, it has a subordinator (‘to’) combined with a simple form of the verb (‘uphold’/‘go’): the infinitive is already split. Good writers have routinely and consistently breached the ‘rule’, much to the chagrin of the sticklers. It is odd that some continue to insist upon it.

Concern about ‘Proper Words’: One of the strangest of all stickler beliefs is that there is a fixed fount of words, and that some so-called words aren’t really words and so shouldn’t be used. Examples include words like ‘irregardless’ or ‘gotten’ (to name but a few). This betrays a misunderstanding of how language works. Nothing illustrates the historical and conventional nature of language more clearly than the passage of words in and out of existence (read Shakespeare to see some famous examples of this). We need new words to explain new phenomena (‘selfie’, ‘googling’, etc.), and we abandon old words when they are no longer needed. The only standard for whether something counts as a word is whether it is widely used and conventionally understood. So, of course, ‘irregardless’ and ‘gotten’ are words. They are widely used and conventionally understood. You may not like them, but they are words irregardless of what you might like.

The conventionality of language is also illustrated by syntactic rules. In the case of English, it is common to adopt a subject-verb-object order (e.g. ‘John saw the dog’). But in other languages different orders are common. For example, Japanese commonly adopts a subject-object-verb order (i.e., roughly equivalent to ‘John the dog saw’). Both syntactical structures seem ‘normal’ to their relevant communities.

So much for the stickler’s metaphysics. What about their ethics? Even if you accept that language is a messy nest of conventions, you might nevertheless think that we ought to follow certain rules lest we wander into pragmatic anarchy. I agree with this to an extent (as I’ll explain below). It’s probably not a good thing to constantly invent your own new words, or ditch the traditional orthographic rules, but I still think it is a mistake to adopt the deontic attitude of the sticklers. This is for two reasons. First, the rules that are beloved by the sticklers are often barriers to good communication. Second, the deontic attitude seems to encourage an overly moralistic approach to the teaching of style.

There are several examples of how following the sticklers’ rules creates barriers to good communication. Take the split infinitive rule. Sticklers would have you believe that Captain Kirk should say ‘[boldly to go] or [to go boldly] where no man has gone before’ instead of ‘to boldly go where no man has gone before’. But the latter seems preferable to the former. Not just because the phrase has become deeply embedded in the popular psyche, but because the adverb is supposed to modify the verb: it is a particular attitude toward going somewhere that Kirk is invoking. It makes sense to stick the adverb in front of the verb. Similarly, the oft-quoted rule about writing in the active voice can be an impediment to good communication. Use of the active voice directs the reader’s attention to the doer of an action (John kicked the dog), but oftentimes you will want to direct their attention to the done-to (The dog was kicked by John). If you rigidly stick to the rule, you will make your prose more difficult to follow.

As for the moralising attitude, it is present in passages like this (from Lynne Truss’s Eats, Shoots and Leaves):

If the word does not stand for ‘it is’ or ‘it has’ then what you require is ‘its’. This is extremely easy to grasp. Getting your itses mixed up is the greatest solecism in the world of punctuation… If you still persist in writing, ‘Good food at it’s best’, you deserve to be struck by lightning, hacked up on the spot and buried in an unmarked grave.

I know that Truss’s tongue was firmly in cheek when she wrote this. But similar pronouncements are found throughout the work of the sticklers (Oliver Kamm’s book Accidence Will Happen catalogues many examples of the tendency). And even if this doesn’t always end with hacked-up bodies in unmarked graves, it does seem to end with a sneering condescension towards the idiots who just can’t get it right. I don’t think such an attitude is becoming in an educator.


3. Pragmatic Prescriptivism?
Where does that leave us? It leaves us with the pragmatic approach to style. We cannot plausibly conceive of language as a legislative or Platonic phenomenon. We must conceive of it as a conventional and evolutionary phenomenon. What’s more, we must recognise that there isn’t one set of agreed-upon conventions. If there were, we might be warranted in favouring a form of Stickler Heaven. But there isn’t. There are, instead, shifting and sometimes competing conventional systems. In certain contexts, it is conventional to use non-Standard spellings and idioms. If you are texting your friends, you can say things like ‘gr8’ or ‘c ya later’ (although, ironically, this seems less common now that there are fewer restrictions on message length). If you are hanging out with your friends, it might be conventional to say things like ‘proper good innit!’ or ‘I’m well jel!’ or ‘I didn’t do nothing’. But if you are writing an academic essay…

…Here’s where I come back to my dilemma. When writing an academic essay, I think students probably should adopt a fairly traditional, so-called ‘Standard’ style of expression. This means they should probably avoid slang, non-Standard spellings, unusual punctuation and so forth. They should also probably master the different meanings of ‘enormity’, ‘meretricious’ and ‘disinterested’, and learn to put apostrophes in the conventional places. But why should they do this? If there is no right or wrong — if, as Pinker says, when it comes to English the lunatics are literally (or should that be figuratively?) running the asylum — then why can’t they mix and match conventional styles?

This is where the pragmatist’s consequentialist ethic kicks in. I think all pragmatists should adopt the following ‘master’ principle of style:

Pragmatist’s Master Principle of Style: One’s writing (or speaking) style should always be dictated by one’s communicative goals, i.e. what one is trying to achieve and who one is trying to achieve it with.

In the academic context, students are (in effect) trying to impress their teachers. They are trying to show that they understand the concepts and arguments which have been presented in class. They are trying to demonstrate that they have done an adequate amount of reading and research. They are, above all else, trying to defend a persuasive thesis/argument. What’s more, they are trying to do this for someone who isn’t sure that they are capable of it. As I say to my students, ‘you might know what you are talking about, and I might know what you are talking about, but I don’t know that you know what you are talking about — you need to convince me that you do’. The style they adopt should be dictated by those communicative goals.

This means that, in most cases, they should adopt a traditional and Standard style of expression. There are two main reasons for this. First, this is the style that dominates academia and adopting it eases communication. Students have to do a lot to convince me that they know what they are talking about. They won’t help their cause if they adopt countless neologisms and non-Standard idioms. It will put me in a bad mood. I’ll have to work that much harder to understand what they are saying. Second, adopting that style allows students to earn acceptance and respect within the relevant academic community. Certain conventions may be absurd or ridiculous, but it is easier to break them once you have earned respect. Oliver Kamm gives the example of the actress Emma Thompson, who urged a teenage audience to avoid overuse of ‘like’ and ‘innit’ because ‘it makes you sound stupid and you’re not stupid’. This feels right. It is not that students are genuinely stupid for adopting non-Standard styles; it is that they will be perceived to be so, and that, in most cases, is not a good thing. There is a pragmatic case for some forms of linguistic snobbery.

That said, there are no hard-and-fast rules. This is one of the discomfiting features of the pragmatic approach to language. We can’t fall back into the reassuring embrace of ironclad prescriptivism. Some academic styles are maddeningly opaque; it would probably be a good thing to break with their conventions. Sometimes a bit of slang can liven up an otherwise staid piece of prose. Sometimes you have to coin a new word or misappropriate an old one to label a new concept. You have to exercise wisdom and discernment, not blind-faith in a set of rules. This takes time and practice.

I have only one rule: the more you read and write, the easier it becomes.


* As far as I am aware, there may be a Chomskyan linguistic theory that favours a quasi-Platonic view of language structures. But this arises at a very abstract level, not at the level of particular languages, nor at the level of style. Such Chomskyans would, I am sure, accept that there are many contingent cultural variations in semantics, orthography and preferred idioms.

Wednesday, August 26, 2015

The Argument from Abandonment and Suffering




(Previous Entry)

The argument from abandonment and suffering is a specific version of the problem of evil. Erik Wielenberg defends the argument in his recent paper ‘The parent-child analogy and the limits of skeptical theism’. That paper makes two distinctive contributions to the literature, one being the defence of the argument from abandonment and suffering, the other being a meta-argument about standards for success in the debate between skeptical theists and proponents of the problem of evil.

I covered the meta-argument in a previous post. It may be worth revisiting that post before reading the remainder of this one. But if you are not willing to revisit that earlier post, allow me to briefly summarise. Skeptical theism is probably the leading contemporary response to the evidential problem of evil. It casts doubt on our ability to identify God-justifying reasons for allowing evil. But skeptical theism is usually formulated in very general terms (e.g. ‘we wouldn’t expect to know all of the possible god-justifying reasons for allowing evils to occur’). Wielenberg’s meta-argument was that it is much more difficult to justify such skepticism in relation to specific instances of evil. In those more specific cases, there may be grounds for thinking that we should be able to identify God-justifying reasons.

And that’s exactly what the argument from suffering and abandonment tries to maintain. The argument builds upon the parent-child analogy, which is often used by theists to justify skeptical theism. So we’ll start by looking at that analogy.


1. The Limitations of the Parent-Child Analogy
I wrote a longish blogpost about the parent-child analogy before. That blogpost was based on the work of Trent Dougherty. Wielenberg effectively adopts Dougherty’s conclusions and applies them to his own argument. So if you want the full picture, read the earlier post about Dougherty’s work. This is just a short summary.

The parent-child analogy is the claim that the relationship between God and human beings is, in certain important respects, very similar to the relationship between a parent and a child. Indeed, the analogy is often explicitly invoked in religious texts and prayers, with their references to ‘God the father’ and the ‘children of God’. Proponents of skeptical theism try to argue that this analogy supports their position. It does so because parents often do things for the benefit of their children without being able to explain or justify this to their children. For example, parents will often bring their infant children for rounds of vaccination. These can be painful, but they are also beneficial. The problem is that the child is too young to have the benefit explained to them. From the child’s perspective, the harm they are suffering is inscrutable. The skeptical theists claim that we could be in a similar position when it comes to the evils that befall us. They may have some greater benefit, but God is simply unable to explain those benefits to us.

Proponents of the problem of evil are often unswayed by this line of reasoning. They accept that, in certain instances, parents are unable to justify what they do to their children, but this is usually a temporary and regrettable phase. When a child grows up and is capable of some understanding, however limited it may be, a loving parent will try to explain why certain seemingly bad things are, ultimately, for the best. Imagine if you had a four-year-old child who had to have their leg amputated for some legitimate medical reason. This would, no doubt, cause some anguish to the child. But you, as a loving parent, would do your best to explain to the child why it was necessary, using terms and concepts they can grasp. What’s more, you would definitely not abandon the child in their time of need. You would be there for them. You would try to comfort and assist them.



The result is that the parent-child analogy can cut both ways. Skeptical theists and proponents of the problem of evil simply emphasise different features of the analogy. The theists highlight cases in which a parent is unable to explain themselves, but these are extreme cases. In many other cases, there is good reason to think that God (qua parent) would try to explain what he is doing to humanity and would not abandon humans during a time of great suffering and need.


2. The Argument from Abandonment and Suffering
It is precisely those latter features of the parent-child analogy that Wielenberg tries to exploit in his argument. His key observation is that there are cases of seemingly gratuitous suffering that are accompanied by a sense of divine abandonment (i.e. by a feeling that God is no longer there for you and that he may not even exist). He cites two prominent examples of this. One is that of C.S. Lewis, who famously experienced a sense of abandonment after the death of his beloved wife. Lewis wrote about this eloquently:

Meanwhile, where is God? ... [G]o to him when your need is desperate, when all other help is in vain, and what do you find? A door slammed in your face, and a sound of bolting and double bolting on the inside. After that, silence. You may as well turn away. The longer you wait, the more emphatic the silence will become. 
(Lewis 1961, 17-18 - quoted in Wielenberg 2015)

The other example is that of Mother Teresa, whose posthumously published letters revealed that she felt a sense of abandonment throughout most of her life and suffered greatly from it. These examples are interesting because they involve prominent theists. But there are presumably many others who suffer and feel a sense of abandonment without ever recovering their faith. Such cases might be even more potent in the present context.

The combination of such suffering and abandonment is particularly troubling for the theist. There are two reasons for this. One is because it runs contrary to the tenets of the parent-child analogy: the combination of suffering and abandonment is exactly what we would not expect to see if God is like a loving parent. The other is because the sense of abandonment often exacerbates and compounds the suffering. It is precisely because we lose touch with God that we suffer all the more. Again, this is not something we should expect from a loving parent.

To set out the argument in more detail:


  • (1) A loving parent would never permit her child to suffer prolonged, intense, apparently gratuitous suffering combined with a sense that she has abandoned them (or that she does not exist) unless this was unavoidable.

  • (2) God is relevantly like a loving parent.

  • (3) Therefore, if God exists, he would not allow his creations to suffer prolonged, intense, apparently gratuitous suffering combined with a sense of abandonment unless this was unavoidable.

  • (4) People do suffer prolonged, intense, apparently gratuitous suffering combined with a sense of abandonment.

  • (5) God, if he exists, should be able to avoid this.

  • (6) Therefore, God does not exist.



This version of the argument is slightly different from the version that appears in Wielenberg’s article. The main difference is that I have divided his premise (4) into two separate premises, (4) and (5). I did this because I wanted to highlight how the avoidability of the suffering and abandonment is an important component in the argument, and something that a clever skeptical theist might try to dispute (as we shall see in a minute).
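
To make the deductive skeleton explicit, here is a minimal machine-checkable sketch in Lean 4. The proposition names are my own invention, and premises (1) and (2) are left implicit, since their work in the argument is exhausted by establishing premise (3):

-- A propositional sketch of the argument from abandonment and suffering.
-- G : God exists
-- S : someone undergoes prolonged, intense, apparently gratuitous
--     suffering combined with a sense of abandonment
-- A : that combination is unavoidable, even for God
theorem abandonment_argument (G S A : Prop)
    (p3 : G → (S → A))  -- premise (3), itself derived from (1) and (2)
    (p4 : S)            -- premise (4): such suffering occurs
    (p5 : ¬A)           -- premise (5): God could avoid it
    : ¬G :=             -- conclusion (6): God does not exist
  fun hG => p5 (p3 hG p4)

The proof is a simple modus tollens, so the philosophical weight rests entirely on the premises rather than on the inference.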

On the whole, I think this is a strong atheological argument. I think the combination of suffering and abandonment is potent. And I think the traditional forms of skeptical theism are ill-equipped to deal with it. Wielenberg points out that because the argument is analogical it doesn’t really rely on an explicit noseeum inference. But even if you were to translate it into a form that did rely upon such an inference, the inference would be specific and justified. The whole point here is that we should not expect to see the combination of seemingly gratuitous suffering and abandonment if God exists. This is true even if there are goods and evils (and entailment relations between the two) that are beyond our ken. The result is that the likely explanation for the combination of seemingly gratuitous suffering and abandonment is that these cases involve actually gratuitous suffering, and this in turn is incompatible with the existence of God.


3. The Possibility of Positive Skeptical Theism
There is one potential response to the preceding argument, one that is implicit in the extant literature: DePoe’s positive skeptical theism. This version of skeptical theism differs from the others in that it doesn’t appeal to the mere likelihood of beyond-our-ken reasons for God’s allowing evil to occur. Instead, it argues that there are positive justifications for God’s creating epistemic distance between us and him.

DePoe’s position is thus slightly more theodical than skeptical. It builds upon theodical work done by the likes of Richard Swinburne and John Hick by arguing that there are reasons to expect a world in which God makes his existence uncertain. The reasons have to do with specific goods that are only possible if God’s existence is uncertain. For DePoe, there are two such goods worthy of particular consideration: (i) the possibility of a genuine loving response to God in faith; and (ii) the possibility of certain acts of supreme human love and compassion (I seem to recall Swinburne arguing that genuine moral responsibility was only possible in a world with some epistemic distance). I would tend to question whether these are truly good (and not simply ad hoc responses to the problem of evil) and whether the goodness is sufficient to justify the kinds of evils we see in the world, but I will set those worries to the side.

The important point here is that if positive skeptical theism is correct it has the potential to undermine the argument from suffering and abandonment. Where Wielenberg suggested that the combination of suffering and abandonment is exactly what we would not expect to see if God exists, DePoe is saying that this is something we should expect to see. Thus, God may not be able to avoid suffering and abandonment if he wants to realise the (greater?) goods alluded to by DePoe.

Wielenberg argues that this is an unpromising line of response. The reason is that DePoe’s positive skeptical theism opens up the problem of divine deception. The argument here is a little bit tricky so I’ll try to set it out carefully. It starts with an assumption:

Assumption: There cannot be any actually gratuitous evils — they are incompatible with God’s nature.

This is an assumption we have been working with throughout this post and it is one that DePoe and many other theists accept. It creates a problem in the present context because, as was argued above, there do nevertheless appear to us to be cases in which evil is seemingly gratuitous. This means that DePoe must be committed to the following:

DePoe’s Commitments: God must have created the world in such a way that (a) there are no actually gratuitous evils but (b) there are many specific instances of evil that appear to us to be gratuitous.

This, in turn, implies:

DePoe must believe in a world in which God has arranged things so as to systematically mislead us as to the true nature of good and evil (i.e. as to what is actually gratuitous evil and what is not).

DePoe’s God is a deceptive god: He achieves the necessary epistemic distance by deceiving us as to the true nature of good and evil.

This is problematic. For one thing, the notion of a deceptive god may be incompatible with certain conceptions of divine moral perfection (viz. a perfect being cannot be deceptive). For another, once you accept that God is deceptive in one domain it becomes more likely that he is deceptive in others. This may undercut the warrant that a religious believer has in certain sources of divine revelation. It is unlikely that many theists will be willing to pay that cost.




4. Conclusion
In sum, the argument from abandonment and suffering is a particularly strong version of the problem of evil. It highlights cases in which people suffer great harms and experience the absence of God. This is something we should not expect to see if God is like a loving parent. Would a loving parent really abandon her children (or cause them to believe in such abandonment) after they have suffered some great harm? Surely not. Yet God seems to do so repeatedly. Traditional versions of skeptical theism are ill-equipped to deal with this argument because, in this case, the relevant noseeum inference is specific and explicitly justified. DePoe’s positive skeptical theism might proffer a response, but it does so at the cost of believing that God is a systematic deceiver.

Tuesday, August 25, 2015

On the Limitations of General Skeptical Theism




Erik Wielenberg has just published a great little paper on skeptical theism and the problem of evil. I don’t mean to use the word ‘little’ in a pejorative sense. Quite the contrary. I use that descriptor because the paper manages to pack quite a punch into a relatively short space (a mere 12 pages of text). The ‘punch’ consists of two interesting arguments. The first is a meta-argument about standards of success in the debate between skeptical theists and proponents of the problem of evil. The second is a strengthened version of the problem of evil, which focuses specifically on the problem of suffering and abandonment.

The second argument is the real centrepiece of the article and I will cover it in a future post. Today, I want to deal with the meta-argument. I do so because it sets the stage for the argument from suffering and abandonment, and because it is an interesting methodological point in its own right. I won’t delay any further; I’ll get straight into it.


1. The Problem of Evil and the Noseeum Inference
Everyone is familiar with the problem of evil. God is supposed to be a maximally powerful, maximally knowledgeable, and perfectly good being. Yet there are many real-world instances of evil. This evil can take many forms, with the most commonly lamented form being the suffering of conscious creatures. The problem of evil simply points to the difficulty of reconciling the existence of such suffering with the existence of God.

Of course, it’s a little bit more complicated than that, and I don’t want to completely rehash the centuries-long debate about the problem of evil here. Instead, I want to home in on its most popular modern form. Back in 1979, the (sadly) recently-deceased William Rowe published an influential article entitled ‘The Problem of Evil and Some Varieties of Atheism’. In it, he presented an evidential version of the problem of evil, which has become the most widely-discussed contemporary variant on the problem.

Rowe’s argument appealed to the concept of gratuitous evil. This is a type of evil that is not logically or metaphysically necessary for some greater good. In other words, it is a type of evil that a perfectly good being could not permit. This much is accepted by theists and atheists alike. What is disputed is whether there are any actual instances of gratuitous evil. Rowe tried to argue that there are. He did this by highlighting examples of real-world suffering that don’t seem (in light of everything we know) to have any God-justifying reason for their existence. His famous example of such evil is a fawn who suffers horribly in a forest fire, with no one around to help or learn from the experience. He argues that we can infer the likely existence of actually gratuitous evils from the existence of such seemingly gratuitous evils.

To put it more formally, Rowe’s version of the problem of evil takes (roughly) the following form:


  • (1) If there are any actually gratuitous evils, then God does not exist.
  • (2) There are seemingly gratuitous evils.
  • (3) We can warrantedly infer the likely existence of actually gratuitous evils from the existence of seemingly gratuitous evils.
  • (4) Therefore, God is unlikely to exist.
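
This schematic form can be made precise. Here is a minimal sketch in Lean 4, with one loud caveat: it is my own deductive caricature, treating the probabilistic ‘likely’ of premises (3) and (4) as a plain material conditional, which flattens the evidential character of Rowe’s argument:

-- A deductive caricature of Rowe's evidential argument.
-- The probabilistic hedges ("likely") are deliberately dropped.
theorem rowe_sketch (ActualGrat SeemingGrat God : Prop)
    (p1 : ActualGrat → ¬God)        -- (1) actual gratuitous evil excludes God
    (p2 : SeemingGrat)              -- (2) there are seemingly gratuitous evils
    (p3 : SeemingGrat → ActualGrat) -- (3) the noseeum inference, made deductive
    : ¬God :=                       -- (4) God does not exist
  p1 (p3 p2)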


The critical premise here is (3). As Rowe’s critics point out, this premise relies upon a ‘noseeum’ inference. In other words, it relies upon the assumption that if there were God-justifying reasons for allowing some evil we could expect to see them. This is something skeptical theists take issue with. The question is whether they are right to do so. To figure this out we need to consider their position in a little bit more detail.


2. Skeptical Theism and the Noseeum Inference
As Wielenberg points out, skeptical theism has two components: (i) a theistic component and (ii) a skeptical component. The theistic component is relatively straightforward. It consists in belief in God, either as classically understood (i.e. as a perfect being) or as understood by holders of some particular faith. Wielenberg uses a specifically Christian version of theism in his analysis because that is the version held by those toward whom he directs his arguments.

The skeptical component is slightly more complicated. The general gist of it is that we should be skeptical of our ability to fully know what God knows and that this skepticism undercuts the noseeum inference at the heart of Rowe’s argument. A number of more specific conceptualisations of the skepticism have been offered over the years. There is, for example, William Alston’s version, which focuses on different parameters of cognitive limitation that seem to apply to humans; and there is Michael Bergmann’s version which focuses specifically on the representativeness of our knowledge of good and evil and the entailment relations between the two.

Wielenberg doesn’t weigh the pros and cons of these different conceptualisations. Instead, he suggests the following as a version of skeptical theism that captures the core idea and does justice to some of the leading conceptualisations (most particularly the Bergmannian form):

SC1: It would not be surprising if there are possible goods, evils, and entailments between good and evil that are beyond our ken (but not beyond the ken of an omniscient God).

Skeptical theists think that a principle like SC1 is sufficient to undermine Rowe’s argument from evil. Are they right to do so? Here’s where Wielenberg’s meta-argument enters the fray.


3. The Need to Distinguish between General and Specific Noseeum Inferences
Wielenberg’s argument is that, to date, participants in the debate about skeptical theism and Rowe’s argument have paid insufficient attention to the difference between general and specific versions of the evidential problem of evil. The failure to do so means that the ability of skeptical theism to undercut the problem of evil is overrated, at least when that view is proffered in response to more specific versions of the problem.

Allow me to explain. The general and specific versions of the evidential problem work like this:

General Evidential Argument: There are many instances of seemingly gratuitous evil; therefore there are probably some instances of actually gratuitous evil; therefore God does not exist.

Specific Evidential Argument: Specific instance of evil E is seemingly gratuitous; therefore E is probably actually gratuitous; therefore God does not exist.

To put it another way, general evidential arguments say ‘Look, there are all these instances of evil that seem to be gratuitous. They cannot all be necessary for some greater good. Therefore, it is likely that at least one of them is actually gratuitous.’ And specific arguments say ‘Look, there is this specific instance of evil. We have tried really hard and we cannot come up with a God-justifying reason for allowing this evil. Therefore, it is likely that this specific instance of evil is gratuitous.’

These argumentative forms rely on different noseeum inferences:

General Noseeum inference: Moves from the existence of some seemingly gratuitous evils to the existence of at least one actually gratuitous evil.

Specific Noseeum inference: Moves from the seemingly gratuitous nature of E to its actually gratuitous nature.

The differences are crucial because it is much easier to be skeptical about general noseeum inferences than it is to be skeptical about specific ones. The general noseeum inference confidently assumes we should be able to ‘see’ God-justifying reasons for allowing evil wherever they may arise. A principle like SC1 successfully undermines such confidence. But the specific noseeum inference does not share this feature. It assumes merely that we should be able to see God-justifying reasons in some particular case. A principle like SC1 cannot undermine our confidence in inferring from that particular case.

This can be demonstrated more formally. Let’s take Rowe’s case of the fawn suffering in the forest fire as an example of a specific evidential argument from evil. It fits the bill because it points to one particular instance of evil and makes inferences about its likely gratuitous nature (Wielenberg calls this the ‘Bambi’ Argument). Now consider the following two variations on skeptical theism. The first is SC1, which we already had, and the second is SC1a, which is a more detailed variant on SC1:

SC1: It would not be surprising if there are possible goods, evils, and entailments between good and evil that are beyond our ken (but not beyond the ken of an omniscient God).

SC1a: It would not be surprising if there are possible goods, evils, and entailments between good and evil that are beyond the ken of human beings (but not beyond the ken of an omniscient God) but it would be surprising if any such possible goods, evils, or entailments had anything to do with fawns.

SC1 and SC1a are logically compatible. SC1 is a general and vague type of skepticism; it doesn’t rule out the possibility of sound moral knowledge in particular cases (indeed, that possibility is something skeptical theists need to preserve if they are to avoid other problems with their position). SC1a is merely adding to SC1 a specific case in which we can expect to have pretty sound moral knowledge.

And here’s the critical point: because SC1 and SC1a are logically compatible, SC1 cannot by itself undermine Rowe’s specific evidential argument from evil. If a proponent of SC1 tried to challenge the argument, they could always be rebuffed on the grounds that SC1a (which is consistent with their general skepticism) does not undermine the argument.

In other words, to defeat a specific version of the evidential problem you need to have a specific version of skeptical theism — one that accounts for our inability to make warranted inferences about the likely gratuitous nature of some specific type of evil. You cannot simply fall back on general formulations of skeptical theism.

That’s Wielenberg’s meta-argument and he tries to leverage it to his advantage in formulating the argument from abandonment and suffering. I’ll talk about that some other time.

Monday, August 24, 2015

Is God the source of meaning in life? Four Critical Arguments





Theists sometimes argue that God’s existence is essential for meaning in life. In a quote that I have used far too often over the years, William Lane Craig puts it rather bluntly:

If there is no God, then man and the universe are doomed. Like prisoners condemned to death, we await our unavoidable execution. There is no God, and there is no immortality. And what is the consequence of this? It means that life itself is absurd. It means that the life we have is without ultimate significance, value or purpose. 
(Craig 2007, 72)

It is clear from this that, for Craig, God is essential for meaning. Without him our lives are absurd. Is this view of the relationship between God and meaning correct? Is God the source of meaning in life? Or could our lives have meaning in His absence?

In the previous entry in this series, I looked at Megill and Linford’s recent argument about the relationship between God and meaning. To recap, they argued that God’s existence is sufficient for meaning in life. This is because God, being omnibenevolent and omnipotent, would not create beings with meaningless lives. To do otherwise would be to create a sub-optimal world in which people are susceptible to gratuitous suffering, and since it is widely accepted that gratuitous suffering is incompatible with the existence of God, it cannot be the case that He would create such a world. Megill and Linford also argued that this conclusion could be used to craft a novel argument for atheism, viz. if there is at least one meaningless life, then God does not exist.

This is an interesting and provocative argument, and it clearly suggests that God might be important for meaning. But it does not vindicate Craig’s position. It shows that God’s existence is sufficient for meaning; it does not show that God is necessary for meaning (i.e. that God is the source of meaning). This is an important distinction. If there is no necessary relationship between God and meaning, then it is possible to have a purely secular theory of meaning. And if it is possible to have a purely secular theory of meaning, then it is also possible for their novel argument for atheism to work (as I explained at the end of the last post).

The second half of Megill and Linford’s paper is dedicated to defending the view that God is not the source of meaning in life. They present four different arguments in support of this view. I want to look at each of them in the remainder of this post. I have given these arguments names but, be warned, the names are my own invention, so you won’t find them in the original paper. One other forewarning: the claim that God is sufficient for meaning is taken for granted in the following discussion. This does have an effect on the plausibility of some of what follows, though this will be flagged when appropriate.


1. The Possible Worlds Argument
The first argument asks us to imagine two different possible worlds:

G: A world in which God definitely exists and which is a perfect duplicate of the actual world.
NG: A world in which God definitely does not exist and which is a perfect duplicate of the actual world.

Both of these worlds are identical in terms of the lives that pass in and out of existence; the events that take place; and the outcomes that are achieved. The only difference is that God exists in G but not in NG. Linford and Megill suggest that both worlds are epistemically possible, i.e. for all we know we could be living in G or NG. What effect does this have on the meaning of our lives?

If we live in G, then our lives definitely have meaning. This follows from the argument in part one: if God exists, he would not allow us to live meaningless lives. That’s obvious enough. What if we live in NG? Well, then it depends on whether God is necessary for meaning or not. If he is necessary for meaning (i.e. if he is the source of meaning) then our lives in NG are meaningless. But if he is not necessary, then there is some hope (it depends on what the other potential sources of meaning are).

Let’s assume for now that God is necessary for meaning. This forces us to conclude that our lives in NG are meaningless. Is that a plausible conclusion? Megill and Linford argue that it is not. If it were true, then it would also follow that the actual content of our lives had no bearing on their meaningfulness. Remember, our lives are identical in G and NG; the only difference is that God exists in one and not in the other. But surely it is implausible to conclude that what we do (the actions we perform, the events we participate in, etc.) has no bearing on the meaningfulness of our lives? This gives us the following argument:



  • (1) Imagine two (epistemically) possible worlds: G and NG. God exists in G and not in NG, but otherwise both worlds are identical to the actual world in which we live. Thus, the content of our lives is the same in G and NG.

  • (2) If God is necessary for meaning, then our lives are meaningless in NG; if God is sufficient for meaning, then our lives are meaningful in G.

  • (3) Therefore, if God is necessary for meaning, the actual content of our lives has no bearing on whether or not they are meaningful (from 1 and 2).

  • (4) It is implausible to assume that the content of our lives has no bearing on their meaningfulness.

  • (5) Therefore, God must not be necessary for meaning.
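
The load-bearing step here is the modus tollens from (3) and (4). A minimal Lean 4 sketch of that step, with proposition names of my own invention:

-- The core inference of the possible worlds argument.
-- Necessary : God is necessary for meaning
-- NoBearing : the content of our lives has no bearing on their meaning
theorem possible_worlds (Necessary NoBearing : Prop)
    (p3 : Necessary → NoBearing)  -- premise (3), itself derived from (1) and (2)
    (p4 : ¬NoBearing)             -- premise (4)
    : ¬Necessary :=               -- conclusion (5)
  fun h => p4 (p3 h)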



For what it’s worth, the basic gist of the argument being made here — that if God is necessary and sufficient for meaning then what humans do with their lives would make no difference — has been exploited by others in the recent past. Still, the argument can be challenged from several angles. The obvious line of attack is to take issue with premise (1). That premise assumes that our lives really could be identical in G and NG, but surely that is false? Surely, if God exists, his existence would have to make some difference to the content or shape of our lives?

Megill and Linford consider two versions of this response. The first appeals to a necessarily interventionist God:


  • (6) Objection: God is necessarily interventionist, i.e. he changes the course of events in the world. Consequently, G and NG could not be identical.


Megill and Linford respond to this by defending a narrower version of premise (1). They concede that God could intervene in some people’s lives, but point out that it is accepted (by ‘most’ theists) that there are at least some individual lives that aren’t affected by divine intervention. Those lives would be identical across both G and NG and the argument could still go through for the people living those lives. Similarly, if the claim is that God’s intervention is itself necessary for meaning, you run into the problem that God does not intervene in all lives. That means that those lives will lack meaning, which is inconsistent with the argument presented in part one (i.e. that if God exists, all lives must have meaning).


  • (7) God does not intervene in all lives hence those lives could be identical across G and NG; furthermore, if such intervention is necessary for meaning, you run into the problem that lives in which God does not intervene would be meaningless, which is inconsistent with the claim that God is sufficient for meaning.


Some of that seems plausible to me, but I wonder whether a theist could wiggle out of it by insisting that God does intervene (in some minimal way) in every life (e.g. through creation or at the end of life). Some people may not appreciate it or be aware of it, but that doesn’t matter: his minimal intervention is still the secret sauce that saves us from meaninglessness.

The other version of the objection focuses on the afterlife:


  • (8) Objection: G and NG are not identical because the afterlife would exist in G and the afterlife is what confers meaning on our lives.


This is certainly a popular view among theists. The earlier quote from Craig made a direct appeal to the importance of immortality in our account of meaning. Megill and Linford offer two responses. The first is to argue that an afterlife is epistemically possible on atheism. In other words, there is at least one epistemically possible atheistic universe in which humans live forever. So God isn’t necessary for immortality. The other response is to argue against the notion that immortality is necessary for meaning. They do this by appealing to the fact that some events of finite duration appear to have value, and that sometimes the value that they appear to have is a direct function of their brevity. They give the example of one’s days as an undergraduate student, which are probably more fondly remembered because they don’t last forever. They could also give the example of lives that go on forever but seem to epitomise meaninglessness, e.g. the life of Sisyphus.



  • (9) It is epistemically possible for there to be an afterlife in NG; and it is unlikely that immortality is itself necessary for meaning.



I suspect theists might respond by agreeing that immortality simpliciter is not necessary for meaning. What is necessary is the right kind of immortality and God provides for that kind of immortality (e.g. through everlasting life in paradise). In doing this, theists are making appeals to some feature or property that God manages to bestow on our lives to make them meaningful. To help us distinguish such claims, Megill and Linford appeal to something they call the fourfold distinction:


The Fourfold Distinction: When discussing the overarching ‘meaningfulness’ of our lives, it is worth distinguishing between four phenomena:
(i) The significance we attribute to our own lives;
(ii) The purpose to which we devote our lives;
(iii) The significance God attributes to our lives;
(iv) The purpose for which God created us.


The theist might concede that life in NG could have (i) and (ii), but it could never have (iii) and (iv). They are what make the crucial difference. They come from outside our own lives and confer meaning upon us. The other arguments presented by Megill and Linford try to deal with these sorts of claims.


2. The External Source Argument
The next argument is something I am dubbing the external source argument. It works like a dilemma involving a disjunctive premise (i.e. a premise of the form ‘either a or b’). The disjunctive premise concerns the possible sources of meaning in life. Megill and Linford suggest that there are only two possibilities: (a) the source is intrinsic/internal to our individual lives, i.e. human life is meaningful in and of itself; or (b) the source is extrinsic/external to our lives, i.e. what we do and how that relates to some other feature of the universe is what determines meaningfulness. The problem is that neither of these possibilities is consistent with God being the source of meaning.

The full argument works a little something like this:



  • (10) If life has meaning, then that meaning is either intrinsic/internal to life or extrinsic/external (i.e. dependent on what we do and how that relates to something external to us).

  • (11) If the meaning is intrinsic/internal to life, then God is not the source of meaning.

  • (12) If the meaning is extrinsic/external, then God might be the source of meaning (though that depends on what else we know about meaning and God’s relationship to it).

  • (13) We know that if God exists, then every life must have meaning (the sufficiency argument - from the previous post).

  • (14) Therefore, we know that if God exists, every life must have meaning irrespective of how that life is lived and how the person living it relates to God (from 13 and previous discussion).

  • (15) Therefore, God cannot be the external source of meaning.

  • (16) Therefore, either way, God cannot be the source of meaning in life.



This formalisation is my attempt to make sense of the argument presented in Megill and Linford’s article. The first three premises should be relatively uncontroversial. The argument does not assume that life has meaning, merely that if it does, the meaning must be internal or external. It is pretty obvious that internal meaning excludes God as the source. That just leaves the external possibility. The problem is that the sufficiency argument seems to suggest that how we live our lives makes no difference to their meaning, which in turn seems to rule out the claim that how we relate to God (or how he relates to us) is what infuses our lives with meaning.
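
Read this way, the argument is a constructive dilemma: a disjunction plus two conditionals that close off each horn. A minimal Lean 4 sketch, with proposition names of my own choosing:

-- The external source argument as a dilemma.
-- Internal  : meaning is intrinsic/internal to life
-- External  : meaning is extrinsic/external
-- GodSource : God is the source of meaning
theorem external_source (Internal External GodSource : Prop)
    (p10 : Internal ∨ External)
    (p11 : Internal → ¬GodSource)
    (p15 : External → ¬GodSource)  -- (15), itself derived from (12)-(14)
    : ¬GodSource :=                -- conclusion (16)
  p10.elim p11 p15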

So far, this is very similar to the previous argument. The chief difference comes when Megill and Linford develop the argument by considering four possibilities: (i) that the purpose to which we devote our lives matches the purpose for which God created us; (ii) that the purpose to which we devote our lives does not match the purpose for which God created us; (iii) that the significance we attribute to our lives matches the significance God attributes to us; or (iv) that the significance we attribute to our lives does not match the significance God attributes to us. They argue that none of these possibilities is consistent with God being the source of meaning.

I’ll briefly summarise their reasoning. Suppose (i) is true: our purpose matches God’s purpose for our lives. There are two problems with this. First, it is not clear how one being creating us for a purpose necessarily makes our lives meaningful. When we consider analogous cases (e.g. a scientist creating a child for the purpose of organ donation) we often find something lamentable or problematic about the life in question. We think it robs us of proper autonomy and choice. At the very least, it would seem to depend on the nature of the purpose and not on the mere fact that another being has created us for a purpose. Second, we have the NG problem, outlined in the previous argument. We could imagine two worlds (G and NG) in which we live for identical purposes, albeit in one of these worlds God does not exist. Does this rob us of something important? Megill and Linford suggest that it does not: if our lives are directed toward the same end, they should be equally valuable. I suspect a theist would challenge this on the grounds that there are certain divine purposes that simply would not be possible in NG.

Suppose (ii) is true: our purposes don’t match. If that’s the case, then it seems like God would have created a particularly odd world. If he is rational, then he would want to accomplish his goals through his actions. And if he is truly omnipotent and omniscient, then surely he would not fail to create beings that matched his goals?

Suppose (iii) is true: we attribute the same level of significance to our lives as God does. In that case, Megill and Linford think that we once again have the G vs NG problem: “we would attribute the same importance to our lives regardless of whether we lived in G or NG. Therefore it is difficult to see what difference God would make in this scenario.” (Megill and Linford, 2015).

Finally, suppose (iv) is true: there is a mismatch in the level of significance we attach to our lives. There are then two possible mismatches. Either we attribute more significance than God or less. If we attribute more, then Megill and Linford argue ‘our lives would be imbued with a deep sense of importance (even if inappropriate) in both G and NG. So it is difficult to see why [we] would need to be in G as opposed to NG for our lives to have meaning.’ (Megill and Linford 2015) And if we attribute less, then we are confronted with a variant on the problem of evil: people would be made to suffer needlessly by thinking that their lives were less important than they actually are.

I have my problems with all of this. While I agree with the insight at the heart of the argument (if God exists, then what we do will make no difference to the ultimate meaning/significance of the universe), I think Megill and Linford do a poor job showing that God cannot be an external source of meaning. One reason for this is that they don’t spend enough time distinguishing between the different concepts (i.e. purpose, meaning, significance); another is that many of the points made here simply rehash or repeat points that have already been made in their article. The main reason, however, is that throughout this section of their paper they seem to assume a largely subjectivist standard of success for their argument. In other words, they assume that if we think our lives have meaning (or significance or purpose or whatever) then that’s good enough. This certainly seems to be the assumption at play in the two quoted passages in the two preceding paragraphs. In both instances, Megill and Linford rule out the importance of God on the grounds that if we attribute a high level of significance to our own lives, they must have that level of significance. They don’t seem to countenance the view that our subjective beliefs might be wrong.

This is problematic because it is then all too easy for a theist to take advantage of the distinction between objective and subjective standards of success. The theist could argue that, irrespective of what we think about the purpose or significance of our lives, what matters is that there is an objective standard for these things. They could bolster this argument by pointing to secular philosophers who have argued for similar views. And then they could argue that God is the only thing that could possibly provide the appropriate objective standard. In this sense, they could argue that the debate is very similar to that about God’s role in grounding objective moral truths. The problem with Megill and Linford's argument is that it too readily assumes the presence of meaning/significance when we subjectively perceive it to exist.

Now, don’t get me wrong: I think there is plenty wrong with the claim that God is the only thing that could ground the appropriate objective standard. I have tried to explain why I think that in several previous posts. I just don’t think that this particular argument, one of four in Megill and Linford’s article, is making the best case for this view.


3. The No-Belief Argument
I’ll try to deal with the two remaining arguments more quickly. The first of these focuses on the role of theistic belief in any theistic account of meaning. I’m calling it the ‘no-belief’ argument because it highlights the potential irrelevance of belief in God for meaning, which is then alleged to be disturbing for the theist.

The argument starts with the supposition that God is necessary for meaning, i.e. that He is an external source of meaning in our lives. This means that we must stand in some sort of relation to God in order for our lives to have meaning. That relation could take many different forms. It could be that we have to achieve salvation with God in the afterlife. It could be that we need to follow a specific list of divine commandments. The precise details of the relation do not matter too much. What matters is whether belief in God is going to be an essential part of that relation. In other words, on the theistic account, is it the case that we must believe in God in order for our lives to have meaning?

You might argue that it is. If you are a theist, you would like to think that your belief makes some kind of a difference. But in that case you run into a version of the problem of divine hiddenness. There are some people who are blameless non-believers either because they were raised in a time and place where belief in God was not available to them, or because they have honestly tried to believe and lost their faith. Either way, if you think belief is necessary for meaning, it would follow that these people are living meaningless lives. This is incompatible with the sufficiency argument outlined in part one. Recall the conclusion to that argument: if God exists, all lives must have meaning. It follows therefore that belief in God cannot be necessary for meaning.

But then the theist is in the rather odd position of believing that God is necessary for meaning but belief in Him is not. This is certainly an odd view of meaning for people like William Lane Craig, who insist that achieving salvation through a personal relationship with God is the ultimate source of meaning and purpose. And it would probably be uncomfortable for many other theists.

My feeling is that although theists would be uncomfortable with this idea, this argument once again fails to really upset the view that God is a necessary, external source of meaning. I feel like a theist could bite the bullet on this one and accept that belief in God is not important, but continue to maintain that something else about God is important (e.g. that he will save us all in the end, irrespective of belief). I’ve certainly conversed with a number of liberal, universalist-style Christians who embrace this idea. Their views about God and meaning are often maddeningly vague, but they aren’t quite susceptible to this objection.


4. The New Euthyphro Argument
The final argument is a variation on the Euthyphro dilemma. As you probably know, the Euthyphro dilemma is a famous objection to theistically-grounded views of morality, such as Divine Command Theory. It is named after a Platonic dialogue. The dilemma poses the following challenge to the proponent of divine command theory: for any X (where X is an allegedly moral act), is X morally right because it is commanded by God, or is it commanded by God because it is morally right? If it is the former, then it seems like the goodness of X is purely arbitrary (God could have commanded something else). If it is the latter, then it seems like God is not the true ontological foundation for the obligation to X; that obligation is independent of God. Neither of these conclusions is entirely welcome.

Megill and Linford argue that a similar dilemma can be posed about the relationship between God and meaning. To anyone who claims that God’s existence is necessary for meaning, we can pose the following question: do our lives have meaning simply because God decrees that they do, or does God choose his decrees based on some independent standard of meaningfulness? To make this more concrete, suppose we accept the view that meaning is provided by God’s plan of salvation. We then ask: is this meaningful simply because it is God’s plan, or is it God’s plan because it is independently meaningful? If it’s the former, then we run into the problem that God could have picked any plan at all and this would have made our lives meaningful. For instance, God could have decided that rolling a boulder up and down a hill for eternity provided us with meaning. That doesn’t seem right. If it’s the latter, then we run into the problem that God is not the true source of meaning. It is an independent set of properties or values.
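
The structure of the dilemma can also be set out explicitly. A minimal Lean 4 sketch, in which the naming and the rendering of the two horns are my own:

-- The new Euthyphro dilemma applied to meaning.
-- ByDecree   : lives are meaningful simply because God decrees it
-- ByStandard : God's decrees track an independent standard of meaning
-- Arbitrary  : meaning is arbitrary (any decree would have done)
-- GodSource  : God is the true source of meaning
theorem new_euthyphro (ByDecree ByStandard Arbitrary GodSource : Prop)
    (horns : ByDecree ∨ ByStandard)
    (h1 : ByDecree → Arbitrary)
    (h2 : ByStandard → ¬GodSource)
    : Arbitrary ∨ ¬GodSource :=    -- neither disjunct is welcome to the theist
  horns.elim (fun d => Or.inl (h1 d)) (fun s => Or.inr (h2 s))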

Megill and Linford develop this argument in more detail by asking whether any of the responses to the traditional Euthyphro dilemma can apply to this novel version. I won’t get into these details here because I have explored those responses before and I think they are equally implausible in this context. In other words, I think this argument is basically correct. God cannot be the source of meaning because meanings (like other values) are most plausibly understood as basic, sui generis and metaphysically necessary properties of certain states of affairs. I have defended this view on previous occasions.


5. Conclusion
This post has been quite long, much longer than I originally anticipated. To briefly recap, the question was whether God was necessary for meaning. To be more precise, the question was whether God was the source or grounding for meaning in life. Megill and Linford presented four arguments for thinking that He could not be. My feeling is that only two of these arguments are really worthy of consideration: (i) the possible worlds argument, which is based on a thought experiment about different epistemically possible worlds; and (ii) the new Euthyphro argument, which is based on the classic Euthyphro objection to divine command theory. The other two arguments strike me as being more problematic.