Thursday, January 31, 2013

The Golem Genie and Unfriendly AI (Part One)

The Golem, by Philippe Semeria

(Overview and Framework)

This is the first of two posts on Muehlhauser and Helm’s article “The Singularity and Machine Ethics”. It is part of my ongoing, but completely spontaneous and unplanned, series on the technological singularity. Before reading this, I would suggest reading my earlier attempts to provide some overarching guidance on how to research this topic (link is above). Much of what I say here is influenced by my desire to “fit” certain arguments within that framework. That might lead to some distortion of the material I’m discussing (though hopefully not), but if you understand the framework you can at least appreciate what I’m trying to do (even if you don’t agree with it).

Assuming you’ve done that (and not really bothered if you haven’t), I shall proceed to the main topic. Muehlhauser and Helm’s article (hereinafter MH), which appears in the edited collection Singularity Hypotheses (Springer, 2013), tries to do several things, some of which will be covered in this series and some of which will not. Consequently, it behooves me to give a brief description of its contents, highlighting the parts that interest me and excluding the others.

The first thing it does is present a version of what I’m calling the “Doomsday Argument” or the argument for the AI-pocalypse. It does so through the medium of a clever thought experiment — the Golem Genie thought experiment — which is where I get the title to this blog post from. I’ll be discussing this thought experiment and the attendant argument later in this post. Second, it critiques a simple-minded response to the Doomsday Argument, one that suggests the AI-pocalypse is avoidable if we merely program AI to “want what we want”. MH argue this is unlikely to succeed because “human values are complex and difficult to specify” (MH, p. 1-2). I definitely want to talk about this argument over the next two posts. Third, and I’m not quite sure where to place this in the overall context of the article, MH present a variety of results from human psychology and neuroscience that supposedly bolster their conclusion that human values are complex and difficult to specify. While this may well be true, I’m not sure whether this information in any way improves their critique of the simple-minded response. I’ll try to explain my reason for thinking this in part two. And finally, MH try to argue that ideal preference theories of ethics might be a promising way to avoid the AI-pocalypse. Although I’m quite sympathetic to ideal preference theories, I won’t be talking about this part of the article at all.

So, in other words, over the next two posts, I’ll be discussing MH’s Golem Genie thought experiment, and the argument for the AI-pocalypse that goes with that thought experiment. And I’ll also be discussing their critique of the simple-minded response to that argument. With that in mind, the agenda for the remainder of this post is as follows. First, I’ll outline the thought experiment and the Doomsday Argument. Second, I’ll discuss the simple-minded response, and give what I think is the fairest interpretation of that response, something that MH aren’t particularly clear about. In part two, I’ll discuss their critique of that response.

My primary goal here is to clarify the kinds of arguments being offered in favour of the Unfriendliness Thesis (part of the doomsday argument). As a result, the majority of this series will be analytical and interpretive in nature. While I think this is valuable in and of itself, it does mean that I tend to eschew direct criticism of the arguments being offered by MH (though I do occasionally critique and reconstruct what they say where I think this improves the argument).

1. The Golem Genie and the Doomsday Argument
One of the nice things about MH’s article is the thought experiment they use to highlight the supposed dangers of superintelligent machines. The thought experiment appeals to the concept of the “golem”. For those who don’t know, a golem is a creature from Jewish folklore. It is an animate being, created from inanimate matter, which takes a humanoid form. The golem is typically brought to life to serve some human purpose, but carrying out this purpose often leads to disastrous unintended consequences. The parallels with other forms of human technology are obvious.

Anyway, MH ask us to imagine the following scenario involving a “Golem Genie” (effectively a superpowerful Golem):

The Golem Genie: Imagine a superpowerful golem genie materialises in front of you one day. It tells you that in 50 years’ time it will return to this place, and ask you to supply it with a set of moral principles. It will then follow those principles consistently and rigidly throughout the universe. If the principles are faulty, and have undesirable consequences when followed consistently by a superpowerful being, then disaster could ensue. So it is up to you to ensure that the principles are not faulty. Adding to your anxiety is the fact that, if you don’t supply it with a set of moral principles, it will follow whichever moral principles somebody else happens to articulate to it.

The thought experiment is clever in a number of respects. First, and most obviously, it is clever in how it parallels the situation that faces us with respect to superintelligent AI. The notion is that machine superintelligence isn’t too far off (maybe only 50 years away), and once it arrives we’d want to make sure that it follows a set of goals that are relatively benign (if not better than that). The Golem Genie replicates the sense of urgency that those who worry about the technological singularity think we ought to feel. Blended into this, and adding to the sense of urgency, is the notion that if we (i.e. you and I, the concerned citizens reading this thought experiment) aren’t going to supply the genie with its moral principles, somebody else is going to do so, and we have to think seriously about how much faith we would put in these unknown others. Again, this is clever because it replicates the situation with respect to AI research and development (at least as AI-pocalyptarians see it): if we don’t make sure that the superintelligent AI has benign goals, then we leave it up to some naive and/or malicious AI developer (backed by a nefarious military-industrial complex) to supply the goals. Would we trust them? Finally, in addition to encouraging some healthy distrust of others, the thought experiment gets us to question ourselves too: how confident do we really feel about any of our own moral principles? Do we think it would be a good idea to implement them rigidly and consistently everywhere in the universe?

So, I like the thought experiment. But by itself it is not enough. Thought experiments can be misleading; their intuitive appeal can incline us toward sloppy and incomplete reasoning. We need to try to extract the logic and draw out the implications of the thought experiment (as they pertain to AI, not golem genies). That way we can develop an argument for the conclusion that superintelligent AI is something to worry about. Once we have that argument in place, we can try to evaluate it and see whether it really is something to worry about.

Here’s my attempt to develop such an argument. I call it the “Doomsday Argument” or the argument for the AI-pocalypse. It builds on some of the ideas in the thought experiment, but I don’t think it radically distorts the thinking behind it. (Note: the term “AI+” is used here to denote machine superintelligence).

  • (1) If there is an entity that is vastly more powerful than us, and if that entity has goals or values that contradict or undermine our own, then doom (for us) is likely to follow.
  • (2) Any AI+ that we create is likely to be vastly more powerful than us.
  • (3) Any AI+ that we create is likely to have goals and values that contradict or undermine our own.
  • (4) Therefore, if there is AI+, doom (for us) is likely to follow.

A few comments about this argument are in order. First, I acknowledge that the use of the word “doom” might seem somewhat facetious. This is not because I desire to downplay the potential risks associated with AI+ (I’m not sure about those yet), but because I think the argument is apt to be more memorable when phrased in this way. Second, I take it that premise (1) of the argument is relatively uncontroversial. The claim is simply that any entity with goals antithetical to our own, which also has a decisive power advantage over us (i.e. is significantly faster, more efficient, and more able to achieve its goals), is going to quash, suppress and possibly destroy us. This is what our “doom” amounts to.

That leaves premises (2) and (3) in need of further articulation. One thing to note about those premises is that they assume a split between the goal architecture and the implementation architecture of AI+. In other words, they assume that the engineering (or coding) of a machine’s goals is separable from the creation of its “actuators”. One could perhaps question that separability (at least in practice). Further, premise (2) is controversial. There are some who think it might be possible to create superintelligent AI that is effectively confined to a “box” (Oracle AI or Tool AI), unable to implement or change the world to match its goals. If this is right, it need not be the case that any AI+ we create is likely to be vastly more powerful than us. I think there are many interesting arguments to explore on this issue, but I won’t get into them in this particular series.

So that leaves premise (3) as the last premise standing. For those who read my earlier posts on the framework for researching the singularity, this premise will look like an old friend. That’s because it is effectively stating the Unfriendliness Thesis (in its strategic form). Given that this thesis featured so prominently in my framework for research, it will come as no surprise to learn that the remainder of the series will be dedicated to addressing the arguments for and against this premise, as presented in MH’s article.

As we shall see, MH are themselves supporters of the Unfriendliness Thesis (though they think it might be avoidable), so it’s their defence of that thesis which really interests me. But they happen to defend the thesis by critiquing one response to it. So before I can look at their argument, I need to consider that response. The final section of this post is dedicated to that task.

2. The Naive Response: Program AI+ to “Want what we want”
As MH see it, the naive response to the Doomsday Argument would be to claim that doom is avoidable if we simply programme the AI+ to want what we want, i.e. to share our goals. By default, what we want would not be “unfriendly” to us, and so if AI+ follows those goals, we won’t have too much to worry about. QED, right?

Let’s spell this out more explicitly:

  • (5) We could (easily or without too much difficulty) programme AI+ to share our goals and values. 
  • (6) If AI+ shared our goals and values, then (by default) it wouldn’t undermine our goals and values. 
  • (7) Therefore, premise (3) of the doomsday argument is false (or, at least, the outcome it points to is avoidable).

Now, this isn’t the cleanest logical reconstruction of the naive response, but I’m working with a somewhat limited source. MH never present the objection in these explicit terms — they only talk about the possibility of programming AI to “want what we want” and the problems posed by human values that are “complex and difficult to specify” — but then again they never present the Doomsday Argument in explicit terms either. Since my goal is to render these arguments more perspicuous I’m interpolating many things into the text.

But even so, this reconstruction seems pretty implausible. For one thing, it is hopelessly ambiguous with respect to a number of key variables. The main one is the intended reference of “our” in the premises (or “we” in the version of the response that MH present). Does this refer to all actually existent human beings? If so, then the argument is likely to mislead more than it enlightens. While premise (6) might be strictly true under that interpretation, its truth is trivial and ignores the main issue. Why so? Well, because many human beings have values and goals that are antithetical to those of other human beings. Indeed, human beings can have extremely violent and self-destructive goals and values. So programming an AI+ to share the values of “all” human beings (whatever that might mean) isn’t necessarily going to help us avoid “doom”.

So the argument can’t be referring to the values of all human beings. Nor, for similar reasons, can it be referring to some randomly-chosen subset of the actually existent human population, since they too might have values that are violent and self-destructive. That leaves the possibility that it refers to some idealised, elite subset of the population. This is a more promising notion, but even then there are problems. The main one is that MH use their critique as a springboard for promoting ideal preference theories of ethics, so the response can’t be reformulated in a way that seems to endorse such views. If it did, then MH’s critique would seem pointless.

In the end, I suspect the most plausible reconstruction of the argument is one that replaces “share our goals and values” with something like “have the (best available) moral goals and values or follow (the best available) moral rules”. This would avoid the problem of making the argument unnecessarily beholden to the peculiar beliefs of particular persons, and it wouldn’t preempt the desirability of the ideal preference theories either (ideal preference theories are just a portion of the available theories of moral values and rules). Furthermore, reformulating the argument in this way would retain some of the naivete that MH are inclined to criticise.

The reformulated version of the argument might look like this:

  • (5*) We could (easily or without too much difficulty) programme AI+ to have (the best available) moral goals and values, or follow (the best available) moral rules.
  • (6*) If AI+ had (the best available) moral goals and values, or followed (the best available) moral rules, it would not be “unfriendly”.
  • (7*) Therefore, the unfriendliness thesis is false (or at least the outcome it points to is avoidable).

This is still messy, but I think it is an improvement. Note, however, that this reformulation would force some changes to the original Doomsday Argument. Those changes are required by the shift in focus from “our” goals and values to “moral” goals and values. As I said in my original posts about this topic, that is probably the better formulation anyway. I have reformulated the Doomsday Argument in the diagram below, and incorporated the naive response (strictly speaking, (6*) and (7*) are unnecessary in the following diagram but I have left them in to retain some continuity with the text of this post).

That leaves the question of where MH’s critique fits into all this. As I mentioned earlier, their critique is officially phrased as “human values are complex and difficult to specify”. This could easily be interpreted as a direct response to the original version of the naive response (specifically a response to premise 5), but it would be difficult to interpret it as a direct response to the revised version. Still, as I hope to show, they can be plausibly interpreted as rejecting both premises of the revised version. This is because they suggest that there are basically two ways to get an AI to have moral goals and values, or to follow moral rules: (i) the top-down approach, where the programmer supplies the values or rules as part of the original code; or (ii) the bottom-up approach, where a basic learning machine is designed and is then encouraged to learn moral values and rules on a case-by-case basis. In critiquing the top-down approach, MH call into question premise (6*). And in critiquing the bottom-up approach, they call into question premise (5*). I will show how both of these critiques work in part two.

Monday, January 28, 2013

Should we thanatise our desires?


You might be perplexed by my title, but the concern, once more, is with the Epicurean attitude toward death. As we’ve learned over previous posts, the Epicurean project aims, in part, to change our attitude toward death. Specifically, to change our attitude from one of fear to one of indifference and equanimity. As Steven Luper notes in his book The Philosophy of Death, there are at least three methods Epicureans can use to achieve this end. The first two involve the arguments — the experiential blank argument and the Lucretian symmetry argument — that I have explored in previous entries. In this post we consider a third method: thanatising our desires.

This method requires us to drastically alter our desires so that they are no longer thwarted by our deaths. In other words, to make our desires compatible with our deaths. Doing so, it is argued, will remove the distress and anxiety that our pending demise tends to cause. But how does this really work, and would it actually defeat the traditional view about the badness of death? The remainder of this post addresses these questions.

It starts by outlining what I am going to call the “desire-thwarting” account of the badness of death. This is introduced in contrast to the standard deprivation account of the badness of death. I do this first because it highlights some interesting dialectical features of the thanatisation strategy, and also because it parallels certain other debates in philosophy. Following this, I outline the Epicurean strategy for thanatising our desires, and consider some objections to this strategy.

In writing this post, I draw heavily on the discussion in the aforementioned book by Steven Luper — The Philosophy of Death.

1. Thwarting Desires and the Badness of Death
The deprivation principle provides the most common support for the badness of death. According to this principle, death is bad because it deprives us of the positive experiences we would and could have had if death had not occurred. As such, the deprivation principle uses the comparison of one’s wellbeing across possible worlds to determine the badness of the actual world.

The deprivation thesis is vulnerable to a number of counterexamples, ones that I considered when previously discussing Aaron Smuts’s article “Less Good but not Bad”. No doubt defenders of the thesis could revise their account in order to deal with these counterexamples, and no doubt opponents of the thesis could offer more resilient counterexamples. But I don’t want to get into that particular game of “refine-the-principle” here.

Instead, I want to focus on an alternative account of the badness of death, something I’m calling the “desire-thwarting” account. This relies on the Desire-Thwarting principle. Since I’m going to make some use of it in formalising the terms of the debate for the remainder of the post, it behooves me to offer a definition:

Desire-Thwarting Principle: X is bad for us if X thwarts or defeats our desires.

This principle offers a general account of prudential badness, one that differs from the deprivation principle. According to this principle it is bad for me that I didn’t earn enough money last year to take a five-week holiday in Hawaii because that thwarted my desire to go to Hawaii for five weeks. It is not bad because it deprived me of the experience of having a five-week holiday.

The distinction here is subtle, but an analogy might be used to underscore its significance. When it comes to moral responsibility, there are two general accounts of the conditions that make us responsible for what we do. The first relies on the principle of alternative possibilities, and holds that we are responsible for action A if and only if we could have avoided performing A. Thus, it conditions our responsibility on what could have happened in another possible world. The second account, developed by many but perhaps most carefully by John Martin Fischer (ironically, a fan of the deprivation account of death), tries to eliminate this conditioning on other possible worlds. Instead, it argues that we are responsible in virtue of what happened in the actual sequence of events that led up to A (e.g. the fact that we intended to do A).

The distinction between these two accounts of moral responsibility parallels the distinction between the two accounts of the badness of death. The deprivation thesis is like the principle of alternative possibilities: it conditions prudential badness on what could have happened. Contrariwise, the desire-thwarting principle is like the actual sequence account of moral responsibility: it conditions the badness of death on what actually happened in this world, namely that desires were thwarted.

Hopefully this makes things clearer. Grasping the distinction is significant if you want to fully appreciate the strengths (and weaknesses) of the Epicurean strategy we are about to address. To see this, consider how the desire-thwarting principle can be used to argue for the badness of death:

  • (1) X is bad for us if X thwarts our desires. 
  • (2) Death thwarts our desires. 
  • (3) Therefore, death is bad for us.

This conclusion can, in its turn, be used to defend the notion that we should be anxious about our deaths:

  • (4) If X is bad for us, it is rational to fear it. 
  • (5) Therefore, it is rational to fear our death (from 3 & 4).

This is a significant conclusion since it is the exact opposite of what the Epicureans want. They want us to be able to approach our deaths with tranquility and equanimity (ataraxia). Furthermore, because this argument does not rely on the deprivation thesis, they cannot respond to it by appealing to the experiential blank or Lucretian symmetry arguments since those arguments are mainly concerned with defeating that thesis. So they need some other way to respond.

2. Thanatising our Desires
This is where the strategy mentioned at the outset of this post comes into play. A key vulnerability in the preceding argument is premise (2). To be sure, most people would agree that death usually thwarts our desires, but their assent may be too quick. Is there any reason to think death necessarily thwarts our desires? Could we not render our desires compatible with our deaths? If so, could we avoid the implications of the desire-thwarting argument?

Epicureans think we can. As part of their more general strategy for achieving happiness and tranquility in life, the Epicureans argue that we can reconstruct our desires so that they are no longer dependent on contingencies that are either difficult or impossible to bring about. To take an obvious example, suppose I have a desire to own my own private jet. But I am also committed to my career as an academic. Since the latter entails earnings that will never be sufficient to pay for the former, I am probably doomed to disappointment. As such, I should drop my desire for the jet. That way, I avoid the regret, anxiety and disappointment caused by the constant thwarting of my desire to own the jet.

So the basic recipe for tranquility is that I should do a full inventory of my desires and remove all those that are dependent on difficult or impossible contingencies. Desires that are contingent upon my not dying, or that are unlikely to be fulfilled before my death, are obvious candidates for removal. Since my death is (at least for now) nigh on impossible to avoid, my failure to remove those desires will result in inevitable disappointment. This gives us a response to the desire-thwarting argument:

(6) Thanatisation Strategy: It is possible to remove all desires that are contingent upon your not dying.

Two questions arise: (i) Is it actually possible? Could we remove all such desires? and (ii) Would we really be better off if we did? Let’s look briefly at both.

One might doubt the possibility of this strategy because desires are sometimes claimed to be beyond the scope of voluntary control. In other words, we want whatever we happen to want, and no amount of rational persuasion or coercion can change that. I vacillate on this notion, sometimes believing, other times not. Right now, my feeling is that desires are manipulable to some extent. Oftentimes, this is because one desire trumps another and so enables us to develop methods for eliminating or minimising it. For instance, my desire to stay thin might trump my desire to eat more chocolate, thus enabling me to downplay and eventually eliminate the latter. Arguably, something similar could be true in the case of death: my desire to achieve tranquility and equanimity might allow me to eliminate those of my desires that are contingent on my avoiding death (though, obviously, the desire to avoid death could equally trump my desire for equanimity).

But supposing it were possible, would it really make me better off? We’ll look at Luper’s criticism of the strategy in a moment. When we do so, we’ll see that he certainly thinks it wouldn’t. But before doing that, it’s worth knowing the underlying Epicurean reason for thinking it would. The Epicurean ethic was founded in hedonic utilitarianism. They believed that the only thing that was intrinsically good or bad for you was your conscious pleasure or pain. Desires were mere pathways to achieving states of pleasure, not in themselves intrinsically valuable, and just as capable of causing stress and anxiety (when thwarted) as they were of causing pleasure. The claim then was that one could have a life filled with intrinsic goods (pleasure), without having complex, anxiety-inducing desires to go with it. Thus, this really would make you better off than you might otherwise have been.

Note, however, that the success of the strategy in defeating the desire-thwarting argument is not dependent on the underlying truth of the hedonistic view. Indeed, it is actually consistent with the general view that the fulfillment of desires is intrinsically good, and that the thwarting of desires is intrinsically bad. The strategy only claims that death need not thwart our desires.

3. Criticisms of the Epicurean Strategy
Luper has some pretty stern criticisms of this strategy in his book. One initial point which he makes is that thanatising all our desires seems obviously and deeply impractical because death can — technically — strike at any time (through accident or illness or so forth). And since there is always something of a time lag between desire-formation and fulfillment, one always runs the risk of death thwarting one’s desires.

This is particularly true of categorical desires, which are the projects and goals around which we typically organise our lives. Take for example the desire to be a world-famous philosopher. This takes time, and at any step along the road one’s desire to achieve that fame could be thwarted by death.

There is, however, an obvious solution to this. And it is one that Luper recognises: add a “death”-exception clause to every categorical desire. Thus, for instance, change the desire “to become a world-famous philosopher” to “become a world-famous philosopher unless I die first.” Or even, “to become a world-famous philosopher unless I die first or unless that project becomes unfeasible for other reasons.”

For Luper, this is too easy. We need to ask what the addition of such an exception clause would really do to our lives. Luper suggests it would lead to a strange bifurcation in the mind: one is both committed and yet not committed to one’s projects. One is thus oddly reckless as regards the fulfillment of one’s goals, prone to abandon them when the path to their fulfillment becomes too difficult or when death looms too large. This would lead to an impoverished existence. Indeed, in the end, Luper suggests one would really only be left with the general desire to “live one’s life enjoyably, if one lives at all”. But this would be to approach life as a sequence of (potentially) disconnected pleasurable moments, and to treat categorical desires as optional extras, probably best avoided.

There is an argument lurking here, familiar to those who have read up on the topic of immortality and death before. It was originally made by Bernard Williams in defending the notion that immortality would be tedious. Williams’s argument was based on the premise that the meaningful life demanded an abundance of categorical desires: projects around which one could organise one’s life. Williams’s observation was that an immortal existence would entail the exhaustion of all such desires, and their replacement by “conditional” or “contingent” desires for food, sex and other ephemeral pleasures. The conclusion was, thus, that immortality would lead to a meaningless existence.

Now, there are problems with Williams’s argument — ones that have been discussed before on the blog — but they need not detain us since they tend to focus more on the claim that immortality would exhaust all categorical desires. Very few deny the underlying premise that categorical desires are needed for a meaningful life (though I will in a moment). But if this premise is accepted, there is trouble for the Epicurean. If Luper’s reasoning is followed, the thanatising strategy seems to lead to something very similar to the meaningless existence abjured by Williams. Which suggests the following argument can be made:

  • (7) In order for life to be meaningful (worth living) one must have a set of categorical desires around which one’s life can be organised.
  • (8) If the thanatisation strategy is followed to completion, one will seek to eliminate all of one’s categorical desires.
  • (9) Therefore, the thanatisation strategy, if followed to completion, would lead to a meaningless life.

I need to be clear about something: I doubt that Luper would make this argument, though he hints at it in the text. Nevertheless, I want to evaluate it because I think it’s interesting. In doing so, I would like to suggest that it can be resisted. In particular, I would suggest that the argument relies on a deeply ambiguous and contested concept, namely “meaningfulness” or “worthwhileness”, and that this may actually undermine premise (7). In brief, there are different ways of cashing out this notion of “meaning”, and the one that I typically prefer (meaning = access to value) would not necessarily support premise (7). Quite the contrary in fact. All that would matter for meaning is that one has a life in which one can access intrinsically valuable things. That may require categorical desires, but then again it may not. Indeed, if Epicurean hedonism is to be accepted, it would not. Given this, it’s no surprise to see that Luper, in his analysis, falls back on the deprivation thesis when responding to the thanatisation strategy. That would support the badness of death, but for a distinct set of reasons.

In addition to this, I suspect that premise (8) is dubious. While I certainly agree that the thanatisation strategy would render our categorical desires incredibly fragile, and would mandate a reckless and carefree attitude toward their fulfillment, I don’t think that this leads to their total disintegration. I don’t exactly see why one couldn’t still hold onto life projects but build in the death-exception clause to each and every one. It might be odd, for sure, but it doesn’t seem to necessarily reduce all desires from the categorical to the contingent and ephemeral type.

Anyway, those are just some thoughts. Luper might be on stronger ground when he argues that certain states of being — such as the state of being truly in love — are incompatible with the thanatisation strategy. As Luper puts it, if I do not care about what happens to my wife after she dies, then I do not really love her. But Epicureanism seems to demand that I be indifferent to my wife’s well-being after I die (I shouldn’t desire for her to do well because that is an impossible desire for me to fulfil). Thus, Epicureans cannot truly love someone. If one then adds the premise that the good life requires true love, one gets a counter-argument to Epicureanism.

There is some merit to this notion, and much that could be said in response, but I will say just three things here. First, I’m not entirely sure that posthumous desires of the sort required by true love are precluded by Epicureanism. For one thing, there may be cases in which one can know that one’s beloved will do well after one dies. Second, the suggestion that posthumous concern is absolutely essential for true love is at least open to doubt. And third, I wonder whether this isn’t a little too “all-or-nothing” in its presentation. Maybe one can’t have true love on Epicureanism, but maybe one can have a reasonable facsimile of it. And if so, maybe the gain in terms of tranquility and equanimity outweighs the loss of true love?

4. Conclusion
To sum up, the Epicurean project is designed (in part) to cure us of our fear of death, and replace it with a sense of equanimity and tranquility. Several different methods are used by Epicureans to achieve this end. In this post, we looked at one of those methods, one that called for us to thanatise our desires. This requires us to eliminate all desires that are contingent upon our not dying. Doing so, it is argued, will remove the anxiety associated with death, and leave us to focus on what truly matters, which is achieving conscious pleasure.*

This strategy is disputed for several reasons. There are those who doubt its practicality since it assumes that our desires are readily manipulable and eliminable when this may not be the case. Similarly, there are those who question (ironically) its desirability, suggesting that if followed to its logical conclusion it leads to an impoverished way of life — a life devoid of categorical desires, commitment, love and all the things that make life worth living.

I do not really know where I stand on all this, but by critically engaging with the arguments I hope to get closer to a definite view.

* I suspect that scholars of Epicureanism may balk at the simple-minded connotations that the phrase “conscious pleasure” evokes. I use it for convenience here, dimly aware of the fact that there is a more sophisticated understanding of that concept present in many Epicurean writings.

Friday, January 25, 2013

The Lucretian Symmetry Argument (Part Two)

(Part One)

This series is about the Lucretian symmetry argument against the badness of death. According to this argument, death is like the pre-natal state of non-being (PNNB) in all important respects. And because PNNB is neither bad for us, nor something we should be worried about, it follows (by analogy) that death is not bad for us, nor something we should be worried about.

This argument is defeated if there are significant disanalogies between the two states. In part one, we considered one such disanalogy. This came from the work of Brueckner & Fischer (B&F) and is summarised in the following argument:

  • (6) We care about our future experiences in a way that we don’t care about our past experiences; more precisely: we prefer to have positive experiences in the future, and negative experiences (if necessary) in the past. 
  • (7) Death deprives us of future positive experiences; PNNB only deprives us of past positive experiences. 
  • (8) Therefore, death deprives us of something we care about, but PNNB does not.

Adding the premise that deprivation is bad for us, we get a significant disanalogy between death and PNNB, one that undercuts the original Lucretian argument.

In this post, we’re going to consider a recent critique of B&F’s argument. The critique comes from the work of Fred Feldman — who is himself a firm believer in the badness of death — and focuses on how the conclusion (premise 8) of B&F’s argument is to be interpreted. Since this means we’ll have to refer to different versions of the principle throughout the remainder of this post, I will hereinafter call it the Asymmetry Thesis or AT for short.

Here’s the structure of the remainder of the post. In the first section, I follow Feldman and disambiguate two versions of the AT: (i) a de re version; and (ii) a de dicto version. In the second section, I consider Feldman’s critique of the de re version, and in the third section I consider his critique of the de dicto version. I also add, in this section, two more critiques to the mix, ones that Feldman himself makes of B&F. I then conclude by addressing B&F’s response to Feldman.

1. De Re and De Dicto versions of the Asymmetry Thesis
The de re and de dicto distinction is widespread in philosophy but I don’t think I’ve ever brought it up on this blog before. Partly, that was because I didn’t know too much about it in the early days, and so tended to studiously avoid mentioning it, afraid that my ignorance would be revealed through my mishandling of this imposing Latin terminology. But as it turns out, the distinction is not too difficult to grasp, or at least not too difficult for the purposes of this particular blog post.

So what is it? Take the following sentence:

S1: John wants to marry the most beautiful girl in Ireland.

The italicised portion of the sentence can be understood in two different ways. In the first instance, “the most beautiful girl in Ireland” might be taken to refer to a very specific girl in the real world that John wants to marry. His current girlfriend, for example, whom he happens to think is the most beautiful girl in Ireland. In the second instance, the phrase may be taken to refer to whichever girl it happens to be that matches that general description.

The first way of understanding the sentence corresponds to the de re interpretation; the second way of understanding the sentence corresponds to the de dicto interpretation. The etymology of the words is helpful here. “De re” means “of the thing”, whereas “de dicto” means “of the word”.

With any luck, this makes the distinction tolerably clear. There are many arcane and fascinating discussions we could get into about it, but there’s no need to do so now. As long as you appreciate the difference between the two interpretations of S1 you should be fine.

Anyway, Feldman holds that the Asymmetry Thesis (AT) can be interpreted in a de re sense or in a de dicto sense. To see this, consider first the original version of the AT from B&F’s argument.

Asymmetry Thesis: Death deprives us of something we care about, but PNNB does not.

Feldman’s claim is that the “something” in the AT can be understood to either: (a) refer to a specific thing (or things) that the person who dies happens to care about; or (b) refer to the general category of things that the person would care about and that they might have experienced, if they did not die. Here, once again, the first of these corresponds to the de re interpretation and the second to the de dicto interpretation.

Let’s formulate both interpretations a little more precisely (both of these are lifted from Feldman’s article):

Asymmetry Thesis (dr): When death is bad for a person D, it is bad for D because there are certain pleasant experiences, such that his death deprives D of those experiences, and D cares about those experiences. (Contrariwise, PNNB is not so bad for D because even though there are some pleasant experiences such that D’s PNNB deprives D of those experiences, D does not care about them).
Asymmetry Thesis (dd): When death is bad for a person D, it is bad for D because D cares about the fact that if he dies, he will be deprived of some pleasant experiences (though he may not know what these will be) that he otherwise would have enjoyed. (Contrariwise, PNNB is not bad for D because, even though it deprives D of pleasant experiences, he does not care about the fact that if he is born late he will be deprived of some pleasant experiences.)

It would be worthwhile getting comfortable with the distinction between these two versions of the thesis before moving on. Otherwise, let’s proceed.

2. Problems with the De Re Version
We start with Feldman’s critique of the AT(dr). In his opinion, the AT(dr) does a poor job of explaining why death is bad. To be more precise, he feels that the AT(dr) is vulnerable to a number of counterexamples. Consider the following:

SIDS Case: James is a 6-month-old baby. He is healthy and has a loving family. If all goes well he can expect to have a “wonderful life filled with pleasant experiences”. But, unfortunately, all does not go well. James dies in a suspected SIDS case, aged 6 months and 2 days.
Car Crash: Eleanor is a young woman (aged 20) who is about to complete her college degree. She doesn’t know it yet, but in her future lies a wonderful career and a serene retirement, provided she can avoid dying in a car crash in the next 20 minutes. This, alas, she does not do.

Both of these cases undermine the AT(dr). This is because the deceased in both cases share a fundamental property that excludes the applicability of the AT(dr) account of death’s badness. The property in question is ignorance. Neither James nor Eleanor is aware of the specific positive experiences that death deprives them of, but surely this doesn’t make their deaths less bad. And yet, since the AT(dr) holds that death is only bad if the subject is actually aware of the specific experiences of which they will be deprived, it would follow that the AT(dr) is false.

As a critique of B&F’s argument, this is an interesting one. Why? Because it actually works from the position that death is indeed (contra Epicurus and Lucretius) bad for the one who dies. Thus, it only defeats B&F’s argument on the grounds that it offers a faulty account of the badness of death; it does not thereby support the symmetry argument. The goal, presumably, would be to replace the AT(dr) with a better account of the badness of death, one that could also defeat the symmetry argument.

Now, I have to say this is a slightly odd way of arguing about this topic. To presume that death is bad (which is what is being presumed by Feldman’s analysis of SIDS and Car Crash), in this particular dialectic, looks dangerously close to begging the question. This problem will recur later in the discussion when we look at B&F’s response to Feldman. For now, let’s proceed to Feldman’s critique of the AT(dd).

3. The Problems with the De Dicto Version
The critique of the AT(dd) actually follows very similar lines. The idea behind the AT(dd) is that the person need not be aware of the specific experiences that death will deprive them of in order for it to be deemed bad, but rather they need only have the general concept of some positive future experience that death might deprive them of. This is what differentiates it from the AT(dr).

But the SIDS case again provides a counterexample. The six month old child does not have the general concept of future positive experiences, so according to the AT(dd) his death is not bad. But doesn’t it seem odd to suggest that his death is not bad because he lacks the necessary conceptual machinery?

Another counterexample might work here too:

Suicide: Mike is 40 years old and suicidal. He believes there is no hope left for him: he has lost his job, his marriage has broken up, and he believes his continued existence will be nothing but misery. What he doesn’t realise is that in about six days time, he will undergo a dramatic reversal of fortune which will lead to incredibly powerful and rewarding experiences in the future. Unfortunately, Mike commits suicide just before this happens.

The problem is that Mike has a mistaken set of beliefs about his future prospects. He doesn’t actually care that his death might deprive him of some non-specific positive experience because he doesn’t believe that he will have any such experiences. But it still seems like his death is, all things considered, bad for him. This runs contrary to the AT(dd). Or so Feldman believes.

Perhaps the problem here is that the AT(dd) is, like the AT(dr), indexed to the psychological characteristics of the one who dies. Thus, lack of mental development or mistaken beliefs undermine its account of the badness of death. We could solve this by coming up with a de dicto version of the AT that is not conditioned upon the makeup of the person who dies. But what would this look like? Feldman suggests the following:

AT(dd2): When death is bad for a person D, it is bad for D because other people care about the fact that if D dies, D will be deprived of some pleasant experiences (though they may not know what experiences these will be) that D would otherwise have enjoyed. (And PNNB is not bad for contrary reasons).

The problem with this version is that it seems downright implausible. For if the other versions of the AT fail because they are indexed to the psychological peculiarities of the one who dies, then so too must this version fail because it is indexed to the psychological peculiarities of other people. Those other people could just as easily be mistaken or ignorant about possible futures.

This, then, is the main part of Feldman’s critique of B&F. He does, however, have two other observations that warrant some discussion here.

The first is that B&F’s argument is problematic in that it conflates axiology and psychology (I mentioned this problem in part one). It assumes that the fact that D does not care about something (e.g. PNNB) supplies us with reason to think that that something is not prudentially bad. But why should we accept this? Prudential axiology seems like it could be separate from our psychological quirks. Indeed, if we go back to the inspiration for B&F’s argument — the Parfittian thought experiments about our bias to the future — we find that Parfit was actually using those thought experiments to cast doubt on our general view about personal well-being. Perhaps, Parfit suggested, we are wrong to be biased toward the future. Maybe we should adopt a temporally neutral perspective on what is good or bad for us? If so, the symmetry argument may still stand, but lead to the opposite conclusion, namely: that PNNB is, contra Lucretius, bad for us.

The second problem is that B&F’s argument may simply beg the question against Lucretius. The Lucretian argument was explicitly designed to counter the asymmetric psychological attitudes that everyone has toward death. Lucretius knew that people worried about death, and that death lay in the future. He just didn’t think this attitude was rational. One of the goals of the symmetry argument was to offer a defeater for this attitude. All B&F have done is re-assert the fact that we have asymmetric attitudes and used that to defeat the argument. Surely more is needed?

4. Brueckner and Fischer’s Response
B&F have offered a short response to Feldman’s critique. The response leaves a lot to be desired, but if you’ve been following the basic logic of Feldman’s critique you can pretty much guess where it goes. The essence of Feldman’s critique was this: the AT, in all its forms, conditions the badness of death on the attitude of some actual person or group of persons. The person who dies, in the one instance, or a non-specified group of “other people” in the other instance. But since both groups of people may suffer from psychological quirks or incapacities that result in them failing to “see” what it is rational to care about in the future, the AT fails to explain the badness of death when those quirks or incapacities are present.

Well, in that case, why not simply avoid conditioning the badness of death on the attitudes of actual people? Why not, instead, condition it on the attitudes of some hypothetical, idealised, “rational” person? That’s what we do in ethics all the time: if real people don’t have the attitudes we would like them to have, we imagine idealised people who do. David Boonin does this when responding to Don Marquis’s “Future Like Ours Argument” about the ethics of abortion.

It should come as no surprise to learn that this is what B&F do too with their revised de dicto version of the AT:

AT(dd*): When death is bad for an individual D, it is bad for D because it is rational for D to care about the fact that if D dies, D will be deprived of some pleasant experiences (though D may not know what experiences these will be) that D would otherwise have enjoyed. (Contrariwise, PNNB is not bad for an individual because, even though it deprives him or her of pleasant experiences, it is not rational for an individual to care about the fact that if he or she is born late he or she will be deprived of some pleasant experiences.)

The use of the word “rational” is key here. With it, B&F make the appeal to an idealised perspective on our psychological attitudes. No longer are they focusing on what someone does care about, instead they are focusing on what the person should care about, if they were being rational.

They claim that this version of the AT is immune to the counterexamples posed by Feldman. For instance, look at the case of the six month-old child. Clearly, it is in the child’s interest to have food, even if the child lacks the ability to conceptualise the fact that this is good. Why is that? Because from the idealised perspective of the rational person, obtaining nourishment is good. The same reasoning applies, a fortiori, to the case of the SIDS baby: their death is bad for them because, from the idealised perspective, it deprives them of something they ought to care about if they were rational.

B&F suggest that this revised version of the AT advances the case against the Lucretian symmetry argument. In a technical sense, they are correct: they have supplied a new principle that avoids Feldman’s critique and this could be used to rebut the Lucretian argument (following the method I laid down in part one). Further, they argue that this doesn’t simply beg the question against Lucretius, because his argument was purely about asymmetric attitudes toward death, not about asymmetric attitudes toward life in general. They claim that AT(dd*) is not simply being posited without support but that it is being derived from the more general rationality of asymmetric, future-biased attitudes toward life.

But is this persuasive? One clear problem is that the rationality of this more general attitude is not fully established. B&F acknowledge this in a footnote, and at the very end appeal to an article that one of them has written (Fischer) that vaguely sketches a defence of it. This defence relies on the claim that a future bias in our desires is rational because it is evolutionarily advantageous. To quote from the authors: “there would appear to be a clear survival advantage to any creature who cares especially about future good experiences, as opposed to past good experiences.”

I haven’t read Fischer’s earlier piece, but I’m quite familiar with this style of argument and I find it highly suspect, particularly in this context. For one thing, I don’t see any strong reason to think that evolutionary goals are either constitutive of, or correlative with, rational truths. Indeed, I think one could plausibly argue that the exact opposite is true. But in addition to this, I think the appeal to evolutionary goals is suspicious in this context. Why? Because evolution as a process is (at least partly) biased against death. Which suggests to me that the survival advantage of future-biasing may simply be feeding off the asymmetric attitude that evolution has toward death. Which means that B&F’s argument is circular: the AT(dd*) is defended by appeal to the general rationality of asymmetric attitudes toward the future, which are in turn defended by appeal to survival advantage, which is essentially an asymmetric attitude toward death by another name.

Or so it seems to me.

5. Conclusion
So where does that leave us? Probably more confused than we were when we started out. So let’s try to distill the main threads of argument.

In part one, we saw how the Lucretian argument challenges the badness of death by drawing an analogy between it and PNNB. B&F rejected this by appealing to the fact that we seem to care about the future in a way that we don’t care about the past. They claimed that this meant that death deprived us of something we care about, but PNNB does not. This was their “asymmetry thesis” (AT).
In this post we have encountered the problems arising from this thesis. Starting with Feldman, we saw how the de dicto and de re versions of the AT are vulnerable to various counterexamples. These counterexamples exploited the fact that the AT seems to condition the badness of death on the actual attitudes of particular people. B&F reply to Feldman by reformulating the AT so that it relies on the attitudes of a hypothetical, idealised, and rational person. But as I have just argued, their defence of this thesis leaves something to be desired.

None of which means that the Lucretian argument is sound. If nothing else, we are still left with the live possibility, hinted at by Parfit, that the argument simply proves that PNNB is bad for us.

Thursday, January 24, 2013

The Lucretian Symmetry Argument (Part One)

Death looms large for most of us, even if we try not to think about it. But should we be worried at the prospect of our eventual demise? Should we do everything we can to avoid it (e.g. by opting for cryopreservation)? Or should we approach it with indifference and equanimity?

The Epicurean school of thought supports the latter view — that of equanimity and indifference — with two classic arguments. The first, which we might call the experiential blank argument, holds that death is nothing to us because “we” won’t be around to experience it. This argument was the subject of a previous series of blog posts. The second, which is typically called the Lucretian symmetry argument, is the subject of this series of blog posts.

I’ll spread the discussion of the argument over a couple of blog posts. In the remainder of this post, I’ll outline the general form of the argument and consider possible strategies for responding to it. I’ll also consider Brueckner and Fischer’s (B&F’s), by now famous, objection to the argument. In subsequent posts, I will consider Feldman’s recent riposte to B&F and, in turn, their reply to Feldman.

1. What Lucretius Said
Once we get into the debate between Feldman and B&F, things will start to get pretty complex (and, if I’m honest, slightly arcane). But it’s reassuring to know that the debate between these parties is rooted in a fairly simple argument. The argument comes from a couple of passages from Lucretius’s On the Nature of Things. Here they are:

“In days of old, we felt no disquiet...So, when we shall be no more - when the union of body and spirit that engenders us has been disrupted - to us, who shall then be nothing, nothing by any hazard will happen any more at all. 
Look back at the eternity that passed before we were born, and mark how utterly it counts to us as nothing. This is a mirror that Nature holds up to us, in which we may see the time that shall be after we are dead.”

The passages present a simple analogy (although you have to get past the archaic language in this translation to see it clearly). The analogy compares two states: (i) post-mortem non-being, or, more simply, “death”; and (ii) pre-natal non-being, which we’ll label PNNB. The suggestion is that these two states are similar in all crucial respects because they both entail our non-existence. The claim is then made that since PNNB “counts to us as nothing” so too should death count to us as nothing.

The phrase “counts to us as nothing” requires some unpacking. As with the other Epicurean arguments about death, there is a danger that we will conflate two distinct conclusions that the arguments might allow us to reach. The first conclusion has to do with the prudential axiology of death (i.e. whether death is good or bad for you). The second conclusion has to do with the psychological attitude one should have toward death (i.e. whether one should be afraid or not). The two conclusions may be connected, but if they are, this connection will have to be made explicit.

That’s exactly what the following formal reconstruction of the argument attempts to do (inspired by, though modified from, Feldman):

  • (1) The state of pre-natal non-being is not bad for us. 
  • (2) Post-mortem non-being (death) is the same as pre-natal non-being, in all important respects.
  • (3) Therefore, death is not bad for us. 
  • (4) If something is not bad for us, then it is irrational to fear it. 
  • (5) Therefore, the fear of death is irrational.

There are two steps to the argument: the first takes us to the axiological conclusion; the second to the psychological conclusion. The steps are bridged by way of the principle posited in premise (4). One may doubt whether this principle is true. For instance, it could be that we rationally fear things that are not bad for us (e.g. things that are bad for others). I won’t be considering such concerns here (I will some other time). Instead, I shall assume that prudential badness and the rationality of fear are intimately linked in the manner suggested by premise (4).
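For those who like to see the deductive skeleton laid bare, the second step of the argument can be sketched in the Lean theorem prover. This is only a propositional sketch: the proposition names are my own labels, and the analogical first step, (1)-(3), is simply taken as a hypothesis, since analogical reasoning resists deductive formalisation.

```lean
-- Propositional sketch of the argument's second step (Lean 4).
-- `DeathBad` and `FearRational` are placeholder propositions;
-- hypothesis h3 stands in for the output of the analogical step (1)-(3).
example (DeathBad FearRational : Prop)
    (h3 : ¬DeathBad)                      -- (3): death is not bad for us
    (h4 : ¬DeathBad → ¬FearRational)      -- (4): if not bad, fear is irrational
    : ¬FearRational :=                    -- (5): fear of death is irrational
  h4 h3
```

The sketch makes vivid where the argument is vulnerable: the inference from h3 and h4 to the conclusion is trivially valid, so all the philosophical action lies in whether h3 (delivered only by analogy) and the bridging principle h4 are true.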

This shifts most of the focus back to the first step of the argument. As we can see, that step is strictly analogical in nature. As a result, that part of the argument is not deductively valid: the conclusion does not actually follow from the premises as a matter of deductive certainty. This is because, following Douglas Walton, I view analogical arguments as presumptive arguments. These are arguments that lead to defeasible conclusions that can be set aside under certain conditions. Those conditions relate to: (a) the truth of the claim made about the first relevant case; and (b) the strength of the similarity between the two relevant cases: death and PNNB. If the claim made about PNNB is not true, or if the two cases are not similar “in all important respects”, then the conclusion does not follow.

The first possibility — i.e. that PNNB is, contra Lucretius, actually bad for us — will resurface later in the discussion. In the meantime, the second possibility — that death and PNNB differ in certain crucial respects — will monopolise our attention.

2. The Simple Disanalogy and the Parfittian Bias
An obvious difference between PNNB and death is this: our existence precedes our death, but not our PNNB. That much seems banal, even platitudinous in its obviousness, and by itself it does not seem like it could defeat the Lucretian argument. One imagines Lucretius replying with a confident “So what? That’s not a significant difference.”

But maybe he’s wrong. Maybe this difference is significant. Consider the following thought experiment from Derek Parfit:

Amnesia Operation: You are in hospital for a serious and extremely painful operation. The operation cannot be performed with anaesthetic, and the pain is so severe that doctors have decided to administer powerful memory-erasing drugs to everyone who undergoes it, believing that this is the only way for people to overcome the trauma. The medication is administered before the operation and essentially blocks out your memory for several hours on either side of the operation. You awake in your hospital bed, unable to remember what has happened. You ask the nurse on duty, but she can only tell you that either (a) you just woke up from the operation or (b) you are due to undergo the operation in the next half hour.
Query: Which possibility do you prefer?

Parfit reckons that most people would prefer possibility (a). This suggests that we have a certain “Con Bias Toward the Future”. In other words, we prefer not to have negative experiences lying in our future.

Now, Parfit has his own reasons for discussing this case, and they will feature in part two, but for the time being what’s important is how two other philosophers — Brueckner and Fischer — have used this idea to respond to the Lucretian symmetry argument. Taking Parfit’s notion onboard, B&F argue that not only do we have a “Con” bias toward the future, we also have a “Pro” bias toward the future. In other words, given the option, we prefer to have positive experiences in the future. To see this, simply modify the Parfittian thought experiment:

Pleasure Drug: You are in the hospital to test some new drug. The drug gives you intense feelings of pleasure for approximately one hour. However, soon after you will forget everything. You wake up in your hospital bed and ask the nurse on-duty what your status is. She tells you that you either: (a) took the drug yesterday but have now forgotten; or (b) are due to take the drug later today.
Query: Which possibility do you prefer?

B&F suggest the answer is obvious: you would prefer (b). This, they argue, justifies their belief in the pro-bias toward the future. The gist of the idea is depicted below.

3. The Future Bias and the Fear of Death
These biases are all well and good, and the thought experiments used to uncover them are interesting, but where does it get us? Surely, all they really show is that we are (or, at least, might be) biased toward the future. How can we use that fact to defeat the Lucretian argument?

It takes a bit of work, but we can eventually use it to undermine the analogy Lucretius tried to draw. That process starts with the following argument:

  • (6) We care about our future experiences in a way that we don’t care about our past experiences; more precisely: we prefer to have positive experiences in the future, and negative experiences (if necessary) in the past. 
  • (7) Death deprives us of future positive experiences; PNNB only deprives us of past positive experiences. 
  • (8) Therefore, death deprives us of something we care about, but PNNB does not.

This clearly articulates the difference between the two cases. But it doesn’t quite get us to the rejection of premise (2). Premise (2) talks about death and PNNB being similar “in all important respects”. Before we can conclude that premise (2) is flawed we have to know that the deprivation of something we care about is an important difference between the two cases.

It is at this point that something called the Deprivation Thesis enters the fray. According to the deprivation thesis, something is bad for us if it deprives us of something we would have had but for that thing. It is a counterfactual principle, one that compares our welfare across possible worlds in order to determine whether the actual world is bad for us. If we append the deprivation thesis to the end of the previous argument...

  • (9) If X deprives us of something we care about, then X is bad for us.

…we can reject premise (2). This is depicted in the diagram below.

But is this really a persuasive rebuttal to the Lucretian argument? Is there not something slightly suspicious about appealing to the Deprivation thesis in order to defeat Lucretian symmetry, especially given that it was really this deprivation-based account of the badness of death that Lucretius was trying to undermine? We’ll look at these questions in part two.

Sunday, January 20, 2013

Book Recommendations #8: On Politics by Alan Ryan

Political history fascinates me. And philosophy is clearly one of my passions. Consequently, it should come as no surprise to learn that the history of political philosophy is something I’m interested in. That’s why Alan Ryan’s recently published meisterwerk — On Politics: A History of Political Thought from Herodotus to the Present— is something I heartily recommend. (Aside: Would it really be that surprising if the history of political philosophy was not one of my interests? Maybe not, since the syllogism I offered exhibits the compositional fallacy)

The book is a bit of a monster: it weighs in at just over a thousand pages, and in its hardback form (which is the only form it’s available in for now) it will put a serious dent in your bookshelf. Furthermore, in terms of content it covers an impressive sweep of history: as the subtitle says, “from Herodotus to the Present”. Despite this, it is immensely readable. Ryan is a long-time political philosopher and theorist (currently based in Princeton, I believe) and the book is very much the product of his years of personal reflection, research and teaching on the topic. As a result, there is an intimacy to the book that might so easily have been lost in its epic sweep.

In each chapter, Ryan looks at a particular figure or movement in the history of political thought. He sketches the socio-political backdrop to that figure or movement, and then exposits and critically engages with some of the key ideas and concepts. The format here is similar to that adopted in Russell’s infamous A History of Western Philosophy — and Ryan acknowledges the influence in his introduction — but it is less showy and opinionated than Russell’s work. Don’t get me wrong, I love Bertrand Russell and enjoy his writing immensely, but whereas Russell tends to insert himself too much between the reader and the subject, Ryan tends to stand to one side, guiding you through the mire of political thought and conversation, providing some helpful commentary and observations along the way, but never forcing you towards one particular view.

The book is divided into two volumes. The first deals with the period of history from the Ancient Greeks to the Renaissance (or as Ryan puts it, “from Herodotus to Machiavelli”); the second, which is nearly twice the length of the first, deals with the period from Hobbes to the present day. Generally speaking, I’d be much more interested in the material covered by the second volume, but I found myself surprisingly captivated by the first. I suspect this is because I have recently been reading books (both fictional and non-fictional) dealing with the Holy Roman Empire and the role of the Catholic Church in the political life of Europe, and so it was interesting to read about the intellectual backdrop to some of that in more detail. Nevertheless, I imagine that others would find this pretty interesting stuff too, since that era of history is often neglected in contemporary discussions of political thought.

Now, I’ll be honest and say that I haven’t read the whole thing — it is over a thousand pages long — but I have dipped in and out repeatedly, and read substantial chunks. In so doing, I’ve been relieved to find that the book easily accommodates this a la carte method of reading. But despite this, the book does tell a coherent story about the development of political philosophy, and the changing conceptions and justifications of political structures that have come with it.

One criticism of the book, and an obvious one at that, is its neglect of non-Western sources. Only in the latter stages does it take a truly global perspective. This will no doubt irk some readers, but it hasn’t bothered me too much. Perhaps this is because I know so little about non-Western philosophy, an ignorance that is in no way helped by reading a book like this, but fortunately or unfortunately: ignorance is bliss.

Saturday, January 19, 2013

Is Craig's Defence of the DCT Inconsistent? (Part Two)

(Part One)

This is the second part in a short series of posts looking at Erik Wielenberg’s recent article “An Inconsistency in Craig’s Defence of the Moral Argument”. Unsurprisingly, given its title, the article tries to show that the manner in which William Lane Craig — that most famous and indefatigable of Christian apologists — defends the moral argument for the existence of God leads him to contradict himself.

Part one traced out the various dialectical steps that Craig takes when defending the moral argument (more precisely: when defending his version of the modified DCT). The central premise of Craig’s moral argument holds that (a) only God provides a sound foundation for the existence of objective moral truths; and (b) objective moral truths are in need of a sound foundation. Critics challenge both (a) and (b).

In defending (a), Craig develops his version of the modified DCT. The modified DCT is designed to overcome classic Euthyphro-style objections to the DCT. It holds, contra the Euthyphro, that God’s nature is such that he cannot command that certain acts are permissible (e.g. the torture of the innocent). Thus, according to the modified DCT, there are such things as “N-commands”:

N-Commands: God’s nature is such that there are certain things that he forbids, and certain others that he obliges, in every possible world. That is: there are certain logically necessary moral duties.

In defending (b), Craig rejects the approach of non-theistic, non-natural moral realism (NTNNMR). According to NTNNMR, (at least some) moral truths do not need to be explained or grounded. Rather, they are logically necessary and hence self-explanatory. Craig accuses proponents of NTNNMR of adopting a “shopping list” approach to metaethics, of helping themselves to the moral entities they prefer, and thus he imposes the following condition of success on metaethical theories:

Craig’s Condition: Any approach to metaethics that posits the existence of logically necessary connections must adequately explain those necessities.

Wielenberg’s argument is that these two things — N-commands and Craig’s Condition — lead to a contradiction. Let’s see exactly how this works.

1. Wielenberg’s Argument
Superficially, there’s nothing contradictory about Craig’s commitment to the existence of N-commands and his condition of success for metaethics. The former says only that certain moral duties exist as a matter of logical necessity; the latter says that logically necessary connections must be explained. It’s only if an additional premise is added to the mix — viz. that N-commands are unexplained logical necessities — that a contradiction emerges.

This means that Wielenberg is proposing that the following argument is a good one.

  • (1) Craig’s modified DCT posits the existence of N-commands: divine commands, which define the scope of our obligations, and which flow as a matter of necessity from God’s nature.
  • (2) Any successful metaethical theory must explain posited logical necessities, otherwise it fails.
  • (3) N-commands are unexplained logical necessities.
  • (4) Therefore, Craig’s modified DCT fails.

Clearly, this argument is valid: the conclusion follows from the conjunction of the premises. Furthermore, premises (1) and (2) look to be pretty solid. As outlined above, Craig seems to be committed to them in his defence of the DCT. If he rejects (1), he opens himself up to Euthyphro-style objections that he worked so hard to avoid. Similarly, if he rejects (2), one of the key assumptions of his moral argument is undermined.
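For what it’s worth, the validity claim can be checked mechanically. Here is a minimal propositional sketch in Lean, with premises (1) and (3) taken as hypotheses and premise (2) specialised to the modified DCT; the proposition names are my own labels, not Wielenberg’s:

```lean
-- Propositional regimentation of Wielenberg's argument (labels are mine).
-- PositsN      : the modified DCT posits N-commands          (premise 1)
-- UnexplainedN : N-commands are unexplained logical
--                necessities                                 (premise 3)
-- Fails        : the modified DCT fails                      (conclusion 4)
variable (PositsN UnexplainedN Fails : Prop)

-- h2 is premise (2), restricted to the DCT: a theory that posits
-- N-commands which are unexplained logical necessities fails.
example (h1 : PositsN)
    (h2 : PositsN → UnexplainedN → Fails)
    (h3 : UnexplainedN) : Fails :=
  h2 h1 h3
```

The type-checker confirms the conclusion follows from the hypotheses; all the philosophical action, then, is in whether the hypotheses are actually true.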

So premise (3) is where the action is. And premise (3) is obviously going to be controversial. Certainly, when I first read Wielenberg’s article, I thought to myself “but, of course, Craig will argue that N-commands are explained. He will say that they are explained by the divine nature.” Wielenberg’s task is to show that this response doesn’t work.

2. Are N-commands Unexplained?
The simple answer to Wielenberg’s argument — and the one I suspect Craig would give — is that N-commands are explained. They are, after all, grounded in the divine nature. So something like this will be used to rebut premise (3) of Wielenberg’s argument.

  • (5) N-commands are explained: they are explained by the fact that they flow necessarily from God’s moral nature.

How exactly does this work? Let’s take a command like “Love thy neighbour and do them no harm” (not a quote, I hasten to add). The idea is that this is explained by the fact that God’s nature is good and it consists of the property of lovingkindness. Thus, the goodness of lovingkindness explains the command in question. A similar story can be told about other moral commands such as the command not to torture the innocent. This is explained by the fact that it is contrary to the divine nature, hence bad, and its badness then explains why it is impermissible.

This simple answer is dubious. As Wielenberg points out, and as Craig seems to agree, goodness and badness do not provide sufficient explanations for obligatoriness and impermissibility. In other words, the mere fact that something is good cannot tell us (for sure) whether it is permissible or obligatory. Quoting Craig:

It is good that I become a wealthy philanthropist…; it is also good that I forgo the pursuit of wealth to become a medical missionary to Chad. But obviously I cannot do both, since they are mutually exclusive. I am not, therefore, obligated to do both, though both are good. Goods, then, do not imply moral obligations. (Craig in Is Goodness without God good enough? 2009, p. 172)

Wielenberg argues that something similar is true in the case of the relationship between evil and the impermissible. Because there are situations in which any act one performs is bad (i.e. because there are moral dilemmas), there are situations in which the mere fact that something is bad does not tell us whether it is forbidden.

This seems right on the money to me. Indeed, if one goes back to the originator of the modified DCT — Robert Adams — one finds that this is one of his main reasons for defending the DCT. Adams argues that God’s commands are needed in order to explain the existence of moral duties because without a command from an authoritative being, we cannot tell the difference between an act that is obligatory and one that is supererogatory. I discuss this argument elsewhere, but in essence it holds that there is an explanatory gap between the value status of an act or state of affairs and the deontic status of an act or state of affairs. The former does not entail the latter.

This suggests that N-commands are fundamentally mysterious entities. The fact that God’s nature is essentially good does not by itself explain why certain things are necessarily impermissible or obligatory. Or, at any rate, this is what Wielenberg argues. He provides additional support for this view by appealing to sceptical theism (a position that Craig also endorses). According to sceptical theism, we should doubt our ability to explain and justify the connections and entailments between good, bad, right and wrong. Thus, to the extent that Craig endorses sceptical theism, it seems like he should also accept that N-commands are unexplained logical necessities.

We’ll summarise and create an argument map:

  • (6) God’s moral nature cannot explain N-commands: there is an explanatory gap between goodness/badness and obligatoriness/impermissibility.

I have to say: I’m not entirely convinced. It seems to me that Adams’s explanatory gap may not hold true in all cases. It may be that it only holds true in those cases which involve dilemmatic choices. But are such dilemmatic choices a necessary feature of the moral universe? Do they arise in all cases of N-commands? Obligations such as “Do not torture an innocent child for fun” seem like they would never feature as part of a credible moral dilemma.

Nevertheless, I think there are other problems for Craig. Leaving the issue of obligations to one side, Craig’s theory also posits certain necessary connections between God’s nature and the properties of goodness and badness. So, for example, Craig says that lovingkindness is good because it is one of God’s properties and God is essentially and necessarily good. But this in itself posits a deeply mysterious necessary connection between God’s properties and goodness. This is something I discussed before.

In addition to this, there is a more general problem for Craig. This is that the condition of success he imposes on metaethical theories is absurdly high. We simply cannot explain all logically necessary connections. The reasons were well articulated by Simon Blackburn and I want to close by exploring them.

3. Blackburn’s Dilemma
Simon Blackburn is a well-known Cambridge-based philosopher. Perhaps his most famous contribution to the philosophical world comes in the shape of his quasi-realist, expressivist account of morality. Interesting and all as that account is, I want to focus on another of his contributions to philosophy here.

In one of his papers, Blackburn formulates a dilemma for those wishing to explain the sourcehood of necessity. This has subsequently become known as “Blackburn’s Dilemma”. Here is my paraphrase of the dilemma:

Blackburn’s Dilemma: Either the necessity of a necessary truth is to be explained by a contingent truth or it is to be explained by another necessary truth. If the necessity of a necessary truth is explained by a contingent truth, then that contingent truth could have been otherwise and hence the necessary truth need not have been necessary. Therefore, the necessity of a necessary truth cannot be explained by a contingent truth. But if a necessary truth is explained by another necessary truth, then we have not explained its necessity, we have simply transferred its necessity elsewhere and started off on a regress of necessary truths that need to be explained.
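Schematically, letting $p$ be the necessary truth and $E$ its proposed explanans (the notation is mine, not Blackburn’s), the dilemma can be put as follows:

```latex
E \text{ explains } \Box p \;\Longrightarrow\;
\begin{cases}
\neg \Box E & \text{contingency horn: } E \text{ could have been otherwise,}\\
            & \text{so } p \text{ need not have been necessary;}\\[4pt]
\Box E      & \text{necessity horn: } \Box E \text{ itself now needs explaining,}\\
            & \text{launching a regress.}
\end{cases}
```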

The two horns of the dilemma are illustrated in the diagram below. We call the first the contingency horn and the second the necessity horn.

Blackburn's Dilemma

Now, as it happens, there are certain technical problems with how Blackburn formulated the dilemma, but these need not detain us here since the overall thrust of the dilemma remains intact (for a discussion of the technical problems and how they can be overcome, I recommend this article by Hanks). Furthermore, that overall thrust seems to highlight the main problem with Craig’s Condition.

If Craig really thinks that every logically necessary connection must be explained before there can be a successful metaethics, he is doomed to permanent disappointment. Every time he purports to explain a posited logical necessity he will either have to appeal to a contingent fact (in which case the necessity will not be explained) or to another logical necessity (in which case the problem is pushed back). We see this pattern above: even if God’s goodness did explain the existence of N-commands, the logically necessary connections between God’s nature and the property of goodness would need to be explained. Thus the problem of satisfying Craig’s Condition remains, simply being pushed back one step.

In sum, then, it would seem like Craig’s best bet would be to abandon his strict condition of success. But in so doing NTNNMR becomes a live possibility once more. And so the explanatory battle between NTNNMR and modified DCT must be fought on different turf.

Friday, January 18, 2013

Is Craig's Defence of the DCT Inconsistent? (Part One)

He's smiling now...

Forgive me. I am going to start with a self-indulgent bit of blog history.

I started this blog over three years ago. At the time, I saw it as an outlet. I was completing my PhD and finding that I was reading lots of things that weren’t directly relevant to my research. I found a lot of that stuff interesting and I didn’t want to let it go to waste. So this blog became my information dump: when I read something interesting I would write up a blog post about it so that I would have a permanent record of how I understood the arguments it made, which I could return to at a later date.

Originally, my main extra-curricular interest was in the philosophy of religion and ethics. Consequently, the majority of my early posts tended to cover those topics. Practically none of my early posts covered material related to my own research, mainly because I used this blog to get away from my PhD.

Obviously, things have changed quite a bit since then: my posts have become longer and more complicated (neither of which is necessarily a good thing), and the subject matter has moved away from looking at philosophy of religion toward looking more and more at ethical-legal issues that happen to be the focus of my current research and teaching. Nevertheless, I remain interested in the philosophy of religion, and like to occasionally do blog posts on it.

This post is going to be an example of that continuing interest. In it, I’m going to take a look at a recent paper by Erik Wielenberg entitled “An Inconsistency in Craig’s Defence of the Moral Argument”, which unsurprisingly argues that William Lane Craig’s defence of the modified Divine Command Theory (hereafter “modified DCT”) is beset by a (fatal) inconsistency.

The paper is quite short, and readily available online, but I’m going to try to add some value to it by simplifying its elements, diagramming its main arguments, and offering some commentary of my own. To that end, I’ll spread my discussion over two separate posts. In the remainder of this post, I will do three things. First, I’ll quickly sketch the basic structure of the dialectic between Craig's moral argument and its critics. Second, I'll explore one branch of the dialectic, which challenges the role of God in the explanation of moral facts. And third, I'll consider another branch of the dialectic, which challenges the assumptions underlying Craig's moral argument.

1. The Moral Argument and its Discontents
William Lane Craig, like many others, believes that there are objective moral truths. He believes that there are states of affairs in the world of which it is true to say “that state of affairs is good/bad” and that there are actions in the world of which it is true to say “that action is right/wrong”. What’s more, he believes this without thinking that the truth conditions of either statement are wholly dependent on the subjective states of those who might utter them. To put it more succinctly, he believes that there are objective (mind-independent) moral values and objective moral duties.

He also believes that objective moral truths can only exist if they have some “sound foundation” or explanation. What’s more, he believes that the only possible foundation or explanation for such truths is theistic in nature. This suggests that he is committed to something like the following argument:

  • (1) There are objective moral truths. That is to say: there are objective moral values and objective moral duties. 
  • (2) There is a sound foundation (explanation) for objective moral truths if and only if God exists (i.e. only God provides a sound foundation for objective moral truths). 
  • (3) Therefore, God exists.

Those of you who are familiar with Craig’s work will realise that this is not a perfect replication of the moral argument that he typically presents in his debates and writings. To be precise, Craig doesn’t usually frame the second premise in terms of “sound foundations” or “explanations”; rather, he frames it in terms of the existence of such facts in the first place. In other words, he says such truths cannot exist if God does not exist. But the modest framing that I have adopted above is more appropriate given the dialectic that commonly arises between Craig and his critics.

What is that dialectic? Well, obviously, as with any logical argument of the sort presented above, there are two potential sites of criticism. One could criticise premise (1) and thereby reject the notion of objective moral truths. That would be radical and discomfiting to many, but there are some who take that approach. We won’t, however, be looking at it here. The other option is to criticise premise (2). That’s the one that’s relevant here and we’ll be spending the remainder of the post looking at it.

Significantly, the criticism of premise (2) can follow at least two separate branches (there is at least one more). The first branch — which we shall call the theistic branch — challenges Craig’s contention that God provides a sound foundation for objective morality. The second branch — which we shall call the foundationless branch — challenges the assumption of premise two, namely: that we need to provide a foundation for objective moral truths in the first place. For Craig’s argument to succeed, he has to cut off both of these branches of criticism.

Wielenberg’s claim is that in trying to do so, Craig contradicts himself. Thus, the way in which he prevents the theistic branch of the criticism from taking hold is in direct contradiction to the way in which he prevents the foundationless branch of criticism from taking hold. To see this, we need to carefully trace out the dialectic that takes place along both branches. The remainder of this post tries to do so.

2. The Theistic Branch of the Dialectic
The most direct critique of Craig’s argument takes place along the theistic branch of the dialectic. As a first step, the critic can pose the question to Craig: why think God provides a sound foundation for moral truths? Craig might (if he were extremely naive, which he isn't) duly oblige by saying: through his commands God tells us what is right and wrong, and good and bad. Thus, God's commands provide the foundation we need. This is the unmodified DCT.

The critic will immediately highlight a problem with this. As noted long ago by Plato in his dialogue Euthyphro, making moral truths dependent on God’s commands in this manner seems to render them disturbingly arbitrary. Allow me to explain. Take an objective moral duty of the following sort:

Dutyct: It is morally wrong (read: impermissible) to torture an innocent child for fun.

This would seem to be an uncontroversial example of a moral duty. What’s more, it seems like the kind of moral duty that simply has to hold true, irrespective of the circumstances. In other words, it seems like there could never be a scenario in which it is morally acceptable to torture an innocent child for fun. There is no possible world in which such torture is permissible.

But what if God commanded it? What if he said: you must torture an innocent child for my amusement. If one is a proponent of the unmodified DCT, then it would seem like one is committed to the view that if God commands it, it becomes morally acceptable. So in that case, if God issued that command, the torture of the innocent child would become permissible. The deontic status of an act is suddenly dependent on the arbitrary whim of God.

That seems unpalatable to many — including many theists who accept that God must provide the ultimate foundation for moral truth. So they’ve come up with an escape route: the modified DCT. This was (I believe) originally formulated by Robert M. Adams, but Craig has become a staunch proponent of it in latter days.

The essence of the modified DCT is that moral duties are indeed grounded in God’s commands — thus, Dutyct is true only because God has commanded it — but that there are constraints on what God can command. To be precise, because God is essentially and necessarily good, he could never command (for example) the torture of an innocent child for fun. Thus, certain key moral duties are not dependent on some arbitrary divine whim.

Of course, the upshot of all this is that certain moral truths must hold as a matter of logical necessity. For example, the torture of an innocent child is always and everywhere wrong because God’s nature is such that he forbids it in every possible world; because it is not logically possible for a being with that nature to command otherwise. Following Wielenberg, we call such moral truths “N-Commands”:

N-Commands: God’s nature is such that there are certain things that he forbids, and certain others that he obliges, in every possible world. That is: there are certain logically necessary moral duties under the modified DCT.

This is a neat solution to the arbitrariness objection, but it comes at a cost. Or so Wielenberg will argue. To see what that cost is, we need to proceed to the foundationless branch of the dialectic.

(Note: You may wonder what happens to objective moral values under the modified DCT. Does God still provide the foundation for them? After all, the modified DCT solves the arbitrariness problem by saying that God’s nature is such that he commands certain things as a matter of necessity. But that seems to focus solely on obligations and duties, not on values. As it happens, Craig argues that God’s nature is the grounding for moral values. Thus, God remains the foundation for all moral truths. This grounding of values in the divine nature has problems that I’ve explored before on the blog.)

3. The Foundationless Branch
The second critique of Craig’s argument follows a less direct path, but it highlights a venerable and increasingly popular view among metaethicists.

What is this view? It can be called, for want of a better name, non-theistic non-natural moral realism (NTNNMR). It doesn’t exactly trip off the tongue. I know. But it accurately describes the view, which is something. NTNNMR holds that moral truths do exist, but that they don’t really need an explanation or grounding. They simply are true. Thus, the second premise of Craig’s argument relies on a faulty assumption. Objective moral truths don’t need what Craig claims he can provide.

There is much to be said for this view. That there are certain things that are self-explanatory, self-grounding or brute is widely accepted. After all, few people think that explanations or grounding exercises can continue indefinitely. There must be some stopping points after which it makes no sense to ask for explanations or foundations. Our conception of reality must bottom-out somewhere.

But Craig nevertheless objects to NTNNMR. Why so? Let’s hear from the man himself:

If our approach to metaethical theory is to be serious metaphysics rather than just a “shopping list” approach, whereby one simply helps oneself to the supervenient moral properties…needed to do the job, then some sort of explanation is required for why moral properties supervene on certain natural states. (Craig in Is goodness without God good enough?, p. 180)

So Craig thinks that proponents of NTNNMR aren’t serious metaethicists. Their claim that moral facts need no grounding is a case of special pleading. No serious metaethicists play this game: they all think the supervenience of moral properties on natural properties requires some explanation.

But as Wielenberg points out, this is an odd claim to make. The supervenience relation is one of logical necessity. To say that the moral property of wrongness (call this “M1”) supervenes on the natural states of childhood, torture, innocence and amusement (call these “N1–N4”) is to say that in any possible world in which N1–N4 hold, so too does M1. In other words, it is logically necessary that if N1–N4 are true, so too is M1.
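Using the box operator for logical necessity (the notation is mine, following Wielenberg’s labels), the supervenience claim amounts to:

```latex
\Box\big((N_1 \land N_2 \land N_3 \land N_4) \rightarrow M_1\big)
```

That is: in every possible world in which N1 through N4 obtain, M1 obtains as well. This is exactly the kind of logically necessary connection that Craig insists must be explained.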

The upshot of this is that when Craig complains about the “shopping list” approach to metaethics employed by proponents of NTNNMR, he is complaining about a failure to explain logically necessary connections. This suggests that Craig imposes the following success condition on a metaethical theory:

Craig’s Condition: Any approach to metaethics that posits the existence of logically necessary connections must adequately explain those connections.

Now, Wielenberg exploits the appeal to this condition in his challenge to Craig. We’ll look at that in part two. But I want to close by dwelling on the substance of this condition for a moment.

To me, it is somewhat redolent of the principle of sufficient reason (PSR). The PSR states (in one form) that for every fact F there is a sufficient explanation of that fact. But Craig’s Condition is both less and more extreme than that famous principle. It is less extreme in that it only applies to the explanation of moral facts — a restriction that could be called into question by critics of Craig’s position. But it is also more extreme in that it applies to logically necessary facts. Many would hold that logically necessary facts fall outside the remit of the PSR since they can be self-explanatory. So in calling for logically necessary connections to be explained, Craig is doing something quite extreme. Of course, Craig defends this by claiming that just because a fact is necessary does not mean it cannot be explained. For example, he claims that “2+2=4” is a necessary truth, but nevertheless it is explained by the Peano axioms. This suggests that at least some logically necessary facts can be explained. All that needs to be shown then is that moral facts are among those necessary facts that both have and need an explanation. But can this be shown?

Wielenberg doesn’t answer this question in his short article. That is understandable. He has other fish to fry. Still, I find it quite interesting. And as it happens I have a piece currently under review somewhere that tries to critique Craig (and some other theistic metaethicists) on this very point. That, however, is a topic for another day. For now, I shall conclude this post.

In part two, I’ll outline Wielenberg’s critique in full detail, and consider his defence of it. Stay tuned.