Tuesday, March 18, 2014

Plantinga vs. Peels on Naturalism and Moral Realism (Part Two)

(Part One)

This is the second in a two-part series on Rik Peels’s recent paper “Are naturalism and moral realism compatible?”. The paper challenges Alvin Plantinga’s argument that naturalism and moral realism are incompatible. As I noted in part one, the debate between Plantinga and Peels can be boiled down to four separate theses, two of which are defended by Plantinga and two of which are defended by Peels.

Here are the two defended by Plantinga:

PT1: The most important argument for the compatibility of moral realism and naturalism (Jackson’s argument) fails.
PT2: Given the reasons for the failure of Jackson’s argument, it is unlikely that any defence of the compatibility of naturalism and moral realism could succeed.

And here are the two defended by Peels:

RT1: Jackson’s argument can be repaired so that it avoids the failure condition highlighted by Plantinga.
RT2: There are other ways of arguing for the compatibility of naturalism and moral realism that completely avoid Plantinga’s criticism.

I covered Plantinga’s defence of his two theses in part one. To briefly summarise, Plantinga claimed that Jackson’s attempt to prove the compatibility of naturalism and moral realism rested on the principle that if natural properties and moral properties are necessarily co-exemplified, then it is possible for the two doctrines to be compatible (where “compatible” means that there is at least one possible world in which naturalism and moral realism are both true). Plantinga rejected that principle, holding that Jackson couldn’t rule out the possibility that the necessary co-exemplification of natural and moral properties was attributable to a third factor, namely: God.

In today’s post, I move on to consider Peels’s defence of his two theses, which if true would combine to defeat Plantinga’s theses.

1. Can Jackson’s Argument be Repaired?
Peels maintains that Jackson’s argument can be repaired so that it avoids Plantinga’s objection. To see how he maintains this, we have to go back for a moment to Jackson’s claim about the connection between moral properties and natural properties. In effect, Jackson’s claim about necessary co-exemplification is a claim about a dependency relationship between natural properties and moral properties. But the dependency relationship is of a particular sort. It is an asymmetrical one. An action or state of affairs has the moral properties that it has because it has certain natural properties; it does not have certain natural properties because it has certain moral properties.

To put it another way, a proposition like “It is impermissible to torture children for fun” has the moral property of impermissibility because the act in question would involve inflicting pain on a child for personal amusement (all natural properties); the property of impermissibility could not cause an action to have those natural properties.

Now, Plantinga maintains that this asymmetrical dependency could arise because of, say, God’s command. That’s fair enough. But it is important to appreciate the modal standard that Plantinga is adopting when he makes this counterargument. He is not claiming that it is metaphysically possible for this explanation to obtain. After all, Plantinga thinks that God is a metaphysically necessary being: if He exists, He exists in all possible worlds. Rather, Plantinga is appealing to the epistemic possibility of that explanation. For all we know, it could be true that God acts in that way.

The problem is that, in appealing to this standard of epistemic possibility, Plantinga paves the way for the naturalist to repair Jackson’s argument. For if Plantinga is allowed to appeal to epistemic possibility in proving incompatibility, so too is the naturalist allowed to appeal to epistemic possibility in proving compatibility. What’s good for the theistic goose is good for the naturalist gander. Or something like that.

Peels even goes so far as to suggest one way for the naturalist to argue for the required epistemic possibility. The naturalist could argue that the asymmetrical dependency arises because moral facts supervene on natural facts in a primitivist fashion. He explains this by way of an analogy. Consider the two propositions:

P - Friesland (an area of the Netherlands) has many lakes.
P* - It is true that Friesland has many lakes.

The second of these propositions supervenes, in a primitivist fashion, on the first. But if primitive supervenience of this sort is possible, then it is also possible for moral propositions to supervene, in a primitivist fashion, on naturalistic propositions. As follows:

Q - Act X causes a child great pain for amusement.
Q* - It is wrong that act X causes a child great pain for amusement.

I have to admit, when the analogy is spelled out explicitly (something that Peels does not do in his article) it seems a little less compelling to me, but perhaps I can still accept the point: it is at least epistemically possible for this primitive form of supervenience to account for the dependency relationship highlighted above.

So where does that leave us? Well, it leaves us with the refutation of Plantinga’s original argument. All Plantinga has done is to show us that there is one epistemically possible scenario in which naturalism and moral realism are not compatible. But that’s irrelevant to the naturalist argument. The naturalist can simply point to another scenario (the primitivist scenario) in which it is epistemically possible for them to be compatible.

2. Is there another way to prove compatibility?
Peels’s second thesis claims that Plantinga neglects another way in which the naturalist can prove the compatibility of naturalism and moral realism. As you may recall from part one (if you read it), Plantinga works with the following method for proving the compatibility of any two propositions:

Standard Compatibility Proof: For any pair of propositions p and q, if you can show that there is at least one possible world in which both p and q are true, then you have proven the compatibility of p and q.

But far from this being the “standard” method for proving compatibility, Peels argues that there is another method that is far more commonly used. That method is as follows:

Alternative Compatibility Proof: For any pair of propositions p and q, if you can show that p does not entail the falsity of q, and that q does not entail the falsity of p, then you have proven the compatibility of p and q.

This method can be illustrated using Peels’s own example of claims about people from Friesland (I’m guessing he lives there or has some connection with the region, or else that people from Friesland are the butt of many Dutch jokes and hence this is a highly amusing aside within an otherwise quite dry and technical philosophy paper). Consider these two claims:

P - Sven is Friesian.
Q - Sven likes ice-skating.

These two claims are compatible because there is nothing about the first that entails the falsity of the second, and nothing about the second that entails the falsity of the first.

So how does this help the naturalist? It allows the naturalist to argue in a new fashion, viz. that there is nothing about the truth of naturalism that entails the falsity of moral realism, and vice versa. Now, that’s obviously a highly controversial claim — one that theists might be keen to deny — so it’s important to explain how it works in this debate.

The first thing is to deal with a fairly standard critique. This would be the argument that, because naturalism requires the falsity of theism (see the definition in part one), and because God is a metaphysically necessary being, any claims about what naturalism entails or does not entail are going to be trivially true. This follows from the infamous Stalnaker-Lewis analysis of counterfactual claims with impossible antecedents (so-called “counterpossibles”). According to Stalnaker-Lewis, all counterpossibles are trivially true.

Peels responds to this by simply rejecting the Stalnaker-Lewis analysis. I think this is fair. Indeed, I’ve encountered many papers recently that contest that analysis. I won’t summarise Peels’s reasons for rejecting the analysis here. Instead, I’ll just highlight the obvious point that theists themselves don’t respect the Stalnaker-Lewis analysis. After all, despite their commitment to the metaphysical necessity of God, they often maintain that counterfactual claims about what would be entailed by the non-existence of God are non-trivially true. Consider two examples:

CF1 - If God does not exist, then moral values do not exist.
CF2 - If God does not exist, then life has no meaning.

Theists usually claim that these are significant truths about the nature of an atheistic universe, not trivial ones.

But that points to another problem with Peels’s approach to compatibility: the fact that theists will commonly argue that naturalism does, in fact, entail the falsity of moral realism. Theists can defend this by citing a number of reasons, e.g. by claiming that moral facts are “queer” (to use Mackie’s term) and don’t fit within the naturalist picture. That may all be true, but it really only gets us into the more general debate about theistic and naturalistic metaethics. And just as there are arguments for saying that naturalism and moral realism are not compatible, so too are there many arguments for saying that they are, and that it is really theism and moral realism that are incompatible.

This debate will, no doubt, cycle on. Peels’s only point is that, as things currently stand, naturalists have no decisive defeater for their belief in compatibility. Until they are presented with one they can maintain their commitment to both doctrines. That’s a pretty modest, and deflationary, conclusion, but that’s all that’s needed to suggest that Plantinga’s ambitious attempt to prove the general incompatibility of naturalism and moral realism is misguided.

Wednesday, March 12, 2014

Plantinga vs Peels on Naturalism and Moral Realism (Part One)

Naturalism is a core commitment for many contemporary philosophers. Moral realism is a belief shared by most moral philosophers. Are the two theories compatible? Alvin Plantinga has argued that they are not. Plantinga bases his argument on a close analysis of Frank Jackson’s attempt to defend the compatibility of the two doctrines, which he views as the best currently available defence. In his critique of Jackson, Plantinga makes the case for the following two theses (PT = ‘Plantinga’s Thesis’):

PT1: The most important argument for the compatibility of naturalism and moral realism (Jackson’s argument) fails.
PT2: Given the reasons for the failure of Jackson’s argument, it is unlikely that any defence of the compatibility of naturalism and moral realism could succeed.

Rik Peels has recently written a response to Plantinga in which he defends two contrasting theses. They are (RT = ‘Rik’s Thesis’):

RT1: Jackson’s argument can be repaired so that it avoids the failure condition highlighted by Plantinga.
RT2: There are other ways of arguing for the compatibility of naturalism and moral realism that completely avoid Plantinga’s criticism.

The goal of this series of blog posts is to analyse the debate between Peels and Plantinga. I start today by looking at Plantinga’s side of it. I should say at the outset that, although I read Plantinga’s paper when it came out a couple of years ago, this particular presentation of his argument is filtered through the lens of Peels’s summary. There's a lot more in Plantinga's paper beyond this particular argument, as I recall.

1. Background and Some General Methodological Worries
To fully understand Plantinga’s argument we need to be clear about what we mean by “naturalism” and “moral realism”. Sometimes philosophers have a pretty ‘thick’ sense of what naturalism means, often equating it with a kind of scientific reductivism or physicalism. Plantinga doesn’t understand it like that. He defines naturalism as the view that neither God nor anything like God exists. In other words, he defines it in opposition to theistic and other supernaturalistic views. I like this minimalistic approach as it means one can still be a naturalist and accept the existence of certain non-reducible, non-physicalistic entities (e.g. numbers or abstract properties).

Moral realism then has two aspects to it. First, it is the view that moral propositions are capable of bearing a truth value, i.e. it makes sense to say that propositions of the sort “X is wrong” or “X is good” are true or false (and not mere matters of opinion). Second, it is the view that moral facts are mind independent, i.e. that the truth or falsity of propositions like “X is wrong” or “X is good” is not dependent on the beliefs or desires of any particular individual or group. I should clarify that this does not mean that moral propositions can never be defined by reference to mental states. The capacity to suffer or to feel pleasure could be essential to the truth or falsity of particular moral claims. It’s just that the truth or falsity of those propositions holds irrespective of the opinions of others.

Having defined these theories, Plantinga presents a general objection (or “worry”) for any purported defence of the compatibility of naturalism and moral realism. This general objection works from what Plantinga deems to be the standard method for proving the compatibility of two claims:

Standard Compatibility Proof: For any pair of propositions p and q, if you can show that there is at least one possible world in which both p and q are true, then you have proven the compatibility of p and q.

You might think that this sets the bar pretty low for any purported defence of the compatibility of naturalism and moral realism. After all, it’s pretty easy to imagine possible worlds with all sorts of strange combinations of truths. “Not so” says Plantinga. Given our definition of naturalism, and given certain orthodox conceptions of God, the bar has actually been set quite high.

The problem is this: According to the Anselmian definition, God is a metaphysically necessary being. If He exists at all, then He must exist necessarily. Hence, if He exists, there can be no possible world in which He fails to exist. But the compatibility of naturalism and moral realism demands that at least one such possible world exists. And so this implies that any successful defence of the compatibility of those two theses would also have to be a successful proof of the non-existence of God. Plantinga thinks that this is unlikely.

To put it more succinctly, Plantinga holds to the following argument:

  • (1) God, if he exists at all, is a metaphysically necessary being: He cannot fail to exist in any possible world.
  • (2) Naturalism is the view that neither God, nor anything like God exists.
  • (3) Any successful proof of the compatibility of naturalism and moral realism must show that there is at least one possible world in which both theories are true.
  • (4) Therefore, any successful proof of the compatibility of naturalism and moral realism must show that there is at least one possible world in which God does not exist (from 2 and 3).
  • (5) Therefore, any successful proof of the compatibility of naturalism and moral realism must be a proof of the non-existence of God (from 1 and 4)

One can see where Plantinga is going with this: it is, indeed, unlikely that any successful proof of compatibility could also convince us of the non-existence of God. But, at the same time, this methodological point seems dangerous. For one could easily flip it on its head: anyone who thinks there is at least one possible world in which moral realism is true and God does not exist would, by the same reasoning, have to disclaim the Anselmian view. This seems to imply that someone like Richard Swinburne (who accepts that certain moral truths do not rely on the existence of God) is not a theist. Surely that is an equally bizarre result?

2. Plantinga’s Critique of Jackson’s Argument
Leaving that general point to one side, we turn to Plantinga’s critique of Jackson’s attempted compatibility proof. Jackson’s naturalistic moral theory is complex, but his compatibility proof is pretty simple. He tries to show that there are some necessary moral truths (i.e. moral propositions that could not fail to be true) that can be cashed out in terms of purely naturalistic properties. That is to say, he tries to show that the moral properties in certain moral propositions hold solely in virtue of the exemplification of certain naturalistic properties.

Some examples might be in order. Take a proposition like “taking another person’s private property without consent is morally wrong”. This proposition is (plausibly) necessarily true. It also looks like the moral property contained in the proposition (the “wrongness” of the act) is wholly dependent upon the exemplification of certain natural properties (like “property”, “absence of consent” and “taking”). Another example would be the proposition “torturing innocent children for fun is wrong”. This is also (plausibly) necessarily true, and the wrongness of the act depends upon the exemplification of natural properties like “childhood”, “innocence”, “extreme pain” and “funniness”.

Jackson’s point is a familiar one. He is claiming that in some cases, moral properties like “wrongness” strongly supervene on natural properties: changes in the former always entail changes in the latter. Or, to put it yet another way, moral properties are necessarily co-exemplified with certain natural properties.

But how does that give us a compatibility proof? Simple: it shows us that everything that is needed in order for a certain moral proposition to be true, can exist, without referring to, entailing or requiring the existence of God. So, in other words, everything that accounts for the wrongness of torturing innocent children is accounted for by the natural properties exemplified in an act of torturing innocent children. We have no need of the God hypothesis. Consequently, there are possible worlds in which moral propositions are true and in which God does not exist.

This all works with either a “sparse” or “abundant” take on the relationship between moral properties and natural properties:

Sparse View: Moral properties and natural properties are identical, i.e. moral properties simply reduce to natural properties.
Abundant View: Moral properties and natural properties are necessarily co-exemplified, but not necessarily one and the same thing (ontologically speaking).

So one need not be a reductivist in order to embrace Jackson’s argument.

Despite this, Plantinga holds that Jackson’s argument is flawed. This is because Jackson’s argument relies on the following principle:

Jackson’s Principle: If (some) natural properties and moral properties are necessarily co-exemplified, then naturalism and moral realism are compatible.

This principle is false. As Plantinga sees it, the necessary co-exemplification of natural and moral properties does not entail the compatibility of naturalism and moral realism. This is because there could be a third factor (namely: God) which explains why those properties are necessarily co-exemplified. Take the divine command theory of morality. According to this theory, X is right or wrong solely in virtue of whether it is approved by God. Plantinga argues that God’s approval may always be tied to the exemplification of certain natural properties. Hence, necessary co-exemplification could arise without it following that God does not exist.

To use an example, it could be that the injunction against torturing innocent children for fun is dependent on God’s disapproval and that God disapproves of the act because it exemplifies certain natural properties. Thus it could be, for all we know, that the necessary co-exemplification of natural and moral properties is explained by God’s commands. Plantinga’s point is made: necessary co-exemplification does not entail compatibility. (This raises all sorts of Euthyphro-style questions, which, unfortunately, we’ll have to set aside for the time being).

That gives us a defence of PT1. What about PT2? It doesn’t take too much work to get there as well. All we have to do is point out that any purported defence of the compatibility of naturalism and moral realism (i.e. any attempt to identify a possible world in which both doctrines are true) is vulnerable to this “hidden third factor”-style of explanation. That hidden third factor could always be God.

Okay, that brings us to the end of part one. As we have seen, Plantinga thinks that it is very difficult to prove the compatibility of naturalism and moral realism, and that the best current attempt (Jackson’s) clearly fails for reasons that are likely to apply to all other attempted compatibility proofs. In part two, we’ll look at Rik Peels’s response to Plantinga.

Sunday, March 9, 2014

New Paper: Sex Work and Technological Unemployment

Roxxy - The world's most sophisticated sex robot?

I have a new paper out. This one is in the Journal of Evolution and Technology and asks the question: are sex workers (specifically prostitutes) vulnerable to technological unemployment? I look at the arguments for and against, and then consider some social policy implications. Here are the full details:

Title: Sex Work, Technological Unemployment and the Basic Income Guarantee (Official; Academia.edu; Philpapers.org)
Abstract: Is sex work (specifically, prostitution) vulnerable to technological unemployment? Several authors have argued that it is. They claim that the advent of sophisticated sexual robots will lead to the displacement of human prostitutes, just as, say, the advent of sophisticated manufacturing robots have displaced many traditional forms of factory labour. But are they right? In this article, I critically assess the argument that has been made in favour of this displacement hypothesis. Although I grant the argument a degree of credibility, I argue that the opposing hypothesis -- that prostitution will be resilient to technological unemployment -- is also worth considering. Indeed, I argue that increasing levels of technological unemployment in other fields may well drive more people into the sex work industry. Furthermore, I argue that no matter which hypothesis you prefer -- displacement or resilience -- you can make a good argument for the necessity of a basic income guarantee, either as an obvious way to correct for the precarity of sex work, or as a way to disincentivise those who may be drawn to prostitution.

This is very much a first pass at answering the question. More work undoubtedly needs to be done. But I think it is an interesting question nonetheless.

Saturday, March 8, 2014

William Lane Craig and the Argument from Successive Addition

I’ve been on a bit of a roll with William Lane Craig-related blog posts recently. I thought I might continue the theme by addressing another one of his arguments today. So far I’ve just been looking at various aspects of his moral argument, but now I want to switch focus and look at part of his defence of the Kalam Cosmological Argument (KCA).

Those who are familiar with the KCA will know that its second premise reads as follows:

  • (KCA2) The universe began to exist.

They will also know that Craig supports this premise of the argument with four separate sub-arguments, two of which are “in principle” arguments, based on the concept of an actual infinite, and the other two of which are “in fact” arguments, based on existing scientific theories. Although it is often great fun to debate these scientific theories, they are really only a sideshow. Craig himself acknowledges that the primary warrant for premise (2) comes from the “in principle” arguments.

Those two arguments have different targets. The first one targets the general possibility of an existent actual infinite (i.e. it says that an actual infinite cannot exist). The second targets the possibility of an actual infinite being formed by successive addition. I’ve looked at the first argument before, when discussing Hedrick’s critique of the Hilbert’s Hotel Argument. In this post, I want to look at the second argument. This argument really only kicks in if the first argument fails. If an actual infinite cannot exist at all, then it certainly cannot exist through successive addition. The consensus seems to be that this is a good thing for Craig’s critics, since the second argument is the weaker of the two.

Anyway, my discussion of the successive addition argument will be broken down into three parts. First, I’ll look at Craig’s argument itself and present it in somewhat formal terms. Second, I’ll outline two initial criticisms of the argument. Third, I’ll take a longer look at one of the analogies Craig uses to support the argument (the reverse-countdown analogy).

Nothing I say here is original. I’m going to be basing most of this off the work of Wes Morriston, drawing in particular on these two articles.

1. Craig’s Successive Addition Argument
A firm grasp of the concept of an “actual infinite” is crucial to understanding Craig’s argument. An actual infinite is best defined in terms of set theory. To give a succinct definition: a set can be said to contain an actually infinite number of members if the set is equivalent to a proper subset of itself, where “equivalency” is understood in terms of the ability to put members of respective sets into a one-to-one correspondence with one another.

This probably sounds terribly obscure, but it makes sense if you use an example. Take the set of all natural numbers (0, 1, 2, 3…). This set is an actual infinite because it is possible to put all the members of that set into a one-to-one correspondence with a proper subset of the natural numbers (e.g. all the even numbers). As follows:

(0, 1, 2, 3, 4, 5, 6….)
(0, 2, 4, 6, 8, 10, 12….)

The point is that an actual infinite is a complete set with an actual infinite number of members. This is to be contrasted with a potential infinite, which is simply a set that is constantly growing without limit.
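The one-to-one correspondence between the naturals and the evens can be sketched in a few lines of code. This is purely my own illustration (the function name is mine, and a program can of course only inspect a finite initial segment of each set): it pairs each natural number with its double, showing how the naturals map one-to-one onto a proper subset of themselves — the defining mark of an actually infinite set.

```python
# Pair each natural number n with the even number 2n.
# The mapping is one-to-one: distinct naturals go to distinct evens,
# and every even number 2n is hit by exactly one natural, namely n.
def pairing(n: int) -> int:
    return 2 * n

# We can only ever inspect a finite initial segment of each set,
# but on any such segment the correspondence is exact.
naturals = list(range(7))               # 0, 1, 2, 3, 4, 5, 6
evens = [pairing(n) for n in naturals]  # 0, 2, 4, 6, 8, 10, 12

# No natural is left unpaired, and no even is paired twice — yet the
# evens are a proper subset of the naturals.
print(list(zip(naturals, evens)))
```

Nothing like this is possible for a finite set: removing members from a finite set always leaves a smaller set, which is why the equivalence-with-a-proper-subset test picks out exactly the actually infinite ones.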

Craig’s claim is that if the universe never began to exist then it must contain an actually infinite number of past events. Now, he would like to say that an actual infinite number of past events cannot exist at all, but if he can’t say that he would like to make the narrower claim that an actual infinite number of events cannot be formed through successive addition. This is exactly what has to happen if the universe is infinitely extended into the past. Each past event that occurs becomes a member of a set, and each subsequent event gets added to this set, resulting in a set that must have an actual infinite number of members.

The problem, according to Craig, is that this could never happen. You cannot form an actual infinite by adding members to a set like this. Consider an example. Take the set of years since 1914 (I steal this from Morriston). The set currently contains 100 members. Next year it will contain 101. The year after that it will contain 102. And so on. Suppose this sequence of adding years continues forever: will the resultant set ever end up containing an actual infinite? No; no matter how long this goes on, it will only ever contain a large but finite number of members. The same thing is true of the set of past events.
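Morriston’s years-since-1914 example can be mimicked directly. The toy loop below is my own illustration (not Craig’s or Morriston’s): it adds one member at a time and checks that, at every step, the collection’s size is an ordinary finite number — successive addition never “completes” an actual infinite, it only produces ever-larger finite sets.

```python
# Start with the set of years since 1914 (100 members as of 2014)
# and keep adding one year at a time, as in Morriston's example.
years = set(range(1914, 2014))  # 100 members

for next_year in range(2014, 2014 + 1000):
    years.add(next_year)
    # At every step the collection is finite: its size is just an
    # ordinary integer, one greater than at the previous step.
    assert len(years) == (next_year - 1914) + 1

print(len(years))  # 1100 — large, but still finite
```

However far the loop is extended, the same assertion holds, which is just Craig’s point about the year-counting example; the dispute in the next section is over whether this point transfers to a series that never had a first member.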

To summarise, Craig’s argument is the following:

  • (1) A temporal series of past events is a collection formed by successive addition.
  • (2) A collection formed by successive addition cannot be actually infinite.
  • (3) Therefore, the temporal series of past events cannot be actually infinite (from 1 and 2).
  • (4) If the universe never began to exist, then the temporal series of past events would have to be an actual infinite.
  • (5) Therefore, the universe must have begun to exist (from 3 and 4).

And this, of course, is just the second premise of the KCA.

Is this argument any good? Let’s see.

2. Does the argument beg the question?
The key to this argument is the first step (from 1 & 2 to 3). I just added in the second step to show how you get from there back to the KCA. It is possible to critique both of the premises of this first step in the argument. For instance, premise (1) relies on an A-theory of time, which is contestable. Nevertheless, we’ll ignore objections to premise (1) here and focus instead on premise (2).

The simplest objection to premise (2) is that it begs the question against the defender of the beginningless universe. Look back to the analogy we used to support it. Starting with a first member, it does indeed seem to be true that you could never reach an actual infinite number of members, but that’s only if we assume that we have to start with a first member. In other words, the analogy is only compelling if we assume that the sequence had a beginning. But that’s exactly what is in dispute when it comes to the history of the universe.

As I say, this is the simplest objection to premise (2). It’s no surprise then to learn that Craig is aware of it and tries to evade it in various ways. I’m not sure he ever succeeds in doing so, but let’s explore a couple of his methods of evasion now. The first method of evasion forces us to re-orient our perspective on the problem. Instead of (erroneously) imagining a sequence with a first member and working forwards from there to the present, we are asked to imagine working backwards from the present. Doing so, we see that in order for the present event to occur, so too must the event prior to that, and then the event prior to that, and then the event prior to that, and so on ad infinitum.

The problem? Well, if every event requires a prior event in order to come into existence, and if the sequence of events extends forever in the reverse-temporal direction, it seems like the present could never have arrived. As Craig puts it:

Before the present could occur, then the event immediately prior to it would have to occur; and before that event could occur, the event immediately prior to it would have to occur; and so on ad infinitum. So one gets driven back and back into the infinite past, making it impossible for any event to occur. Thus, if the series of past events were beginningless, the present event could not have occurred, which is absurd. 
(Reasonable Faith, 122)

This is an odd argument. It is really a claim about the impossibility of an infinite causal sequence (which turns it into a Thomist cosmological argument). But even then there doesn’t seem to be a serious objection to the notion of a beginningless past. It may be true that every event needs a cause, but with an infinite past every event does have a cause. Causation is perfectly well-defined at every stage in the sequence; it just happens that the whole sequence itself does not have an external cause of its existence. But if you claim that it must have an external cause, you’re getting into a different argument.

Morriston puts the point rather nicely. He argues that what Craig is doing here is confusing a claim about our inability to trace back an actual infinite sequence of events, with a claim about the impossibility of an infinite sequence of events. But that’s like claiming that there cannot be an actual infinite number of natural numbers simply because we cannot count them all. The latter does not imply the former.

3. The Reverse Countdown Analogy
The other strategy Craig uses to defend premise (2) is something I am here calling the “reverse countdown analogy”. This is a thought experiment that Craig presents in virtually all his debates and scholarly writings. I’ll leave him to explain it:

…suppose we meet a man who claims to have been counting from eternity, and now he is finishing: −5, −4, −3, −2, −1, 0. Now this is impossible. For, we may ask, why didn’t he finish counting yesterday or the day before or the year before? By then an infinity of time had already elapsed, so that he should have finished. The fact is, we could never find anyone completing such a task because at any previous point he would have already finished. 
(Philosophical and Scientific Pointers to Creation Ex Nihilo, p. 189-90)

It’s important to realise that this analogy is being used to support the same basic point as the previous argument, namely: that if the past were infinite then the present could never have arrived. Nevertheless, this analogy is harder to deal with than the previous argument.

As Morriston notes, what Craig is really doing here is considering two separate historical series: (a) the series of past times (TS); and (b) the series of past counting events (ES). The man we are asked to imagine is enumerating the members of the series of counting events (E-n…E0), but he is doing so while overlaid on the series of past times (T-n…T0). Craig is then asking us:

Craig’s Question: Why, if this man has been counting down from eternity, does he reach E0 at T0 and not at T-1 or T+100 or whatever Tn we care to imagine?

This seems to commit Craig to the following argument (this is Morriston’s formulation):

  • (6) If a beginningless count is possible, then there must be some reason why the whole series of counting events is located at the series of temporal locations that terminates in the present (i.e. there must be some answer to Craig’s Question).
  • (7) No such reason/answer can be given.
  • (8) Therefore, a beginningless count ending in zero is not possible (and hence the present moment T0 can never arrive).

There are a few problems with this argument and the analogy that is used to back it up. For one thing, the original thought experiment could be said to confuse the concept of counting an infinite number of negative numbers with counting all the negative numbers up to zero. But leave that aside. A bigger problem is with premise (6), which seems to demand that reasons be given for any coincidence of this sort.

As Morriston sees it, to demand such reasons is to fall back on the much-contested principle of sufficient reason, i.e. on the belief that everything must have a reason for its existence. But this seems an extravagant demand, particularly when it comes to explaining coincidences between our measures of time and past events.

To understand this point we must ignore some features of Craig’s thought experiment and realise that the counting man is a distraction. What Craig is really concerned with is why the set of all past events (Morriston calls it the set of macro-events), not just the set of counting events, would terminate at T0. But this concern assumes that the flow of all past events is distinguishable from time — and hence that there is some reason for the coincidence between the two sequences. That doesn’t seem right, even on an A-theory of time: the flow of events surely just is the passage of time? They are the same thing. If that’s right, then Craig’s demand for some explanation of the coincidence gets tangled up in questions about what explains our metrics of time. And there won’t be any interesting answers to questions of that sort. Consider the question: “Why does the series of times end at this time rather than at some other time?” The answer will simply be: because that’s how we have chosen to measure time. Nothing more definitive can be said.

Morriston uses an analogy to underscore this observation:

Suppose that we have a bolt of cloth, and a measuring stick, calibrated in inches, that we want to use to measure a ten inch swatch of cloth. Obviously, we can line up the end of the cloth with the end of the measuring stick, or we can line it up with the one inch marker on the measuring stick, or with the two inch marker, and so on. It’s completely arbitrary which we decide to do. As long as we can do simple subtraction, we’ll have no trouble measuring out a ten inch swatch of cloth. Now suppose someone asks, “Why is the edge of the stick lined up with the end of the cloth? Why not the one inch mark?” This is hardly a question that “cries out” for a “sufficient reason” type answer. 
(Must metaphysical time have a beginning? 2003, p. 294)
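Morriston’s point about arbitrariness can be made concrete with a toy sketch (my illustration, not his): wherever we line the cloth up on the measuring stick, simple subtraction recovers the same length, so the particular alignment calls for no “sufficient reason” type answer.

```python
def swatch_length(start_mark, end_mark):
    # Wherever the cloth is lined up on the stick, simple
    # subtraction recovers its length.
    return end_mark - start_mark

# The alignment is arbitrary: start at the 0-inch, 1-inch or 7-inch mark,
# and the ten-inch swatch measures ten inches either way.
assert swatch_length(0, 10) == 10
assert swatch_length(1, 11) == 10
assert swatch_length(7, 17) == 10
```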

The ultimate point is this: Craig tries to put the burden of proof on the defender of a beginningless countdown to explain the coincidence between ES and TS, but there is no reason to think that such a coincidence demands a reasoned explanation, particularly if those series are the same thing.

Let me close with one final observation, this time from Keith Yandell. Recall how Craig is trying to show that there is something absurd or contradictory inherent in the notion that the past had no beginning or that the present moment has arrived from a beginningless past. Yandell suggests that this is not the case:

[T]o say that the universe is beginningless is to say that, for any past time T, the universe existed at T, and at T-1 as well. For any such time T you mention, there is a finite distance between T and now. So the universe could have chugged along from T until now. There is hence no past time such that it is impossible for the universe to chug along from that time until now. What the idea of the universe being beginningless entails is that, for any past time T, the universe actually has chugged along from it until now. Since that is not impossible, it is not impossible that the universe is beginningless. What, exactly, in Craig’s argument shows that this line of reasoning is inconsistent [or absurd]? 
(Does God Exist? The Craig-Flew Debate, p. 106)

“Nothing” is the answer.
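Yandell’s point is essentially arithmetical, and a toy sketch (my illustration, not Yandell’s) makes it vivid. Model past times as negative integers: for any past time you pick, the number of steps from it to the present is finite, so there is no past time from which the universe could not have chugged along to now.

```python
def steps_to_now(t):
    # Number of unit steps from past time t (a negative integer) to now (time 0).
    return -t

# Pick any past time, however remote: its distance to the present is finite.
for t in (-1, -1000, -10**9):
    assert steps_to_now(t) == abs(t)

# Beginninglessness means there is no earliest t,
# not that some t is infinitely far from now.
```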

Friday, March 7, 2014

Are there problems with the metaphor of mind-uploading?

The dream of one day being able to upload your mind to a computer or other artificial device is shared by many transhumanists. Central to the dream is something we can call the “transference thesis”:

Transference Thesis: It will be possible (somehow) to transfer one’s mind from its current home in the human brain to an artificial (potentially immortal) substrate such as a digital computer.

How plausible is the transference thesis? In his article, “Why Uploading will not Work, or the Ghosts Haunting Transhumanism”, Patrick Hopkins argues that it isn’t. This is because proponents of mind-uploading have duped themselves with a combination of bad metaphors and bad metaphysics. I want to assess some of his arguments over a series of posts.

I start, today, by looking at his argument from metaphor. This argument claims that the language of transference, beloved by so many, creates a misleading framework in which to debate the issue of mind-uploading. To be more precise, the argument claims that the metaphor is inconsistent with certain other transhumanist beliefs, and tricks us into thinking that mind-uploading will be easier than it is likely to be.

I analyse Hopkins’s argument in three stages. First, I look at the importance of metaphors in human thought, and the role they play in the mind-uploading debate. Second, I try to reconstruct Hopkins’s argument. And third, I try to evaluate this argument, and show how it ultimately forces us to confront some of the tricky metaphysical issues at the heart of the uploading debate.

1. The Importance of Metaphors and the Language of Uploading
Hopkins’s argument starts from the belief that metaphors are central to human thought. In this regard he is in good company. Cognitive psychologists have long held that metaphors create frameworks that shape how we think about particular issues and, more importantly for present purposes, how appealing we find particular arguments and worldviews. George Lakoff is perhaps the most ardent supporter of this take on metaphors. Indeed, Lakoff has dedicated a good portion of his adult life to examining the metaphors in political debate, even going so far as to suggest that the American left is losing ground to the right because of their inability to frame their position with appealing metaphors (such as the Right’s metaphor of the state as a family).

I’m not sure about the merits of Lakoff’s take on contemporary politics, but the larger point about the role of metaphors seems pretty robust. Indeed, we often don’t notice how pervasive metaphors really are in everyday thought patterns. There are many compelling illustrations of this (Pinker’s book The Stuff of Thought has some good examples). Hopkins uses the simple example of metaphors about the strengths and weaknesses of arguments. We talk about arguments having “holes” in them, being “flimsy” or “weak”, or about criticisms being “on target”. These are all physical and spatial metaphors, adapted no doubt from our experience of the world, and used to help us think about abstractions like arguments.

Hopkins’s claim is that the metaphor of “transference” plays a similarly pervasive role in the mind-uploading debate. He defends this claim by reference to several transhumanist discussions of the possibility of uploading, each of which relies heavily on the transference metaphor. I won’t go through the full list here (he uses seven examples), but the following two are representative (sources can be found in Hopkins's article):

“Uploading is the transfer of the brain’s mind pattern onto a different substrate (such as an advanced computer) which better facilitates said entity’s ends.” (Kadmon 2003)
“Mind uploading, sometimes called whole brain emulation, refers to the hypothetical transfer of a human mind to a substrate different from a biological brain, such as a detailed computer simulation of an individual brain.” (Sentient Developments 2009)

If we accept that the transference metaphor is indeed central to the transhumanist view of mind-uploading, we can proceed to consider Hopkins’s main argument.

2. The Argument from Metaphor
That argument starts with the ostensible goal of demonstrating that the transference metaphor is on a collision course with another core philosophical commitment shared by most transhumanists, namely: the commitment to physicalist/materialist theories of the mind. Although that’s where it starts, the argument also develops a secondary strand about how the metaphor might lead people to think that uploading is easier than they might otherwise be inclined to think. Let’s go through both phases of the argument now.

The first phase starts with the observation that transhumanists tend to be committed to physicalist/materialist theories of mind, and consequently that they tend to reject dualistic theories of mind. According to the former set of theories, the mind is constituted by the brain in some yet-to-be-worked out manner (functionalism, identity theory, etc.). According to the latter, the mind and brain are two distinct substances. The problem, for Hopkins, is that the transference thesis makes use of a dualistic conception of the mind.

Think about it. In order to transfer something from one place to another, that “thing” must be distinguishable from its physical and spatial location. If I want to transfer a tennis ball from one end of a tennis court to the other, I can certainly do so, but only because the tennis ball is not the same thing as the tennis court. If the tennis ball and the tennis court were one and the same thing, it would be much more difficult (if not impossible) to achieve this goal. The point Hopkins is making is that the mind and brain are like this — one and the same thing — and so it is hard to see how the mind could be dislocated from the brain and transferred to another medium.

To set this out more formally:

  • (1) Transhumanists are materialists/physicalists.
  • (2) Materialists/physicalists do not view the mind as a “thing” that is separate from the human brain.
  • (3) The transference thesis (and uploading more generally) assumes that the mind is a thing that is separate from the human brain.
  • (4) Therefore, transhumanists cannot believe in the transference thesis (and uploading more generally).

Now let’s look at some potential critiques of this argument.

3. Criticisms of the Argument from Metaphor
The first problem I can see with this argument has to do with premise (1). While I accept that most transhumanists probably are mind-body physicalists, I suspect there is a significant proportion of the transhumanist community that clings (or aspires) to more pluralistic or unusual metaphysical views. Thus, for example, there are those who might accept David Chalmers’s panpsychist view, which Chalmers himself uses to defend the possibility of uploading. (More precisely, Chalmers appeals to the notion of “organisational invariance”, which holds that any two systems with the same organisation will have the same conscious experiences).

The second problem has to do with the rather coarse-grained analysis of materialism and physicalism that is at work in premise (2). Anyone familiar with contemporary philosophy of mind will know that there are many different physicalist (and, indeed, dualist) theories, not all of which would accept premise (2). Type-identity theorists (i.e. those who think there is a strict one-to-one correspondence between types of brain state and types of mental state) might well agree with it, but others would be much less keen. Hopkins himself mentions the functionalist theory, which holds that it is abstract patterns, not brain states per se, that are the stuff of consciousness. It seems like those abstract patterns may be separable from the brain and capable of being transferred.

The third problem has to do with premise (3). Some could argue that Hopkins overstates the role of the transference metaphor in the debate about mind-uploading. True, people reach for a convenient metaphor that conveys the idea of someone’s mind being instantiated in a different substrate, but in doing so they do not think that the mind is literally a distinct object that sits in the receptacle of the brain and can be easily transferred out of it.

Hopkins acknowledges this criticism but finds it lacking. He thinks that the spatial and transference metaphors are crucial to the uploader’s case. As he puts it himself:

In the descriptions of uploading, the very core of the concept is that a specific mind is transferred from a brain to a computer. Such descriptions do not simply make the materialist point that minds are the result of physical activity, rather they make claims about the preservation of identity. They say — and this is [the] whole point of uploading, the whole point of its connection to immortality and transcendence — that a specific, intact mind can be “transferred” (moved) from one embodiment to another. (p. 232)

In other words, the identity relation between the original embodied mind and the uploaded mind seems to require the transference metaphor (in its crudest sense) to be true. The mind must literally be a “thing” that can be moved about from receptacle-to-receptacle. But since this is not the view that physicalism/materialism entitles us to, the transference metaphor glosses over a key barrier to genuine uploading.

I think there is some merit to Hopkins’s response. I think preservation of identity is crucial to the uploader’s argument, and that it might be undermined by certain physicalist/materialist theories of mind. That said, I would argue that the preservation of identity between things is a tricky metaphysical concept, particularly if we think of “things” in terms of their functional patterns. The classic Ship of Theseus thought experiment illustrates the point rather nicely, as do some modern theories of identity-relations among informational patterns. For example, Dawkins’s now-classic conception of the gene as a bit of code or information might lend itself to the view that in evolution one “thing” (the gene) can be preserved across multiple instantiations. Indeed, it was this very conception that led Dawkins’s publisher to suggest an alternative title for his infamous 1976 book The Selfish Gene: the “Immortal Gene”.

The point here is that something similar could be true of the mind. The mind could simply be an abstract pattern, whose identity, like that of the gene, can be preserved across multiple instantiations. That, at least, seems to be the view of one of the leading proponents of mind-uploading: Hans Moravec. Hopkins has some criticisms for Moravec too, but we’ll deal with those another day.

Thursday, March 6, 2014

Big Data and the Vices of Transparency

(Previous Posts)

Data-mining algorithms are increasingly being used to monitor and enforce governmental policies. For example, they are being used to shortlist people for tax auditing by the revenue services in several countries. They are also used by businesses to identify and target potential customers. Thanks to some high profile cases, there is now increasing concern about their usage. Should they be restricted? Should they be used more often? Should we be concerned about their emerging omnipresence?

In an earlier set of posts, I looked at the case for transparency in relation to the use of such algorithms. Transparency advocates claim that full or partial disclosure of the methods for collecting and processing our data would be virtuous in any number of ways. For example, there are those who claim that it would promote innovation and efficiency, increase fairness, protect privacy and respect autonomy. I analysed their arguments at some length in that earlier set of posts.

Today, I want to look at the flip-side of the transparency debate. I want to consider arguments for thinking that transparency would actually be a bad thing. I look at two such arguments below. The first argument claims that transparency is bad because it thwarts legitimate government aims; the second claims that transparency is bad because it leads to increased levels of social stigmatisation and prejudice.

In writing this piece, I draw once more from Tal Zarsky’s article “Transparent Predictions”. This post is very much a companion to my earlier ones on the virtues of transparency and should be read in conjunction with them.

1. Would Transparency Thwart Legitimate Government Aims?
A simple argument for the vice of transparency holds that it would undermine legitimate government aims. Governments can use data-mining algorithms to assist them across a range of policy areas. I have already given the example of tax auditing and the attendant prevention of tax evasion. Similar examples could include combatting terrorism, enforcing aspects of criminal law, and predicting recidivism rates among convicted offenders so as to make rational parole decisions. If transparency prevented the government from doing those things, it might be lamentable.

In other words, it might be possible to make the following (abstract) type of argument:

  • (1) It is a good thing that the government pursues certain legitimate aims X1…Xn through the use of data-mining algorithms.
  • (2) Transparency would undermine the pursuit of those legitimate ends.
  • (3) Therefore, transparency would be a bad thing.

The wording of premise (1) is very important. It assumes that the government aims are legitimate, i.e. morally commendable, acceptable to rational citizens, optimal and so forth. If, for any given use of data-mining, you think the government aim is not legitimate, or that it is completely trumped by other, more important aims, then it is unlikely that you’ll be willing to entertain this argument. If, on the other hand, you think there is some degree of legitimacy to the particular government aims, or that these aims are not completely trumped but rather must be weighed carefully against other legitimate aims, then the argument could have some force. For if that’s the case you should be willing to weigh the benefits of transparency against the possible costs in order to reach a nuanced verdict about its overall desirability.

Anyway, this is just a way of saying that premise (1) is essential to the argument. Unfortunately, in this discussion, I’m not going to consider it all that closely. Instead, my focus is on premise (2). The key thing with this premise is the proposed mechanism by which transparency undermines the legitimate aims. Obviously, the details in any particular case will be fact-specific. Nevertheless, we can point to some general mechanisms that might be at play. Perhaps the most commonly-cited one is something we can call the “gaming the system”-mechanism. According to this, the big problem with transparency is that it will disclose to people the information they need in order to avoid detection by the algorithm, thereby enabling them to engage in all manner of nefarious activities.

A simple example, which has nothing to do with data-mining (at least not in the colloquially-understood sense) might help to illustrate the point. The classic polygraph lie detector may have had some ability to determine when someone was lying (however minimal). But once people were made aware of the theoretical and practical basis for the test they could avoid its detection by deploying a range of countermeasures. These are things like breathing techniques and muscle clenches that confound the results of the test. Thus, by knowing more about the nature of the test, people who really did have something to hide could avoid getting caught out by it. The concern is that something similar could happen if we disclosed all information relevant to a particular data-mining algorithm: potential terrorists, violent criminals and tax evaders (among others) could simply use the information to avoid detection.
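The “gaming the system” worry can also be put in a toy model. The rule and numbers below are entirely hypothetical, invented for illustration: suppose the revenue service flags any return whose claimed deductions exceed 30% of declared income. Once that rule is disclosed, an evader simply stays just under the threshold.

```python
AUDIT_THRESHOLD = 0.30  # hypothetical disclosed rule: flag returns whose
                        # deductions exceed 30% of declared income

def flagged_for_audit(income, deductions):
    # Flag the return if the deduction ratio exceeds the threshold.
    return deductions / income > AUDIT_THRESHOLD

# Before disclosure, an evader claiming inflated deductions gets caught:
assert flagged_for_audit(50_000, 20_000)

# After disclosure, the same evader games the system by inflating
# deductions only as far as the known threshold allows:
assert not flagged_for_audit(50_000, 14_900)
```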

How credible is this worry? As Zarsky notes, you have to consider how it might play out at each stage in the data-mining and prediction process. You start with the collection phase, where transparency would demand that details about the datasets used by the governmental algorithms be disclosed. These details might allow people to game the system, provided the datasets are sufficiently small and comprehensible. But if they are vast, there might be little scope for an individual to game the system. Similarly, release of the source code of the algorithm used at the processing stage would be valuable only to a limited pool of individuals with the relevant technical expertise. Zarsky argues that the release of data about the proxies used by governments to identify potential suspects (or whatever) is likely to be the most useful to those who want to game the system, but these proxies could fall into at least three different categories:

Other Illegal Acts: One thing that is often used by governmental agencies to predict certain kinds of wrongdoing is other illegal acts. Zarsky uses the example of one (lesser) illegal act being used as a proxy for another (greater) kind of wrongdoing. Now, we might want to prevent the lesser type of wrongdoing anyway, so disclosure of this detail could have some positive effects (as that type of behaviour would be further disincentivised), but one could also imagine a potential terrorist capitalising on the disclosure of this information in a negative way. They will now know that they need to avoid the lesser type of wrongdoing in order to engage in the greater type of wrongdoing. That would be bad. Wouldn’t it?
Neutral Conduct: It could be that the proxies used are not other forms of wrongdoing but are instead completely neutral or positive behaviours (e.g. charitable donations could be an indicator of tax evasion). In other words, the proxies might not themselves be constitutive of bad behaviour, but they might be found to correlate with it. Disclosure of that kind of information would also seem to have negative implications for legitimate government aims. It would allow the nefarious people to game the system by avoiding those behaviours and may also encourage otherwise law-abiding people to avoid positive behaviours for fear of triggering the algorithm.
Immutable Character Traits: Another possibility is that the proxies cover immutable social or biological traits that correlate with wrongdoing. Disclosure of these proxies might not help people to game the system (assuming the traits are genuinely immutable) but they might have other deleterious effects.

This last example opens up the possible link between transparency and stereotyping. We’ll deal with this as a separate argument.

2. Would Transparency Increase Negative Stereotyping?
Another possible argument against transparency has to do with its potential role in perpetuating or generating new forms of social stereotype and prejudice. The argument is straightforward:

  • (4) It is bad to increase social prejudice and stereotyping.
  • (5) Transparency of the details associated with data-mining algorithms could increase social prejudice and stereotyping.
  • (6) Therefore, transparency of the details associated with data-mining algorithms is bad.

The value-laden terminology is important in understanding premise (4). You might object that certain forms of prejudice or stereotyping are morally justified if they accurately reflect the moral facts. For example, I have no great problem with there being some degree of prejudice against racists or homophobes (though I wouldn’t necessarily like that to manifest itself in extreme mistreatment of those groups). The assumption in this argument, however, is that prejudice and stereotyping will tend to have serious negative implications. Hence those two terms are to be read in a (negative) value-laden manner.

The key premise then, of course, is premise (5). Zarsky looks at a number of factors that speak in its favour, many of them resting on pre-existing weaknesses in human psychology. His main observation is that humans are not well-equipped to understand complex statistical inferences and so, when details of such inferences are disclosed to them, they will tend to fall back on error-laden heuristics when trying to interpret the information.

This can manifest itself in a variety of ways. Zarsky mentions two. The first is that people may fail to appreciate the domain-specificity of certain statistical inferences. Thus, if the algorithm says that law professors who write about transparency in particular settings (the example is Zarsky’s) are more likely to evade tax, the general population may think that this makes such law professors more likely to commit a whole range of crimes across a whole range of settings. The second way in which the problem could manifest itself is in people drawing conclusions about individual character traits from data that simply has to do with general correlations. Thus, for example, if an algorithm says that people from New York are more likely to evade tax, others might interpret this to be a fixed character trait of particular people from New York.
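The second error is easy to exhibit with made-up numbers (mine, not Zarsky’s). Even if people from New York were “twice as likely” to evade tax, the overwhelming majority of them would still not be evaders, so the relative-risk headline licenses no conclusion about any individual New Yorker’s character.

```python
base_per_100 = 1   # hypothetical: 1 in 100 taxpayers in general evade tax
ny_per_100 = 2     # hypothetical: 2 in 100 New Yorkers do

# The headline figure people latch onto:
relative_risk = ny_per_100 / base_per_100
assert relative_risk == 2.0   # "New Yorkers are twice as likely to evade!"

# The figure people ignore: the vast majority of New Yorkers do not evade.
assert 100 - ny_per_100 == 98
```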

Both of these things could increase negative forms of social prejudice and stereotyping. And it is important to realise that these increases may not simply be in relation to classically oppressed and stereotyped groups (e.g. ethnic minorities), but may also be in relation to wholly novel groups. For instance, data-mining algorithms might (for all we know) find that hipsters are more likely to evade tax, or that people whose names end in “Y” are more likely to be terrorists. Thus, we might succeed in identifying new groups as the objects of our negative social judgments. This could be particularly problematic for them insofar as these new groups may have fewer well-established organisations dedicated to defending their interests.

Assuming the stereotyping and prejudice problem is real, how might it be solved? Increased opacity and reduced transparency is indeed one solution, but it is not the only one. As Zarsky points out, increased public education about the nature of statistical inferences, and the psychological biases of human beings, might also serve to reduce the problem. Arguably, this might be the more ideal solution, if we accept that transparency has certain other benefits. Still, part of me thinks that the cost and effort involved would make it unattractive to many governments. Opacity may, alas, be the easier option for them.

Wednesday, March 5, 2014

William Lane Craig and the Ultimate Accountability Argument

Atheism is generally taken to entail that there is no afterlife. More specifically, it is taken to entail that there is no afterlife in which people are rewarded or punished for their behaviour here on earth. (I say “generally” because it is conceptually possible for an a-theist to embrace some sort of afterlife, provided it does not involve the existence of a God). Some theists think this is problematic, that it suggests something deeply implausible/unwelcome about morality in an atheistic world.

One such theist is William Lane Craig. In several of his papers and public debates, he has railed against atheistic moralists on the grounds that their conception of morality has no place for “ultimate accountability”. In this post, I want to look at Craig’s arguments and suggest that they are deficient. In doing so, I draw once more on Louise Antony’s excellent contribution to the book Debating Christian Theism, as well as some other sources (particularly Oppy’s Arguing about Gods).

I’ll divide my discussion into three parts. The first part tries to clarify the nature of the accountability argument. The second part looks at one interpretation of the argument — the justice interpretation — and suggests that it may lead to a corrupted view of moral behaviour. The third part considers another interpretation — the bindingness interpretation — and argues that it fails to make a significant case against atheistic morality.

1. Arguing about Accountability and Atheistic Metaethics
Metaethics is the branch of moral philosophy that is concerned with the ontology and epistemology of morality. In other words, with explaining the nature/grounding of moral values and duties, and with the epistemic route to knowledge of those values and duties. For our purposes, the most relevant of these inquiries is the former: the ontological inquiry. What exactly is it that best explains (accounts for, grounds) the existence of moral values and duties (if anything)?

Atheistic metaethics maintains that moral values and duties can exist in a Godless universe, that we do not need to ground or explain those values and duties by reference to God. Theistic metaethics maintains the exact opposite: that God is needed to ground/explain the existence of moral values/duties. How do we decide which theory to accept?

One popular strategy in metaethics is the following. Write down a list of abstract properties that you think are shared by moral claims — for example bindingness, motivational salience, impartiality, other-regardingness, and so on — and then try to see which theory or worldview can account for those properties. In other words, argue like this:

  • (1) Moral values and duties exist.
  • (2) Moral values and duties share properties P1…Pn.
  • (3) Theory X (or worldview X) can best account for all these properties.
  • (4) Therefore, theory X (or worldview X) is likely to be true.

There is a logical gap in this argument that I will ignore for present purposes. Also, as you can see, the argument supposes that moral values and duties exist (premise 1). That is certainly a supposition that Craig works with in his discussions, and one that I am happy to work with for the purposes of this blog post. Nevertheless, it should be noted that there are many metaethicists who would reject it. Oftentimes, they do this by challenging premise (3) and arguing that no true theory or worldview can account for the properties we think moral values and duties ought to have.

Anyway, one of the most common debates in metaethics has to do with the precise list of properties in premise (2). Some people are quite minimalistic in what they demand from morality, including only a handful of properties (e.g. impartiality and other-regardingness); others are much more demanding, including long lists of properties that must be accounted for lest we all become moral nihilists. There can be fruitful debate about which properties are truly necessary for morality to exist.

Craig’s accountability argument can be interpreted as a claim about one of the properties that must be included in any satisfactory metaethical theory. Specifically, it can be interpreted as the claim that any satisfactory metaethical theory must allow for “ultimate accountability”, where this is understood as some final reward/punishment for good/bad behaviour. And since the theistic worldview allows for this, it is more likely to be true than the atheistic one. As follows:

  • (5) Objective moral values and duties exist.
  • (6) In order for objective moral values and duties to exist there must be ultimate accountability.
  • (7) The theistic worldview allows for ultimate accountability; the atheistic worldview does not.
  • (8) Therefore, the theistic worldview is more likely to be true.

Antony challenges premise (6) of this argument by looking at two different interpretations of the word “accountability” and arguing that neither is essential to a successful metaethics. We turn to those two different interpretations now. As we do so, we’ll see how it may also be possible to challenge premise (7).

(Note: There is another way to understand Craig’s argument, which is arguably more in keeping with his aims. This is to view it not as an argument about what is necessary for a successful metaethical theory, but rather as an argument about what is necessary for a meaningful and worthwhile life. I’m not going to discuss that interpretation in this post. I have, however, discussed Craig’s take on the meaning of life before and interested readers should consult those posts for more on this possible interpretation.)

2. The Need for Ultimate Justice

The first way we can interpret Craig’s plea for ultimate accountability is as a plea for ultimate justice. Justice requires that everybody be given their due, either in the form of rewards for good behaviour or punishments for bad behaviour. The problem with the atheistic worldview, on this interpretation, is that it relies solely on the human ability to dish out rewards and punishments, which is imperfect and incomplete. The theistic worldview has the advantage because at its root is an ultimate justice giver (God) who can ensure, with perfect success, that evil is punished and good is rewarded.

The Justice Interpretation: What is required for moral value to exist is that “evil and wrongdoing will be punished; righteousness will be vindicated. Despite the inequities of this life, in the end the scales of God’s justice will be balanced.” (Craig 2009, 31).

There are several problems with this interpretation. First, as Oppy points out, it assumes that justice can only be satisfied if good and bad behaviours are met by some corresponding reward or punishment. This is questionable insofar as bad behaviour could be “punished” simply through the absence of reward for other good behaviour, and vice versa. Hence, the argument must be reformulated so that it talks about net levels of reward and punishment. Furthermore, there are non-theistic metaphysical schemas in which there is some sense of ultimate justice (e.g. karma).

But that’s a relatively minor point. The more important one is that the demand seems obtuse. To say that moral value can only exist if there is some ultimate justice looks to be patently false. If I give 50% of my income to charitable causes, and thereby relieve a great deal of suffering in this world, are we really going to say that my actions are valueless simply because I couldn’t address all the suffering in the world, or because the impact of my actions was finite? Surely, the alleviation of suffering would bear some moral value, irrespective of these deeper temporal and metaphysical concerns. Imperfect justice does not imply a lack of justice, or of moral value more broadly.

More troublingly, there is the risk that the demand for some ultimate reward/punishment would actually corrupt our sense of moral value. If everything we do is ultimately going to be rewarded or punished in the end, then it’s hard to see why moral value doesn’t simply reduce to prudential value. Indeed, Donald Hubin makes a great case for this effect in the book Is Goodness without God Good Enough? when he argues that genuine self-sacrifice is impossible on the theistic worldview since you ultimately get rewarded for it in the end (i.e. you’re not really sacrificing yourself). This is an ironic turn of events since Craig often objects to atheistic morality on the basis that it is guilty of reducing moral value to prudential value. If you’re interested, there is a nice exchange on this very point in the Craig-Kagan debate.

Kagan also makes another objection to ultimate accountability in his debate with Craig. He argues that one problem with the traditional Christian conception of the afterlife is that ultimate reward comes pretty cheap. All you need to do is to accept Christ as your saviour (or confess your sins if you’re a Catholic) and you get it. Or so it seems. Christians can, of course, respond to this by rejecting this take on the conditions for salvation, or by rejecting the sacrament of confession, but it’s worth thinking about nonetheless.

3. The Need for Bindingness
A second way to interpret Craig’s demand for accountability is to view it as a sub-condition that must be satisfied in order for moral claims to be genuinely binding on human agents. Although there is disagreement on this point, many metaethicists accept that moral norms need to be motivationally salient, i.e. that when people know what the moral reasons for action are, they ought to be motivated to follow them. But how can this be if there is no ultimate reward or punishment? If the fate of our eternal soul is not at stake in our moral decision-making, then how can moral norms bind us in any meaningful way?

The Bindingness Interpretation: What is required for genuinely binding moral norms is some ultimate reward/punishment for our moral actions.

There are two problems with this approach to the argument. The first is that there are other, non-moral norms that seem to be perfectly binding without the presence of any sanction or reward. For example, logical norms. It is perfectly straightforward to say that we are bound not to commit the fallacy of affirming the consequent, or bound to obey the law of the excluded middle, without also needing to identify some ultimate reward or punishment for these behaviours.

Some people might object to this on the grounds that obedience to logical norms can be re-described so that there is always some reward or punishment. So, for example, one must obey the law of the excluded middle on pain of having false beliefs, or being irrational. But moral norms can be re-described in these terms too. Thus, we can say that one must obey moral norms on pain of being wicked/evil (h/t Arif Ahmed for this point). Such redescription is easy; it doesn’t mean there is anything more or less real about the reward/punishment, or anything more or less “ultimate” about it.

The second problem with the bindingness interpretation has to do with the analogy to human laws that is often used in its defence. Theists sometimes argue that there cannot be binding moral laws without there being some moral law-giver who attaches sanctions to certain behaviours. The claim is that this is what is needed for binding human laws too. But this is false. Human laws do not derive their normative force from the mere presence or absence of sanctions. Rather, they derive their force from collective beliefs in the authority of the law-givers.

So in order for theism to be needed to generate binding norms, one would first need to accept that God’s law has the necessary authority. And whether one accepts that or not will depend on one’s analysis of the Euthyphro dilemma, as Antony nicely points out (i.e. it will depend on whether one accepts that God’s laws could have a content-independent bindingness). Fortunately, I’ve analysed this before.

In summary, the accountability objection to atheistic morality holds that the atheistic worldview cannot account for one key property of morality, viz. accountability. This property can be interpreted in different ways, but whichever way you look at it, at least one of two things seems true: either (i) it is not actually necessary for morality, or (ii) atheism can account for it just as well as (if not better than) theism. Consequently, the accountability objection would seem to fail.

Tuesday, March 4, 2014

On Rubenfeld and the Riddle of Rape-by-Deception

[Warning: This post contains discussions of sexual assault and rape.]

People sometimes lie to get sex. That would appear to be uncontroversially true. Some of these lies are more important than others. In particular, some forms of deception seem to undermine what might otherwise be a valid consent to sexual contact. If Bob tells Jane that he is 6’5” prior to their having sex, then I suspect no one would say that her consent is invalid when it turns out he is really 6’3” (maybe some would). On the other hand, if Bob tells Jane he is performing a medical procedure, when in reality he is sexually penetrating her, then I suspect most people would say that consent is absent.

These two cases lie at opposite ends of a spectrum; there are many more problematic cases in between. Indeed, the whole question of deception’s impact on sexual consent is fraught and controversial. Recently, Yale legal scholar Jed Rubenfeld added to this controversy with his article on the so-called “riddle of rape-by-deception”. The article makes five significant claims. First, that sexual autonomy is the rationale for modern rape law. Second, that if this is the case, then all forms of rape-by-deception should be criminalised. Third, that this implication is problematic and reveals flaws in the sexual autonomy rationale. Fourth, that the autonomy rationale should be replaced by a self-possession rationale. And fifth, that this rationale implies that a force requirement should be an essential part of rape law.

Like many others, I think Rubenfeld’s overall argument is flawed. In this post, I want to explain why. In doing so, I eschew many of the legal aspects of his argument. Typical of the academic lawyer, Rubenfeld tries to wear a couple of hats in his article, one of which we may call the “legal-descriptive” hat, the other the “ethical-critical” hat. With the former hat he tries to explain the normative principles guiding current law; with the latter hat he tries to locate the ethical standards by which the current law should be judged and use this to guide the construction of alternative normative principles. Although there may be some practical value to wearing the former hat, it is not something I am interested in wearing in this post. As far as I can see, the legal-normative discussion is a distraction from the real core of Rubenfeld’s argument, which is ethical in nature. There are criticisms to be made of the legal aspects of his argument too, but they have been amply made by other people.

The remainder of this post will be broken down into three stages. The first will look at Rubenfeld’s critique of sexual autonomy and the criminalisation of rape-by-deception. The second will outline his alternative principle for rape law: the self-possession principle. The third will consider the problems with this proposal. This will highlight one of the main reasons for discussing Rubenfeld’s argument — apart from the intrinsic and instrumental interests of the particular subject matter — which is how it exhibits a classic flaw in applied ethical reasoning.

1. Rubenfeld’s Case Against Rape-by-Deception and Sexual Autonomy
Rubenfeld suggests that the most popular basis for modern rape law is the principle of sexual autonomy. This principle has two aspects to it:

Positive Sexual Autonomy: You have a right to have whatever kind of sex you like, with whomever you like, provided you respect their rights too.
Negative Sexual Autonomy: You cannot be obliged to have sex (of whatever variety, with whomever it might be) if you do not want to have it.

Consent is central to the operation of this principle. It is what transforms impermissible sex into permissible sex. In his analysis, Rubenfeld notes a number of problems with sexual autonomy, particularly in its positive form. It is true that many jurisdictions restrict exercises of sexual autonomy in ways that are not consistent with the basic idea of respecting individual choice (e.g. by imposing restrictions on certain kinds of sexual activity). Nevertheless, many of these criticisms can be ignored here. The big criticism, and the one that is at the heart of Rubenfeld’s article, is the claim that if we truly wished to respect sexual autonomy, we would have to criminalise all forms of sex-by-deception (Rubenfeld uses the term “rape-by-deception”, but that seems conspicuously question-begging to me, so I have changed it). He argues that this is counter-intuitive and undesirable. Hence, we should abandon the principle of sexual autonomy.

In essence, Rubenfeld defends the following argument:

  • (1) If we truly respected the principle of sexual autonomy, we would have to criminalise (as rape or serious sexual offence) all forms of sex-by-deception.
  • (2) But we shouldn’t criminalise (as rape or serious sexual offence) all forms of sex-by-deception.
  • (3) Therefore, we should not respect the principle of sexual autonomy.

Because of his mix of legal-normative and ethical approaches, Rubenfeld defends the premises of this argument on a variety of grounds, some of them involving analyses of case law, some of them relying on general ethical intuitions and ideals. In keeping with what I said in my introduction, I’ll try to ignore the legal analysis and focus on the ethical claims.

When it comes to defending the first premise, Rubenfeld argues that this seems to be the implication of deception in other contexts in which we wish to respect autonomy (e.g. permission to enter a property), and, perhaps more importantly, that it seems to be particularly true in the sexual context, given the intimate nature of the contact involved. So, for example, if someone wishes to have sex with a person of a particular age, race, height, profession, educational background (and so on), why shouldn’t we respect their wishes? Why shouldn’t deception as to those characteristics undermine consent?

In defence of the second premise, Rubenfeld initially appeals to the counter-intuitive results it would entail. For example, it would seem to imply that if Bob lied about his height, weight, age (or whatever), he would be guilty of rape and that doesn’t seem right, does it? Rubenfeld builds upon this by adducing further examples:

A. If sex-by-deception were to count as rape or serious sexual offence, then children who were statutorily raped could also be guilty of an offence. The situation envisaged here is one in which an adult has had sex with a child (what counts as a “child”, legally speaking, varies from jurisdiction to jurisdiction) who has lied about their age. This has happened in several reported cases, and Rubenfeld suggests that the implication that these children are guilty of an offence is unwelcome.
B. If sex-by-deception were to count as rape or serious sexual offence, then many more women would be guilty of sexual crimes. Rubenfeld doesn’t adduce any evidence for this, but I suspect he is simply appealing to the popular stereotype of women lying about certain things in order to attract a sexual partner (age perhaps being the most stereotypical lie).
C. If sex-by-deception were to count as rape, it would imply that the assaulter in the Craigslist case was himself a victim of a sexual crime. This is a reference to a particular and highly controversial case. Roughly, this is what happened: Jebidiah Snape was the ex-boyfriend of a woman who was raped under conditions of extreme force (tied-up and held at knife point) by a man named McDowell. Snape had put an ad on Craigslist, along with photos of the woman, asking for “an aggressive man with no concern for women”. McDowell had responded to this ad and received further emails from Snape, pretending to be the woman, and claiming that she was looking for “humiliation, physical abuse and sexual abuse”. McDowell said that he acted, sincerely, on the basis of these communications. The point Rubenfeld makes is that if we take him at his word, he would himself be both the perpetrator of a violent rape and a victim of a serious sexual assault. This is a counter-intuitive result.

In addition to these three negative reasons for rejecting the criminalisation of sex-by-deception, Rubenfeld also argues that sex-by-deception has its merits. Sometimes we shouldn’t reveal everything prior to a sexual encounter, sometimes it is part of the whole culture of love (which Rubenfeld describes as a “vast engine of deception”). To criminalise all lies and all concealments would be impracticable and, indeed, undesirable since some of them are just part of the game.

No doubt there are problems with many of these claims, but with them Rubenfeld thinks he has defended his second premise and established his conclusion. The question then becomes: what alternative basis should there be for rape law?

2. Rubenfeld on the Right to Self-Possession and Rape
The answer lies in the principle of self-possession. This grounds rape law in a property-based theory. Some of you may know that rape law was classically conceived in these terms, albeit the relevant property interests were those of one person over another (i.e. right of the husband over his wife, right of the father over his daughter). Hence, rape was a crime because it interfered with those property rights. The difference with the self-possession theory is that it is the individual’s property right over themselves that is interfered with.

The idea is relatively straightforward. We each have (admittedly imperfect) control over our own bodies. This control plays a crucial role in the maintenance of selfhood and identity, and its loss can have devastating and traumatic implications. This is evinced by the problems faced by people who do lose this control through disease or accident. Its erosion is also the central problem with rape and crimes of sexual violence. As Rubenfeld sees it, the real wrong at the heart of these crimes is the wrong of somebody literally taking control of your body and using it for their own ends.

Of course, it then becomes critical that we have a clear idea of when exactly a person can be said to have “taken control” of another person’s body. Rubenfeld maintains that you do not lose control of your body through embarrassment, deception or fraud; you only lose it when someone exercises “such complete and invasive control over [your body] that your body is in an elemental sense no longer your own”. Rubenfeld uses analogies with torture and slavery at this stage to flesh out the concept. These are two paradigmatic instances of someone exercising total control over the body of another.

When it comes to rape law, Rubenfeld argues that the self-possession theory has a number of advantages. In particular, it doesn’t have the troubling implication of criminalising all sex-by-deception, and it captures (he argues) the phenomenology of rape victims (i.e. provides an explanation for why rape is such a traumatising and harmful crime).

It also has, he admits, a number of less welcome implications. First of all, it implies that a force element is essential to the crime. In other words, it holds that rape can only be committed if the perpetrator has exercised sufficient force (he says “violent force”) to take control of the victim’s body. This is troubling since there has been a long fight to rid rape law of a force requirement. Furthermore, the self-possession theory implies that the following should not necessarily be counted as examples of rape:

Unconscious sex: The sexual penetration of a person who is sleeping or otherwise unconscious should not count since it doesn’t involve the perpetrator taking forcible control of the victim’s body. Nevertheless, Rubenfeld concedes that it should be a crime, perhaps a simple assault or battery.
Statutory Rape: In many reported cases of statutory rape, no physical force is exercised over the victim, so this wouldn’t count as rape either. Rubenfeld argues that this is the legal position anyway since the law acknowledges there are distinct interests and harms at stake in such cases. Hence, this should really be an independent category of offence.
Intoxicated sex: In many reported cases, people have unwanted sex while highly intoxicated (but not unconscious or passed out) without force being used. These cases would no longer count as rape (if force were used, they would, but otherwise they wouldn’t).

Some of these implications might be thought disturbing and counter-intuitive, but Rubenfeld maintains they should be embraced in order to relocate rape law on a sound principled basis.

3. Criticisms of Rubenfeld’s Argument
Thus far, I’ve held off on criticising Rubenfeld’s argument, preferring instead to outline the key moves he makes in defending his self-possession theory. But with that task out of the way, I can at last turn to some criticisms. I have no intention of being exhaustive here. There are several published responses to Rubenfeld’s work, some of them quite lengthy. You are free to peruse those if you wish. I will focus on three lines of criticism, two relatively specific, and one general.

The first criticism was foreshadowed in the introduction. Rubenfeld’s dismissal of sexual autonomy and rape-by-deception seems to be grounded in an overly strenuous theory of what kinds of deception would undermine sexual autonomy. Some of the counter-intuitive and problematic implications he identifies only follow if we accept that every type of deception counts when it comes to sexual consent. But surely that is to accept too much? If I am consenting to medical treatment by a doctor, all I need to know is what the treatment will be, its likely rate of success, and the doctor’s qualifications to administer it. It does not matter if she lied about where she spent her summer holidays, or who her favourite band is.

The point is that certain facts are material to consent and certain others are not. No doubt it will be a laborious and messy exercise to figure out what is material and what is not; no doubt it will vary from case to case; and no doubt it may be trumped by other considerations (e.g. suppose a racist mistakenly has sex with someone they thought belonged to their own race; should we allow their prejudice to ground a rape conviction?). Still, there is no reason not to try to develop a theory about what information really counts when it comes to sexual consent.

The second criticism simply has to do with the alleged implication of the self-possession theory for rape law. Rubenfeld insists that violent force is needed in order for someone to truly take control over another person’s body, but I just don’t see why that has to be the case. The slavery analogy, to which Rubenfeld appeals, seems to illustrate this point. Slavery isn’t simply maintained by force and cruelty; it is also maintained by cultural norms and behavioural dispositions. A slave could live under the dominating control of a slave master without that slave master ever forcibly restraining them. But if such a slave’s right to self-possession is violated in that context, it is difficult to see why something similar couldn’t happen in the rape/sexual assault context.

Finally, and more generally, Rubenfeld’s whole method of argumentation is strangely self-contradictory. This criticism has been made by others, but I’ll try to summarise it briefly here. If we go back to his original argument against sexual autonomy, we see that it has the following abstract form:

  • (4) If we accept principle P, consequences Q, R and S follow.
  • (5) Consequences Q, R and S are undesirable/counterintuitive.
  • (6) Therefore, we should not accept principle P.

In other words, the argument urges us to reject a particular principle on the grounds that it leads to unwelcome results. The problem is that Rubenfeld’s preferred principle of rape law is vulnerable to an identical form of argument. As follows:

  • (7) If we accept the self-possession principle, then: (a) the force requirement is an essential part of rape law; (b) not all cases of unconscious sex will count as rape; (c) not all cases of statutory rape will count as rape; and (d) not all cases of undesired intoxicated sex will count as rape.
  • (8) Consequences (a)-(d) are undesirable/counterintuitive.
  • (9) Therefore, we should not accept the self-possession principle.

Rubenfeld insists that we need some coherent principled basis for rape law and so we should embrace these consequences anyway. But it’s hard to see why the same argument could not be made in favour of the sexual autonomy principle that he rejects. It seems like he would have to argue that the undesirable consequences of his principle are less undesirable than the consequences of the autonomy principle, but I think he would have a hard time making that case. (All this is assuming the principles really have the implications he claims for them).

In the end then, Rubenfeld’s principle seems problematic because it fails to match the very same standards he sets for the autonomy principle.