Tuesday, July 31, 2012

The Ethics of Pre-Punishment (Part Three)

(Part One, Part Two)

This post is part of a series on the ethics of pre-punishment. The series looks at whether it is permissible to punish someone for crimes they have not yet committed, but will. Although drawing inspiration from science fictional examples, such as Minority Report, the ethical questions at the heart of the series have implications for the widespread practice of preventive detention.

To date, the series has looked at the arguments found in a classic set of papers from Christopher New and Saul Smilansky. Part one covered New’s argument, which claims that there is nothing ethically improper about punishing a person before they commit a crime, although there are some obvious epistemic limitations on this. Part two looked at Smilansky’s response to New. Smilansky argued that pre-punishment is ethically impermissible because it fails to accord proper respect to a person’s capacity to refrain from morally improper conduct. Part two also looked at New’s response to this argument, which pointed out that respect can pull in the opposite direction too. If someone has declared an intention to break the law, why is it not respectful to assume they will be true to their word and punish them before they do so?

As I said originally, the New-Smilansky essays are probably the classics in this particular area. But many more philosophers have chimed in in recent years. Now, I can’t possibly look at all the arguments that have been made, but I will look at one, coming from an article by Roy Sorensen entitled “Future Law: Prepunishment and the Causal Theory of Verdicts”.

In the remainder of this post, I will cover Sorensen’s argument. I do so in two stages. First, I will consider the structure of his argument and explain how it responds to the original argument offered by New. Second, I will examine the deeper justification that Sorensen offers for his argument.

1. The Causal Asymmetry Argument
As you recall from part one (and two), New’s central thesis could be summed up as follows:

Temporal Neutrality Thesis: It is morally acceptable to punish people both before and after they commit a crime, provided we know (or believe beyond a reasonable doubt) that they will commit the crime.
Epistemic Constraint Thesis: The only thing that really prevents us from prepunishing people is that we usually lack proper epistemic access to future crimes.

Although it might be tempting for critics of this position to reject the temporal neutrality thesis, Sorensen avoids doing so directly. Instead, he offers an alternative thesis, one that does not reject the temporal neutrality thesis outright, but whose truth would imply its falsity.

Sorensen’s alternative thesis is that verdicts which purport to justify the imposition of a punishment — such as the verdicts reached in a criminal trial — must be causally related to the crimes that underpin them in the appropriate manner. Specifically, the crime for which the punishment is imposed must cause the verdict. We can call this the causal asymmetry thesis since it holds that crimes must cause verdicts; and verdicts cannot cause crimes. In other words, it holds that there is an asymmetry between the causal powers of crimes and verdicts:

Causal Asymmetry Thesis: A verdict of punishment (Vp) for a crime C is justified, if and only if C causes Vp; Vp cannot be justified if Vp causes C.
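The thesis lends itself to a simple formal model. The following is a toy sketch of my own devising (not from Sorensen’s paper): events are represented as strings, causal links as directed pairs, and a verdict counts as justified just in case the crime is causally upstream of it and not vice versa.

```python
# Toy model of the Causal Asymmetry Thesis (my illustration, not
# Sorensen's): `causes` is a set of directed (cause, effect) pairs.
# A verdict Vp is justified for crime C iff C causes Vp and Vp does
# not cause C (directly or via intermediate events).

def reachable(causes, start, target):
    """True if `target` can be reached from `start` via causal links."""
    frontier, seen = {start}, set()
    while frontier:
        node = frontier.pop()
        if node == target:
            return True
        seen.add(node)
        frontier |= {e for (c, e) in causes if c == node} - seen
    return False

def verdict_justified(causes, crime, verdict):
    return (reachable(causes, crime, verdict)
            and not reachable(causes, verdict, crime))

# Ordinary case: crime -> trial -> verdict.
ordinary = {("crime", "trial"), ("trial", "verdict")}
print(verdict_justified(ordinary, "crime", "verdict"))    # True

# A self-fulfilling verdict: the verdict is upstream of the very
# crime it punishes, so the asymmetry condition fails.
looped = {("verdict", "resentment"), ("resentment", "crime"),
          ("crime", "verdict")}
print(verdict_justified(looped, "crime", "verdict"))      # False
```

The model is deliberately crude, but it makes vivid that the thesis is a condition on the direction of causal chains, not merely on their existence.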

Before considering the deeper justification for this thesis, a couple of limitations and implications are worth noting.

First, the thesis claims that verdicts of punishment must be caused by the crimes for which the punishments are imposed. It does not claim that punishments must be caused by the verdicts. This is a subtle, but nevertheless significant, difference. Pre-trial detention is common in many jurisdictions, and that detention can be taken into consideration when a final verdict of guilt is reached. For instance, a person who served six months in jail pre-trial will have that six months included as part of the one year sentence that is finally imposed on them. Sorensen’s thesis does not challenge the propriety of this practice.

One can see why. First, it’s an extremely common practice and such practices tend to go unquestioned in some dialectical contexts. Second, and perhaps more importantly, it’s not central to the original debate sparked by New. New was arguing that punishment could be justifiably imposed pre-crime, i.e. before a crime was actually committed. He was not talking about the justifiability of punishments post-crime. Indeed, he took the justifiability of those punishments for granted. In the case of pre-trial detention, the crime has already taken place, it’s just the verdict that has not. If the pre-verdict detention is punitive, it still post-dates the crime.

Despite the understandability of Sorensen’s position, the notion that punishments can be justifiably imposed pre-verdict, despite the fact that verdicts themselves need to be caused by the crimes, is odd. Sorensen hints at this oddness when he suggests that a person who is detained pre-trial, but eventually acquitted, could say that their pre-trial detention was not a form of punishment. Conversely, the person who was eventually found guilty could say that they had been punished by the pre-trial detention and have their post-trial sentence reduced as a result. But can it really be the case that the eventual verdict has a retrospective causal effect of this sort? This seems to raise the same kinds of questions as did New’s suggestion that people could be pre-punished.

The other point worth addressing before moving on is that the causal asymmetry thesis can be used to explain why punishments are thought to necessarily post-date crimes. The reason is that, from a temporal perspective, causes usually precede their effects. Thus, in the normal run of events, pre-punishment is impermissible. But if time travel is possible, this need not be the case. Consider Bruno, who is sentenced to be punished for committing a murder in the future, but escapes via a time machine into the past. The future police procure their own time machine, follow him into the past, and punish him there. This sequence of events is depicted in the timeline below, with the letter E being used to represent the events and Ts being used to represent the times at which they take place.

On the face of it, the imposition of the punishment in the past looks to be justified, and this seems to be because of the causal sequence of events: Bruno’s crime causes the verdict, not vice versa. The point can be emphasised using Lewis’s famous distinction between personal time and external time. As Lewis argued, in time travel cases, it is important to disambiguate the different timelines involved. As follows:

Personal Time: The sequencing of the events from the perspective of the person who travels through time. This is always linear and forward-moving. The person continues to grow old, their hair lengthens etc.
External Time: The sequencing of events from a neutral, or third-person standpoint. This follows the sequencing of historical dates such as 1901, 1902 and so on.

If we are looking at things from the perspective of personal time (e.g. Bruno’s perspective), then it is indeed true that crimes must precede verdicts of punishment. But if we are looking at things from the external perspective, this need not be so. What’s going on here is that Lewis’s concept of personal time is designed to maintain the ordinary relationships of cause and effect, whereas the concept of external time is not. So the reason why it looks like punishments must post-date crimes is because we yearn for the appropriate causal sequencing, not because the temporal neutrality thesis is false.

Now, I’m a big fan of time travel and its associated philosophy — you’re talking to someone who managed to sit through the entirety of Primer here — but I take it that this example is more fun than serious. The jury is still out on whether time travel is logically/metaphysically possible and what precise implications this would have for the notion of an external and personal timeline. Still, even if it is possible, in precisely the form envisaged in Bruno’s case, there is an obvious philosophical worry. It could be that, despite what has been said, the verdict of punishment does cause the crime. This could be the case if there is a closed temporal-causal loop between all the events depicted above such that: (i) E2 (the verdict) causes E3 (the time travel); (ii) E3 causes E4 (the punishment); (iii) E4 causes E1 (the crime); and (iv) E1 causes E2.
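One way to see the worry is to note that in such a closed loop, every event is causally downstream of every other event, itself included. A minimal sketch of my own (the event labels are Sorensen-inspired, the code is not his):

```python
# Bruno's closed loop as a directed graph (my own toy illustration):
# E1 = crime, E2 = verdict, E3 = time travel, E4 = punishment,
# with each event causing the next and E4 looping back to E1.
links = [("E1", "E2"), ("E2", "E3"), ("E3", "E4"), ("E4", "E1")]

def downstream(start):
    """All events causally reachable from `start` along the links."""
    out, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        for cause, effect in links:
            if cause == node and effect not in out:
                out.add(effect)
                frontier.append(effect)
    return out

# Every event reaches every event (itself included), so the condition
# "the crime causes the verdict but not vice versa" cannot be met.
print(sorted(downstream("E1")))  # ['E1', 'E2', 'E3', 'E4']
```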

Would such a closed temporal-causal loop pose a problem for Sorensen’s theory? Looking at the deeper justification Sorensen offers for his thesis will help us to answer that question.

2. The Need for Grounding
Sorensen’s deeper justification for the Causal Asymmetry Thesis turns on the notion of appropriate grounding. Roughly, the idea is that every truth-bearing or justifiable statement needs an appropriate grounding. That grounding can vary from case to case, but it needs to be there. In the case of verdicts of punishment, the grounding is causal in nature.

The argument that Sorensen makes would appear to have roughly the following form:

  • (1) Where P is some proposition or claim, P is justified or true if and only if P is appropriately grounded in something else (G), where G is P’s object. 
  • (2) A verdict of punishment, Vp, is a claim that it is justifiable to impose a punishment on someone for a crime C. 
  • (3) Therefore, Vp is justified if and only if it is appropriately grounded in C. 
  • (4) The appropriate grounding for a claim like Vp is causal in nature. 
  • (5) Therefore, Vp is justified if and only if it is causally grounded in C.
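The argument can be compressed into schematic form (the notation here is my own, not Sorensen’s):

```latex
% J(V_p): the verdict V_p is justified
% G(V_p, C): V_p is appropriately grounded in its object C
% K(C, V_p): the crime C causes V_p
J(V_p) \leftrightarrow G(V_p, C)                    % from premise (1)
G(V_p, C) \leftrightarrow K(C, V_p)                 % from premise (4)
\therefore\quad J(V_p) \leftrightarrow K(C, V_p)    % conclusion (5)
```

Read this way, the conclusion follows by chaining the two biconditionals; the load-bearing premises are (1) and (4), which is why Sorensen spends his time defending them.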

This is a tricky argument to evaluate, mainly because Sorensen doesn’t offer anything like a formal presentation of it. What’s given above is my own reconstruction, and it’s not pretty. Still, reading between the lines, I find Sorensen trying to offer defences of something like premises (1) and (4). Hence, I think it is a reasonable reconstruction of what he was trying to say.

Sorensen’s defence of (1) points to a number of analogous cases in which an appropriate grounding relationship seems to be necessary. He then inductively generalises from those cases to the principle stated in (1). He actually uses a joke as the basis for one of these cases, and I can’t resist sharing that joke here:

Colonel Henry Watterson was a newspaper editor during the late nineteenth century era of American railroad travel. As a journalist, he and his reporters were issued special railroad passes. These passes were non-transferable. This restriction was widely violated. During one crackdown, a conductor confronted a young man who presented the pass of a certain Mr. Smith who was a correspondent for Watterson’s paper. The suspicious conductor told the nervous man that Colonel Watterson happened to be on the same train. He escorted “Mr. Smith” for authentication. The conductor asked, “Mr. Watterson, is this man your employee?” To the young man’s amazement, the answer was “Yes”. The conductor left. The relieved man began to effusively express his gratitude. “Compose yourself, young man. I don’t happen to be Colonel Watterson, but I am riding on his railroad pass.”

I don’t know if that is actually funny, or whether I’ve been reading philosophy papers for so long that my sense of humour has become distorted, but nevertheless I smiled when I read it. And the cheap laugh has a payoff too. It makes a serious point about grounding relationships. As Sorensen points out, the punchline works because it reveals something important about chains of verification. One person can only vouch for the identity of another if their own identity has been appropriately vouched for in turn. The “Colonel Watterson” in this case had his identity vouched for in the same manner as Mr. Smith. And since we were unsure about the identity of Mr. Smith, there is no reason why we should be any more sure of Watterson’s identity.

Similar points are made about the Liar Paradox. Sorensen contends that the reason “This sentence is false” is so problematic is that it is not appropriately grounded. The truth or falsity of a sentence must be assessed against something else, either another sentence or the external world. For instance, the reason that the sentence “there is a blue car in my driveway” is true is because it conforms with the actual state of the world. That is, the sentence is grounded in the state of the external world. The liar paradox lacks this feature: it tries to ground itself in itself. To borrow Hofstadter’s term, the sentence forms a “Strange Loop”. Instead of reaching out to the external world for grounding, it reaches back in on itself. It is this that leads to the paradox, and to Sorensen’s claim that, generally speaking, appropriate grounding is needed for all claims.
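The contrast between grounded and ungrounded sentences can even be sketched computationally. This is my own toy illustration, not anything from Sorensen’s paper: a grounded sentence is evaluated by looking it up in the world, whereas the liar sentence can only defer back to its own evaluation, which never bottoms out.

```python
# Toy model of grounding (my illustration): evaluation either bottoms
# out in a fact about the world, or loops back on itself forever.
world = {"there is a blue car in my driveway": True}

def evaluate(sentence, depth=0, limit=50):
    if depth > limit:
        raise RecursionError("no grounding found: a strange loop")
    if sentence in world:
        return world[sentence]          # grounded in the external world
    if sentence == "this sentence is false":
        # The liar tries to ground itself in its own evaluation.
        return not evaluate(sentence, depth + 1, limit)
    raise ValueError("unknown sentence")

print(evaluate("there is a blue car in my driveway"))  # True

try:
    evaluate("this sentence is false")
except RecursionError as err:
    print(err)  # no grounding found: a strange loop
```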

This then ties into the defence of (4). The reasoning here is based on the intuitive oddness of cases in which (4) is violated, i.e. cases in which Vp is not causally grounded in its object. Consider Judge Farsighted who convicts a woman for an offence he knows she will commit upon release from prison. He knows the woman will resent her imprisonment and, upon release, will seek revenge by attacking him. Sure enough she does this when released. The police arrest her but Judge Farsighted releases her since she has already served her sentence for this crime.

The problem here is that the judge’s verdict (Vp) forms a strange loop with its supposed grounding (the attack on him). He justifies Vp by appeal to the future attack, but Vp is what causes that attack. It just doesn’t seem right that verdicts of punishment could become self-fulfilling prophecies in this manner. Indeed, if judgments did have this effect, their nature would be radically altered. As David Duffy pointed out in the comments to part two, a speeding fine that is imposed in advance of the speeding, at the request of the putative speedster, just doesn’t have the look and feel of a punishment. Rather, it feels more like the speedster is trying to buy permission to speed, and it looks like the person who imposes the fine is granting that permission.

Sorensen makes a similar point about forgiveness and permission: to forgive a person in advance of a wrong seems like giving them permission to commit it. But, of course, all of this links back to the time travel case discussed earlier. The problem with Bruno’s scenario is that it may form a Strange temporal-causal loop. Every event in the chain seems to be appropriately grounded — in particular, the verdict is causally grounded in the crime — but the sequence of events loops back on itself. The question is whether that renders the verdict unjustified.

The answer is that it’s difficult to say. Sorensen acknowledges that there may be some benign loops or ungrounded chains. For instance, he imagines the case of the infinite recidivist who repeatedly gets one year added to their sentence for trying to escape from jail. Though the chain of events at the heart of this case is ultimately without grounding, each event within the chain has an appropriate grounding. So what’s the problem?

The analogies with the cosmological argument in the philosophy of religion are palpable here and the dialectical moves will be the same. Some people will insist that chains of this sort need an ultimate grounding, while others will argue that everything’s fine so long as each link in the chain has an appropriate grounding. I’m honestly not sure where I come down on this issue. In any event, the worry may be a relatively minor one since it would only really seem to arise in the case of a closed temporal loop, and such loops — at least when it comes to punishments — will be rare. To say the least.

Anyway, there is much more to Sorensen’s paper, including many more playful thought experiments and analogies, but I’d best leave it there. Hopefully, the gist of the overall argument is clear enough: pre-punishment is not (usually) morally acceptable because verdicts of punishment must be causally grounded in the crimes for which the punishments are imposed. To reverse the causal relationship between crime and verdicts of punishment would be to do something other than impose a punishment.

Saturday, July 28, 2012

Doping, Cheating and the Olympic Games

Earlier in the year I did a series of posts on the ethics of performance enhancement in sports (and education). Since the Olympic Games have just started I thought it might be worth reposting links to each of the entries in that series. This is perhaps not entirely in keeping with the ethos of the games, but is certainly an important issue in elite sports.

1. Partridge on Performance Enhancement in Swimming

2. Schermer on Enhancement and Cheating (Part One)

3. Schermer on Enhancement and Cheating (Part Two)

4. Tannjso on Enhancement and the Ethos of Elite Sport

5. Overview of the Arguments Against Doping in Sport (Part One)

6. Overview of the Arguments Against Doping in Sport (Part Two)

7. Doping, Slippery Slopes and Moral Virtues

Thursday, July 26, 2012

The Ethics of Pre-Punishment (Part Two)

(Part One)

This series of posts is looking at the ethics of pre-punishment. That is, at the moral propriety of punishing people for crimes that they have not yet committed, but will. This scenario has been popularised by the film (and short story) Minority Report, but it has been considered by philosophers as well. Furthermore, it is arguably a common practice in many parts of the world today where it masquerades under the title “preventive detention” so this is an issue of some practical import, though there may be some distinctions to be drawn between prepunishment and preventive detention.

In part one, I looked at Christopher New’s argument in favour of pre-punishment. New argues for two theses, which I formulated in the following manner:

Temporal Neutrality Thesis: It is morally acceptable to punish people both before and after they commit a crime, provided we know (or believe beyond a reasonable doubt) that they will commit the crime.
Epistemic Constraint Thesis: The only thing that really prevents us from prepunishing people is that we usually lack proper epistemic access to future crimes.

New supported both theses with a thought experiment in which a man named Algy was fined for speeding before he actually sped. The fine looked to be morally acceptable because it was known (beyond a reasonable doubt) that Algy was going to commit the offence. Thus, it looked like pre-punishment was, at least in some cases, morally acceptable.

In this post, I do two things. First, I outline Saul Smilansky’s objection to New’s temporal neutrality thesis. And second, I outline New’s response to Smilansky. This is based on two articles from the journal Analysis, entitled “The Time to Punish” and “Punishing Times: Reply to Smilansky”, respectively. The wit of philosophers shines through in those titles.

1. Temporal Imbalances and Determinism
Smilansky accepts many of the key premises in New’s argument. For instance, he accepts that the pre-punishment scenario is most interesting from a non-consequentialist perspective (on the morality of punishment). This is because consequentialist theories seem to provide obvious resources for supporting punishment of the innocent, so it’s no great surprise to learn that they support pre-punishment. Furthermore, he accepts New’s epistemic claim that we can have proof beyond a reasonable doubt about future crimes. Thus, epistemically speaking, our judgments about pre-punishment and post-punishment could be on a par.

Where Smilansky disagrees with New is in the morality of pre-punishment from a non-consequentialist perspective. As I outlined at the end of part one, New’s main reason for thinking that pre-punishment is unobjectionable from a non-consequentialist perspective is derived from retributivism. As he sees it, retributivism justifies punishment on the grounds that it achieves moral balance: for every culpable wrong performed by X there must be a corresponding harm imposed on X. But retributivism says nothing about the timing of the moral balance. True, in most cases we will not be aware that an imbalance has been created until after the fact so we won't be able to correct it until then, but if we are aware that it will be created in the future, acting pre-emptively to restore the balance is acceptable. This is depicted in the timeline below which depicts Algy’s punishment for speeding.

As you can see, Algy creates a moral imbalance at T3, but that imbalance is corrected at T2. Thus the state of affairs covering the events from T1 to T3 is morally balanced. The same thing could have been achieved by punishing Algy after T3, but from a moral perspective this does not matter. Morally speaking, punishment is justified on temporally neutral grounds.
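New’s balance claim can be put in toy arithmetic terms (a sketch of my own, not New’s): assign each culpable wrong a positive imbalance and each punishment an equal negative one; on the temporal neutrality view, only the sum over the whole interval matters, not the order of the entries.

```python
# Toy retributive ledger (my own illustration of the temporal
# neutrality point): the order of events is irrelevant to the total.
pre_punishment  = [("T2", "punishment", -1), ("T3", "crime", +1)]
post_punishment = [("T3", "crime", +1), ("T4", "punishment", -1)]

def moral_balance(events):
    """Sum of imbalances over the interval; 0 means balanced."""
    return sum(weight for _time, _label, weight in events)

print(moral_balance(pre_punishment), moral_balance(post_punishment))  # 0 0
```

Smilansky’s objection, covered next, is precisely that this order-indifferent picture leaves something out.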

Smilansky rejects this reading of the situation. He argues that there’s more to punishment than achieving moral balance. Another key justifying criterion for punishment is whether it respects persons as agents with responsibility for their actions. As agents, people have the moral capacity to make decisions for themselves. This includes a capacity to refrain from performing previously intended actions. In other words, Smilansky rejects New’s overly linear reading of the timeline in favour of a branched reading of the timeline. The branches in this timeline represent the moral opportunity to change one’s mind. This is represented for Algy below.

Smilansky’s argument is that pre-punishing Algy is objectionable because it fails to respect his moral capacity to refrain from speeding. He argues that this is also part of what makes punishing the innocent so objectionable: it fails to respect people for the choices they have made and can make. It treats them merely as a means to a more desirable end, not as morally autonomous agents capable of creating and following their own ends.

Obviously, Smilansky’s argument is tied to a particular view of the free will and determinism debate. The branching timeline that he envisages for Algy would make no sense for a hard determinist (one who believes there is really only one possible future). But, Smilansky argues, hard determinists would have a hard time justifying punishment anyway since on their worldview concepts of guilt and innocence seem to fall away. So maybe this whole dialectic is not for them; it is only for those who accept the morality of punishment. And for them, Smilansky claims, pre-punishment should be deemed unacceptable for the reasons just stated.

2. New’s Responses
New offers three responses to Smilansky’s respect argument. The first is that Smilansky’s argument could only supply justifying conditions for the punishment of actual persons, not juridical persons like corporations. New thinks that punishment of juridical persons is permissible for reasons that have nothing to do with their moral agency. Corporate agents do not have the same kind of agency as ordinary persons.

This is a difficult claim to evaluate, particularly given that New spends very little time discussing it. He seems to throw it out as a possibility, and leave it at that. List and Pettit’s recent book Group Agency offers what I think is the best available discussion of corporate agency and responsibility and could be used to develop this argument. I haven’t conducted an in-depth study of the book, but as I read them, List and Pettit argue that it is indeed possible to hold corporate agents responsible (like New seems to think) but that this is only because there are sufficient analogies between the conditions for individual responsibility and corporate responsibility. Thus, it may be that, contrary to what New argues, the punishment of juridical persons relies on similar agency conditions to those discussed by Smilansky. In particular, it may rely on some close analogue to the capacity to refrain from certain actions.

New’s second response to Smilansky’s argument is rather meatier and, I think, more persuasive. New agrees with Smilansky that any theory of punishment will need to include some respect for persons as agents. He just disagrees that respecting persons as agents necessarily involves respecting the possibility of them refraining from certain actions. As he sees it, if someone like Algy has declared an intention to perform a certain action and that action normally attracts punishment, we can respect him by assuming he will carry out his intention. Indeed, isn’t this more respectful than what Smilansky demands?

Smilansky’s respect for the capacity to refrain seems tantamount to assuming weakness of the will on the part of Algy, i.e. to assuming he won’t follow through on his declared intent. But weakness of the will undermines moral agency, so why should we respect it? I’ve no doubt Smilansky could respond by distancing his brand of respect from the problem of akrasia, but even if he manages this, the respect argument ends up a draw: it could just as easily support New’s position as it could Smilansky’s.

New’s final response is to offer independent support for the temporal neutrality thesis. This support is provided by way of analogy with the obvious corollary of punishment, namely: reward. New asks us to imagine the Japanese government giving military honours to Kamikaze pilots before they flew on their fatal missions. Although we obviously don’t approve of such missions, New nevertheless asks us: is there anything deeply incoherent or problematic about pre-rewarding people in this manner? He thinks not. And since punishment is just the flipside of reward, there’s nothing problematic or incoherent about pre-punishment either.

This is an interesting argument, and one that I wish New had developed in more detail. It is annoying how the morality of reward is so often left out of discussions of responsibility and punishment, despite its obvious connections to such discussions. New is to be commended for drawing our attention to it. Still, there are those who would question his assumption of symmetry between punishment and reward. I’ve certainly come across some who argue that our criteria for rewarding someone are justifiably laxer than our criteria for punishment, and that this is partly attributable to the asymmetrical way in which we treat harm and benefit. We typically have much higher moral standards for avoiding harm than we do for securing benefit. Since punishment is a harm, it seems reasonable to suppose we have higher standards for its imposition than we do for reward.

Okay, that brings us to the end of this post. So far we’ve looked at the exchange between New and Smilansky. In the next post, I’ll branch out and consider the views of Roy Sorensen, who like Smilansky rejects the propriety of pre-punishment, but for a slightly different set of reasons.

Wednesday, July 25, 2012

The Ethics of Prepunishment (Part One)

Scene from Minority Report

One of my favourite science fiction films of the past decade — despite the presence of Tom Cruise as lead — is Minority Report which is based on a short story by Philip K. Dick. The film depicts an imagined future in which specially-reared mutants have the ability to foresee murders. Capitalising on this ability, the police force set up a pre-crime division. This division tracks down and incarcerates people before they commit a murder. In other words, it pre-punishes people for crimes that they have not yet committed, but will.

The film is great. Its vision of the future is fascinating, with many interesting design elements that are not central to the plot. I highly recommend it (although there is the ending...). The short story is also great, but then again a lot of Philip K. Dick short stories are.

Many potential avenues for philosophical discussion are opened up by the plot, but the one I want to focus on here is the morality of the system of prepunishment itself. Philosophers think and write about everything, so it should come as no surprise to learn that they have thought and written about this topic at length. The classic discussion was an exchange between Christopher New and Saul Smilansky in the journal Analysis in the 1990s. But several more philosophers have chimed in in the intervening years, many directly referencing Minority Report in their work.

I want to focus on some of the arguments in this area over the next few blog posts. I’ll start today by going back to New’s classic article on the topic “Time and Punishment”. I’ll try to set out his argument as best I can in this post before going on to look at the response from Saul Smilansky in the next.

1. Some Background
Before getting into the nitty gritty of New’s argument, some background is in order. First, we should note that we are starting out with the assumption that normal punishment (“postpunishment”) is morally acceptable. In other words, it is okay to punish a person after they have committed a crime. The reason being that there is a desert-relation subsisting between the person and the crime. This relation warrants their punishment. I have covered the topic of the desert relation before.

It is the need for this desert-relation that renders punishment of the innocent (another topic I’ve covered before) so morally abhorrent. We can’t wantonly pick out innocent people and fine them, or incarcerate them or otherwise harm them simply because we’d like to, or simply because it would serve some desirable end. Innocent people don’t deserve to be treated like that. They lack the necessary desert-relation. (Or so the argument goes, at any rate).

But is the same true of people who are punished before they commit crimes? Superficially, it seems like it is: once again the desert-relation is absent. But this may be because we are viewing the situation from a temporally biased direction. If we know the person is going to commit the crime, then what’s the problem? The desert-relation does not yet exist, but it will.

2. New’s Argument for Prepunishment
No doubt, the whole notion of prepunishment will seem logically and metaphysically incoherent to many. But New wants to argue that it is not. He argues that there are no logical, metaphysical or moral impediments to pre-punishment, only practical and epistemic ones. In other words, he thinks it possible to defend the following two theses (my interpolation, based on other stuff I’ve read):

Temporal Neutrality Thesis: It is morally acceptable to punish people both before and after they commit a crime, provided we know (or believe beyond a reasonable doubt) that they will commit the crime.
Epistemic Constraint Thesis: The only thing that really prevents us from prepunishing people is that we usually lack proper epistemic access to future crimes.

To support both theses, he develops a somewhat elaborate thought experiment. One that is coherent and consistent, and in which prepunishment seems acceptable. He then defends the conclusion of the thought experiment from a variety of counterattacks.

The thought experiment is as follows:

Alaskan Speeding: Algy is a well-known speedster who intends to break the speed limit on a remote, unpatrolled Alaskan highway (Wilderness One) tomorrow morning at 10.31. He rings up Ben, a local traffic policeman, to inform him of his intention. He knows that Ben and the police do not have the resources to reach and patrol the highway at that time. He also knows that it will be possible for him to flee the jurisdiction soon after committing the offence. So he offers Ben a deal. If Ben fines him today for the speeding, before he commits the offence, he will pay the fine in full. However, if Ben waits until after 10.31 tomorrow morning, Algy will flee the jurisdiction and never pay the fine. Ben issues the fine today and the following morning Algy breaks the speed limit on Wilderness One.

New’s argument is that there is nothing objectionable about prepunishment in this case. The desert relation is present, and we have knowledge (or belief beyond reasonable doubt) as to guilt. If we still think it is wrong then we need to explain why.

3. Objections and Limitations
New deals with a variety of objections in his article. He does so in a somewhat disorganised and rapid-fire manner. At least, in my opinion he does. I’ll try to cover what I think are the most interesting objections and New’s responses thereto. New’s responses highlight some significant limitations in his argument, so I’ll be covering those too.

First up is the objection that the example only seems compelling because we are punishing Algy for planning to commit an offence, not for actually committing one. There are such things as crimes of planning (e.g. the crime of conspiracy), so it is not surprising that we think it okay to punish Algy for planning to speed. New emphatically denies this interpretation. Suppose the fine that Ben imposes is exactly identical to the fine imposed on those who commit the offence of speeding, and suppose that it is the offence of speeding that is written on Algy’s record. The pre-punishment still seems unobjectionable. Or at least that’s what New thinks.

Related to this is the claim that Algy is being punished for attempting the offence, not for committing it. This is subtly different from being punished for planning the offence. Without getting into too much detail — I’ve discussed the topic before — punishment for attempts sometimes mirrors punishment for completed offences. This might trick us into thinking we have accepted pre-punishment when all we have accepted is punishment for attempts. There are a lot of finicky theoretical concepts to untangle here. All I’ll say is that what Algy has done — i.e. declared an intention — wouldn’t seem like enough for him to have attempted the offence. An attempt usually requires more than a declared intention. That said, if the argument for prepunishment goes through, we may be forced to reconsider how we view attempt liability.

The next objection is epistemic in nature. It points out that in order for punishment to be justified we must know that the offence has been or will be committed. In other words, judgments as to punishment must meet a knowledge criterion. But we could never meet that criterion in the case of future crimes: we could never actually know that the offence was going to be committed at the time of the punishment.

New points to several flaws in this argument. The main one is that the knowledge criterion for punishment isn’t nearly as strong as this objection purports. We are typically uncertain about what happened in the past, but our judgments that punish past conduct are still deemed to be justifiable. All we need is proof beyond a reasonable doubt. Why couldn’t the same be true of the future? Why couldn’t we believe beyond a reasonable doubt that a crime was going to be committed? In fact, isn’t this exactly what is true in the case of Algy? Ben is left in no reasonable doubt as to the fact that Algy will commit the crime: Algy has told him he will, and Algy’s past record suggests that he will definitely follow through on his intention. The other point to make is that even if a relaxed version of the knowledge criterion could not be met in these cases, it is still worth speculating so as to see whether there are any moral, as opposed to epistemic, objections to future punishment. (This is part of the epistemic constraint thesis, highlighted above.)

That brings us to the next objection, one that derives from the morality of punishment. Under consequentialist theories of the desert-relation it looks like future punishment could be justifiable since it could lead to better outcomes. But that’s not really interesting since consequentialist theories also seem to support punishment of the innocent (in certain cases) and that is deemed morally objectionable. The more interesting question is whether retributivist theories, which rule out punishment of the innocent, can force the rejection of pre-punishment.

The initial feeling is that they might. Retributivism is about achieving moral balance: an eye for an eye, a tooth for a tooth, and so on. In the case of the innocent person there is no moral disequilibrium created by their actions and hence no justification for their punishment. The same is true of Algy in the prepunishment scenario: he has not created a moral disequilibrium so punishment cannot be justified.

New thinks this is wrong. He calls for a distinction to be drawn between people who will never commit the crime for which they are punished and people who have not yet committed the crime for which they are punished. It is true that, for the former group, no moral disequilibrium is created, but it is not true for the latter. They do ultimately create a moral disequilibrium, and so do warrant punishment. It's just that it comes after the punishment, not before.

To support his point, New asks us to consider an analogy. If you buy a TV from me, you can justifiably pay for it either before or after delivery. The movement of the TV from me to you creates a disequilibrium of sorts, but as long as it is corrected at some stage it does not really matter when. The same is true of retributive punishment. The misdeeds of the punishee create the disequilibrium, but the moral books can be balanced by punishment either before or after the event.

But this reveals an important limitation in New’s argument, one that he himself recognises. Under his theory, it still must be the case that the moral disequilibrium is created. Thus, forms of punishment that would actually rule out the future performance of the crime would be impermissible. New uses the obvious example of the death penalty here, but incarceration could also have this preventative effect.

This is a significant limitation since it seems to disconnect New’s argument from the debate over preventive detention. Prison sentences are frequently extended, and terrorists detained indefinitely, because of their future risk of committing a crime. The practice is common, even more so in the aftermath of the war on terror. What’s more, preventive detention is used in medicine too, for instance in the quarantining of those with certain infectious diseases. While we could argue over the propriety of some of these preventive punitive practices, New’s argument seems to rule them all out. That looks counterintuitive: can we really not act so as to prevent future crimes?

There’s an easy solution to this: yes, we can act so as to prevent future crimes, we just can’t punish people in order to prevent crimes. In other words, we can engage in non-punitive preventive practices. But this solution looks weird in light of New’s claim that we can punish people for future crimes, provided they actually will eventually commit the crimes. New seems to allow for the commission of crimes in order to ensure moral balance. But surely it's better to avoid moral imbalance in the first place? New may not deny this. He may just be arguing for the propriety of prepunishment in an extremely limited set of cases. But then one has to wonder about the usefulness of his argument. Does it really prove anything interesting?

That pretty much brings us to the end of New’s article. We’ll look at Smilansky’s counterargument in the next post.

Tuesday, July 24, 2012

Blinding, Information Hiding and Epistemic Efficiency

This post is about the importance of information hiding in epistemic systems. It argues (though “argues” may be too strong a word) that hiding information from certain participants in an epistemic system can increase the epistemic efficiency of the overall system. While this conclusion is not particularly earth-shattering, the method adopted to reach it is quite interesting. It uses some of the formal apparatus from Roger Koppl’s work on Epistemic Systems, which combines game theory and information theory in an effort to better understand and intervene in certain social systems.

In what follows, I lay out some of the key elements from Koppl’s theory and then describe a simple model epistemic system (taken from Koppl’s article) that illustrates the importance of information hiding.

1. What is an Epistemic System?
An epistemic system is any social system that generates judgments of truth or falsity. The classic example might be the criminal trial which tries to work out whether or not a person committed a crime. Evidence is fed into this system via witnesses and lawyers, it is then interpreted, weighed and evaluated by a judge and jury, who in turn issue a judgment of truth or falsity, either: “Yes, the accused committed the crime” or “No, the accused did not commit the crime”. Although this may be the classic example, the definition adopted by Koppl is broad enough to cover many others. For example, science is viewed as an epistemic system under Koppl’s definition.

The goal of epistemic systems theory is to adopt some of the formal machinery from game theory and information theory in order to better understand and manipulate these epistemic systems. In effect, the goal here is to develop simple models of epistemic systems, and use these to design better ones. The first step in this process is to identify the three key elements of any epistemic system. These are:

Senders: A set of individual agents who choose the messages that are sent through the system. 
Message Set: The set of possible messages that could be sent by the senders. 
Receivers: A set of individual agents who receive the messages and determine whether they represent the truth or not.

In its more mathematical guise, an epistemic system can be defined as an ordered triple of senders, receivers and messages {S, R, M}, with a formal symbology for representing the members of each set. I will eschew that formal symbology here in the interests of both simplicity and brevity. Full details can be found in Koppl’s article. I will use some elementary mathematics and pictures, such as the following, which represents a simple epistemic system with one message, one sender and one receiver.

The system issues a judgment, and this judgment will either be true or false. Whether it is in fact true or false is not determined by the beliefs of the senders or receivers.

Senders and Receivers are viewed as rational agents, sometimes locked in strategic battles, within these systems. As such they have utility functions which represent their preferences for particular messages or conclusions and they act so as to maximise their utility. One of the key assumptions Koppl makes is that these utility functions will not usually include a preference for the truth. For instance, he assumes that scientists will have a preference for their pet theory, rather than for the true theory; or that lawyers will have a preference for evidence that supports their client’s case, not for the true evidence. In doing so, he adopts a Humean perspective on epistemic systems, believing we should presume the worst in order to design the best. He uses a nice quote from Hume to set this out:

… every man ought to be supposed a knave, and to have no other end, in all his actions, than private interest. By this interest we must govern him, and, by means of it, make him, notwithstanding his insatiable avarice and ambition, co-operate to public good.

This assumption of knavishness is fairly common in rational choice theory and I have no wish to question it here. What does need to be questioned, however, is what represents the “public good” when it comes to the design and regulation of epistemic systems. One could argue about this, but the perspective adopted by Koppl (and many others) is that we want epistemic systems that reach true judgments. To be more precise, we want epistemically efficient systems, where this is defined as:

Epistemic Efficiency: A measure of the likelihood of the system reaching a true judgment. Either: 1 minus the error rate of the system; or the ratio of true judgments to total judgments.

So the goal is to increase the epistemic efficiency of the system. The argument we will now look at claims that information hiding is one way of achieving this.

2. The Importance of Information Hiding
The argument, like all arguments, depends on certain assumptions. One of the advantages of the formal machinery adopted by Koppl is that these assumptions are rendered perspicuous. If you think these assumptions are wrong, the strength of the argument is obviously diminished, but at least you’ll be able to clearly see where it’s going wrong as you read along.

So what are these assumptions? First, we are working with an extremely simple system. The system consists of one sender, one receiver, and two messages. It does not matter what these messages are, so we shall simply denote them as m1 and m2. We shall refer to the sender as S and the receiver as R. S must pick either m1 or m2 to send to R. R does not question whether S is right or wrong. In other words, R always assumes that the message sent by S represents the truth. This is, roughly, illustrated in the diagram below. We assume that m1 has a 0.25 probability of being true, and m2 has a 0.75 probability of being true.

The second crucial assumption relates to the payoff functions of R and S. They are as follows:

Receiver’s Payoff Function = U(m) = 1 (if m=m1) or 0 (if m=m2)
Sender’s Payoff Function = V(m) = Pr(m is true) x E[U(m)]

In other words, we assume that R prefers to receive m1 over m2. And we assume that S’s payoff function is partly determined by what he thinks is the truth, and partly determined by what he expects R’s payoff function to be (E[U(m)] denotes expected utility). This seems like a fairly realistic assumption. Imagine, for instance, the expert witness recruited by a trial lawyer. He will no doubt wish to protect his professional reputation by picking the “true” message from the message set, but he will also wish to please the lawyer who is paying for his services. So if he knows that the lawyer prefers one message over the other, he too may have a bias toward that message. That such biases may exist has been confirmed experimentally, and they may be entirely subconscious.

This is where information hiding comes into play. Look first at the efficiency of the system when there is no information hiding, i.e. when S knows exactly what R’s payoff function is. In other words, when E[U(m)] = U(m).

If S sends m1 then: 
  • (1) U(m) = 1; P(m1) = 0.25 
  • (2) V(m) = P(m1) x E[U(m)] 
  • (3) V(m) = (0.25) x (1) 
  • (4) V(m) = .25

If S sends m2 then: 
  • (5) U(m) = 0; P(m2) = 0.75 
  • (6) V(m) = P(m2) x E[U(m)] 
  • (7) V(m) = (0.75) x (0) 
  • (8) V(m) = 0

Since we assume S acts so as to maximise his payoff, it follows that S will always choose m1 in this system. And since m1 only has a one in four chance of being correct, it follows that the epistemic efficiency of the system as a whole is 0.25. Which is pretty low.
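The arithmetic above can be checked mechanically. Here is a minimal Python sketch of the no-hiding case; the message labels and probabilities are the ones assumed in the text, and the code is just an illustration of the toy model, not Koppl’s own formalism:

```python
# Toy epistemic system: one sender (S), one receiver (R), two messages.
# Probability of each message being true, as assumed in the text.
p_true = {"m1": 0.25, "m2": 0.75}

# R's payoff: R prefers m1 (payoff 1) over m2 (payoff 0).
def receiver_payoff(m):
    return 1 if m == "m1" else 0

# S's payoff with NO information hiding: S knows R's payoff exactly,
# so E[U(m)] = U(m), and V(m) = Pr(m is true) * U(m).
def sender_payoff(m):
    return p_true[m] * receiver_payoff(m)

# S maximises his payoff over the message set.
chosen = max(["m1", "m2"], key=sender_payoff)

# Since R believes whatever S sends, the system's efficiency is just
# the probability that the chosen message is true.
efficiency = p_true[chosen]

print(chosen, efficiency)  # m1 0.25
```

With full knowledge of R’s preferences, S sends m1 (payoff 0.25 beats 0), so the system’s efficiency is stuck at 0.25, matching the calculation above.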

Can efficiency be improved by hiding information about R’s preferences from S? Well, let’s do the math and see. Assume now that S has no idea what R’s preferences are. Consequently, S adopts the principle of indifference and assumes that R is equally likely to prefer m1 or m2. In other words, in this scenario E[U(m)] = (0.5)(1) + (0.5)(0) = 0.5.

If S sends m1 then: 
  • (1*) E[U(m)] = 0.5 ; P(m1) = 0.25 
  • (2*) V(m) = P(m1) x E[U(m)] 
  • (3*) V(m) = (0.25) x (0.5) 
  • (4*) V(m) = 0.125

If S sends m2 then: 
  • (5*) E[U(m)] = 0.5; P(m2) = 0.75 
  • (6*) V(m) = P(m2) x E[U(m)] 
  • (7*) V(m) = (0.75) x (0.5) 
  • (8*) V(m) = 0.375

S’s preference now shifts from sending m1 to sending m2. And since m2 has a three in four chance of being correct, the epistemic efficiency of the system is increased from 0.25 to 0.75. This is a significant improvement. And, if the assumptions are correct, it illustrates one significant way in which to improve the overall efficiency of an epistemic system.
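The information-hiding case can be checked mechanically too. A minimal Python sketch, using the same probabilities and the principle-of-indifference assumption from the text:

```python
# Toy epistemic system with R's preferences HIDDEN from S.
p_true = {"m1": 0.25, "m2": 0.75}

# Under the principle of indifference, S assigns each message the same
# expected receiver payoff: E[U(m)] = (0.5)(1) + (0.5)(0) = 0.5.
expected_receiver_payoff = 0.5

# S's payoff: V(m) = Pr(m is true) * E[U(m)].
def sender_payoff(m):
    return p_true[m] * expected_receiver_payoff

# S maximises his payoff over the message set.
chosen = max(["m1", "m2"], key=sender_payoff)

# R believes whatever S sends, so efficiency is the probability
# that the chosen message is true.
efficiency = p_true[chosen]

print(chosen, efficiency)  # m2 0.75
```

With E[U(m)] the same for both messages, S’s payoff is driven entirely by the probability of truth (0.375 for m2 versus 0.125 for m1), so blinding S to R’s preferences triples the system’s efficiency.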

As I said at the outset, this is not a particularly earth-shattering conclusion. Indeed, it is what motivates blinding protocols in scientific experimentation. What’s nice about the result is the formal apparatus underlying it. This formal apparatus is flexible, and can be used to model, evaluate and design other kinds of epistemic system.

Friday, July 13, 2012

Is Death Bad or Just Less Good? (Part 4)

(Part One, Part Two, Part Three)

This is the final part in my series of posts looking at the badness of death. The series works off the article by Aaron Smuts entitled “Less Good but Not Bad: In Defense of Epicureanism about Death”. In the article, Smuts defends a position he calls innocuousism, which holds that death is not bad for the one who dies.

The argument behind this is called the Dead End Argument (DEA). This argument holds that death is neither intrinsically nor extrinsically bad because it is an experiential blank. When one is dead one has no mental experiences, and only mental experiences carry value. The key premise of the DEA sets forth something Smuts calls the “causal hypothesis”:

(4) Causal Hypothesis: An event is extrinsically bad if and only if it leads to intrinsically bad states of affairs.

The causal hypothesis is challenged by defenders of the deprivation account (DA). The DA holds that death is bad because it deprives the person of the good experiences they would have had if they had remained alive. In the previous post, we saw how Smuts rejected the DA. He did so on two, interrelated, grounds. First, because it leads to absurd conclusions, such as that a person whose life is filled with positive experiences is actually living a bad life because things would have been better (however marginally) if they had made different choices. Second, because it confuses a state of affairs’ being less good with its being bad.

Despite this rejection of the DA, Smuts’s argument is not home and dry. There are three question marks still hanging over it. First, there is a worry that the causal hypothesis itself leads to absurd conclusions. Second, there is a worry that the DEA as a whole would alter our attitude toward the wrongness of killing. And third, there is a worry that the DEA is incompatible with our seemingly well-founded fear of dying.

In this post, we’ll see how Smuts responds to each of these worries.

1. Is it bad to deny anaesthesia?
A long time ago — all the way back in part two — I put forth the following thought experiment (taken from Smuts):

Denying Anaesthesia: Suppose a person is undergoing surgery that will alleviate some serious condition. The surgery clearly leads to an intrinsically good state of affairs. But the surgery itself can be performed in two ways: (a) with anaesthetic; or (b) without. If it is performed without anaesthetic, the person will not die, but will be in considerable pain. If it is performed with anaesthetic, this is avoided.

I then asked the question: would it be bad (extrinsically) to deny anaesthesia to the person undergoing this surgery? Or, rather, would it be good (extrinsically) to administer the anaesthetic? The answer seems obvious: Of course it would!

This is where the causal hypothesis runs into trouble. In the scenario just described, the result of the surgery is the same irrespective of whether it is performed with anaesthetic or without. So administering an anaesthetic does not lead to an intrinsically better or worse state of affairs. Thus, it would seem that, under the causal hypothesis, denying anaesthetic is not bad and administering it is not good. Surely this is absurd?

Smuts, as I read him, offers two replies. First, he notes that whether the anaesthetic leads to intrinsically good states of affairs or not depends on how fine-grained an analysis of the scenario you undertake. True, the outcome of the surgery is the same. But the person undergoing that surgery will (presumably) be in great pain while it is ongoing if they are denied the anaesthetic. Experiences of great pain are intrinsically bad, and so denying anaesthetic would be extrinsically bad since it would causally contribute to such experiences. Thus, even under the causal hypothesis, denying anaesthetic would be bad. Absurdity avoided.

Second, Smuts argues that maybe administering anaesthetic is not all that good. Rather, it is far (far) less bad. He admits that this position seems counterintuitive at first glance but enjoins us to give it a chance. It’s not too bad once you get to know it. The reasoning is as follows: any state of affairs in which the administration of such anaesthetic is needed to avoid pain must be itself bad (i.e. you must be sick or dying). So is it not true to say that you’d rather not have to be anaesthetised in the first place? It’s just that, given your predicament, the surgery is necessary and so undergoing the surgery with anaesthesia is far less bad than undergoing it without.

This second point is not trivial. For one thing, it tracks the kinds of distinctions (between less good and bad, and less bad and good) that the DA failed to track. This was a major factor in the patent absurdity of the DA. Furthermore, it might explain why we have a moral duty to anaesthetise a person (if we are, say, the surgeon). Duties, Smuts argues, tend to be oriented toward preventing the bad, not toward promoting the good. If administering anaesthesia is understood as promoting the good, not just as preventing the bad, then it would be less likely for us to have a duty to do it. But we do have such a duty. So, since under his account the administration of anaesthesia is about preventing a far worse situation, not about promoting a good one, we seem to have an additional reason to support the causal hypothesis.

2. The Wrongness of Killing
So much for the causal hypothesis. How about the conclusion of the DEA itself? As you recall from part one, the argument concludes that:

(7) Therefore, death is not prudentially bad for the person who dies.

Is this conclusion not wildly at odds with our views about the wrongness of killing? If it turns out that death is not bad for the one who dies, then what’s so bad about people going around and randomly euthanising other people? But isn’t that just insane?

Consider the process of reasoning here. First, one accepts the logic of the DEA. Then, one realises the clash between it and one’s beliefs about the wrongness of killing. Then one reasons: my belief in the wrongness of killing is about as solid as any belief I could possibly have (after all, it’s the foundation of every ethical theory and every human society). My belief in the DEA is rather less solid. So if the DEA clashes with my belief about the wrongness of killing, the DEA itself must be wrong.

This is a tempting line of thought. Is there anything that a defender of the DEA can say in response? Smuts is somewhat cagey, but he suggests two possible lines of counterargument.

The first line highlights the distinction between something being prudentially bad and something being morally bad. An event is prudentially bad if it reduces the welfare of the person who is subject to it; an event is morally bad if it breaches some moral principle or causes some moral disvalue. The DEA is an argument about prudential badness, not moral badness. The two are only equivalent under the normative position known as welfarism, which reduces all normative claims to claims about individual welfare. But welfarism is rejected by many. So it's not clear that the DEA entails that killing is okay.

The second line suggests that, even if welfarism is true, the morality of killing might be more complex than our initial gloss on it suggests. Under the DEA, death is not deemed prudentially neutral; it is deemed less good. If it were prudentially neutral, then the permissibility of killing might follow, but since it is not, it may not. We might still have an obligation to prevent less good states of affairs and so the DEA need not entail an absurdity.

Smuts accepts there is more to be said, but he leaves it there. Others have taken up this topic in more depth and I hope to look at their arguments at a later stage.

3. The Fear of Death
A final problem (for now) with the DEA is the implications it has for our fear of death. Like our beliefs about the wrongness of killing, our fear of death and dying seems pretty solid. That is to say, we seem to have good grounds for fearing our deaths. But if the DEA is true then our death is not bad for us, so why should we fear it?

There are a lot of things to be said about this. First of all, one must distinguish between death and dying. The process of dying might be painful and drawn out, i.e. riddled with intrinsically bad states of affairs, and so we might have good reasons for fearing that. Additionally, there is the fact that Epicureanism about death, at least as traditionally conceived, was all about removing the fear of death. If that’s one of the implications of the DEA, so be it.

Nevertheless, Smuts thinks there probably are good reasons to be anxious or saddened by one’s demise (though perhaps not stricken with fear at the prospect). Death may prevent us from completing our projects, or cut us down before we reach our potential. These are things to be deeply regretted. The DEA does not deny the appropriateness of such emotions. So while it may not encourage fear, it may not be all that counterintuitive.

4. Conclusion
To sum up, this series of posts has looked at Aaron Smuts’s defence of innocuousism. This is the view that death is not bad for the one that dies. Innocuousism relies on the causal hypothesis, according to which an event is extrinsically bad if and only if it leads to an intrinsically bad state of affairs. Since being dead is not intrinsically bad (because it is an experiential blank), it follows from the causal hypothesis (and some other premises) that death is not bad for the one who dies.

The causal hypothesis is challenged by defenders of the deprivation account of the badness of death. According to them, death is bad because it deprives us of states of affairs that could have been good. But as we saw in part two, the deprivation account has some absurd implications. Primarily, this is because it fails to track the distinction between less good and bad. The causal hypothesis tracks this distinction and so is to be preferred.

Nevertheless, as we saw in this post, the causal hypothesis, and innocuousism more generally, have troubling implications of their own. However, each of these can be neutralised. The upshot is that death is not bad, but it may be less good. Defenders of life extension policies will no doubt be encouraged by this, but at the same time they need not fear death as much as they might.

Thursday, July 12, 2012

A Request


I don't really like to do much work promoting this blog or my own academic work. Something within me recoils at the thought. But at the same time, lurking beneath my veneer of modesty, there is an ego longing for attention and acclaim. And while I'd probably continue writing blog posts even if no one read them, part of me would like to have (a few) more readers. Also, part of me knows that promotion of this sort is necessary in my line of work.

Consequently, I'd like to issue a mild plea for people to share my blog posts - if they like them - on the various social media outlets included in the buttons on this blog. I know it's not difficult since I do it myself all the time. The buttons are there on each blog post, and pretty much everyone is active on some social network these days. If you did, I would be most appreciative. And I'd like to thank all the people who do this already. I may not get the chance to thank you all personally, but I really am grateful.

Finally, I'd also like to draw people's attention to the fact that you can follow the blog on twitter, facebook and google plus (which makes sharing posts even easier). The links are all in the sidebar. You can also follow my actual academic work on academia.edu (link is in the sidebar again). I haven't updated my page on academia.edu in a while. But I will definitely be doing so quite soon. I have quite a number of works in progress at the moment, and will be uploading them in due course.

Anyway, that's enough of a plea for the time being. I may be engaging in other shameful acts of self-promotion before the end of the summer. You have been forewarned.

Is Death Bad or Just Less Good? (Part 3)

(Part One, Part Two)

This brief series of posts is looking at the supposed badness of death. The series is working off Aaron Smuts’s article “Less Good but not Bad: In Defence of Epicureanism About Death”. In the article, Smuts defends a position he calls innocuousism. This is a specific version of the Epicurean position that death is not bad for the one who dies.

Smuts uses the Dead End Argument (DEA) to support innocuousism. According to the DEA, death is not bad because it is an experiential blank, and only experiential states can be good or bad. The centrepiece of the DEA is the so-called causal hypothesis:

(4) Causal Hypothesis: An event is extrinsically bad if and only if it leads to intrinsically bad states of affairs.

This hypothesis allows Smuts to reach the conclusion that death is neither intrinsically nor extrinsically bad for the one who dies. But this thesis is challenged by defenders of the Deprivation Account (DA) of the badness of death. According to the DA, death is bad because it deprives you of good experiences that you might otherwise have had.

In the previous post, we looked at a variety of thought experiments and arguments that are used by defenders of the DA. These thought experiments were designed to pump the intuition that counterfactual assessments of goodness — that is, comparative assessments of the value of different possible worlds close to our own — are relevant when determining whether what happens is actually good or bad. This was summed up in the OVT (overall value thesis), which read:

OVT: The overall value of a state of affairs P for a subject S at a time T in a world W is equal to the intrinsic value of W for S at T, minus the intrinsic value for S at T of the nearest world to W at which P does not obtain. 

In this post, we’ll look at how Smuts responds to defenders of the OVT. In essence his argument boils down to the following: the OVT is wrong because it leads to absurd conclusions. To be more precise, the OVT is wrong because it conflates a state of affairs’ being less good with its being bad (all things considered).

1. Counterfactual Thought Experiments and Less Goodness
Recall from part two, the Joe College thought experiment. In this thought experiment we are invited to imagine the choices facing Joe before he goes to college. He can choose between college A and college B. If he goes to A, he will study accounting, become a reasonably successful accountant, and live a generally happy life. If he goes to B, he will study philosophy, discover a deep passion for the subject, become a world-renowned philosopher at a top university, and live a much happier life.

According to proponents of OVT, if Joe chooses to go to A his life will be bad, all things considered. This is because his state of well-being in that world will be less than his state of well-being in the world in which he chooses to go to B. And since the OVT forces us to determine overall value by a comparison between these two worlds, it follows that A is bad because B is better.

But is this really credible? Consider a structurally similar example where the differences between college A and college B are rather more trivial.

Joe Coffee: Joe has a choice between two colleges, A and B. If he goes to A, he will major in math, go to graduate school and land a great job at a research university. There he will live out a comfortable and intellectually stimulating existence. If he goes to B, he will major in philosophy, go to graduate school and land a great job at a university. There he will live out a comfortable and intellectually stimulating existence. Joe would find philosophy equally as fulfilling as mathematics, and his general life circumstances would be equivalent. However, if he became a math professor he would be in a department with a great cappuccino machine, whereas the philosophy department would have a lousy coffee machine. (taken from p. 208)

Now suppose Joe went ahead and chose college B. Would his life be bad, all things considered? Surely not. Surely the mere fact that he chose a rewarding career in philosophy (+ bad coffee) over a rewarding career in mathematics (+ good coffee) does not make his life bad. It might make it (very marginally) less good but that’s a different matter.

Here is where the absurdity of the OVT reveals itself. If the OVT is true, then the conclusion that Joe’s life in college B would be bad (all things considered), merely due to the absence of the cappuccino machine, would seem to follow. After all, OVT enjoins us to determine value based on the comparative assessment of worlds. Since the value of the world in which Joe goes to B, minus the value of the world in which he goes to A, is negative, it follows that his life in that world is bad. But this cannot be right since nothing bad actually happens to him in that world. In fact, most of what happens to him is very good.

This gives us the following argument against OVT (and its ilk):

  • (13) Suppose: Joe has a choice between going to college A and college B. His life after choosing college B would be roughly equivalent in value to his life after choosing A, with the sole exception being that his life in A would come with better coffee. Joe chooses college B and misses out on the good coffee. 
  • (14) If OVT is true, then Joe’s life is bad, all things considered. 
  • (15) In no sense could Joe’s life be deemed bad (all things considered) after going to college B (he has a successful and rewarding career and a comfortable, well-heeled existence after all). 
  • (16) Therefore, OVT must be false.

At this point, I should say that Smuts supports this criticism of OVT (and the deprivation thesis more generally) with several other thought experiments. These, while structurally similar to Joe Coffee, might be more persuasive to you, so I urge you to check them out. I chose Joe Coffee because I thought it was the starkest and most ridiculous of them all; the one that brings into clearest relief the problems with OVT. The above argument could be tweaked to incorporate your preferred thought experiment.

2. Whither then the Deprivation Thesis?
One could legitimately wonder why the OVT goes off the rails like this. After all, when we looked at some of the thought experiments in the previous post the OVT looked pretty compelling. Two reasons might account for this.

The first is that the comparativism at the heart of the OVT is an intrinsically fuzzy and problematic way of assessing value. The actual world will turn out in some particular way. We could compare the value of that actual world to any number of possible worlds. Depending on how we do this, the actual world may look very bad or very good. After all, every outcome is bad measured against some set of alternative outcomes and good measured against another set of alternative outcomes. If there are no real restrictions on which set of alternative outcomes can be included — and the OVT provides no such restrictions beyond the fact that the comparator world must not be one in which the event under consideration in the actual world occurred — the comparison is meaningless. One could tweak the conditions of the thought experiment however one liked to pump the desired intuition. This is not a good way to reach philosophical conclusions.

The other explanation for why the OVT seems compelling is that it might work quite well for guiding decision-making, but not for assessing overall value. When we are reasoning about what we ought to do, consideration of relevant counterfactuals is important. We want to make the best possible decision, and so we rule out possible choices on the grounds that they lead to worse outcomes. We deem these decisions “bad” as a result. But this does not mean that our lives are bad if we make the wrong decision. Our lives could still be quite good, even if our decision were “bad”.

So the OVT is wide of the mark when it comes to the assessment of the overall value of a state of affairs. What implications does this have? Does the causal hypothesis win simply because the OVT does not? Not exactly. Smuts’s argument reveals that the OVT misses an important distinction, namely: the distinction between a bad state of affairs and one that is less good. The positive argument in favour of the causal hypothesis is that it properly tracks this distinction.

We can see this if we apply the causal hypothesis to the Joe Coffee example. According to the causal hypothesis, Joe’s life after choosing college B cannot be deemed bad because it neither leads to, nor consists in, an intrinsically bad state of affairs. In fact, quite the opposite: the states of affairs in that world look to be intrinsically and extrinsically good. At the same time, the causal hypothesis respects the fact that Joe’s life after choosing college A would have been (however marginally) better. Thus, the causal hypothesis avoids conflating less good with bad.
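The contrast between the two verdicts can be made concrete with a toy calculation. The utility figures and function names below are my own invention, purely for illustration of how the two theses diverge on the Joe Coffee case:

```python
# Toy contrast between OVT and the causal hypothesis.
# All utility figures are hypothetical.

def ovt_verdict(actual: float, nearest_alternative: float) -> str:
    """OVT: overall value = value of the actual world minus the value of
    the nearest world where the state of affairs does not obtain;
    a negative result counts as 'bad'."""
    return "bad" if actual - nearest_alternative < 0 else "not bad"

def causal_verdict(intrinsic_outcomes: list) -> str:
    """Causal hypothesis: an event is extrinsically bad only if it leads
    to intrinsically bad (here: negative-valued) states of affairs."""
    return "bad" if any(v < 0 for v in intrinsic_outcomes) else "not bad"

# Joe Coffee: college B (lousy coffee machine) vs college A (cappuccino machine).
value_b, value_a = 99.0, 100.0

print(ovt_verdict(value_b, value_a))   # "bad" -- conflates less good with bad
print(causal_verdict([value_b]))       # "not bad" -- nothing bad happens to Joe
```

The point of the sketch is just that OVT’s subtraction delivers a “bad” verdict on a life whose every ingredient is good, while the causal hypothesis, looking only at the intrinsic quality of what actually happens, does not.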

3. What Next?
So things are looking up for the DEA. The causal hypothesis was the crucial link in the chain and, now that the deprivation thesis has been knocked down, it looks to be solid. But this does not mean the DEA is out of the woods.

As we saw in part two, the causal hypothesis looks like it too leads to an absurd conclusion — viz. it suggests that denying anaesthetic to someone undergoing an operation is not bad. Furthermore, it looks like the conclusion of the DEA might have some counterintuitive implications of its own. In particular, it looks like it might warrant a more lacklustre prohibition (if any) against gratuitous killing and a less anxious attitude toward one’s demise. We’ll see how Smuts deals with these three problems in the final part of the series.

Wednesday, July 11, 2012

Eyewitness Enhancement and the Common Good

I am going to briefly interrupt my series on the badness of death to look at an argument on human enhancement. I’ll get back to musing about death and immortality over the weekend. I find weekends are conducive to such things.

A while back, I took a look at an argument by a pair of philosophers (Anton Vedder and Laura Klaming) proposing that enhancement technologies might be used for the common, not just the personal, good. To be more precise, the authors argued that transcranial magnetic stimulation (TMS) might be used to enhance eyewitness memory and recollection, and that this in turn might serve a common set of interests in identifying those who are guilty of crimes (and exonerating those who are innocent).

Vedder and Klaming would be the first to admit that their argument is speculative in nature. The memory-enhancing effects of targeted TMS have only recently been discovered and are still poorly understood. Nevertheless, they submit that the argument they use to support its (speculative) use could significantly alter the landscape of the enhancement debate.

This is for two reasons. First, traditional debates over the merits of enhancement often hinge on an (arguably meaningless) distinction between treatment (which is okay) and enhancement (which is contested). Justifying enhancement in the name of the common good has the nice effect of sidestepping this distinction. The treatment-enhancement distinction matters at the individual level, since what counts as treatment and what counts as enhancement is indexed relative to the “normal” capacities of the individual. But it does not matter from the perspective of the common good. From that perspective, all that matters is whether an intervention serves a common end. Any indexing to “normal” and “supernormal” capacities falls away. In addition, focusing on the common good sidesteps some traditional arguments against enhancement, which tend to highlight its self-regarding and self-interested nature.

But Vedder and Klaming’s argument has been criticised by a number of people. I examined Hauskeller’s criticisms in my earlier post. In this post, I want to take a look at another criticism which comes from the inauspiciously titled article “Is Invading the Sacred for the Sake of Justice Justified?”, by Pepe Lee Chang and Diana Buccafurni. I say this is “inauspicious” since the use of the word “sacred” is both unnecessary and off-putting, to me at any rate.

Nevertheless, the article is not a total wash (it’s only three pages!) and it does present a mildly interesting objection to Vedder and Klaming’s proposal. I want to set out that objection here in formal terms and briefly consider possible replies to it. I won’t, unfortunately, go into much detail on the possible replies since a fuller reply will feature as a (small) part of a longer academic article I am currently working on. It’s not that I’m being secretive, it is just that this reply is not completely formulated in my mind right now.

1. The Argument from the Diminishment of Individual Worth
The objection that Chang and Buccafurni offer works from the premise that the neurocognitive enhancement of eyewitness memory (NEEM, as they call it) could undermine the worth of the individual. Consequently, I call it the argument from the diminishment of individual worth (ADIW for short). (Aren’t philosophical acronyms — or rather initialisms — wonderful?)

The ADIW is compactly expressed in one paragraph of Chang and Buccafurni’s article, which I shall quote here in full:

If cognitive capacity manipulation is accepted because it benefits the common good, this would mean that it is also accepted that individual good is worth sacrificing for the common good. We define individual good not simply as the absence of physical or psychological pain but as the presence of respect for cognitive capacities as an intrinsically valuable end in itself. If respect for human cognitive capabilities is treated as an end in itself, then accepting the manipulation of these capacities for the common good is a violation of this respect. In other words, accepting that the individual good is worth sacrificing for the common good violates the intrinsic value of the individual. As a result, the worth of the individual is diminished.

Let’s try to unpack all of this. In essence, Chang and Buccafurni are pointing to a deep tension between the common good — in this case represented by the desire to prosecute the guilty and exonerate the innocent — and the individual good. The former is about achieving desirable social ends, the latter is about respecting intrinsic personal properties like autonomy and cognition. Because of this tension, pursuing enhancement for the common good undermines respect for the individual. The individual is seen as a mere means to a particular social end, not as an end in him- or herself.
The argument could be formalised in the following manner:

  • (1) If a proposal advocates the manipulation of a cognitive capacity in order to achieve a socially desirable end, instead of respecting that capacity for its intrinsic value, then it treats a person as a mere means to an end, not as an end in their own right. 
  • (2) Vedder and Klaming’s NEEM-proposal advocates the manipulation of a cognitive capacity in order to achieve a socially desirable end. 
  • (3) Therefore, the NEEM proposal treats people as mere means to an end, not as ends in their own right. 
  • (4) It is bad to treat people as a means to an end because it diminishes their self-worth. 
  • (5) Therefore, the NEEM proposal is bad.

There are several complaints one may have about this argument. For starters, one may wonder how exactly individual self-worth is diminished. There is probably an answer to this. And that answer could be sketched as follows. If an individual sees themselves as a mere cog in a machine — i.e. just one manipulable contributor to the efficiency of the justice system — then they begin to lose the sense of themselves as autonomous creators and pursuers of their own conception of the good. And the more the impression of being a cog in a machine is reinforced, the more this sense is lost. But that sense is essential to self-worth, hence if it is lost self-worth is diminished.

That seems fair enough to me, but it looks like there is another, bigger complaint about the argument. This is that it assumes there is always a tension between the common good and the individual good. If the common good is seen as the aggregation of individual good, and not as some emergent sui generis property, then there is no necessary tension between the two. It is perfectly possible that the individual good could serve the common good, and vice versa. The notion that one has to be sacrificed to save the other might be illusory. Of course, this is something that has to be proven in the particular case of NEEM, but I think this is possible and is something I am currently trying to develop.

2. Informed Consent and Individual Worth
There’s another thing too. The criticism that Vedder and Klaming’s proposal could lead to tensions between common interests and individual interests seems quite obvious. So obvious in fact that you’d expect Vedder and Klaming to anticipate it and offer some kind of response. As a matter of fact, they do exactly that. And, as another matter, Chang and Buccafurni respond to their suggestion. I’ll close by briefly considering both.

Vedder and Klaming note that NEEM could be in tension with certain individual rights such as privacy and autonomy. Their solution is to adopt an informed consent model for the use of NEEM. In other words, only those eyewitnesses who have given informed consent to the use of TMS would undergo the enhancement. Informed consent is a popular way to resolve potential autonomy violations in medicine and other fields, so this looks like a promising solution.

But as Chang and Buccafurni note, there are at least three problems with it. First, there is the fact that many statutes, in many parts of the world, make the obstruction of justice an offence. Arguably, if what is stopping people from helping the police in their investigations is the fact that they have not undergone NEEM, they are obstructing justice. The penalties associated with this could have a coercive effect and undermine the informed consent model. Second, there is the fact that since the common good is at stake, it would be difficult for one individual to subvert it. In other words, the fact that NEEM was recognised as being in the common interest would create a coercive social pressure which would limit effective consent. Finally, there is the problem of unintended consequences, which might undermine the “informed” nature of the consent. As the authors note, memories are complex and sometimes traumatic and painful. If NEEM leads to the general improvement of memory, it could lead to these painful memories being dredged up too. We may think a general warning to this effect would be sufficient to satisfy the requirements of informed consent, but Chang and Buccafurni dispute this.

I think Chang and Buccafurni are on somewhat more solid ground with these criticisms. In particular, I agree that enhancement technologies can create social pressures, and that these social pressures can be coercive. I just don’t think that this is necessarily the case and I think this can be shown by pursuing my earlier line of criticism, which would highlight a deep constitutive relationship between personal and common goods. Developing this line of criticism is something I plan to do quite soon.

Sunday, July 8, 2012

Is Death Bad or Just Less Good? (Part Two)

(Part One)

This post is the second part in a brief series looking at the infamous Epicurean argument that death is not bad for the one who dies. The series is working off Aaron Smuts’s recent article “Less Good but not Bad: In Defense of Epicureanism about Death”. In the article, Smuts defends the innocuousist position, which holds that death is not prudentially bad because it is an experiential blank, i.e. it cannot be bad because no positive or negative experiential states are associated with it.

The argument that Smuts uses to defend this conclusion is called the Dead End Argument (DEA). This was outlined in full in part one. The DEA relies on a number of controversial premises, but the most controversial is the so-called causal hypothesis which states (numbering taken from the DEA in part one):

(4) Causal Hypothesis: An event is extrinsically bad if and only if it leads to intrinsically bad states of affairs.

The causal hypothesis is based on the idea that if an event is not in itself intrinsically good or bad, it can only derive its goodness or badness from its causal contribution to an event or state of affairs that is intrinsically good or bad. This is controversial because it contradicts a popular thesis about the badness of death, the so-called Deprivation Thesis. We’ll be refining this thesis later in this post, but for now the following formulation will do:

Deprivation Thesis: Life is prudentially good because it is associated with enjoyable and pleasant experiences. The state of non-being (death) is bad because it deprives you of those states.

The deprivation thesis depends on a counterfactual claim. This is inimical to the causal hypothesis. The deprivation thesis tells us that death is bad, even if it is an experiential blank and even if an experiential blank is not intrinsically bad, because it deprives us of something that would have been valuable if it had been the case, namely: continued existence. In this post we will look at some of the arguments advanced by defenders of the deprivation thesis. In the next post we’ll look at Smuts’s responses to these arguments.

1. Joe College and the Taliban Girl
The most popular way to defend the deprivation thesis and reject the causal hypothesis is through the use of thought experiments. These thought experiments are designed to pump our intuitions about when something is extrinsically (or intrinsically) good or bad. Reflection on those intuitions is supposed to reveal the flaws in the causal hypothesis. We’ll be looking at three such thought experiments here: two suggesting that the causal hypothesis has the wrong account of extrinsic badness, and one looking at the other side of the equation and suggesting that it has the wrong account of extrinsic goodness.

We start with the two dealing with extrinsic badness. Consider the following (from Feldman, a defender of the deprivation thesis):

Joe College: Joe is admitted to two colleges (A & B). If he goes to A he will study accounting, become a moderately successful accountant and live a reasonably good life. If he goes to B, he will study philosophy, discover a passion for the subject, and pursue a highly successful and fulfilling career at a top university. His life would be better if he went to B rather than A.

Suppose Joe decides to go to A. Was this a bad decision? Feldman thinks it undeniably so. In fact, he thinks that because going to B would have been better than going to A, going to A is extrinsically bad: it leads to a worse state of affairs. This, in effect, is the counterfactual claim at the heart of the deprivation thesis and it contradicts the causal hypothesis. Under the causal hypothesis, Joe’s choice of A is not extrinsically bad because Joe’s subsequent life is not bad; indeed, it is reasonably good.
To emphasise the point raised by the Joe College thought experiment, consider another thought experiment:

Taliban Girl: Suppose there is a girl living in a repressive, fundamentalist Islamic culture that forbids teaching women how to read. If she is raised in this culture, she will remain illiterate but will otherwise live a reasonably good life. If she had been raised elsewhere, and taught to read, she would have developed a love for poetry, become a great poet and have lived a much better life.

Supposing she is raised in the fundamentalist culture, is her life bad? Again, it seems like it is. The fact that things would have been better if things were different seems to make the life she lived bad, even though the positive aspects of the life she did live outweigh the negative aspects (i.e. the illiteracy). This contradicts the causal hypothesis. The two possible lives are illustrated in the diagram below, and the badness of the illiterate existence is highlighted.

Taliban Girl - Thought Experiment

There are lots of problems with the two thought experiments, many of which will be raised in subsequent parts. But granting their plausibility for now, Feldman thinks they support the following account of extrinsic badness (this is labelled EI, for some reason that is not revealed in Smuts’s article):

EI: Something is extrinsically bad for a person if and only if he or she would have been intrinsically better off had it not taken place.

This thesis explains the intuitive reactions to the Joe College and Taliban Girl thought experiments, and replaces the causal hypothesis.

2. Denying Anaesthesia
But EI only looks at half the picture. If the causal hypothesis claims that an event is only extrinsically bad if it causally contributes to an intrinsically bad state of affairs, it stands to reason that the symmetrical position is also true, i.e. that an event is extrinsically good if and only if it contributes to an intrinsically good state of affairs. As Smuts puts it, any defence of the causal hypothesis that appealed to an asymmetry between good and bad would be ad hoc.

But there is a seemingly compelling counterexample to the causal hypothesis when it is applied to extrinsic good. Consider:

Denying Anaesthesia: Suppose a person is undergoing surgery that will alleviate some serious condition. The surgery clearly leads to an intrinsically good state of affairs. But the surgery itself can be performed in two ways: (a) with anaesthetic; or (b) without. If it is performed without anaesthetic, the person will not die, but will be in considerable pain. If it is performed with anaesthetic, this is avoided.
Denying Anaesthesia - Thought Experiment

Now the question: is it extrinsically good (better) to administer anaesthetic, even if it is not absolutely essential to the success of the surgery? It seems obvious that it is. But, bizarrely, the causal hypothesis denies this. According to the causal hypothesis, performing surgery with or without anaesthetic is extrinsically the same since both lead to the exact same outcome. Administering anaesthetic does not causally contribute to an intrinsically better state of affairs so it can’t be extrinsically good.

This, as I say, seems bizarre and wrong. And developing a thesis that accounts for why this is would be desirable. EI doesn’t do this since it only looks at extrinsic bad, so a broader thesis is needed. Bradley, another defender of the deprivation account, offers one such thesis. He calls it the OVT (Smuts does not say why, but I’m guessing it stands for “Overall Value Thesis”):

OVT: The overall value of a state of affairs P for a subject S at T is equal to the intrinsic value of T for S at W, minus the intrinsic value of T for S at the nearest world to W at which P does not obtain. (Where W = a world, and T = a time)

OVT fully embodies the counterfactual claim at the heart of the deprivation thesis. It claims that the value of a state of affairs can only be assessed by comparing it to the value of the state of the nearest possible world in which that state of affairs does not obtain. This supplies all we need for a deprivationist argument for the badness of death. For sake of completeness, let’s spell out that argument here (with numbering continuing from part one):

  • (8) OVT is true: the overall value of a state of affairs P for a subject S in a particular world at a particular time is equal to the value of the world at that time to S, minus the value of the nearest possible world at that time to S, in which P does not obtain.
  • (9) If you die at time T, you cease to exist and cease to have any more positive experiences (call this state of affairs P1). 
  • (10) In the nearest possible world in which P1 does not obtain, you do not die and continue to have positive experiences (call this state of affairs P2). 
  • (11) The value of P2 (for you) exceeds the value of P1 (for you). 
  • (12) Therefore, dying is overall bad for the person who dies.
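The subtraction that drives premises (8)–(12) can be sketched numerically. The utility figures below are invented purely for illustration:

```python
# Toy sketch of the deprivationist argument (8)-(12).
# All utility figures are hypothetical.

def overall_value(value_actual: float, value_nearest: float) -> float:
    """OVT: the overall value of a state of affairs for a subject equals
    the value of the actual world minus the value of the nearest world
    in which that state of affairs does not obtain."""
    return value_actual - value_nearest

# P1: you die at T, so no further positive experiences accrue.
value_p1 = 0.0
# P2: the nearest world where P1 does not obtain -- life continues well.
value_p2 = 30.0

# (11) P2 is more valuable than P1, so (12) death comes out overall bad:
print(overall_value(value_p1, value_p2))  # -30.0: negative, hence "bad" on OVT
```

Whatever positive number we assign to continued existence, the result is negative, which is why OVT delivers the verdict that death is overall bad for the one who dies.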

As I believe I have mentioned elsewhere, this is pretty much the canonical view about death. But Smuts thinks it is wrong. He does so because EI and OVT, which are used to support this deprivationist argument, lead to absurd conclusions. We’ll start looking at those absurd conclusions in the next post.