Thursday, August 29, 2019

Making Sense: The Art of Philosophical Living (Index)


Diogenes the Cynic


I don't see philosophy as a mode of inquiry; I see it as a way of life. Nevertheless, until relatively recently, I always tried to keep myself (i.e. my self) out of what I wrote. I did so because I believed this was the appropriate thing to do - that it was in the interests of personal and professional humility. After all, who cares about me? My philosophical lens was, thus, always turned outwards, onto the world, and never inwards, onto the self.

That changed when my sister died back in April 2018. In the year following her death, I wrote several, far more personal articles. These articles focused initially on how to cope with grief, but then grew into more general reflections on character, attitude and outlook. In each of them, I've been trying to use the tools of philosophical analysis to re-assess and to re-adjust.

I have found writing these articles to be therapeutic, even though I cringe, slightly, when I read back over them. To me they seem quite self-indulgent. Still, a large number of readers have responded positively to them and they are now among the most popular things I have ever written. Consequently, it feels like the time has come to group them together into one index. Some people might find it useful to read them together as a collection. Below, I have grouped them according to certain themes. This grouping also corresponds roughly to the chronological order in which I wrote them. So not only do they cover specific topics, they also provide a pretty accurate record of how my thinking evolved over the course of a year or so.


How should I cope with death?

What kind of attitude should I have to life?

How should I approach my work?

Putting it all together




Wednesday, August 28, 2019

#63 - Reagle on the Ethics of Life Hacking

Joseph Reagle
In this episode I talk to Joseph Reagle. Joseph is an Associate Professor of Communication Studies at Northeastern University and a former fellow (in 1998 and 2010) and faculty associate at the Berkman Klein Center for Internet and Society at Harvard. He is the author of several books and papers about digital media and the social implications of digital technology. Our conversation focuses on his most recent book: Hacking Life: Systematized Living and its Discontents (MIT Press 2019).

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher and a variety of other podcasting services (the RSS feed is here).




Show Notes

  • 0:00 - Introduction
  • 1:52 - What is life-hacking? The four features of life-hacking
  • 4:20 - Life Hacking as Self Help for the 21st Century
  • 7:00 - How does technology facilitate life hacking?
  • 12:12 - How can we hack time?
  • 20:00 - How can we hack motivation?
  • 27:00 - How can we hack our relationships?
  • 31:00 - The Problem with Pick-Up Artists
  • 34:10 - Hacking Health and Meaning
  • 39:12 - The epistemic problems of self-experimentation
  • 49:05 - The dangers of metric fixation
  • 54:20 - The social impact of life-hacking
  • 57:35 - Is life hacking too individualistic? Should we focus more on systemic problems?
  • 1:03:15 - Does life hacking encourage a less intuitive and less authentic mode of living?
  • 1:08:40 - Conclusion (with some further thoughts on inequality)
 

Relevant Links




Friday, August 23, 2019

Understanding Praiseworthiness: Does more effort equal more praise?





I recently finished my first solo-authored book (available in all good bookstores in September!). Here’s a question: do I deserve any praise for doing this? Well, consider some relevant facts. I found writing, editing and indexing the book to be quite arduous. Don’t get me wrong. I enjoyed conceiving the main idea for the book and mapping out its main arguments; but the actual writing was a pain. It took me over a year to finish the 110,000 word manuscript. Due to various setbacks and delays, a surprising amount of that writing was completed in the last month (about 50,000 words). That month was tough. The writing took up all my energy and attention and left me with little time for anything else. What’s more, once I finished the manuscript the job wasn’t done. The manuscript had to be reviewed and I had to revise it in response to the reviewers. That took another month. After that, I had to go through two more rounds of copy edits and revisions, and, to top it all off, I then had to spend three days preparing and writing an index. If you have ever done it, you will know that preparing an index is one of the more mind-numbing tasks you can perform. First world problems, I know, but I just want to emphasise that it was a lengthy and difficult process.

So do I deserve any praise for this? You might say ‘no’ because the book isn’t any good. I wasted my time on something that isn’t worthwhile and no one deserves praise for wasting their time in this way. But let’s assume that’s not true. Let’s assume the book is worthwhile. Does the fact that I spent so much time and effort on it make its completion more praiseworthy? To be more precise, does the volume of effort expended on writing the book increase the amount of praise I am owed?

Many people have the intuition that it does. They follow a simple formula when deciding how much praise is due to someone for an achievement:

More effort = More praise (all else being equal)

But does this formula hold up to closer scrutiny? In a recent article entitled “Praiseworthiness and Motivational Enhancement: No Pain, No Praise?” Hannah Maslen and her colleagues have argued that it does not. Their argument is, ostensibly, about a particular issue in the enhancement debate — namely: whether motivational enhancement undermines praiseworthiness — but in the course of presenting this argument they develop a general theory of praiseworthiness that I found quite illuminating. I want to examine that theory in the remainder of this article. I won’t completely ignore what they have to say about motivational enhancement since it does provide a nice illustration of how their theory applies in practice, but my focus will be primarily on the theory itself.


1. The Theory of Praiseworthiness
Let’s start by thinking about what praiseworthiness is. As a first step we can say that praiseworthiness is related to, but importantly distinct from, responsibility. We often talk about people being ‘responsible’ for performing actions that produce certain results in the world (call these results the ‘outputs’ of the action). If we decide that someone is responsible for producing certain outputs, we then proceed to blame or praise them for doing so. We blame them if we think the outputs are bad; we praise them if we think the outputs are good. Both praise and blame come in degrees. In other words, an agent can be more or less praiseworthy/blameworthy depending on the circumstances.

There is a lot of attention dedicated to blame in the philosophical literature. This is not surprising. Figuring out who deserves to be blamed for doing wrong is a high stakes game and is central to most human societies. We have norms that we expect people to uphold and we see blame as an important way of policing and enforcing those norms (whether that is true and/or a good thing is beyond the scope of the present discussion). Praise has received less attention in the philosophical literature. This is unfortunate since not only is it a worthy topic in its own right, but thinking about praiseworthiness can also shed light on blameworthiness. Since they are complementary phenomena we can expect similar factors to be relevant to the assessment of both.

A theory of praiseworthiness should help to explain how praise varies depending on the circumstances. In other words, it should identify the variables that are relevant to assessing the degree of praise owed to someone for producing a certain output. What are these variables? Maslen et al argue that four variables are relevant to the assessment of praise. We can set these out in the form of a mathematical equation — since Maslen et al use mathematical language in explaining their theory — but I wouldn’t read too much into that formalisation. It’s a useful metaphor/mental model but we are obviously not going to be able to quantify the variables in this equation in any precise way.

The formula is this:

Degree of Praise = Voluntariness(Cost of Commitment x Strength of Commitment x Value of Output)

Each term in this formula needs to be explained. ‘Voluntariness’ is a threshold condition for praise. You cannot be praised for an action that is involuntary or coerced. For example, if I held a gun to your head and told you to donate all your money to charity or else, you would hardly deserve praise for being so charitable (if you decided to donate the money). So, in a sense, voluntariness can only take on one of two values in the above equation. If the action is voluntary (1) then we can conduct an inquiry into how praiseworthy it is by looking at the other three variables; if it is not voluntary (0), then those other three variables don’t really matter.
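The threshold role of voluntariness can be sketched as a toy function. This is purely illustrative: Maslen et al stress that the variables cannot be quantified in any precise way, and the function name and numeric scales below are my own assumptions, not anything from their paper.

```python
def degree_of_praise(voluntary: bool, cost: float,
                     strength: float, value: float) -> float:
    """Toy sketch of Maslen et al's formula: voluntariness acts as a
    0/1 gate, and the other three factors combine multiplicatively."""
    gate = 1 if voluntary else 0
    return gate * (cost * strength * value)

# A coerced act earns no praise, however costly or valuable:
print(degree_of_praise(False, cost=0.9, strength=0.9, value=0.9))  # 0.0

# A voluntary, costly, committed, valuable act earns some:
print(degree_of_praise(True, cost=0.8, strength=0.7, value=0.9))
```

The point of the gate structure is that no amount of cost, commitment or value can compensate for a lack of voluntariness, whereas the other three factors trade off against one another.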

The ‘cost of commitment’ refers, unsurprisingly, to the expenses incurred by the agent in performing the actions that produced the output. The term ‘costs’ should be interpreted broadly here. The focus is not so much on the monetary cost of committing to the action (indeed, Maslen et al don’t really consider this type of cost at all in their article) but rather on the amount of time invested in the action, the psychological effort involved in performing those actions, and the foregone opportunities (opportunity cost) associated with the actions. One of the crucial arguments they make in their paper is that the ‘more effort = more praise’ intuition that many people have is too simplistic. Effort, which they define as the amount of psychological aversion an agent has to overcome when performing an action, is a type of costly commitment, but not the only type. An agent might reduce the amount of effort involved in an action but compensate for this by incurring increased costs elsewhere. For example, an athlete might take a painkiller in order to get through a training session. The painkiller will reduce the amount of effort involved in the training session because it will reduce their need to overcome pain. But this doesn’t mean that they deserve less praise as a result. On the contrary, the use of the painkiller might increase the amount of time they can invest in training and so increase their overall level of costly commitment. This might mean they deserve more praise, not less.

The ‘strength of commitment’ refers to the degree to which the agent prioritises the production of the relevant output in their life. Maslen et al separate this out from the cost of commitment but I’m not entirely clear on why they do this. It seems to me that the strength of commitment is largely measured by reference to the opportunities the agent forgoes in order to produce the output. The committed musician will dedicate themselves to perfecting their performances and will, consequently, have to sacrifice elsewhere in their lives. This seems like a straightforward manifestation of opportunity cost. I’m not sure what else strength of commitment could mean in this context. That said, I think I know what they are talking about and it seems appropriate to include it in the assessment of praise, whether that be as a specific type of cost or something different.

An important point to bear in mind is that both the cost of commitment and the strength of commitment should be assessed diachronically. In other words, you shouldn’t determine how strong or costly someone’s commitment to producing an output is solely on the basis of the actions that immediately preceded the production of the output. To give an extreme example, the last character I typed in my book manuscript was a full stop (or period if you are American). It was very easy for me to type that symbol. It had a minimal cost. But it would, of course, be wrong to assess the praiseworthiness of my completing the book solely on the basis of this action. You have to look at all the things I did that got me to the point at which that full stop was all I needed to complete the book.

Finally, the value of the output produced must play some role in assessing the degree of praiseworthiness. A very low value output will not warrant much praise, no matter how costly our commitment to it was. For example, I could spend years counting all the blades of grass in my backyard. This would be a very costly, very effortful endeavour, but I would not warrant much praise for doing so. The value of the output is too low. That said, Maslen et al point out that the value of the output shouldn’t play too big a role in the assessment of praiseworthiness. Many outputs are a matter of luck: you can put lots of effort and time in and not achieve the desired result. It seems like it would be wrong to let praiseworthiness be dictated too much by luck (though, as Thomas Nagel pointed out long ago: we do allow luck to play a large role in our assessments of blame).


2. Some Implications of the Theory
That’s Maslen et al’s theory in a nutshell. Apart from the minor niggle I mentioned regarding the distinction between the cost of commitment and the strength of commitment, I quite like it. But what are its practical implications? Does it overlook anything important?

Let me consider the second of those questions first. As Maslen et al point out, the theory outlined above works well for local assessments of praiseworthiness. Local assessments concern the praiseworthiness of specific agents in relation to a specific output. The opening example of the degree of praiseworthiness I might be due for finishing my book is a good example of a local assessment in action. It is specifically concerned with one output (the book) and whether I deserve praise for producing that one output. Global assessments of praiseworthiness focus not just on how an agent dedicated themselves to one specific output but on how the agent allocates their scarce resources of time and energy across different possible projects. I might deserve praise for finishing my book if you look at this through a local lens but not if you look at it through a global lens. Maybe I invested my scarce resources of time and energy poorly.

In the paper, Maslen et al give the example of a medical researcher who dedicated their time and energy to creating a vaccine for one specific disease. This is a valuable end and their commitment to pursuing it was costly. As such, it looks like they deserve a lot of praise. But maybe we shouldn’t leap to that judgment. What else could they have done with their time and energy? Suppose it turns out that they could have dedicated the same amount of time and effort to producing vaccines for three separate diseases. From that more global perspective, maybe what they did wasn’t so praiseworthy after all?

This raises another, related, point. You cannot gratuitously increase the costs of producing an output and expect more praise (whether the increase was intended or not). So, to stick with the example of the medical researcher, suppose that instead of doing all their experimental calculations with computer software they used paper and pen. This would increase the amount of effort involved in producing the vaccine, but it’s hardly praiseworthy. Using paper and pen might have taken them longer. Sometimes the efficient production of an output is more praiseworthy than the inefficient production. Indeed, there are some people (I’m thinking specifically of David Krakauer) who argue that intelligence is largely a measure of how efficiently you can solve problems. The more efficient (i.e. the lower the cost) the better. In fact, we often praise people for using their intelligence in this way. What’s going on here? Does this undermine the theory of praise outlined by Maslen et al? Maybe not. I suspect we praise people who come up with efficient ways of solving problems because we see the invention of those methods as a kind of valuable output, but those who merely make use of those efficient methods don’t thereby increase the praise they are owed.

In addition to this, although I appreciate what Maslen et al are saying about counterfactual judgments and the role they play in assessments of praiseworthiness, I do worry about our ability to make those judgments fairly and reasonably. For example, I know of several famous book authors who write everything out in longhand before transcribing it to a word processor. You could argue that this means they have used a gratuitously inefficient method for writing a book and so any assessment of praiseworthiness should be modified accordingly. Perhaps they could have written more valuable books in less time if they had adopted a more efficient method? But they will, no doubt, argue that this inefficient method actually helps them to produce a better output. It helps them to think more clearly and carefully about what they want to say. I, personally, find that hard to understand. I find writing things out by hand to be too slow and error prone. Whenever I do it I get frustrated and stop writing sooner than I would if I used a word processor. That said, who am I to second guess their judgment? Maybe they are right and they wouldn’t have done as well if they used a word processor from the get go.

The important point here, I think, is that perhaps we shouldn’t rush to judgment of those who use inefficient methods for producing certain outputs, or who dedicate themselves to tasks we think are less valuable than other tasks they could have dedicated themselves to. Determining whether someone is investing their talents and time appropriately is often very tricky and I’m not sure that we can do it well.





Wednesday, August 21, 2019

Intoxicated Consent and Intoxicated Responsibility: Is there a paradox?




Once upon a time, I used to teach criminal law. For me, the most challenging section of the course was invariably the section on sexual offences. Some students would find the subject uncomfortable, perhaps even traumatising. Others, though interested and engaged, would find it difficult to articulate their thoughts in a precise way. There would be occasionally awkward discussions about the nature of sexual consent and responsibility, as well as contentious debates about the gendered assumptions that continue to underlie the law.

Every year, I would teach tutorial classes in which students were asked to consider the correct legal approach to real and hypothetical cases of sexual assault and rape. Every year, I found that one kind of hypothetical case would generate the most heated discussion, with the debate usually (though not always) breaking down along gendered lines.

The case would be posed by one of the students (I don’t believe I ever brought it up). The case would involve a man and a woman, both of whom were heavily, but voluntarily, intoxicated. The man and woman would then engage in some kind of sexual* touching. This could be penetrative or not; the exact form did not matter too much to the hypothetical (though see the discussion of this issue below). If it were penetrative, it would be assumed that the man had penetrated the woman. The question would then be posed: was there a legally chargeable sexual assault or rape?

This hypothetical would generate heated discussion because (a) the general presumption in law is that voluntary intoxication does not negate or undermine criminal responsibility and (b) there is an (emerging) social norm to the effect that you cannot consent to sex if heavily intoxicated. When these two things are combined with the general presumption that rape and sexual assault are usually male-on-female crimes, it would yield the conclusion that what you have here is a case in which the man is guilty of sexually assaulting or raping the woman. Some students (typically though not exclusively male) would perceive this to be unfair since both parties were voluntarily intoxicated. The more analytical students would point out that this revealed a puzzling asymmetry in our attitudes to drunken consent and drunken responsibility. (To show that my experiences with this hypothetical are not unusual, I suggest reading this article describing the discussion at a ’smart consent’ workshop that is taught to students in Irish universities; this hypothetical features prominently in the discussion).

Although I am sure I will regret doing this, I want to share some of my own thoughts about this hypothetical case. I think the hypothetical is worth taking seriously because it reveals some of the tensions and nuances in how we think about consent and responsibility. I also think its apparently paradoxical aspects become less pronounced as you move away from the hypothetical to more realistic cases. That said, I am not sure what the best way to think about this hypothetical case is, or whether there is a simple correct answer to what should happen in such a case. I offer my own tentative ‘solution’ in what follows, but I’m not sure how convincing it is.


1. The Paradox and One Possible Solution
I want to start by sharpening the paradox that is supposedly revealed by the hypothetical. To do this, I need to say something about responsibility and consent, and then present a more abstract version of the hypothetical.

I begin with a platitude: Responsibility and consent are central to how we think about liability and blame. Both are dependent on similar underlying mental capacities. Since the time of Aristotle, responsibility has been thought to depend on two basic capacities: (i) the capacity for voluntary action and (ii) the capacity to understand/know what your actions entail. If you perform an action voluntarily, and you understand what that action is likely to entail, you are responsible for it; if one or both of those things is absent, you are not. Consent, clearly, depends on the same capacities. Whether you are consenting to medical treatment, sex or something else, the validity of your consent depends on whether you signalled consent voluntarily and whether you knew what it was you were consenting to through that signal. There is more to it than that, of course. I’ve written extensively about the ethics of consent before and one theme that emerges from those earlier discussions is that the validity of consent also depends on how we expect consent to be communicated and understood by the person to whom it is communicated. Nevertheless, at its core, valid consent depends on volition and understanding. This underlying similarity between consent and responsibility is what sets up the paradox or tension that students perceive in the hypothetical.

The hypothetical, however, involves one major complicating factor: the voluntary intoxication of both parties. Intoxication impairs our mental capacities. Mild intoxication (e.g. a single unit of alcohol) probably does little harm to our capacity for responsibility and consent, but at a sufficient degree of intoxication, it is plausible to suppose that the intoxicated person lacks any meaningful capacity for volition and, even more plausibly, understanding. This would seem to lead to the conclusion that intoxication, at a sufficiently high degree, undermines both responsibility and consent.

But no one really accepts that conclusion, at least not when it comes to responsibility. We know that intoxication can raise the risk of someone engaging in harmful activity. Some people even try to build up the courage (“Dutch Courage”) to engage in harmful activity by intoxicating themselves. Consequently, we don’t want people to be able to excuse themselves from blame by voluntarily imbibing intoxicants prior to doing something wrong. So, instead, we say that since they were responsible for their actions at the time they chose to get intoxicated, they are also responsible for the downstream consequences of that decision. In other words, we say that if they subsequently engaged in harmful activity we can trace their responsibility back in time to the point at which they chose to get intoxicated. This is sometimes referred to as a ‘prior fault’ analysis of responsibility.

Our attitude to consent is a bit different. Though I hesitate to give an authoritative statement on this, my understanding of the law in Ireland and the UK** is that voluntary intoxication does not necessarily undermine the validity of consent — but it might. In other words, courts are reluctant to say that all instances of intoxicated consent are invalid, but they accept that some instances might be, particularly if it seems that the intoxicated person was so far gone that they didn’t understand what they were getting themselves into.

There is, presumably, a plausible rationale behind this: we don’t want people to be taken advantage of while in a vulnerable, impaired state (hence we don’t want to say that all intoxicated consent is valid); but we also recognise, certainly when it comes to sex, that people do engage in mutual sexual activity while intoxicated and to say that all such cases are criminal due to lack of consent would be counterintuitive. That said, my suspicion is that there is less tolerance for this latter view nowadays than there used to be. You see this particularly in media commentary about sexual assault cases involving intoxication. Hence there might be an emerging norm to the effect that most (if not all) instances of alleged intoxicated consent are invalid. Either way, one thing that is clear in intoxicated consent cases is that we do not trace the validity of consent back in time to the decision to get intoxicated; we focus solely on the occurrent capacities of the person who is alleged to have consented. Consent is sometimes said to be a ‘continuing act’ and so there can be no prior fault analysis of consent.

There is, consequently, a clear tension between our attitudes to intoxicated responsibility and intoxicated consent. The tension can be morally justified in the sense that there is a prima facie plausible moral reason to reject the claim that intoxication undermines responsibility (i.e. to stop people from availing of an easy excuse for criminal activity) and to accept the claim that intoxication undermines consent (i.e. to protect people from being abused or taken advantage of), but this tension is what sets up the hypothetical.

For the time being I want to work with an abstract version of that hypothetical. This version focuses on two people of unspecified gender getting drunk to the point that their occurrent capacities for consent and responsibility are impaired, and then engaging in some form of sexual touching. I don’t want the genders or sex acts to be specified right now because I think our assumptions about the gendered nature of sex affect how we interpret the hypothetical. I won’t ignore those assumptions — I will talk about them later on — but I want to set them aside initially.

Given this set-up, how should we think about responsibility in a case like this? The following four premises would seem to apply:


  • (1) A person shall be guilty of sexual assault if they sexually touch another person without that person’s consent.***

  • (2) Two persons (A and B) are voluntarily intoxicated to the point that their occurrent capacities for responsibility and consent are impaired and have engaged in sexual touching.

  • (3) Voluntary intoxication does not undermine responsibility; responsibility can be traced back in time to the decision to get intoxicated.

  • (4) Voluntary intoxication does undermine consent if it impairs the occurrent capacities for volition and understanding; the validity of consent cannot be traced back in time to the decision to get intoxicated.



The question then is: What conclusion is implied by these four premises? Well, here’s what I think is implied:


  • (5) Conclusion: Therefore A and B are guilty of sexually assaulting each other.


To me, this seems to be the most logical inference to draw based on this abstract form of the hypothetical.


2. What’s wrong with this analysis?
In the years that I taught sexual offence law, I don’t think anyone ever suggested this was the correct conclusion to draw in such a case. I’m sure other people have (probably many times). So I am not claiming that my analysis is original. It’s just that I can’t recollect anyone doing so in my classes. This suggests to me that this is not the most intuitively compelling way to think about this case.

But why not? Mutual offences are not inconceivable. It is possible for two people to be guilty of assaulting one another, and people sue and countersue in private law all the time. Nevertheless, the mind does seem to recoil from the notion that two people could be guilty of sexually assaulting one another. It doesn’t match our intuitive sense of justice. There must be a victim and a perpetrator; a doer and a done-to. In other words, there must be something wrong with the analysis I have presented. What could it be?

One obvious criticism of the hypothetical, as I have sketched it, is that it is highly artificial. I have stipulated that the case involves intoxication to the point of impairment on both sides. In real world cases there would probably be much more uncertainty about the effects of intoxication. This uncertainty could have a big impact on how we think about intoxicated consent in particular. If we accept that not all instances of intoxicated consent are invalid, then there is likely to be a dispute as to whether the intoxication was sufficient to undermine the validity of consent on one or both sides. Depending on the context, it is possible that a court or tribunal will be inclined to conclude that the capacity was not impaired and hence there was some valid consent and no offence. This, incidentally, is one reason why the worry about unfairness in the gendered-form of the hypothetical is often misplaced. When students raise the gendered hypothetical they often claim that it would be unfair to hold a man responsible for sexual assault/rape when both parties were voluntarily intoxicated. But in practice, this might rarely arise. My limited exposure to cases like this (primarily through media and academic discussions) is that juries are often quite willing to believe that a woman’s intoxication did not impair her capacity to consent (or, what is slightly different, that there is sufficient doubt about this to warrant finding the man not guilty). This is compounded by the fact that there is often great uncertainty as to what exactly happened during an alleged sexual assault/rape, with the evidence usually depending on conflicting testimony.

Another obvious criticism of the hypothetical is that in not specifying the nature of the sexual touching between the parties I overlook the asymmetrical nature of certain sex acts. The conclusion that both parties are guilty of assaulting each other only really holds if there is some sexual touching on both sides. But this might not be the case. It might be that there is one party that is active and another passive: one party that does the touching and the other party that gets touched. This is often how we interpret cases of penetrative sexual touching. If that’s how we interpret the facts of the case, then reaching the conclusion that one party is guilty of an offence but the other is not is more plausible. That said, we need to bear in mind that real world cases are likely to involve some dispute and uncertainty as to what exactly happened. This might leave the door open to the view that there was some touching on both sides. Furthermore, in cases of penetrative sexual touching, it would not be impossible for one party to be guilty of a penetrative sexual assault on the other (rape or assault by penetration) and the other party to be guilty of non-penetrative sexual assault on them.

In addition to the above criticisms, someone could point out that I haven’t been entirely accurate in my summary of how voluntary intoxication affects responsibility. While it is generally true that it does not undermine responsibility, there are certain cases where it might. A distinction is sometimes drawn between crimes of basic intent (which depend on recklessness/negligence) and crimes of specific intent (which require knowledge of, or an intention to do, something specific). While voluntary intoxication does not undermine responsibility for crimes of basic intent, it might undermine responsibility for crimes of specific intent (this is a matter left to the jury to determine from the facts). The problem then is that rape and penetrative sexual assault are, in part, crimes of specific intent: the defendant must have had the intent to penetrate the other party. So there may be some cases where voluntary intoxication can undermine responsibility for sexual assault. But this complication offers little reassurance to someone who thinks the hypothetical is puzzling, since even in those cases it will still be possible to hold the defendant liable for a ‘lesser’ form of sexual assault that does not require specific intent.

Are there any other ways to resolve the dilemma at the heart of the hypothetical? There are two. One would be to address the inconsistency between our attitudes to consent and responsibility by dropping our commitment to either premise (3) or (4) of the argument given earlier. This would mean either accepting that intoxicated consent, at least when the intoxication is voluntary, is valid consent (i.e. there can be a prior fault analysis of consent) or that one cannot be responsible if intoxicated to a sufficient degree (i.e. our attitude to responsibility should be the same as our attitude to consent). Neither of those options seems attractive given the moral rationales underlying our acceptance of (3) and (4): avoiding giving the intoxicated a ready excuse and protecting the vulnerable from abuse. But some people have defended these views. For example, Heidi Hurd once argued that intoxicated consent should be deemed valid if the intoxication was voluntary.

The other potential solution would be to argue that there is something missing from our understanding of consent that warrants treating intoxicated consent differently from intoxicated responsibility. Perhaps there is some third factor or capacity, needed for valid consent, that is impaired by voluntary intoxication? One possibility here would be to argue that consent requires a continuing act and responsibility does not. This is a common view when it comes to consent to sex. People often argue that consent must be ongoing and that it can be withdrawn at any time. But I would say that this alleged ‘third factor’ is more puzzling than anything else. Not all consent involves an ongoing act or the possibility of withdrawal (e.g. consent to general anaesthetic) and, more importantly, why should consent require ongoing acts and responsibility not? Why treat those things differently? Another possibility is suggested by Alan Wertheimer (whose arguments I considered in more detail previously), who once argued that consent requires a deeper expression of the agent's will than responsibility and that this justifies the asymmetrical approach. Now, I'm not sure why we should accept that there is a deeper expression of will in the case of consent, but in any event this argument only works if we assume that the occurrent capacities for responsibility are not impaired at the time of the offence. The problem I am pointing out is that even when they are impaired there is a tendency to trace responsibility back to the decision to get intoxicated. Why do we think it is okay to do that for responsibility but not for consent? I don’t know if there are any other third factors, but it would be worth exploring them.


3. Conclusion
In conclusion, the case in which both parties to an incident of sexual touching are voluntarily intoxicated to the point that their capacities for consent and responsibility are impaired presents what I think is a genuine puzzle, at least in the abstract case. The puzzle can be resolved by arguing that both parties are responsible for sexually assaulting each other, but this doesn’t seem to be an intuitively compelling solution. In real world cases, it may be possible to avoid the puzzle by claiming the facts favour one interpretation of the case over another, but if they don’t (and if there is sufficient doubt about the correct interpretation) we have to confront the tension in our attitudes to consent and responsibility.

* I know that the language used to describe sexual offences is highly politicised. Some people object to using terms like ‘sexual’ or ‘sex’ to refer to non-consensual acts. They argue that these things must be referred to as rape or assault. I use the term ‘sexual touching’ for two reasons (i) this is the language used in law and (ii) until you determine guilt or innocence it would be inappropriate to refer to these acts as rape or assault without the additional qualifier of ‘alleged’ or something of that sort. 

** In criminal law, the UK is divided into three separate jurisdictions. The only jurisdiction with which I am familiar is England and Wales. Nevertheless, I imagine the position on drunken consent is similar in the other two jurisdictions. 

*** Technically, the legal rule is more complicated than this because the guilty party would also have to lack the ‘honest’ or ‘reasonable’ (the standard varies) belief in consent. I overlook that here for the simple reason that this is not immediately relevant in cases of sufficient intoxication. In other words, if someone argued that their intoxication caused them to believe the other party was consenting, this would not be accepted as a legitimate excuse.




Monday, August 19, 2019

A Moral Duty to Share Data? AI and the Data Free Rider Problem

Image taken from Roche et al 2014


A lot of the contemporary debate around digital surveillance and data-mining focuses on privacy. This is for good reason. Mass digital surveillance impinges on the right to privacy. There are significant asymmetries of power between the companies and governments that utilise mass surveillance and the individuals affected by it. Hence, it is important to introduce legal safeguards that allow ordinary individuals to ensure that their rights are not eroded by the digital superpowers. This is, in effect, the ethos underlying the EU’s General Data Protection Regulation (GDPR).

But is this always a good thing? I have encountered a number of AI enthusiasts who lament this fixation on privacy and data protection. Their worry seems to be this: Modern AI systems depend on massive amounts of data in order to be effective. If they don’t get the data, they cannot learn and develop the pattern-matching abilities that they need in order to work. This means that we need mass data collection in order to unlock the potential benefits of AI. If the pendulum swings too far in favour of privacy and data protection, the worry is that we will never realise these benefits.

Now, I am pretty sure that this is not a serious practical worry just yet. There is still plenty of data being collected even with the protections of the GDPR and there are also plenty of jurisdictions around the world where individuals are not so well protected against the depredations of digital surveillance. So it’s not clear that AI is being held back right now by the lack of data. Still, the objection is an interesting one because it suggests that (a) if there is a sufficiently beneficial use case for AI and (b) if the development of that form of AI relies on mass data collection then (c) there might be some reason to think that individuals ought to share their data with AI developers. This doesn’t mean they should be legally obliged to do so, but perhaps we might think there is a strong ethical or civic duty to do so (like, say, a duty to vote).

But this argument encounters an immediate difficulty, which we can call the ‘data free-rider problem’:

Data Free-Rider Problem: If the effectiveness of AI depends on mass data collection, then the contribution of any one individual’s data to the effectiveness of AI is negligible. Given that there is some moral cost to data sharing (in terms of loss of privacy etc.), it seems that it is both rational and morally acceptable for any one individual to refuse to share their data.

If this is right, then it would be difficult to argue that there is a strong moral obligation on individuals to share their data.
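The structure of the problem can be sketched with a toy model. Assuming, purely for illustration, that a model's accuracy grows with the square root of its dataset size (a made-up learning curve, not a claim about any real AI system, and with an arbitrary stipulated privacy cost), the marginal benefit of one extra person's data is dwarfed by even a tiny personal cost:

```python
import math

def accuracy(n: int) -> float:
    """Toy learning curve: accuracy approaches 1.0 as the dataset grows.
    (An illustrative assumption, not a model of any real system.)"""
    return 1.0 - 1.0 / math.sqrt(n + 1)

population = 1_000_000   # potential data contributors
privacy_cost = 0.001     # stipulated personal cost of sharing (arbitrary units)

# Marginal contribution of the millionth person's data to model accuracy
marginal_benefit = accuracy(population) - accuracy(population - 1)

print(f"marginal benefit: {marginal_benefit:.2e}")
print(f"privacy cost:     {privacy_cost:.2e}")
print("rational to withhold?", privacy_cost > marginal_benefit)
```

On these stipulated numbers, withholding is individually rational even though a world in which everyone withholds leaves the model far less accurate: the classic free-rider structure.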

Problems similar to this plague other ethical and political debates. In the remainder of this article, I want to see if arguments that have recently been made in relation to the ethics of vaccination might carry over to the case of data sharing and support the idea of an obligation to share data.


1. The Vaccination Analogy: Is there a duty to vaccinate?
The dynamics of vaccination are quite similar to the dynamics of AI development (at least if what I’ve said in the introduction is accurate). Vaccination is beneficial but only if a sufficient number of people in a given population get vaccinated. This is what allows for so-called ‘herd immunity’. The exact percentage of people within a population that need to be vaccinated in order to achieve herd immunity varies, but it is usually around 90-95%. This, of course, means that the contribution of any one individual to achieving herd immunity is negligible. Given this, how can you argue that any one individual has an obligation to get vaccinated?
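The 90-95% figure reflects the standard epidemiological approximation HIT = 1 − 1/R0, where R0 is the basic reproduction number of the disease. A minimal sketch (the R0 values below are rough textbook figures, used only for illustration):

```python
def herd_immunity_threshold(r0: float) -> float:
    """Standard approximation: the fraction of the population that must be
    immune so that each infection causes fewer than one new infection."""
    return 1.0 - 1.0 / r0

# Rough, commonly cited R0 values (illustrative only)
for disease, r0 in [("measles", 15.0), ("pertussis", 14.0), ("polio", 6.0)]:
    print(f"{disease}: ~{herd_immunity_threshold(r0):.0%} coverage needed")
```

For a highly contagious disease like measles (R0 roughly 12-18) the threshold lands in the low 90s, which is why herd immunity against it demands such high vaccination coverage.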

This is not a purely academic question. Although vaccination is medically contraindicated for some people, for the vast majority it is safe and low cost, with minimal side effects. Unfortunately, there has been a lot of misinformation spread about the harmfulness of vaccination in the past 20 years. This has led many people to refuse to vaccinate themselves and their children. This is creating all manner of real world health crises, with, for example, measles outbreaks now becoming more common despite the fact that an effective vaccination is available.

In a recent paper, Alberto Giubilini, Tom Douglas and Julian Savulescu have argued that despite the fact that the individual contribution to herd immunity is minimal, there is nevertheless a moral obligation on individuals (for whom vaccination is not medically contraindicated) to get vaccinated. They make three arguments in support of this claim.

The first argument is a utilitarian one and derives from the work of Derek Parfit. Parfit asks us to imagine a hypothetical case in which a group of people are in a desert and need water. You belong to another group of people each of whom has 1 litre of water to spare. If you all pooled together your spare water, and carted it off to the desert, it would rescue the thirsty group of people. What should you do? Your intuition in such a case would probably be “well, of course I should give my spare water to the other group”. Parfit argues that this intuition can be justified on utilitarian grounds. If you have a case in which collective action is required to secure some beneficial outcome, then, under the right conditions, the utility-maximising thing to do is to contribute to the collective effort. So if you are a utilitarian, you ought to contribute to the collective effort, even if your contribution is minimal.

But what are the ‘right conditions’? One of the conditions stipulated by Parfit is that in order to secure the beneficial outcome everyone must contribute to the collective effort. In other words, if one person refuses to contribute, the benefit is not realised. That’s a bit of a problem, since it is presumably not true either in the hypothetical he is imagining or in the kind of case we are concerned with. Your 1 litre of water is unlikely to make a critical difference to the survival of the thirsty group: 99 litres of water will save their lives just as well as 100 litres. Furthermore, you may yourself be a little thirsty and derive utility from drinking the water. So it might be the case that, if everyone else has donated their water, the utility-maximising thing to do is to keep the water for yourself.

Giubilini et al acknowledge this problem and address it by modifying Parfit’s thought experiment. Imagine that instead of pooling the water into a tank that is delivered to the people in the desert, each litre of water goes to a specific person and helps to save their life (they call this a case of ‘directed donation’ and contrast it with the original case of ‘collective donation’). In that case, the utility-maximising thing to do would be to donate the water. They then argue that vaccination is more like a directed donation case than a collective donation case. This is because although any one non-vaccinated person is unlikely to make a difference to herd immunity, they might still make a critical difference by being the person that exposes another person to a serious or fatal illness. This is true even if the risk of contracting and conveying the disease is very low. The small chance of being the crucial causal contributor to another person’s serious illness is enough to generate a utilitarian duty to vaccinate (provided the cost of vaccination to the vaccinated person is low). Giubilini et al then generalise from this to formulate a rule to the effect that if your failure to do X results in a low probability but high magnitude risk to others, and if doing X is low cost (lower than the expected risk to others), then you have a duty to do X. This means a utilitarian can endorse a duty to vaccinate. Note, however, that this utilitarian rule ultimately has nothing really to do with collective benefit: the rule would apply even if there was no collective benefit; it applies in virtue of the low probability, high magnitude risk to others.
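The rule just described is, in effect, an expected-value comparison. A minimal sketch, with all the numbers stipulated for illustration (they are not drawn from the paper):

```python
def duty_to_act(cost_of_acting: float,
                p_harm: float,
                harm_magnitude: float) -> bool:
    """You ought to do X when the cost of doing X is lower than the
    expected harm to others of failing to do it.
    All inputs here are stipulated illustrative quantities."""
    expected_harm_to_others = p_harm * harm_magnitude
    return cost_of_acting < expected_harm_to_others

# Vaccination-style case: tiny probability of infecting someone, severe harm
print(duty_to_act(cost_of_acting=1.0, p_harm=0.0001, harm_magnitude=100_000))      # True
# If the act itself were very costly, the duty would lapse
print(duty_to_act(cost_of_acting=50_000.0, p_harm=0.0001, harm_magnitude=100_000))  # False
```

The point of the sketch is just that the duty is sensitive to both sides of the inequality: a low probability can still generate a duty if the magnitude of the harm is high and the cost of acting is low.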

The second argument is a deontological one. Giubilini et al actually consider two separate deontological arguments. The first one is based on a Kantian principle of universalisability: you ought to do that which you can endorse everyone doing; and you ought not to do that which you cannot endorse everyone doing. The argument then is that refusing to vaccinate yourself is not universalisable because you could not endorse a world in which everyone refused to vaccinate. Hence you ought to vaccinate yourself. Giubilini et al dismiss this argument for somewhat technical reasons that I won’t get into here. They do, however, accept a second closely-related deontological argument based on contractualism.

Contractualism in moral philosophy is the view that we can work out what our duties are by asking what rules of behaviour we would be willing to accept under certain idealised bargaining conditions. Giubilini et al focus on the version of contractualism that was developed by the philosopher Thomas Scanlon:

Scanlonian Contractualism: “[a]n act is wrong if its performance under the circumstances would be disallowed by any set of principles for the general regulation of behaviour that no one could reasonably reject as a basis for informed, unforced, general agreement.” (Scanlon 1998, 153 - quoted in Giubilini et al 2018)

Reasonable rejectability is thus the standard for assessing moral duties. If X is reasonably rejectable under idealised bargaining conditions, then you do not have a duty to do it; if it is not reasonably rejectable, then you have a duty to do it. The argument is that the requirement to vaccinate is not reasonably rejectable under idealised bargaining conditions. Or, to put it another way, the argument is that the failure to vaccinate would be disallowed by a set of rules that no one could reasonably reject. If each person in society is at some risk of infection, and if the cost of reducing that risk through vaccination is minimal, then it is reasonable to demand that each person get vaccinated. Note that the reasonability of this depends on the cost of vaccination. If the cost of vaccination is very high (and it might be, for certain people, under certain conditions) then it may not be reasonable to demand that everyone get vaccinated. Giubilini et al’s argument is simply that for most vaccinations, for most people, the cost is sufficiently low to make the demand reasonable.

The third argument is neither utilitarian nor deontological. It derives from a widely-accepted moral duty that can be embraced by either school of thought. This is the duty of easy rescue, roughly: if you can save someone from a harmful outcome at minimal cost to yourself, then you have a duty to do so (because it is an ‘easy rescue’). The classic thought experiment outlining this duty is Peter Singer’s drowning infant case: you are walking past a pond with a drowning infant; you could easily jump in and save the infant. Do you have a duty to do so? Of course you do.

Giubilini et al argue that vaccination gives rise to a duty of easy rescue. The only difference is that, in this case, the duty applies not to individuals but to collectives. The argument works like this: The collective could ensure the safety of individuals by achieving herd immunity. This comes at a minimal cost to the collective as a whole. Therefore, the collective has a duty to do what it takes to achieve herd immunity. The difficulty is that this can only happen if 90-95% of the population contributes to achieving that end through vaccination. This means that in order for the collective to discharge its duty, it must somehow get 90-95% of the population to vaccinate themselves. This means the group must impose the burden of vaccination on that percentage of the population. How can it do this? Giubilini et al argue that instead of selecting some specific cohort of 90-95% of the people (and sparing another cohort of 5-10%), the fairest way to distribute that burden is just to say that everyone ought to vaccinate. This means no one is singled out for harsher or more preferential treatment. In short, then, an individual duty to vaccinate can be derived from the collective duty of easy rescue because it is the fairest way to distribute the burden of vaccination.

Suffice it to say, there is a lot more detail and qualification in Giubilini et al’s paper. This quick summary is merely intended to show how they try to overcome the free rider problem in the case of vaccination and conclude that there is an individual duty to vaccinate. The question now is whether these arguments carry over to data collection and AI.


2. Do the arguments carry over to AI development?
Each of Giubilini et al’s arguments identifies a set of conditions that must apply in order to derive an individual duty to contribute to a collective benefit. Most of these conditions are shared across the three arguments. The two most important conditions are (a) that there is some genuine and significant benefit to be derived from the collective effort and (b) that the individual contribution to that collective benefit comes at a minimal cost to the individual. There are also other conditions that are only relevant to certain arguments. This is particularly true of the utilitarian argument which, in addition to the two conditions just mentioned, also requires that (c) the individual’s failure to perform the contributory act poses some low probability, high magnitude risk to others.

Identifying these three conditions helps with the present inquiry. Given the analogy we are drawing between AI development and vaccination, the question we need to focus on is whether these three conditions also apply to AI development. Let’s take them one at a time.

First, is there some genuine and significant benefit to be derived from mass data collection and the subsequent development of AI? At present, I am somewhat sceptical. There are lots of touted benefits of AI, but I don’t know that there is a single provable case of significant benefit that is akin to the benefit we derive from vaccination. The use of AI and data collection in medicine is the most obvious direct analogy, but my reading of the literature on AI in medicine suggests that the jury is still out on whether it generates significant benefits or not. There are some interesting projects in progress, but I don’t see a “killer” use case (pardon the irony) at this stage. That said, I would qualify this by pointing out that there are already people who argue that there is a duty to share public health data in some cases, and there is a strong 'open data' movement in the sciences that suggests there is a duty on scientists to share data. One could easily imagine these arguments being modified to make the case for a duty to share such data in order to develop medical AI.

The use of mass data collection to ensure safe autonomous vehicles might be another compelling case in which significant benefit depends on data sharing, but again it is early days there too. Until we have proof of significant benefit, it is hard to argue that there is an individual obligation to contribute data to the development of self-driving cars. And, remember, with any of these use cases it is not enough to show that the AI itself is genuinely beneficial, it must be shown that the benefit depends on mass data collection. This might not be the case. For example, it might be the case that targeted or specialised data (small data) is more useful. Still, despite my scepticism of the present state of AI, it is possible that a genuine and significant benefit will emerge in the future. If that happens, the case for an individual obligation to contribute data could be reopened.

Second, does the individual contribution to AI development (in the form of data sharing) come at minimal cost to the individual? Here is where the privacy activists will sharpen their knives. They will argue that there are indeed significant and underappreciated costs associated with data sharing that make it quite unlike the vaccination case. These costs include the intrinsic harm caused by the loss of privacy* as well as potential consequential harms arising from the misuse of data. For example, the data used to create better medical diagnostics AI could also be used to deny people medical insurance. The former might be beneficial but the latter might encourage more authoritarian control and greater social inequality.

My general take on these arguments is that they can be more or less compelling, depending on the type of data being shared and the context in which it is being shared. The sharing of some data (in some contexts) does come at minimal cost; in other cases the costs are higher. So it is not easy to do a global assessment of this second condition. Furthermore, I think it is worth bearing in mind that the users of technology often don’t seem to be that bothered by the alleged costs of data sharing. They share personal data willy-nilly and for minimal personal benefit. They might be wrong to do this (privacy activists would argue that they are) but this is one reason to think that the worry that prompted this article (that too much data protection is hindering AI) is probably misguided at the present time.

Finally, does the individual failure to contribute data pose some low probability high magnitude risk to others? I don’t know the answer to this. I find it hard to believe that it would. But it is conceivable that there could be a case in which your failure to share data poses a specific risk to another (i.e. that your data makes the crucial causal difference to the welfare of at least one other person). I don’t know of any such cases, but I’m happy to hear of them if they exist. Either way, it is worth remembering that this condition is only relevant if you are making the utilitarian argument for the duty to share data.


3. Conclusion
What can we conclude from this analysis? To briefly summarise, there is a prima facie case for thinking that AI development depends for its effectiveness on mass data collection and hence that the free rider dynamics of mass data collection pose a threat to the development of effective and beneficial AI. This raises the intriguing question as to whether there might be a duty on individuals to share data with AI developers. Drawing an analogy with vaccination, I have argued that it is unlikely that such a duty exists at the present time. This is because the reasons for thinking that there is an individual duty to contribute to herd immunity in the vaccination case do not easily carry over to the AI case. Nevertheless, this is a tentative and defeasible argument. In the future, it is possible that a compelling case could be made for an individual duty to contribute data to AI development. It all depends on the collective benefits of the AI and the costs to the individual of sharing data.


*There are complexities to this. Is privacy harmed if you voluntarily submit your data, even if this is guided by your belief that you have an obligation to do so? This is something privacy scholars struggle with. Historically, the willingness to defer to individuals' expressed preferences (via informed consent) was quite high, but nowadays a more paternalistic view is being taken. The GDPR, for example, doesn’t make ‘notice-and-consent’ the sole factor in determining the legitimacy of data processing. It works with the implicit assumption that sometimes individuals need to be protected in spite of informed consent. 

 

Wednesday, August 14, 2019

Self Sacrifice Devices and Self Driving Cars: Should we do it?




Lots of people are interested in the ethics of autonomous vehicles. Indeed, the philosophical literature on this topic has grown unwieldy in the past few years. Whereas once upon a time it was possible for one person to read and understand everything that had been published on this issue, I suspect that there is now so much written, and being written, that it has become impossible to keep up.

This is, in some ways, unfortunate. While there is a lot of good work being done, there is a tendency for popular discussions of the ethical issues to fixate on simplistic thought experiments such as the infamous ‘trolley’ dilemmas. This creates the impression that figuring out what an autonomous vehicle should do in such a case is the be-all and end-all of the ethical debate. This isn’t true. While there is some value to considering such hypothetical cases, they are edge cases that do not provide the best guide to thinking about how autonomous vehicles should react in all dilemmatic cases. Furthermore, there are other ethical issues arising from the use of such vehicles that need to be considered and are often overlooked.

I say all this by way of apology for what you are about to read. Although I agree with the conclusion reached at the end of the preceding paragraph, I have to confess that I enjoy thinking about hypothetical edge cases. They bring into sharp relief some of the most fascinating ethical concepts and questions with which we must contend. I am going to discuss one such hypothetical edge case in the remainder of this article. The edge case concerns whether we should design a system of autonomous vehicles in such a way that it allows individuals to voluntarily sacrifice themselves in the case of unavoidable crashes.

Let me first explain what I mean by this and then consider the arguments for and against it.


1. The Self Sacrifice Device
To explain the idea, I have to say something about the nature of unavoidable crash scenarios. This may be familiar to some readers; they should feel free to skip ahead to the next paragraph. An unavoidable crash scenario is a scenario in which a car is going to collide with someone or something and must choose between potential sites of collision. The typical set-up is a modified version of the trolley dilemma. A car is driving down a road when it is suddenly confronted with two sets of pedestrians occupying both sides of the road. On one side is an elderly couple; on the other side a group of children (or any other set of pedestrians). It is impossible for the car to avoid colliding with one set of pedestrians and so a split-second decision must be made as to which set of pedestrians should be saved and which sacrificed. Many variations of this basic set-up are possible. For example, instead of choosing between sets of pedestrians perhaps the car has to choose between colliding with a crash barrier (thereby injuring/killing the driver and passengers) and a group of pedestrians. Either way, the important point is that in these cases a harmful outcome is unavoidable (they are genuine dilemmas); the key ethical issue is not to prevent harm but to select between harmful outcomes. Sometimes it will be possible to minimise the amount of harm, other times the harmful outcomes may be equally weighted. If a human is driving the car, then the human must make the split-second decision. If a computer program is in control, then its programming must instruct it what to do in such a case.

Truly unavoidable crash scenarios of this sort are probably quite rare. I am not familiar with any studies that have been done on the matter, but my guess is that many real-world crash scenarios don’t involve such stark and equally weighted choices. There is much more uncertainty and imbalance in practice. This is one reason why some people think it is a mistake for the ethical debate about autonomous vehicles to become dominated by their discussion. Nevertheless, I persist.

I do not persist in the hope of discussing all possible resolutions of such cases. Instead, I persist in the hope of discussing the role that self-sacrifice might play in addressing such cases. In a previous article, I looked at a thought experiment from Hin Yan Liu concerning the creation of “immunity devices” that could be used in unavoidable crash scenarios. Liu’s idea was that it would probably be possible to create a device (just a small RFID chip perhaps) that would emit a signal that told a self-driving vehicle that the person wearing this device should not be sacrificed in the event of an unavoidable crash scenario. The effect of such a device might not be dissimilar to other forms of immunity that are granted to people by law (e.g. diplomatic immunity) or to a kind of extra health/safety insurance that people purchase at will.

To be clear, Liu didn’t think that the creation of immunity devices was a good idea. He just argued that their creation did not seem implausible and so it was important to think about the ethical and social ramifications. Here, I want to suggest a simple variation on Liu’s thought experiment. What if, instead of immunity devices, we allow people to create self-sacrifice devices? These devices would also send a signal to a self-driving vehicle, but the meaning of the signal would be very different. It would inform the vehicle that the wearer of the device is willing to be sacrificed in the event of an unavoidable crash. This might be analogised to carrying an organ donor card, albeit with the not inconsiderable difference that, instead of signalling your willingness to give up your organs after death, you are signalling your willingness to sacrifice your life for the lives of others.

What should we think about the creation of such a device?


2. The Arguments for and against a Self-Sacrifice Device
You might think that the idea of a self-sacrifice device is absurd or abhorrent. But let’s just consider for a moment whether there are any good reasons to endorse the creation of such a device.

I can think of two. First, as you may know, there is a rich experimental literature on people’s attitudes to trolley dilemmas. In these experiments, the dilemmas are usually structured in such a way that the experimental subject has to choose between harming two or more people other than themselves. But in some experimental studies people have indicated that if they had the option, they would prefer to sacrifice themselves instead of sacrificing some other party (e.g. Sachdeva et al 2015; Di Nucci 2013). In other words, if someone has to be harmed in such a case, people would prefer to bear the brunt of the harm themselves (though there are some inconsistencies in this). For what it is worth, whenever I discuss trolley-type dilemmas with students, I find that a significant proportion of students agree that self-sacrifice, if possible, would be the ‘right’ thing to do in such a case. One advantage of the self-sacrifice device is that it allows people to exercise this preference in unavoidable crash scenarios. So you could argue that the creation of such a device is a good thing because it gives people an option that they want to be able to exercise.

Second, and perhaps more importantly, there is a rich moral tradition suggesting that self-sacrifice is a noble deed. Think of the soldier who saves his/her comrades by diving on a grenade; think of the medical worker who cares for Ebola sufferers only to be struck down by the disease themselves. These people are celebrated in our culture. They went above and beyond the call of moral duty. They are moral heroes and heroines. We might argue that it would be a good thing to give people the option of noble self-sacrifice because it would allow them to exercise this extreme form of moral virtue. We might argue that this would be a particularly good thing in light of the fact that other suggested solutions to unavoidable crash scenarios are not hugely compelling (e.g. forcing some moral theory such as consequentialism on everyone; deciding by majority preference; or selecting outcomes at random).

But, but, but… There is also, clearly, a dark side to the idea of a self-sacrifice device. Indeed, there are several dark sides: reasons to think that the creation of such devices would not be a good thing. Let’s review some of them.

First, we might worry that the creation of a self-sacrifice device undermines the goodness of noble self-sacrifice. A noble self-sacrifice is a supererogatory act. Its goodness lies, to some extent, in the fact that it is an unforced, often spontaneous, decision. A self-sacrifice device might undermine this unforced spontaneity. People using the device would have to pre-commit to sacrificing themselves at some unknown (perhaps never-to-be-realised) future moment. Their capacity for spontaneous virtue might thus be compromised. More importantly, in some societies, the existence of such a device might pressure or force some people into sacrificing themselves against their will. For example, the historical norm in (Western) societies is that adult men ought to sacrifice themselves in order to protect women and children. If this norm continues to apply, we might expect adult men to face a strong social pressure to use self-sacrifice devices. Thus we might worry that in wearing such devices they are not authentically expressing their moral agency but, rather, conforming to social stereotyping.

Second, in addition to social pressures, there may be a strong temptation to create legal pressures that force some people into wearing self-sacrifice devices. This is particularly true if such devices become commonplace and it is necessary to create a ranking system to differentiate between different wearers (i.e. to decide who gets sacrificed first in the event of an unavoidable crash). This would presumably require a points-based ranking, and it would be tempting for some governments to tie this into a system of social punishment. This might work like the Chinese social credit system(s). People might get docked points if they do something wrong, thus making it marginally more likely that they will be sacrificed in the event of an unavoidable crash. Of course, in this case we have moved beyond the world of self-sacrifice into the world of authoritarian social control: everyone might end up being required to wear a device that signals their social worth to machines that may use this information to distribute risks away from high-value individuals and onto low-value individuals. The point is that there is, arguably, a slippery slope from creating a self-sacrifice device to enabling such a system of social control. This might be one compelling reason not to create such a device.
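To see how little machinery the dystopian endpoint of this slippery slope would require, here is a minimal, purely hypothetical sketch of the points-based ranking mechanism described above. Every class, name and number is invented for illustration; nothing here corresponds to any real system or proposal.

```python
# Purely illustrative sketch of a points-based sacrifice ranking.
# All names and values are hypothetical.

from dataclasses import dataclass


@dataclass
class Wearer:
    name: str
    points: int = 100  # social-credit-style score; infractions dock points

    def dock(self, penalty: int) -> None:
        # A socially punished act reduces the score, marginally raising
        # the chance this wearer is selected in an unavoidable crash.
        self.points = max(0, self.points - penalty)


def sacrifice_candidate(wearers: list[Wearer]) -> Wearer:
    # On the dark hypothesis sketched above, the vehicle would select
    # the wearer with the lowest score to be sacrificed first.
    return min(wearers, key=lambda w: w.points)


if __name__ == "__main__":
    a, b = Wearer("A"), Wearer("B")
    b.dock(30)  # B commits some socially punished infraction
    print(sacrifice_candidate([a, b]).name)  # prints "B"
```

The point of the sketch is only that the step from “voluntary self-sacrifice signal” to “state-administered ranking of lives” is technically trivial; the barriers are entirely ethical and political.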

Third, there would, presumably, be some formidable practical difficulties with the implementation of self-sacrifice devices. How do we guarantee that the signal sent from the device to the car is reliable and fast enough? Would the car have enough time to use the information in the crash scenario? Could the person wearing the device be singled out from other potential crash victims? What if they are embedded in a group of pedestrians? What if they are with their children? Practical engineering solutions would need to be found for each of these issues, and each involves important ethical choices.
Fourth, there would, presumably, be significant cybersecurity challenges raised by the existence of such devices. They could be hacked. A malicious agent could play around with the signals being sent back and forth between the cars and the devices, perhaps directing the car to collide with wearers even when there is no unavoidable crash. In other words, the mere existence of the device makes possible a whole range of malicious interferences. (Cybersecurity issues of a similar nature plague the entire field of autonomous vehicles).

Fifth, and finally, even if we grant that self-sacrifice is a good thing (and I grant that it is in certain cases), it’s not obvious that you need a self-sacrifice device to enable it. It would presumably still be open to some pedestrians (or drivers/passengers) to exercise a preference for self-sacrifice through other means. A pedestrian could jump in front of a car, for example, or a driver or passenger could take control of the steering wheel and crash the car into a wall (assuming the autonomous vehicle allows for such driver-takeover). The opportunities for self-sacrifice might be more limited in these cases, but that might not be a bad thing given the other risks discussed above.


3. Conclusion
So where does that leave us? There are probably more arguments that could be mustered on both sides, but based on this quick review I think, on balance, that the arguments against self-sacrifice devices are more compelling than the arguments in their favour. There is a prima facie case to be made for the creation of such devices, but this is negated by the many risks posed by their creation and by the fact that opportunities for self-sacrifice can be accessed in other ways.




Monday, August 12, 2019

The Types and Harms of Victim-Blaming




I have recently been reading up about the ethics of victim-blaming. Victim-blaming is a prevalent phenomenon. It crops up most controversially in cases of sexual assault, and also features in hot-button debates about poverty and police shootings. These controversial cases are not, however, the only ones in which the phenomenon arises. Victim-blaming, of a sort, features prominently in private law, particularly in personal injuries litigation where people who suffer harm as a result of the negligence of others have their compensation reduced (or eliminated) as a result of their own perceived negligence. It also crops up frequently in our day-to-day lives. I suspect many of us have criticised or have been tempted to criticise our friends and colleagues for failing to take adequate precautions to ensure the safety and security of themselves or their families or their possessions. In certain circumstances, this kind of criticism can amount to victim-blaming.

From an intellectual perspective, victim-blaming is interesting because it implicates many important philosophical concepts. These include responsibility, blame, innocence, power, oppression, and distributive justice/injustice. This means that it is not only a practically important topic, but also one that raises many fascinating and complex intellectual questions. The common intuition among people I have talked to is that victim-blaming is always a bad thing, but if you read the literature you find a slightly more ambivalent perspective emerging, with some people accepting that certain forms of victim-blaming can be acceptable (for an excellent exploration of these ambivalent attitudes to the phenomenon, see Susan Wendell’s article on responsibility and oppression).

I haven’t fully developed my own thoughts on the issue (are thoughts ever fully developed?) but I have learned quite a bit from my reading thus far. In the remainder of this article, I want to share two important ideas about victim-blaming. Both come from an article by J. Harvey called ‘Categorizing and Uncovering “Blaming the Victim” Incidents’. The first concerns the different forms that blaming the victim can take; the second concerns the harms that arise as a result. Both help to highlight why victim blaming is seen to be particularly problematic in the case of minority groups or people living under conditions of oppression.


1. Six Different Forms of Victim Blaming
All blaming the victim (BTV) cases have a common structure. First, they involve a victim(s), i.e. someone who suffers a harm. Second, they involve some attempt to assign responsibility for this harm to the victim.

Harvey adds to this that all these attempts to assign responsibility to the victim are inappropriate and hence that all BTV cases are morally suspect. I would prefer not to make that assumption part of the defining characteristics of BTV. This is because it builds the moral inappropriateness of BTV into its definition; this strikes me as something that needs to be argued for and not simply assumed.

I suspect what is going on here, incidentally, is that in many people’s heads the term ‘victim’ is synonymous with ‘innocence’ and if all victims are innocent, then all blame assigned to them is morally inappropriate. But I prefer to define ‘victim’ broadly to cover anyone who suffers a harm. This avoids making assumptions about their responsibility or innocence.

Beyond those two features there is probably a third feature that is common to most BTV cases, namely: that the harm suffered by the victim appears to have been caused by another person (call them the ‘perpetrator’). The function of victim-blaming is then to shift some or all responsibility for the harm from the perpetrator to the victim. That said, I am reluctant to say that this is a common feature of all BTV cases. This is because people often talk about self-victimisation (e.g. the smoker suffering from lung cancer) and about victims of natural disasters (flood victims/earthquake victims). These cases do not involve a third party perpetrator. The potential absence of a perpetrator is one of the things Harvey highlights in her* categorisation of different forms that BTV cases can take.

Without further ado, let’s consider these six different cases:

Case 1: The victim suffers from some harm that was not attributable to the actions of a perpetrator (call this a ‘non-moral’ harm) and is then blamed for this. This is the kind of case I was just alluding to and would be typified by the example of someone blaming a cancer patient for bringing about their own condition.

Case 2: The victim suffers from some harm that was attributable to a perpetrator (call this ‘moral harm’), but they are told that this wasn’t really harm and that they are miscategorising what happened to them. This is usually accompanied by some allegation to the effect that they are overreacting or engaging in false or malicious accusations. Harvey gives the example of a woman in the Canadian military who complained when her commanding officer called her a ‘broad’. Her complaint was dismissed for being an inappropriate overreaction.

Case 3: The victim suffers from some moral harm but it is argued that this was not attributable to a perpetrator and was in fact a case of non-moral harm. Harvey gives the example of a woman who complains of sexual harassment. The complaint is dismissed but it is accepted that the woman suffered from considerable distress and psychological harm. This, however, is attributed to her own dispositions/psychological frailty and not the actions of a perpetrator.

Case 4: The victim suffers from some moral harm, which is prima facie attributable to a perpetrator, but then it is argued that the victim was also partly or maybe even wholly responsible for the harm. This is usually justified on the grounds that the victim either intentionally or negligently provoked the perpetrator. The classic example here is the case of the sexual assault victim who is alleged to have ‘led on’ the perpetrator through their behaviour or dress. This probably constitutes the core case of victim-blaming and is what most people have in mind when they think of the phenomenon.

Case 5: The victim suffers from some moral harm, which is attributed to a perpetrator (i.e. they are taken to bear the majority of the responsibility) but then it is argued that the victim somehow made the harm worse than it needed to be through their own actions. The intuition underlying this case is that people ought to take steps (if they can) to minimise the harm they suffer. So, again, we have the classic case of a sexual assault victim (or harassment victim) who is criticised for not using force against the perpetrator, or for not running away or screaming, or for not confronting the perpetrator and telling them that they did not consent to their conduct.

Case 6: The victim suffers from some moral harm, which is wholly attributed to the perpetrator, but then it is argued that after it occurred the victim did something that made it worse than it needed to be. This is really just a subtle variation on the previous case, involving longer-term reactions to the harm. Harvey notes that victims can sometimes be blamed for exaggerating the harm they have suffered, for brooding or dwelling on it and not moving on, and for protesting the harm in an inappropriate way.


As you can see, these cases vary in interesting ways. You might query whether we need all six, but I think there is value to each distinction. The distinctions show how, even though there is a core BTV case (case 4), victim-blaming can arise in other ways.


2. The Harms of Victim-Blaming
So much for the different forms of victim-blaming; what about its ethics? We know that people find it objectionable (even if they frequently engage in it), but why? What’s so harmful about it?

Harvey identifies seven different harms that result from victim-blaming. I’m going to simplify her analysis and talk about three primary types of harm that can result from it:

Misattribution harms: Someone who is innocent or not fully responsible for a harm is singled out as being morally at fault. This is morally wrong and contrary to how we think principles of blame and responsibility should be applied. So this results in a kind of moral harm being applied to the victim. This is the most basic and obvious kind of harm that results from victim-blaming. In practice this can be quite an abstract and philosophical form of harm, unless it has real-world implications (e.g. the victim is punished or has their compensation reduced/eliminated).

Psychological harms: Because they have been blamed, the victim suffers from some kind of psychological harm, often of a lingering kind. For example, the victim may suffer an ongoing loss of confidence, self-esteem or self-respect. They may feel shame and guilt that they ought not to feel. This is distinct from, but compounded on top of, the harm they experienced through their victimisation (e.g. trauma or physical distress).

Oppression-related harms: The victim is assumed to have more power than they actually have and may be expected (unfairly) to proactively protect against their own victimisation in the future. This is a particular problem when members of oppressed groups are the victims because the imposition of additional responsibility-burdens on them tends to compound and perpetuate their oppression.

These harms are not mutually exclusive. Any particular BTV case may involve all three of them. Again, consider the classic case of a sexual assault victim who is blamed on the grounds that she provoked the perpetrator. Here we have blame being misattributed to the victim. This blame is likely to lead many people to expect her to proactively avoid future victimisation (don’t dress like that! don’t drink! don’t flirt! don’t walk alone! etc). These expectations will, no doubt, foist unreasonable burdens upon her. Her freedom of movement, dress and so forth will be curtailed more than that of others (specifically men). This all serves to compound the oppression that she and other women experience, particularly in relation to how they must act in heterosexual relations. It is also possible that the victim-blaming will be psychologically harmful. The woman may experience shame and guilt as a result of the blame, and may lose self-respect and self-esteem. She may even be encouraged to feel those things by others in her community.

Sometimes these harms are not so obvious. Many people engage in (mild) forms of victim-blaming for the best of reasons: they want to empower victims to avoid harm in the future. But Harvey makes the important point that the harm of victim-blaming is independent from the motivations underlying it. This is, in some ways, a trivial observation: harming is distinct from wronging. You can harm someone without intending to do so. But it is an important point to make in relation to BTV cases. We have a tendency to assume that we have more control over the world than we really do. This leads us to endorse narratives of false empowerment, e.g. ‘If I hadn’t worn that dress, it wouldn’t have happened…’. These narratives give us an unreasonable sense of what we can do to avoid future victimisation. Encouraging people from oppressed groups, who are already disadvantaged, to embrace these narratives of false empowerment is problematic, particularly if what they have to do to exercise that power curtails their freedom to live a flourishing life in other ways.

But there is a delicate balancing act to perform here. You don’t want people to endorse a narrative of false helplessness either. The victim-mindset can be seductive. We often don’t want to take responsibility for what happens to us. We want others to take up that burden. This is one of the things I like about Susan Wendell’s analysis of victimisation and oppression. She is acutely aware of the delicate balancing act that needs to take place when interacting with victims, suggesting that sometimes we need to get beyond the simplistic ‘blame the victim’ versus ‘blame the perpetrator’ framing of these cases. Instead, we have to develop a mindset in which we can acknowledge the wrong done to the victim whilst at the same time empowering them to transcend their victimhood. I suspect the key to this lies in how we seek to empower the victim. Do we impose unreasonable burdens on them that compound their oppression? Or do we give them some capacity to address the conditions of their oppression? The latter kind of empowerment seems less objectionable than the former.

I would also add, as a final point, that there might be a flipside to all this. Harvey is right, I believe, to say that victim-blaming is particularly problematic when the victim belongs to an oppressed group. But not all victims belong to such groups. Does this imply that it is less problematic to blame victims from powerful groups? I haven’t seen this explored in any detail in the literature that I have read but it seems like a point worth considering.


* I don’t know exactly who ‘J. Harvey’ is, but I assume it is Jean Harvey, a philosopher who died in 2014 and wrote a lot about oppression. I could be wrong about this.