Wednesday, August 21, 2019

Intoxicated Consent and Intoxicated Responsibility: Is there a paradox?




Once upon a time, I used to teach criminal law. For me, the most challenging section of the course was invariably the section on sexual offences. Some students would find the subject uncomfortable, perhaps even traumatising. Others, though interested and engaged, would find it difficult to articulate their thoughts in a precise way. There would occasionally be awkward discussions about the nature of sexual consent and responsibility, as well as contentious debates about the gendered assumptions that continue to underlie the law.

Every year, I would teach tutorial classes in which students were asked to consider the correct legal approach to real and hypothetical cases of sexual assault and rape. Every year, I found that one kind of hypothetical case would generate the most heated discussion, with the debate usually (though not always) breaking down along gendered lines.

The case would be posed by one of the students (I don’t believe I ever brought it up). The case would involve a man and a woman, both of whom were heavily, but voluntarily, intoxicated. The man and woman would then engage in some kind of sexual* touching. This could be penetrative or not; the exact form did not matter too much to the hypothetical (though see the discussion of this issue below). If it were penetrative, it would be assumed that the man had penetrated the woman. The question would then be posed: was there a legally chargeable sexual assault or rape?

This hypothetical would generate heated discussion because (a) the general presumption in law is that voluntary intoxication does not negate or undermine criminal responsibility and (b) there is an (emerging) social norm to the effect that you cannot consent to sex if heavily intoxicated. When these two things are combined with the general presumption that rape and sexual assault are usually male-on-female crimes, it would yield the conclusion that what you have here is a case in which the man is guilty of sexually assaulting or raping the woman. Some students (typically though not exclusively male) would perceive this to be unfair since both parties were voluntarily intoxicated. The more analytical students would point out that this revealed a puzzling asymmetry in our attitudes to drunken consent and drunken responsibility. (To show that my experiences with this hypothetical are not unusual, I suggest reading this article describing the discussion at a ‘smart consent’ workshop that is taught to students in Irish universities; this hypothetical features prominently in the discussion).

Although I am sure I will regret doing this, I want to share some of my own thoughts about this hypothetical case. I think the hypothetical is worth taking seriously because it reveals some of the tensions and nuances in how we think about consent and responsibility. I also think its apparently paradoxical aspects become less pronounced as you move away from the hypothetical to more realistic cases. That said, I am not sure what the best way to think about this hypothetical case is or if there is a simple correct answer to what should happen in such a case. I offer my own tentative ‘solution’ in what follows, but I’m not sure how convincing it is.


1. The Paradox and One Possible Solution
I want to start by sharpening the paradox that is supposedly revealed by the hypothetical. To do this, I need to say something about responsibility and consent, and then present a more abstract version of the hypothetical.

I begin with a platitude: Responsibility and consent are central to how we think about liability and blame. Both are dependent on similar underlying mental capacities. Since the time of Aristotle, responsibility has been thought to depend on two basic capacities: (i) the capacity for voluntary action and (ii) the capacity to understand/know what your actions entail. If you perform an action voluntarily, and you understand what that action is likely to entail, you are responsible for it; if one or both of those things is absent, you are not. Consent, clearly, depends on the same capacities. Whether you are consenting to medical treatment, sex or something else, the validity of your consent depends on whether you signalled consent voluntarily and whether you knew what it was you were consenting to through that signal. There is more to it than that, of course. I’ve written extensively about the ethics of consent before and one theme that emerges from those earlier discussions is that the validity of consent also depends on how we expect consent to be communicated and understood by the person to whom it is communicated. Nevertheless, at its core, valid consent depends on volition and understanding. This underlying similarity between consent and responsibility is what sets up the paradox or tension that students perceive in the hypothetical.

The hypothetical, however, involves one major complicating factor: the voluntary intoxication of both parties. Intoxication impairs our mental capacities. Mild intoxication (e.g. a single unit of alcohol) probably does little harm to our capacity for responsibility and consent, but at a sufficient degree of intoxication, it is plausible to suppose that the intoxicated person lacks any meaningful capacity for volition and, even more plausibly, understanding. This would seem to lead to the conclusion that intoxication, at a sufficiently high degree, undermines both responsibility and consent.

But no one really accepts that conclusion, at least not when it comes to responsibility. We know that intoxication can raise the risk of someone engaging in harmful activity. Some people even try to build up the courage (“Dutch Courage”) to engage in harmful activity by intoxicating themselves. Consequently, we don’t want people to be able to excuse themselves from blame by voluntarily imbibing intoxicants prior to doing something wrong. So, instead, we say that since they were responsible for their actions at the time they chose to get intoxicated, they are also responsible for the downstream consequences of that decision. In other words, we say that if they subsequently engaged in harmful activity we can trace their responsibility back in time to the point at which they chose to get intoxicated. This is sometimes referred to as a ‘prior fault’ analysis of responsibility.

Our attitude to consent is a bit different. Though I hesitate to give an authoritative statement on this, my understanding of the law in Ireland and the UK** is that voluntary intoxication does not necessarily undermine the validity of consent — but it might. In other words, courts are reluctant to say that all instances of intoxicated consent are invalid, but they accept that some instances might be, particularly if it seems that the intoxicated person was so far gone that they didn’t understand what they were getting themselves into.

There is, presumably, a plausible rationale behind this: we don’t want people to be taken advantage of while in a vulnerable, impaired state (hence we don’t want to say that all intoxicated consent is valid); but we also recognise, certainly when it comes to sex, that people do engage in mutual sexual activity while intoxicated and to say that all such cases are criminal due to lack of consent would be counterintuitive. That said, my suspicion is that there is less tolerance for this latter view nowadays than there used to be. You see this particularly in media commentary about sexual assault cases involving intoxication. Hence there might be an emerging norm to the effect that most (if not all) instances of alleged intoxicated consent are invalid. Either way, one thing that is clear in intoxicated consent cases is that we do not trace the validity of consent back in time to the decision to get intoxicated; we focus solely on the occurrent capacities of the person who is alleged to have consented. Consent is sometimes said to be a ‘continuing act’ and so there can be no prior fault analysis of consent.

There is, consequently, a clear tension between our attitudes to intoxicated responsibility and intoxicated consent. The tension can be morally justified in the sense that there is a prima facie plausible moral reason to reject the claim that intoxication undermines responsibility (i.e. to stop people from availing of an easy excuse for criminal activity) and to accept the claim that intoxication undermines consent (i.e. to protect people from being abused or taken advantage of), but this tension is what sets up the hypothetical.

For the time being I want to work with an abstract version of that hypothetical. This version focuses on two people of unspecified gender getting drunk to the point that their occurrent capacities for consent and responsibility are impaired, and then engaging in some form of sexual touching. I don’t want the genders or sex acts to be specified right now because I think our assumptions about the gendered nature of sex affect how we interpret the hypothetical. I won’t ignore those assumptions — I will talk about them later on — but I want to set them aside initially.

Given this set-up, how should we think about responsibility in a case like this? The following four premises would seem to apply:


  • (1) A person shall be guilty of sexual assault if they sexually touch another person without that person’s consent.***

  • (2) Two persons (A and B) are voluntarily intoxicated to the point that their occurrent capacities for responsibility and consent are impaired and have engaged in sexual touching.

  • (3) Voluntary intoxication does not undermine responsibility; responsibility can be traced back in time to the decision to get intoxicated.

  • (4) Voluntary intoxication does undermine consent if it impairs the occurrent capacities for volition and understanding; the validity of consent cannot be traced back in time to the decision to get intoxicated.



The question then is: What conclusion is implied by these four premises? Well, here’s what I think is implied:


  • (5) Conclusion: Therefore A and B are guilty of sexually assaulting each other.


To me, this seems to be the most logical inference to draw based on this abstract form of the hypothetical.
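To make the structure explicit, the argument can be set out semi-formally as follows (a sketch only; the predicate names are my own shorthand, not legal terms of art):

$$
\begin{aligned}
&(1)\quad \forall x, y:\ \big[Touch(x,y) \wedge \neg Consent(y) \wedge Resp(x)\big] \rightarrow Guilty(x) \\
&(2)\quad Touch(A,B) \wedge Touch(B,A) \\
&(3)\quad Resp(A) \wedge Resp(B) \qquad \text{(prior-fault tracing)} \\
&(4)\quad \neg Consent(A) \wedge \neg Consent(B) \qquad \text{(occurrent impairment)} \\
&(5)\quad \therefore\ Guilty(A) \wedge Guilty(B)
\end{aligned}
$$

Instantiating (1) twice (once with $x = A, y = B$ and once with $x = B, y = A$) and feeding in (2), (3) and (4) delivers (5) by straightforward modus ponens. If the conclusion is to be resisted, one of the premises has to give.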


2. What’s wrong with this analysis?
In the years that I taught sexual offence law, I don’t think anyone ever suggested this was the correct conclusion to draw in such a case. I’m sure other people have (probably many times). So I am not claiming that my analysis is original. It’s just that I can’t recollect anyone doing so in my classes. This suggests to me that this is not the most intuitively compelling way to think about this case.

But why not? Mutual offences are not inconceivable. It is possible for two people to be guilty of assaulting one another, and people sue and countersue in private law all the time. Nevertheless, the mind does seem to recoil from the notion that two people could be guilty of sexually assaulting one another. It doesn’t match our intuitive sense of justice. There must be a victim and a perpetrator; a doer and a done-to. In other words, there must be something wrong with the analysis I have presented. What could it be?

One obvious criticism of the hypothetical, as I have sketched it, is that it is highly artificial. I have stipulated that the case involves intoxication to the point of impairment on both sides. In real world cases there would probably be much more uncertainty about the effects of intoxication. This uncertainty could have a big impact on how we think about intoxicated consent in particular. If we accept that not all instances of intoxicated consent are invalid, then there is likely to be a dispute as to whether the intoxication was sufficient to undermine the validity of consent on one or both sides. Depending on the context, it is possible that a court or tribunal will be inclined to conclude that the capacity was not impaired and hence there was some valid consent and no offence. This, incidentally, is one reason why the worry about unfairness in the gendered form of the hypothetical is often misplaced. When students raise the gendered hypothetical they often claim that it would be unfair to hold a man responsible for sexual assault/rape when both parties were voluntarily intoxicated. But in practice, this might rarely arise. My limited exposure to cases like this (primarily through media and academic discussions) suggests that juries are often quite willing to believe that a woman’s intoxication did not impair her capacity to consent (or, what is slightly different, that there is sufficient doubt about this to warrant finding the man not guilty). This is compounded by the fact that there is often great uncertainty as to what exactly happened during an alleged sexual assault/rape, with the evidence usually depending on conflicting testimony.

Another obvious criticism of the hypothetical is that, in not specifying the nature of the sexual touching between the parties, I overlook the asymmetrical nature of certain sex acts. The conclusion that both parties are guilty of assaulting each other only really holds if there is some sexual touching on both sides. But this might not be the case. It might be that there is one party that is active and another passive: one party that does the touching and the other party that gets touched. This is often how we interpret cases of penetrative sexual touching. If that’s how we interpret the facts of the case, then reaching the conclusion that one party is guilty of an offence but the other is not is more plausible. That said, we need to bear in mind that real world cases are likely to involve some dispute and uncertainty as to what exactly happened. This might leave the door open to the view that there was some touching on both sides. Furthermore, in cases of penetrative sexual touching, it would not be impossible for one party to be guilty of a penetrative sexual assault on the other (rape or assault by penetration) and the other party to be guilty of non-penetrative sexual assault on them.

In addition to the above criticisms, someone could point out that I haven’t been entirely accurate in my summary of how voluntary intoxication affects responsibility. While it is generally true that it does not undermine responsibility, there are certain cases where it might. A distinction is sometimes drawn between crimes of basic intent (which depend on recklessness/negligence) and specific intent (which depend on specific knowledge and/or intention to do something specific). While voluntary intoxication does not undermine responsibility for crimes of basic intent it might undermine responsibility for crimes of specific intent (this is a matter left to the jury to determine from the facts). The problem then is that rape and penetrative sexual assault are, in part, crimes of specific intent: the defendant must have had the intent to penetrate the other party. So there may be some cases where voluntary intoxication can undermine responsibility for sexual assault. But this complication offers little reassurance to someone who thinks the hypothetical is puzzling since even in those cases it will still be possible to hold the defendant liable for a ‘lesser’ form of sexual assault that does not require specific intent.

Are there any other ways to resolve the dilemma at the heart of the hypothetical? There are two. One would be to address the inconsistency between our attitudes to consent and responsibility by dropping our commitment to either premise (3) or (4) of the argument given earlier. This would mean either accepting that intoxicated consent, at least when the intoxication is voluntary, is valid consent (i.e. there can be a prior fault analysis of consent) or that one cannot be responsible if intoxicated to a sufficient degree (i.e. our attitude to responsibility should be the same as our attitude to consent). Neither of those options seems attractive given the moral rationales underlying our acceptance of (3) and (4): avoid giving the intoxicated a ready excuse and protect the vulnerable from abuse. But some people have defended these views. For example, Heidi Hurd once argued that intoxicated consent should be deemed valid if the intoxication was voluntary.

The other potential solution would be to argue that there is something missing from our understanding of consent that warrants treating intoxicated consent differently from intoxicated responsibility. Perhaps there is some third factor/capacity that is needed for valid consent and that is impaired by voluntary intoxication? One possibility here would be to argue that consent requires a continuing act and responsibility does not. This is a common view when it comes to consent to sex. People often argue that consent must be ongoing and that it can be withdrawn at any time. But I would say that this alleged ‘third factor’ is more puzzling than anything else. Not all consent involves an ongoing act or the possibility of withdrawal (e.g. consent to general anaesthetic) and, more importantly, why should consent require ongoing acts and responsibility not? Why treat those things differently? Another possibility is suggested by Alan Wertheimer (whose arguments I considered in more detail previously) who once argued that consent required a deeper expression of the agent’s will than responsibility and hence this justified the asymmetrical approach. Now, I’m not sure why we should accept that there is a deeper expression of will in the case of consent, but in any event this argument only works if we assume that the occurrent capacities for responsibility are not impaired at the time of the offence. The problem I am pointing out is that even when they are impaired there is a tendency to trace responsibility back to the decision to get intoxicated. Why do we think it is okay to do that for responsibility but not for consent? I don’t know if there are any other third factors, but it would be worth exploring.


3. Conclusion
In conclusion, the case in which both parties to an incident of sexual touching are voluntarily intoxicated to the point that their capacities for consent and responsibility are impaired presents what I think is a genuine puzzle, at least in the abstract case. The puzzle can be resolved by arguing that both parties are responsible for sexually assaulting each other, but this doesn’t seem to be an intuitively compelling solution. In real world cases, it may be possible to avoid the puzzle by claiming the facts favour one interpretation of the case over another, but if they don’t (and if there is sufficient doubt about the correct interpretation) we have to confront the tension in our attitudes to consent and responsibility.

* I know that the language used to describe sexual offences is highly politicised. Some people object to using terms like ‘sexual’ or ‘sex’ to refer to non-consensual acts. They argue that these things must be referred to as rape or assault. I use the term ‘sexual touching’ for two reasons (i) this is the language used in law and (ii) until you determine guilt or innocence it would be inappropriate to refer to these acts as rape or assault without the additional qualifier of ‘alleged’ or something of that sort. 

** In criminal law, the UK is divided into three separate jurisdictions. The only jurisdiction with which I am familiar is England and Wales. Nevertheless, I imagine the position on drunken consent is similar in the other two jurisdictions. 

*** Technically, the legal rule is more complicated than this because the guilty party would also have to lack the ‘honest’ or ‘reasonable’ (the standard varies) belief in consent. I overlook that here for the simple reason that this is not immediately relevant in cases of sufficient intoxication. In other words, if someone argued that their intoxication caused them to believe the other party was consenting, this would not be accepted as a legitimate excuse.




Monday, August 19, 2019

A Moral Duty to Share Data? AI and the Data Free Rider Problem

Image taken from Roche et al 2014


A lot of the contemporary debate around digital surveillance and data-mining focuses on privacy. This is for good reason. Mass digital surveillance impinges on the right to privacy. There are significant asymmetries of power between the companies and governments that utilise mass surveillance and the individuals affected by it. Hence, it is important to introduce legal safeguards that allow ordinary individuals to ensure that their rights are not eroded by the digital superpowers. This is, in effect, the ethos underlying the EU’s General Data Protection Regulation (GDPR).

But is this always a good thing? I have encountered a number of AI enthusiasts who lament this fixation on privacy and data protection. Their worry seems to be this: Modern AI systems depend on massive amounts of data in order to be effective. If they don’t get the data, they cannot learn and develop the pattern-matching abilities that they need in order to work. This means that we need mass data collection in order to unlock the potential benefits of AI. If the pendulum swings too far in favour of privacy and data protection, the worry is that we will never realise these benefits.

Now, I am pretty sure that this is not a serious practical worry just yet. There is still plenty of data being collected even with the protections of the GDPR and there are also plenty of jurisdictions around the world where individuals are not so well protected against the depredations of digital surveillance. So it’s not clear that AI is being held back right now by the lack of data. Still, the objection is an interesting one because it suggests that (a) if there is a sufficiently beneficial use case for AI and (b) if the development of that form of AI relies on mass data collection then (c) there might be some reason to think that individuals ought to share their data with AI developers. This doesn’t mean they should be legally obliged to do so, but perhaps we might think there is a strong ethical or civic duty to do so (like, say, a duty to vote).

But this argument encounters an immediate difficulty, which we can call the ‘data free-rider problem’:

Data Free-Rider Problem: If the effectiveness of AI depends on mass data collection, then the contribution of any one individual’s data to the effectiveness of AI is negligible. Given that there is some moral cost to data sharing (in terms of loss of privacy etc.) then it seems that it is both rational and morally acceptable for any one individual to refuse to share their data.

If this is right, then it would be difficult to argue that there is a strong moral obligation on individuals to share their data.
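The structure of the problem can be captured in a simple (and admittedly stylised) bit of notation; the symbols are my own. Let $B(n)$ be the collective benefit of an AI system trained on data from $n$ individuals, and let $c$ be the cost to me of sharing my data. If $n$ is very large, then:

$$\underbrace{B(n) - B(n-1)}_{\text{marginal benefit of my data}} \;\approx\; 0 \;<\; c$$

My marginal contribution is swamped by my personal cost, so withholding my data looks both individually rational and, on a naive cost-benefit calculation, morally permissible.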

Problems similar to this plague other ethical and political debates. In the remainder of this article, I want to see if arguments that have recently been made in relation to the ethics of vaccination might carry over to the case of data sharing and support the idea of an obligation to share data.


1. The Vaccination Analogy: Is there a duty to vaccinate?
The dynamics of vaccination are quite similar to the dynamics of AI development (at least if what I’ve said in the introduction is accurate). Vaccination is beneficial but only if a sufficient number of people in a given population get vaccinated. This is what allows for so-called ‘herd immunity’. The exact percentage of people within a population that need to be vaccinated in order to achieve herd immunity varies, but it is usually around 90-95%. This, of course, means that the contribution of any one individual to achieving herd immunity is negligible. Given this, how can you argue that any one individual has an obligation to get vaccinated?
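For those curious where the 90-95% figure comes from, the standard back-of-the-envelope formula for the herd immunity threshold (assuming a simple model in which everyone mixes homogeneously) is:

$$p_c = 1 - \frac{1}{R_0}$$

where $R_0$ is the basic reproduction number, i.e. the average number of people infected by a single case in a fully susceptible population. Measles is highly contagious, with $R_0$ commonly estimated at somewhere between 12 and 18, which gives $p_c = 1 - 1/15 \approx 93\%$ for a mid-range estimate; hence the oft-quoted 90-95% target.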

This is not a purely academic question. Although vaccination is medically contraindicated for some people, for the vast majority it is safe and low cost, with minimal side effects. Unfortunately, there has been a lot of misinformation spread about the harmfulness of vaccination in the past 20 years. This has led many people to refuse to vaccinate themselves and their children. This is creating all manner of real world health crises, with, for example, measles outbreaks now becoming more common despite the fact that an effective vaccination is available.

In a recent paper, Alberto Giubilini, Tom Douglas and Julian Savulescu have argued that despite the fact that the individual contribution to herd immunity is minimal, there is nevertheless a moral obligation on individuals (for whom vaccination is not medically contraindicated) to get vaccinated. They make three arguments in support of this claim.

The first argument is a utilitarian one and derives from the work of Derek Parfit. Parfit asks us to imagine a hypothetical case in which a group of people are in a desert and need water. You belong to another group of people each of whom has 1 litre of water to spare. If you all pooled together your spare water, and carted it off to the desert, it would rescue the thirsty group of people. What should you do? Your intuition in such a case would probably be “well, of course I should give my spare water to the other group”. Parfit argues that this intuition can be justified on utilitarian grounds. If you have a case in which collective action is required to secure some beneficial outcome, then, under the right conditions, the utility-maximising thing to do is to contribute to the collective effort. So if you are a utilitarian, you ought to contribute to the collective effort, even if your contribution is minimal.

But what are the ‘right conditions’? One of the conditions stipulated by Parfit is that in order to secure the beneficial outcome everyone must contribute to the collective effort. In other words, if one person refuses to contribute, the benefit is not realised. That’s a bit of a problem since it is presumably not true in the hypothetical he is imagining nor in the kind of case we are concerned with. It is presumably unlikely that your 1 litre of water makes a critical difference to the survival of the thirsty group: 99 litres of water will save their lives just as much as 100 litres. Furthermore, you may yourself be a little thirsty and derive utility from drinking the water. So it might be the case that, if everyone else has donated their water, the utility-maximising thing to do is to keep the water for yourself.

Giubilini et al acknowledge this problem and address it by modifying Parfit’s thought experiment. Imagine that instead of pooling the water into a tank that is delivered to the people in the desert, each litre of water goes to a specific person and helps to save their life (they call this a case of ‘directed donation’ and contrast it with the original case of ‘collective donation’). In that case, the utility-maximising thing to do would be to donate the water. They then argue that vaccination is more like a directed donation case than a collective donation case. This is because although any one non-vaccinated person is unlikely to make a difference to herd immunity, they might still make a critical difference by being the person that exposes another person to a serious or fatal illness. This is true even if the risk of contracting and conveying the disease is very low. The small chance of being the crucial causal contributor to another person’s serious illness is enough to generate a utilitarian duty to vaccinate (provided the cost of vaccination to the vaccinated person is low). Giubilini et al then generalise from this to formulate a rule to the effect that if your failure to do X results in a low probability but high magnitude risk to others, and if doing X is low cost (lower than the expected risk to others) then you have a duty to do X. This means a utilitarian can endorse a duty to vaccinate. Note, however, that this utilitarian rule ultimately has nothing really to do with collective benefit: the rule would apply even if there was no collective benefit; it applies in virtue of the low probability high magnitude risk to others.
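Their generalised rule can be restated in expected-value terms (the notation is mine, not theirs): you have a duty to do X whenever

$$c_X \;<\; p \times H$$

where $c_X$ is the cost to you of doing X, $p$ is the (possibly very small) probability that your failure to do X harms someone else, and $H$ is the magnitude of that harm. For vaccination, $p$ is tiny but $H$ (serious illness or death) is large, while $c_X$ (a jab and a sore arm) is smaller still, so the inequality holds.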

The second argument is a deontological one. Giubilini et al actually consider two separate deontological arguments. The first one is based on a Kantian principle of universalisability: you ought to do that which you can endorse everyone doing; and you ought not to do that which you cannot endorse everyone doing. The argument then is that refusing to vaccinate yourself is not universalisable because you could not endorse a world in which everyone refused to vaccinate. Hence you ought to vaccinate yourself. Giubilini et al dismiss this argument for somewhat technical reasons that I won’t get into here. They do, however, accept a second closely-related deontological argument based on contractualism.

Contractualism in moral philosophy is the view that we can work out what our duties are by asking what rules of behaviour we would be willing to accept under certain idealised bargaining conditions. Giubilini et al focus on the version of contractualism that was developed by the philosopher Thomas Scanlon:

Scanlonian Contractualism: “[a]n act is wrong if its performance under the circumstances would be disallowed by any set of principles for the general regulation of behaviour that no one could reasonably reject as a basis for informed, unforced, general agreement.” (Scanlon 1998, 153 - quoted in Giubilini et al 2018)

Reasonable-rejectability is thus the standard for assessing moral duties. If X is reasonably-rejectable under idealised bargaining conditions, then you do not have a duty to do it; if it is not reasonably rejectable, then you have a duty to do it. The argument is that the requirement to vaccinate is not reasonably rejectable under idealised bargaining conditions. Or, to put it another way, the argument is that the failure to vaccinate would be disallowed by a set of rules that no one could reasonably reject. If each person in society is at some risk of infection, and if the cost of reducing that risk through vaccination is minimal, then it is reasonable to demand that each person get vaccinated. Note that the reasonability of this depends on the cost of vaccination. If the cost of vaccination is very high (and it might be, for certain people, under certain conditions) then it may not be reasonable to demand that everyone get vaccinated. Giubilini et al’s argument is simply that for most vaccinations, for most people, the cost is sufficiently low to make the demand reasonable.

The third argument is neither utilitarian nor deontological. It derives from a widely-accepted moral duty that can be embraced by either school of thought. This is the duty of easy rescue, roughly: if you can save someone from a harmful outcome at minimal cost to yourself, then you have a duty to do so (because it is an ‘easy rescue’). The classic thought experiment outlining this duty is Peter Singer’s drowning infant case: you are walking past a pond with a drowning infant; you could easily jump in and save the infant. Do you have a duty to do so? Of course you do.

Giubilini et al argue that vaccination gives rise to a duty of easy rescue. The only difference is, in this case, the duty applies not to individuals but to collectives. The argument works like this: The collective could ensure the safety of individuals by achieving herd immunity. This comes at a minimal cost to the collective as a whole. Therefore, the collective has a duty to do what it takes to achieve herd immunity. The difficulty is that this can only happen if 90-95% of the population contributes to achieving that end through vaccination. This means that in order for the collective to discharge its duty, it must somehow get 90-95% of the population to vaccinate themselves. This means the group must impose the burden of vaccination on that percentage of the population. How can it do this? Giubilini et al argue that instead of selecting some specific cohort of 90-95% of the people (and sparing another cohort of 5-10%) the fairest way to distribute that burden is just to say that everyone ought to vaccinate. This means no one is singled out for harsher or more preferential treatment. In short, then, an individual duty to vaccinate can be derived from the collective duty of easy rescue because it is the fairest way to distribute the burden of vaccination.

Suffice to say there is a lot more detail and qualification in Giubilini et al’s paper. This quick summary is merely intended to show how they try to overcome the free rider problem in the case of vaccination and conclude that there is an individual duty to vaccinate. The question now is whether these arguments carry over to data collection and AI.


2. Do the arguments carry over to AI development?
Each of Giubilini et al’s arguments identifies a set of conditions that must apply in order to derive an individual duty to contribute to a collective benefit. Most of these conditions are shared across the three arguments. The two most important conditions are (a) that there is some genuine and significant benefit to be derived from the collective effort and (b) that the individual contribution to that collective benefit comes at a minimal cost to the individual. There are also other conditions that are only relevant to certain arguments. This is particularly true of the utilitarian argument which, in addition to the two conditions just mentioned, also requires that (c) the individual’s failure to perform the contributory act poses some low probability, high magnitude risk to others.

Identifying these three conditions helps with the present inquiry. Given the analogy we are drawing between AI development and vaccination, the question we need to focus on is whether these three conditions also apply to AI development. Let’s take them one at a time.

First, is there some genuine and significant benefit to be derived from mass data collection and the subsequent development of AI? At present, I am somewhat sceptical. There are lots of touted benefits of AI, but I don’t know that there is a single provable case of significant benefit that is akin to the benefit we derive from vaccination. The use of AI and data collection in medicine is the most obvious direct analogy, but my reading of the literature on AI in medicine suggests that the jury is still out on whether it generates significant benefits or not. There are some interesting projects in progress, but I don’t see a “killer” use case (pardon the irony) at this stage. That said, I would qualify this by pointing out that there are already people who argue that there is a duty to share public health data in some cases, and there is a strong 'open data' movement in the sciences that suggests there is a duty on scientists to share data. One could easily imagine these arguments being modified to make the case for a duty to share such data in order to develop medical AI.

The use of mass data collection to ensure safe autonomous vehicles might be another compelling case in which significant benefit depends on data sharing, but again it is early days there too. Until we have proof of significant benefit, it is hard to argue that there is an individual obligation to contribute data to the development of self-driving cars. And, remember, with any of these use cases it is not enough to show that the AI itself is genuinely beneficial, it must be shown that the benefit depends on mass data collection. This might not be the case. For example, it might be the case that targeted or specialised data (small data) is more useful. Still, despite my scepticism of the present state of AI, it is possible that a genuine and significant benefit will emerge in the future. If that happens, the case for an individual obligation to contribute data could be reopened.

Second, does the individual contribution to AI development (in the form of data sharing) come at minimal cost to the individual? Here is where the privacy activists will sharpen their knives. They will argue that there are indeed significant and underappreciated costs associated with data sharing that make it quite unlike the vaccination case. These costs include the intrinsic harm caused by the loss of privacy* as well as potential consequential harms arising from the misuse of data. For example, the data used to create better medical diagnostics AI could also be used to deny people medical insurance. The former might be beneficial but the latter might encourage more authoritarian control and greater social inequality.

My general take on these arguments is that they can be more or less compelling, depending on the type of data being shared and the context in which it is being shared. The sharing of some data (in some contexts) does come at minimal cost; in other cases the costs are higher. So it is not easy to do a global assessment of this second condition. Furthermore, I think it is worth bearing in mind that the users of technology often don’t seem to be that bothered by the alleged costs of data sharing. They share personal data willy-nilly and for minimal personal benefit. They might be wrong to do this (privacy activists would argue that they are) but this is one reason to think that the worry that prompted this article (that too much data protection is hindering AI) is probably misguided at the present time.

Finally, does the individual failure to contribute data pose some low probability high magnitude risk to others? I don’t know the answer to this. I find it hard to believe that it would. But it is conceivable that there could be a case in which your failure to share data poses a specific risk to another (i.e. that your data makes the crucial causal difference to the welfare of at least one other person). I don’t know of any such cases, but I’m happy to hear of them if they exist. Either way, it is worth remembering that this condition is only relevant if you are making the utilitarian argument for the duty to share data.


3. Conclusion
What can we conclude from this analysis? To briefly summarise, there is a prima facie case for thinking that AI development depends for its effectiveness on mass data collection and hence that the free rider dynamics of mass data collection pose a threat to the development of effective and beneficial AI. This raises the intriguing question as to whether there might be a duty on individuals to share data with AI developers. Drawing an analogy with vaccination, I have argued that it is unlikely that such a duty exists at the present time. This is because the reasons for thinking that there is an individual duty to contribute to herd immunity in the vaccination case do not easily carry over to the AI case. Nevertheless, this is a tentative and defeasible argument. In the future, it is possible that a compelling case could be made for an individual duty to contribute data to AI development. It all depends on the collective benefits of the AI and the costs to the individual of sharing data.


*There are complexities to this. Is privacy harmed if you voluntarily submit your data, even if this is guided by your belief that you have an obligation to do so? This is something privacy scholars struggle with. Historically, the willingness to concede to individual expressed preference (via informed consent) was quite high, but nowadays a more paternalistic view is being taken. The GDPR, for example, doesn’t make ‘notice-and-consent’ the sole factor in determining the legitimacy of data processing. It works with the implicit assumption that sometimes individuals need to be protected in spite of informed consent.

 

Wednesday, August 14, 2019

Self Sacrifice Devices and Self Driving Cars: Should we do it?




Lots of people are interested in the ethics of autonomous vehicles. Indeed, the philosophical literature on this topic has grown unwieldy in the past few years. Whereas once upon a time it was possible for one person to read and understand everything that had been published on this issue, I suspect that there is now so much written, and being written, that it has become impossible to keep up.

This is, in some ways, unfortunate. While there is a lot of good work being done, there is a tendency for popular discussions of the ethical issues to fixate on simplistic thought experiments such as the infamous ‘trolley’ dilemmas. This creates the impression that figuring out what an autonomous vehicle should do in such a case is the be-all and end-all of the ethical debate. This isn’t true. While there is some value to considering such hypothetical cases, they are edge cases that do not provide the best guide to thinking about how autonomous vehicles should react in all dilemmatic cases. Furthermore, there are other ethical issues arising from the use of such vehicles that need to be considered and are often overlooked.

I say all this by way of apology for what you are about to read. Although I agree with the conclusion reached at the end of the preceding paragraph, I have to confess that I enjoy thinking about hypothetical edge cases. They bring into sharp relief some of the most fascinating ethical concepts and questions with which we must contend. I am going to discuss one such hypothetical edge case in the remainder of this article. The edge case concerns whether we should design a system of autonomous driving vehicles in such a way that it allows for individuals to voluntarily sacrifice themselves in the case of unavoidable crashes.

Let me first explain what I mean by this and then consider the arguments for and against it.


1. The Self Sacrifice Device
To explain the idea, I have to say something about the nature of unavoidable crash scenarios. This may be familiar to some readers; they should feel free to skip ahead to the next paragraph. An unavoidable crash scenario is a scenario in which a car is going to collide with someone or something and must choose between potential sites of collision. The typical set-up is a modified version of the trolley dilemma. A car is driving down a road when it is suddenly confronted with two sets of pedestrians occupying both sides of the road. On one side is an elderly couple; on the other side a group of children (or any other set of pedestrians). It is impossible for the car to avoid colliding with at least one set of pedestrians and so a split-second decision must be made as to which set of pedestrians should be saved and which sacrificed. Many variations of this basic set-up are possible. For example, instead of choosing between sets of pedestrians perhaps the car has to choose between colliding with a crash barrier (thereby injuring/killing the driver and passengers) and a group of pedestrians. Either way, the important point is that in these cases a harmful outcome is unavoidable (they are genuine dilemmas); the key ethical issue is not to prevent harm but to select between harmful outcomes. Sometimes it will be possible to minimise the amount of harm, other times the harmful outcomes may be equally weighted. If a human is driving the car, then the human must make the split-second decision. If a computer program is in control, then its programming must instruct it what to do in such a case.

Truly unavoidable crash scenarios of this sort are probably quite rare. I am not familiar with any studies that have been done on the matter, but my guess is that many real-world crash scenarios don’t involve such stark and equally weighted choices. There is much more uncertainty and imbalance in practice. This is one reason why some people think it is a mistake for the ethical debate about autonomous vehicles to become dominated by their discussion. Nevertheless, I persist.

I do not persist in the hope of discussing all possible resolutions of such cases. Instead, I persist in the hope of discussing the role that self-sacrifice might play in addressing such cases. In a previous article, I looked at a thought experiment from Hin Yan Liu concerning the creation of “immunity devices” that could be used in unavoidable crash scenarios. Liu’s idea was that it would probably be possible to create a device (just a small RFID chip perhaps) that would emit a signal that told a self-driving vehicle that the person wearing this device should not be sacrificed in the event of an unavoidable crash scenario. The effect of such a device might not be dissimilar to other forms of immunity that are granted to people by law (e.g. diplomatic immunity) or to a kind of extra health/safety insurance that people purchase at will.

To be clear, Liu didn’t think that the creation of immunity devices was a good idea. He just argued that their creation did not seem implausible and so it was important to think about the ethical and social ramifications. Here, I want to suggest a simple variation on Liu’s thought experiment. What if instead of immunity devices we allow people to create self-sacrifice devices? These devices would also send a signal to a self-driving vehicle, but the meaning of the signal would be very different. It would inform the vehicle that the wearer of the device is willing to be sacrificed in the event of an unavoidable crash. This might be analogised to carrying an organ donor card, albeit with the not inconsiderable difference that instead of signalling your willingness to give up your organs after death you are signalling your willingness to sacrifice your life for the lives of others.
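To make the thought experiment a little more concrete, here is a purely hypothetical sketch of what the device’s signal and the vehicle’s use of it might look like. Nothing here corresponds to any real protocol or product; every name and field is invented for illustration, and the decision rule is deliberately crude.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SacrificeBeacon:
    """Hypothetical payload broadcast by a self-sacrifice device."""
    wearer_id: str                 # pseudonymous identifier for the wearer
    consents_to_sacrifice: bool    # the wearer's standing pre-commitment
    authenticated: bool            # whether the signal was cryptographically verified

@dataclass
class CollisionOption:
    """One of the mutually exclusive targets in an unavoidable crash."""
    description: str
    persons_detected: int          # total number of people at this target
    beacons: List[SacrificeBeacon] = field(default_factory=list)

def choose_collision_target(options: List[CollisionOption]) -> CollisionOption:
    """Prefer a target where every detected person has validly pre-consented.

    This is the crude rule the thought experiment imagines; it says nothing
    about ties, mixed groups, or what fallback rule should apply.
    """
    for option in options:
        fully_consenting = (
            len(option.beacons) == option.persons_detected
            and all(b.consents_to_sacrifice and b.authenticated
                    for b in option.beacons)
        )
        if fully_consenting:
            return option
    # No fully consenting target: defer to some other resolution rule
    # (harm minimisation, randomisation, majority preference, etc.).
    return options[0]
```

Even this toy version makes visible several of the problems discussed below: the signal must be authenticated (otherwise it can be spoofed), beacons must be matched to the right group of pedestrians, and a fallback rule is still needed for all the cases the device does not resolve.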

What should we think about the creation of such a device?


2. The Arguments for and against a Self-Sacrifice Device
You might think that the idea of a self-sacrifice device is absurd or abhorrent. But let’s just consider for a moment whether there are any good reasons to endorse the creation of such a device.

I can think of two. First, as you may know, there is a rich experimental literature on people’s attitudes to trolley dilemmas. In these experiments, the dilemmas are usually structured in such a way that the experimental subject has to choose between harming two or more people other than themselves. But in some experimental studies people have indicated that if they had the option, they would prefer to sacrifice themselves instead of sacrificing some other party (e.g. Sachdeva et al 2015; Di Nucci 2013). In other words, if someone has to be harmed in such a case people would prefer if they could bear the brunt of the harm themselves (though there are some inconsistencies in this). For what it is worth, whenever I discuss trolley-type dilemmas with students, I find that a significant proportion of students agree that self-sacrifice, if possible, would be the ‘right’ thing to do in such a case. One advantage of the self-sacrifice device is that it allows people to exercise this preference in unavoidable crash scenarios. So you could argue that the creation of such a device is a good thing because it gives people an option that they want to be able to exercise.

Second, and perhaps more importantly, there is a rich moral tradition suggesting that self-sacrifice is a noble deed. Think of the soldier who saves his/her comrades by diving on a grenade; think of the medical worker who cares for Ebola sufferers only to be struck down by the disease themselves. These people are celebrated in our culture. They went above and beyond the call of moral duty. They are moral heroes and heroines. We might argue that it would be a good thing to give people the option of noble self-sacrifice because it would allow them to exercise this extreme form of moral virtue. We might argue that this would be a particularly good thing in light of the fact that other suggested solutions to unavoidable crash scenarios are not hugely compelling (e.g. forcing some moral theory such as consequentialism on everyone; deciding by majority preference; or selecting outcomes at random).

But, but, but…There is also, clearly, a dark side to the idea of a self-sacrifice device. Indeed, there are several dark sides: reasons to think that the creation of such devices would not be a good thing. Let’s review some of them.

First, we might worry that the creation of a self-sacrifice device undermines the goodness of noble self-sacrifice. A noble self-sacrifice is a supererogatory act. Its goodness lies, to some extent, in the fact that it is an unforced, often spontaneous, decision. A self-sacrifice device might undermine this unforced spontaneity. People using the device would have to pre-commit to sacrificing themselves at some unknown (perhaps never-to-be-realised) future moment. Their capacity for spontaneous virtue might thus be compromised. More importantly, in some societies, the existence of such a device might pressure or force some people into sacrificing themselves against their will. For example, the historical norm in (Western) societies is that adult men ought to sacrifice themselves in order to protect women and children. If this norm continues to apply, we might expect adult men to face a strong social pressure to use self-sacrifice devices. Thus we might worry that in wearing such devices they are not authentically expressing their moral agency but, rather, conforming to social stereotyping.

Second, in addition to social pressures, there may be a strong temptation to create legal pressures that force some people into wearing self-sacrifice devices. This is particularly true if such devices become commonplace and it is necessary to create a ranking system to differentiate between different wearers (i.e. to decide who gets sacrificed first in the event of an unavoidable crash). This would presumably require a points-based ranking and it would be tempting to some governments to tie this into a system of social punishment. This might work like the Chinese social credit system(s). People might get docked points if they do something wrong thus making it marginally more likely that they will be sacrificed in the event of an unavoidable crash. Of course, in this case we have moved beyond the world of self-sacrifice into the world of authoritarian social control: everyone might end up being required to wear a device that signals their social worth to machines that may use this information to distribute risks away from high value individuals and onto low value individuals. The point is that there is, arguably, a slippery slope from creating a self-sacrifice device to enabling such a system of social control. This might be one compelling reason not to create such a device.

Third, there would, presumably, be some formidable practical difficulties with the implementation of self sacrifice devices. How do we guarantee that the signal sent from the device to the car is reliable and high speed? Would the car have enough time to use the information in the crash scenario? Could the person wearing the device be singled out from other potential crash victims? What if they are embedded in a group of pedestrians? What if they are with their children? Practical engineering solutions would need to be found for each of these issues and each involves important ethical choices.

Fourth, there would, presumably, be significant cybersecurity challenges raised by the existence of such devices. They could be hacked. A malicious agent could play around with the signals being sent back and forth between the cars and the devices, perhaps directing the car to collide with wearers even when there is no unavoidable crash. In other words, the mere existence of the device makes possible a whole range of malicious interferences. (Cybersecurity issues of a similar nature plague the entire field of autonomous vehicles).

Fifth, and finally, even if we grant that self-sacrifice is a good thing (and I grant that it is in certain cases) it’s not obvious that you need a self-sacrifice device to enable it. It would presumably still be open to some pedestrians (or drivers/passengers) to exercise a preference for self-sacrifice through other means. A pedestrian could jump in front of a car, for example, or a driver/passenger could take control of the steering wheel and crash the car into a wall (assuming the autonomous vehicle allows for such driver-takeover). The opportunities for self-sacrifice might be more limited in these cases, but that might not be a bad thing given the other risks discussed above.


3. Conclusion
So where does that leave us? There are probably more arguments that could be mustered on both sides, but based on this quick review I think, on balance, that the arguments against self-sacrifice devices are more compelling than the arguments in their favour. There is a prima facie case to be made for the creation of such devices, but this is negated by the many risks posed by their creation and by the fact that opportunities for self-sacrifice can be accessed in other ways.




Monday, August 12, 2019

The Types and Harms of Victim-Blaming




I have recently been reading up about the ethics of victim-blaming. Victim-blaming is a prevalent phenomenon. It crops up most controversially in cases of sexual assault, and also features in hot-button debates about poverty and police shootings. These controversial cases are not, however, the only ones in which the phenomenon arises. Victim-blaming, of a sort, features prominently in private law, particularly in personal injuries litigation where people who suffer harm as a result of the negligence of others have their compensation reduced (or eliminated) as a result of their own perceived negligence. It also crops up frequently in our day-to-day lives. I suspect many of us have criticised or have been tempted to criticise our friends and colleagues for failing to take adequate precautions to ensure the safety and security of themselves or their families or their possessions. In certain circumstances, this kind of criticism can amount to victim-blaming.

From an intellectual perspective, victim-blaming is interesting because it implicates many important philosophical concepts. These include responsibility, blame, innocence, power, oppression, and distributive justice/injustice. This means that it is not only a practically important topic, but also one that raises many fascinating and complex intellectual questions. The common intuition among people I have talked to is that victim-blaming is always a bad thing, but if you read the literature you find a slightly more ambivalent perspective emerging, with some people accepting that certain forms of victim-blaming can be acceptable (for an excellent exploration of these ambivalent attitudes to the phenomenon, see Susan Wendell’s article on responsibility and oppression).

I haven’t fully developed my own thoughts on the issue (are thoughts ever fully developed?) but I have learned quite a bit from my reading thus far. In the remainder of this article, I want to share two important ideas about victim-blaming. Both come from an article by J. Harvey called ‘Categorizing and Uncovering “Blaming the Victim” Incidents’. The first concerns the different forms that blaming the victim can take; the second concerns the harms that arise as a result. Both help to highlight why victim blaming is seen to be particularly problematic in the case of minority groups or people living under conditions of oppression.


1. Six Different Forms of Victim Blaming
All blaming the victim (BTV) cases have a common structure. First, they involve a victim(s), i.e. someone who suffers a harm. Second, they involve some attempt to assign responsibility for this harm to the victim.

Harvey adds to this that all these attempts to assign responsibility to the victim are inappropriate and hence all BTV cases are morally suspect. I would prefer not to make that assumption part of the defining characteristics of BTV. This is because I think it builds the moral inappropriateness of BTV into its definition; this strikes me as something that needs to be argued for and not simply assumed.

I suspect what is going on here, incidentally, is that in many people’s heads the term ‘victim’ is synonymous with ‘innocence’ and if all victims are innocent, then all blame assigned to them is morally inappropriate. But I prefer to define ‘victim’ broadly to cover anyone who suffers a harm. This avoids making assumptions about their responsibility or innocence.

Beyond those two features there is probably a third feature that is common to most BTV cases, namely: that the harm suffered by the victim appears to have been caused by another person (call them the ‘perpetrator’). The function of victim-blaming is then to shift some or all responsibility for the harm from the perpetrator to the victim. That said, I am reluctant to say that this is a common feature of all BTV cases. This is because people often talk about self-victimisation (e.g. the smoker suffering from lung cancer) and about victims of natural disasters (flood victims/earthquake victims). These cases do not involve a third party perpetrator. The potential absence of a perpetrator is one of the things Harvey highlights in her* categorisation of different forms that BTV cases can take.

Without further ado, let’s consider these six different cases:

Case 1: The victim suffers from some harm that was not attributable to the actions of a perpetrator (call this a ‘non-moral’ harm) and is then blamed for this. This is the kind of case I was just alluding to and would be typified by the example of someone blaming a cancer patient for bringing about their own condition.

Case 2: The victim suffers from some harm that was attributable to a perpetrator (call this ‘moral harm’), but they are told that this wasn’t really harm and that they are miscategorising what happened to them. This is usually accompanied by some allegation to the effect that they are overreacting or engaging in false or malicious accusations. Harvey gives the example of a woman in the Canadian military who complained when her commanding officer called her a ‘broad’. Her complaint was dismissed for being an inappropriate overreaction.

Case 3: The victim suffers from some moral harm but it is argued that this was not attributable to a perpetrator and was in fact a case of non-moral harm. Harvey gives the example of a woman who complains of sexual harassment. The complaint is dismissed but it is accepted that the woman suffered from considerable distress and psychological harm. This, however, is attributed to her own dispositions/psychological frailty and not the actions of a perpetrator.

Case 4: The victim suffers from some moral harm, which is prima facie attributable to a perpetrator, but then it is argued that the victim was also partly or maybe even wholly responsible for the harm. This is usually justified on the grounds that the victim either intentionally or negligently provoked the perpetrator. The classic example here is the case of the sexual assault victim who is alleged to have ‘led on’ the perpetrator through their behaviour or dress. This probably constitutes the core case of victim-blaming and is what most people have in mind when they think of the phenomenon.

Case 5: The victim suffers from some moral harm, which is attributed to a perpetrator (i.e. they are taken to bear the majority of the responsibility) but then it is argued that the victim somehow made the harm worse than it needed to be through their own actions. The intuition underlying this case is that people ought to take steps (if they can) to minimise the harm they suffer. So, again, we have the classic case of a sexual assault victim (or harassment victim) who is criticised for not using force against the perpetrator, or for not running away or screaming, or for not confronting the perpetrator and telling them that they did not consent to their conduct.

Case 6: The victim suffers from some moral harm, which is wholly attributed to the perpetrator, but then it is argued that after it occurred the victim did something that made it worse than it needed to be. This is really just a subtle variation on the previous case, involving longer-term reactions to the harm. Harvey notes that victims can sometimes be blamed for exaggerating the harm they have suffered, for brooding or dwelling on it and not moving on, and for protesting the harm in an inappropriate way.


As you can see, these cases vary in interesting ways. You might query whether we need all six, but I think there is value to each distinction. The distinctions show how, even though there is a core BTV case (case 4), victim-blaming can arise in other ways.


2. The Harms of Victim-Blaming
So much for the different forms of victim-blaming; what about its ethics? We know that people find it objectionable (even if they frequently engage in it), but why? What’s so harmful about it?

Harvey identifies seven different harms that result from victim-blaming. I’m going to simplify her analysis and talk about three primary types of harm that can result from it:

Misattribution harms: Someone who is innocent or not fully responsible for a harm is singled out as being morally at fault. This is morally wrong and contrary to how we think principles of blame and responsibility should be applied. So this results in a kind of moral harm being inflicted on the victim. This is the most basic and obvious kind of harm that results from victim-blaming. In practice it can be quite an abstract and philosophical form of harm, unless it has real-world implications (e.g. the victim is punished or has their compensation reduced/eliminated).

Psychological harms: Because they have been blamed, the victim suffers from some kind of psychological harm, often of a lingering kind. For example, the victim may suffer an ongoing loss of confidence, self-esteem or self-respect. They may feel shame and guilt that they ought not to feel. This is distinct from, but comes on top of, the harm they experienced through their victimisation (e.g. trauma or physical distress).

Oppression-related harms: The victim is assumed to have more power than they actually have and may be expected (unfairly) to proactively protect against their own victimisation in the future. This is a particular problem when members of oppressed groups are the victims because the imposition of additional responsibility-burdens on them tends to compound and perpetuate their oppression.

These harms are not mutually exclusive. Any particular BTV case may involve all three of them. Again, consider the classic case of a sexual assault victim who is blamed on the grounds that she provoked the perpetrator. Here we have blame being misattributed to the victim. This blame is likely to lead many people to expect her to proactively avoid future victimisation (don’t dress like that! don’t drink! don’t flirt! don’t walk alone! etc). These expectations will, no doubt, foist unreasonable burdens upon her. Her freedom of movement, dress and so forth will be curtailed more than that of others (specifically men). This all serves to compound the oppression that she and other women experience, particularly in relation to how they must act in heterosexual relations. It is also possible that the victim-blaming will be psychologically harmful. The woman may experience shame and guilt as a result of the blame, and may lose self-respect and self-esteem. She may even be encouraged to feel those things by others in her community.

Sometimes these harms are not so obvious. Many people engage in (mild) forms of victim-blaming for the best of reasons: they want to empower victims to avoid harm in the future. But Harvey makes the important point that the harm of victim-blaming is independent of the motivations underlying it. This is, in some ways, a trivial observation: harming is distinct from wronging. You can harm someone without intending to do so. But it is an important point to make in relation to BTV cases. We have a tendency to assume that we have more control over the world than we really do. This leads us to endorse narratives of false empowerment, e.g. ‘If I hadn’t worn that dress, it wouldn’t have happened…’. These narratives give us an unreasonable sense of what we can do to avoid future victimisation. Encouraging people from oppressed groups, who are already disadvantaged, to embrace these narratives of false empowerment is problematic, particularly if what they have to do to exercise that power curtails their freedom to live a flourishing life in other ways.

But there is a delicate balancing act to perform here. You don’t want people to endorse a narrative of false helplessness either. The victim-mindset can be seductive. We often don’t want to take responsibility for what happens to us. We want others to take up that burden. This is one of the things I like about Susan Wendell’s analysis of victimisation and oppression. She is acutely aware of the delicate balancing act that needs to take place when interacting with victims, suggesting that sometimes we need to get beyond the simplistic ‘blame the victim’ versus ‘blame the perpetrator’ framing of these cases. Instead, we have to develop a mindset in which we can acknowledge the wrong done to the victim whilst at the same time empowering them to transcend their victimhood. I suspect the key to this lies in how we seek to empower the victim. Do we impose unreasonable burdens on them that compound their oppression? Or do we give them some capacity to address the conditions of their oppression? The latter kind of empowerment seems less objectionable than the former.

I would also add, as a final point, that there might be a flipside to all this. Harvey is right, I believe, to say that victim-blaming is particularly problematic when the victim belongs to an oppressed group. But not all victims belong to such groups. Does this imply that it is less problematic to blame victims from powerful groups? I haven’t seen this explored in any detail in the literature that I have read but it seems like a point worth considering.


* I don’t know exactly who ‘J. Harvey’ is, but I assume it is Jean Harvey, a philosopher who died in 2014 and wrote a lot about oppression. I could be wrong about this.



Friday, August 2, 2019

The Robotic Disruption of Morality




We increasingly collaborate and interact with robots and AIs. We use them to perform tasks and we also find that our choices and opportunities are affected by their operations. The increasing prevalence of such interactions has led to an explosion of interest in AI ethics and robo-ethics. Squads of academics, technologists and policy-makers are frantically asking how we should use ethical principles to guide and constrain the operation of robots and AIs. The prevailing belief amongst most of these actors is that long-standing human moral beliefs and practices should constrain the operation of these new technologies.

There is, however, another kind of inquiry we can conduct into the impact of robotics and AI on morality. Instead of asking how our moral beliefs and practices should constrain the operation of the technology, we can ask whether and to what extent the technology is changing our moral beliefs and practices. Admittedly, there are plenty of people interested in asking this question, but it seems to me to be the road that is currently less travelled. That’s why, in the remainder of this article, I want to share some thoughts that contribute to this second inquiry.

To be more precise, I want to outline one naturalistic theory of how human morality came into being (Michael Tomasello’s theory). I then want to consider how this could be disrupted or undermined by the growing prevalence of robotics and AI. I’m trying to be tentative, not dogmatic. I’m very interested in feedback. If you think this is an interesting line of inquiry, and have thoughts on how it could be developed further, please leave a comment at the end.


1. Tomasello’s Theory of Human Morality
I’ll start by setting out Tomasello’s theory. The theory comes from the book The Natural History of Human Morality. It is an attempt to explain how human morality came into being over the course of our evolutionary and cultural history. The theory is interesting and probably represents the best current attempt to come up with a naturalistic account of the origins of human morality. One of the most impressive things about it is the range of empirical evidence that Tomasello draws upon to support his theory, much of it coming from his own lab.

Unfortunately, I am not going to discuss any of that empirical evidence (read the book! It’s good). Instead, I’m going to focus on the general structure of the theory. What exactly does Tomasello think happened in order for humans to develop their contemporary moral beliefs and practices?

To answer that, you first need to know something about what Tomasello understands by the phrase ‘human morality’. Tomasello’s main focus is on moral norms and the practices associated with them. Humans believe that they have duties and obligations; that they ought to fulfil their duties; that people deserve to be blamed if they don’t live up to their duties; and that people deserve to be treated fairly if they do. These beliefs, and their associated practices, are what Tomasello is interested in when it comes to explaining human morality. How did they come into being?

Tomasello argues that they came into being as the result of two key transitions. The first key transition was the rise of cooperative hunting and foraging. One of the distinctive features of humans is our willingness to cooperate with one another to achieve joint goals. This sets us apart from our ape cousins. For example, chimpanzees will sometimes form hunting parties that appear to work together towards a common end, but these alliances are usually feeble and easily broken down; humans form more sustained cooperative partnerships (Tomasello has performed several experiments showing that our ape cousins are not ‘natural born’ cooperators in the same way that we are).

But how do human cooperative partnerships work? Take the case of two hunters working together to track and kill a deer. Tomasello argues that their partnership is an exercise in joint agency. They imagine that they are both part of a joint ‘mind’ that is working together toward a common goal. They each have their own distinctive roles in relation to that common goal, but these roles are interchangeable and conceived as being equally important. This gives rise to a distinctive ‘second personal’ psychology. Each hunter sympathises with the position of the other hunter and treats them as they would treat themselves. In other words, they each think that the other deserves a fair share of the spoils of the hunt; they don’t just grab all they can for themselves. In addition to this, each of them understands that they have duties with respect to the common goal (‘role responsibilities’) and, if they fail to live up to those duties, the other hunter can hold them to account. I’ve tried to illustrate this model below.




A lot of what we need to sustain normative beliefs and practices is present as a result of this first transition. Nevertheless, Tomasello argues that there is another important transition responsible for modern moral norms. After the transition to cooperative partnerships, humans also started to form cooperative groups. These groups also worked together through joint agency but, crucially, they sometimes competed with other cooperative groups. To survive this competition, the groups had to form institutional superstructures that promulgated, policed and enforced a common set of normative beliefs and practices.

This, in turn, gave rise to the complex moral psychology that most of us now share. This psychology consisted in a range of moral emotions that reinforced the institutional superstructure. Some of these moral emotions were self-directed, e.g. feelings of guilt and shame when norms were broken and feelings of self-respect when norms were upheld. Some were other-directed, e.g. feelings of trust and respect when others upheld the norms, and feelings of resentment and blame when they did not.

In short, Tomasello argues that modern human morality emerged from two important developments in human psychology (a) our capacity to take the second personal stance, i.e. to sympathise with the other and view them as an equivalent agent and (b) the complex suite of moral emotions that goes with this. Suffice to say there is a lot more detail in the book about how these things work and how they came into being. Hopefully, this overview is enough to give you the gist of the theory.


2. The Robotic Disruption of Human Morality
From my perspective, the most interesting aspect of Tomasello’s theory is the importance he places on the second personal psychology (an idea he takes from the philosopher Stephen Darwall). In essence, what he is arguing is that all of human morality — particularly the institutional superstructure that reinforces it — is premised on how we understand those with whom we interact. It is because we see them as intentional agents, who experience and understand the world in much the same way as we do, that we start to sympathise with them and develop complex beliefs about what we owe each other. This, in turn, was made possible by the fact that humans rely so much on each other to get things done.

This raises the intriguing question: what happens if we no longer rely on each other to get things done? What if our primary collaborative and cooperative partners are machines and not our fellow human beings? Will this have some disruptive impact on our moral systems?

The answer to this depends on what these machines are or, more accurately, what we perceive them to be. Do we perceive them to be intentional agents just like other human beings, or are they perceived as something else — something different from what we are used to? There are several possibilities worth considering. I like to think of these possibilities as being arranged along a spectrum that classifies robots/AIs according to how autonomous or tool-like they are perceived to be.

At one extreme end of the spectrum we have the perception of robots/AIs as tools, i.e. as essentially equivalent to hammers and wheelbarrows. If we perceive them to be tools, then the disruption to human morality is minimal, perhaps non-existent. After all, if they are tools then they are not really our collaborative partners; they are just things we use. Human actors remain in control and they are still our primary collaborative partners. We can sustain our second personal morality by focusing on the tool users and not the tools.

At the other extreme end of the spectrum we have the perception of robots/AIs as fully autonomous agents, independent of their human creators and users (if, indeed, they even have readily identifiable creators and users). This could be quite disruptive to our second personal morality since it means we cannot look directly to those human creators and users to sustain our moral norms. But this all depends on how we understand the autonomous agency of robots/AIs. If we understand it to be essentially the same as human agency — in other words, if we assume that robots/AIs have the same kinds of intentional states (beliefs, desires etc) underlying their agency — then the disruption may be quite minimal. We will not be forced to deal with ontologically distinct collaborative partners. Robots/AIs will be just like the human collaborative partners we are used to: we can continue to apply our familiar second personal morality to them.

Many people, however, are uncomfortable with this idea. They do not think that robots can (perhaps ever) share our intentional psychology. This means robots should never be perceived as being equivalent to human collaborative partners. So if robots/AIs do attain autonomous agency, it must be a wholly different and unfamiliar kind of autonomous agency. This is the form of perceived autonomous agency that could be most disruptive to our second personal morality. It would mean that we end up collaborating and interacting with robots/AIs on a regular basis but cannot apply our familiar moral frameworks to those interactions. We cannot respect or trust robots to uphold their duties; we cannot resent them or blame them when they do wrong. The traditional moral norms find no purchase. This might be seen as a good thing by those who dislike our traditional moral frameworks (particularly people who dislike the psychology of blame and retribution that seems to go with them), but others will be more disconcerted.

In between these two extremes there are, of course, a range of intermediate states. These are states in which robots/AIs are perceived as being partly tool-like and partly autonomous (and, perhaps, as sharing some of our intentional psychology but not all of it). For what it is worth, I believe we are currently somewhere in this intermediate range. I cannot pinpoint our exact location, and it probably varies depending on the specific form of robot/AI in which we are interested, but I can see some tensions emerging for our traditional second personal morality. You can see this most clearly in the debate about the ‘responsibility gap’ in relation to robotic weapons and cars. Some people cling to the traditional model and urge us to see these technologies as essentially tool-like in nature. Thus we can continue to focus our moral energies on the humans that control and shape these technologies. No disruption to worry about. Others, admittedly a minority, urge us to accept robots/AIs as potentially autonomous agents and then differ on the disruptive consequences of this, depending on how they understand machine autonomy.




3. Conclusion
That’s the gist of the idea. Does it make sense? I’m not sure. Clearly more work would need to be done on the exact mechanisms underlying our second personal morality and how exactly they might be disrupted by robots/AI. Furthermore, it would be worth addressing the longer-term consequences of this disruption. Is it really a deep problem or is it an opportunity? I would welcome further exploration of this idea.

Before I wrap up, though, I want to make two interpretive points. First, in case it wasn’t clear from the foregoing, I don’t think the analysis I have offered hinges on the actual ontological status of robots/AIs. In other words, I don’t think it really matters whether robots/AI actually are fully autonomous agents or have an intentional psychology. What I think matters is what they are perceived to be. Obviously, there is some relationship between perception and reality, but it is not tight and its looseness could create problems for our moral frameworks even if the actual reality does not.

Second, I want to be clear that I don’t think developments in robotics and AI are the only things that threaten our second personal morality. Philosophical theories of human behaviour that are naturalistic, deterministic and reductionistic also pose challenges for the legitimacy of second personal morality. These challenges have been widely debated and discussed. But they remain reasonably esoteric and divorced from people’s everyday lives. What interests me about the disruptive impact of robots/AIs is that it is more immediate and practically salient. People now have to interact and collaborate with these technologies. This means questions about the ontological status of those technologies need to be resolved on a day-to-day basis, and so their disruptive impact on our moral frameworks could be much more real than that of abstract philosophical concepts and debates.




Wednesday, July 31, 2019

Eternal Recurrence and Nihilism: How Can We Add Weight to Our Decisions?



[E]ternal recurrence means that every time you choose an action you must be willing to choose it for all eternity. And it is the same for every action not made, every stillborn thought, every choice avoided. And all unlived life will remain bulging inside you, unlived through all eternity. And the unheeded choice of your conscience will cry out to you forever. 
(“Nietzsche” in Irvin Yalom’s When Nietzsche Wept)

Nietzsche was a nihilist. He rejected the truth of normative and evaluative statements. That said, the exact kind of nihilism he favoured is a matter of some dispute (a dispute I touched upon in a previous article). Furthermore, despite his commitment to nihilism, a lot of Nietzsche’s philosophy was dedicated to moving beyond it. He wanted to show how life is still possible in the shadow of nihilism. Indeed, on one reading, it is possible to argue that Nietzsche saw nihilism as a great opportunity for humankind. Instead of passively accepting the values that are foisted upon us by cultural tradition, we can now actively create our own value systems. This could bring new hope to our lives.

Key to this was the doctrine of eternal recurrence. This doctrine showed how to add weight to our decisions in a nihilistic world. According to this doctrine, whenever you make a decision, you should imagine that you will have to make that decision over and over again (i.e. that there will be infinite replays of the decision). You should then choose whichever option you would be willing to choose across all of those replays. In other words, don’t go with one option for the sake of it and hope that you’ll get a chance to choose another option at a later replay; pick the option that stands up to scrutiny over and over again.

Nietzsche may have believed that eternal recurrence was a real thing, and that our lives really do replay themselves an infinite number of times. But that’s not, strictly speaking, necessary to the usefulness of eternal recurrence as a decision heuristic. We can analyse it as a purely imaginative doctrine. The question then becomes: would it really make a difference if we imagined infinite replays of the same decision? Would such imagining help us to overcome nihilism?

Nadeem Hussain, in his article ‘Eternal Recurrence and Nihilism: Adding Weight to the Unbearable Lightness of Action’ argues that it could. In the remainder of this article, I want to review what he has to say. I think Hussain’s argument is fascinating, in particular because it shows how Nietzsche’s doctrine of eternal recurrence has some direct analogues in modern moral theory and psychology.

That said, before I get into the meat of Hussain’s argument, it is worth repeating something he himself says, namely: that despite its importance in his philosophical project, Nietzsche doesn’t actually say a whole lot about the doctrine of eternal recurrence. He doesn’t offer a detailed explanation of how it is supposed to be applied to day-to-day decision-making, nor a detailed justification of its use. So interpreters like Hussain have to read between the lines and construct an understanding of the doctrine that does justice to what Nietzsche seemed to be saying. In other words, don’t fool yourself into thinking that what follows is pure, unadulterated Nietzsche; it’s Hussain’s take on Nietzsche.


1. Why do we need to add weight to our decisions?
Let’s start by considering exactly why the doctrine of eternal recurrence is needed. Hussain takes the view — which he defends at greater length elsewhere — that Nietzsche is a thoroughgoing nihilist about normative and evaluative judgments. In other words, he thinks that all statements of the form ‘X is good’ or ‘X is obligatory’ are false (or, more properly, not capable of being true or false). This means that, at best, these statements are expressions of, perhaps widespread, feelings or attitudes. We can call this view ‘theoretical nihilism’.

This creates a problem for Nietzsche because of the claims he makes about human psychology. Like Schopenhauer before him, Nietzsche thinks that the will is an overwhelming psychological force in human life. People constantly want to do things; they are restless and try to change the world around them. Nietzsche’s twist on Schopenhauer is that he thinks the will always seeks out forms of resistance and tries to overcome them. This means that the will is, to use his phrase, a ‘will to power’. Related to this, Nietzsche thinks we are in the constant habit of making normative and evaluative judgments to fuel the will. We tend to think that certain things are good and certain things are bad. These normative and evaluative judgments play a key role in our psychological economies. What’s more, they tend to take an unconditional form: we don’t question them or doubt them when they are operative. They dominate our minds and make us think that whatever course of action we have chosen is necessary right now. Any disruption to this psychological economy would be existentially threatening to creatures like us.

(I would also add — with the caveat that I am not a Nietzschean scholar and I get the impression that what I am about to say would be sacrilegious in the eyes of some Nietzscheans — that despite all the protestations to the contrary, there is an implicit evaluative judgment underlying Nietzsche’s thinking about the role of the will to power in human life. He seems to value it and think it should be given an outlet for its insatiable appetite. Indeed, he seems to go so far as to claim that it would be self-denying and inhumane to deny it its role.)

The problem with theoretical nihilism is that it seems to undercut the evaluative judgments that provide the fuel for the will. It rips asunder our psychological economy and turns us into ‘wantons’: people who are not committed to anything and get carried away by the whims of the moment. If our belief that ‘X is good’ is just an expression of opinion, and not grounded in some deeper metaphysical truth about the value of X, it is difficult to see why we should sustain our commitment to X. Maybe we should value something else? Or maybe we should just give up valuing altogether and let other people decide what we should do? This latter possibility is referred to as ‘passive nihilism’ by Nietzsche. It involves the passive acceptance of evaluative beliefs foisted upon us by others and not the active choosing of our own value systems. Nietzsche seems to prefer active nihilism over passive nihilism. But this is puzzling. How can you be an active nihilist if you accept theoretical nihilism? How can you remain committed to any project or plan if you know, deep down, that they aren’t really worthwhile?

One solution, favoured by Hussain, is to act ‘as if’ your projects and plans have value. More precisely, Hussain argues that one way to avoid complete motivational collapse is to sustain ‘honest illusions’ about the value of your projects and plans. But it is hard to see how honest illusions by themselves can address the problem. If the illusions are honest — i.e. if you are not deluding yourself into thinking that theoretical nihilism is false — won’t your projects and plans continue to feel hollow? This is where the doctrine of eternal recurrence comes to the rescue.


2. How Eternal Recurrence Adds Weight to Our Decisions
The idea of eternal recurrence adds weight to our illusions of value. It acts as a motivation amplifier. It takes advantage of the fact that we are naturally evaluative beings. We make evaluative judgments all the time. Our commitment to theoretical nihilism might call those judgments into question but it doesn’t change the fact that evaluations bubble up into our minds unbidden all the time. The important thing is to work with these natural evaluations and turn them into something with a bit more oomph.

You do this, according to Nietzsche, by taking a step back. So suppose you are deciding which course to study at university. You have the choice of computer engineering or biology. You have an interest in both, but you are not sure what to do. Instead of getting bound up with your occurrent feelings and emotions about the choice, you should get some psychological distance. This is what the imaginative exercise underlying the doctrine of eternal recurrence enables you to do. Instead of looking at the choice as one being made at a particular moment in time, look on it as a series of choices that will repeat themselves across multiple (ultimately infinite) possible worlds. In other words, imagine that you will face the same decision over and over again. Furthermore, imagine that there is a constraint on how you get to decide across all those possible worlds: whatever you choose right now, in this world, will be the choice you have to make in all those other possible worlds. So if you are stuck with the choice you make now across an eternity of replays, you need to make sure that the decision is one that you would be happy with across eternity.



I think that the movie Groundhog Day is, in some ways, a meditation on this Nietzschean exercise. In the movie, the main character (played with sardonic world-weariness by Bill Murray) is forced to live the same day over and over again. At first, this causes him great anxiety; then, when he gets used to it, he takes advantage of it to satisfy his hedonistic desires. He finds this hollow and unsatisfying so, finally, he tries to do good. In the end, he constructs a perfect day of perfect choices — ones that he can be happy playing out over and over again. In other words, he arrives at a set of decisions that can be endorsed under the strictures of eternal recurrence. Admittedly, the movie is not a perfect illustration of eternal recurrence. Ultimately Bill Murray’s character gets to escape the fate of living out the same day over and over again, and, furthermore, the idea of doing good is contrary to theoretical nihilism. Nevertheless, for all its imperfections, I think it is a useful analogy.

You may still be wondering: how exactly does eternal recurrence add weight that is otherwise lacking? There are two things worth saying here. First, as Hussain points out, imagining the decisions playing out over and over again allows for small differences between preferences to aggregate into larger differences. So, for example, computer engineering and biology might seem roughly equal to you in the here and now, but maybe you have a very slight preference for biology. This slight preference may not be enough for you to discount the possibility of studying computer engineering right now, but over repeated plays the small difference will aggregate into something much larger (a toy calculation below illustrates this). Consequently, the decision to study biology over computer engineering acquires weight that it previously lacked.
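To make the aggregation point concrete, here is a minimal sketch in Python. The utility numbers are invented purely for illustration; nothing in Hussain’s article specifies them.

```python
# Toy illustration of how a slight preference aggregates over
# imagined replays. The utility values are made up for the example.
u_biology = 1.01       # a very slight preference for biology...
u_engineering = 1.00   # ...over computer engineering

for replays in (1, 10, 100, 1000):
    gap = replays * (u_biology - u_engineering)
    print(f"{replays:>4} imagined replays: aggregate gap = {gap:.2f}")
```

A gap of 0.01 is easy to shrug off in a single choice, but across 1000 imagined replays it aggregates to 10.00, so the choice now carries a weight it previously lacked.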

The other point, which Hussain makes in a couple of different ways, is that the idea underlying eternal recurrence is not that dissimilar to other proposed decision heuristics. For example, moral constructivists often claim that we can build objective normative and evaluative truths out of our tendency to form normative and evaluative beliefs. We can do this by following certain decision heuristics. When confronted with an evaluative belief like ‘X is good’ they might try to determine whether X is really good by imagining what an ‘ideal observer’ would think of X. The ideal observer is a hypothetical person who is perfectly rational and has full information about the nature of X and its relation to the world. Would that person endorse the belief that X is good? If they would, then X is indeed good; if not, then it is not. Hussain argues that what these moral constructivists are doing is very similar to what Nietzsche is trying to do with the doctrine of eternal recurrence. The one important difference, of course, is that Nietzsche is not trying to use the heuristic to construct objective normative and evaluative truths. He is just trying to add weight to our choices that might otherwise be lacking. In some ways, this makes his decision heuristic less of a hard sell.

Similarly, Hussain points out that the account of willpower favoured by some psychologists has a lot of overlap with the doctrine of eternal recurrence. George Ainslie, among others, has argued that our tendency to succumb to weakness of the will is caused by the fact that we radically (hyperbolically) discount the value of certain future options. This radical discounting can lead to the phenomenon of ‘preference reversal’. For example, we may value not-smoking much more than smoking, but at certain crucial moments our desire to smoke the next cigarette will exceed our desire to quit, due to the way in which we discount future value (illustrated below).
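Here is a minimal sketch of the reversal in Python. The hyperbolic form (value = amount / (1 + k * delay)) is the standard Ainslie-style discount function, but the specific amounts, delays and discount rate k are invented for illustration:

```python
# Ainslie-style hyperbolic discounting: value = amount / (1 + k * delay).
# All the specific numbers here are made up purely for illustration.
def discounted(amount, delay_days, k=0.5):
    return amount / (1 + k * delay_days)

CIGARETTE = 5   # smaller, sooner reward (the next cigarette)
HEALTH = 20     # larger, later reward (quitting), 30 days further off

for days_out in (10, 5, 1, 0):
    v_smoke = discounted(CIGARETTE, days_out)
    v_quit = discounted(HEALTH, days_out + 30)
    winner = "not-smoking" if v_quit > v_smoke else "smoking"
    print(f"{days_out:>2} days out: smoke={v_smoke:.2f}, "
          f"quit={v_quit:.2f} -> {winner}")
```

Viewed from ten days out, not-smoking wins; within a few days of the choice, the curves cross and smoking wins. That crossing is the preference reversal.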



How can we avoid succumbing to weakness of the will? One suggestion is that instead of focusing on the value of smoking vis-a-vis not-smoking in a particular moment, we focus on the value of smoking vis-a-vis not-smoking across a large set of decision points. So imagine you must repeatedly face the choice of smoking/not-smoking. What’s the value to you of not-smoking across all those repeated decision points, and how does it compare to the value of smoking? If you really do value not-smoking over smoking, then thinking about the choice in these terms could make the critical difference, both because it creates some distance between you and your current decision and because it focuses on the aggregate outcomes of all those decisions, not just one particular decision. This is very like the eternal recurrence idea.
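Extending the same made-up model, we can compare the value of each policy summed across many future decision points, roughly what Ainslie calls ‘bundling’. Again, all the numbers are purely illustrative:

```python
# Bundling: value each policy across many future decision points,
# rather than valuing one choice in isolation. Same made-up model.
def discounted(amount, delay_days, k=0.5):
    return amount / (1 + k * delay_days)

DAYS = 100  # number of future daily smoke/don't-smoke choice points
smoke_total = sum(discounted(5, d) for d in range(DAYS))
quit_total = sum(discounted(20, d + 30) for d in range(DAYS))

print(f"Bundled over {DAYS} days: smoke={smoke_total:.1f}, "
      f"quit={quit_total:.1f}")
```

Even though the next cigarette beats quitting at the moment of choice, the bundled comparison comes out clearly in favour of quitting (roughly 57 to 42 on these toy numbers).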



3. Conclusion

That’s it, that’s Hussain’s interpretation of eternal recurrence. As mentioned at the outset, I find it interesting, particularly the links he draws between eternal recurrence and other related decision heuristics. I do, however, have some lingering doubts. For instance, I am not sure exactly how the imaginative exercise is supposed to work in practice. Obviously, I cannot imagine an infinite number of possible replays of the decision I am about to make. So would five be enough? Or thirty-five? Can I really imagine that many replays or will I tend to lose interest after two or three?

More importantly, although eternal recurrence may add weight to values that are otherwise light, I cannot see how it can add weight to nothing at all. So if I value nothing (or have no clear intuitions or beliefs about what I value), imagining multiple replays of a decision isn’t going to do much good, is it? Hussain alludes to this problem early on in his article, noting that honest illusions may not really address the problem of theoretical nihilism unless you have an underlying desire to sustain those honest illusions. He repeats the point later in the article when he notes that eternal recurrence is a game, and for it to work you have to have the desire to play the game. But where does this desire come from? How can it have sufficient weight to guide all our decision-making? Is it not as questionable and contingent as any other desire as a result of theoretical nihilism? In short, couldn’t theoretical nihilism lead to a total collapse of the will (i.e. the loss of the will to will), and if so wouldn’t that render the doctrine of eternal recurrence moot? I guess one Nietzschean answer is that it is impossible for the will to completely collapse since we have the strong psychological habit of valuing things. Eternal recurrence is about amplifying those inevitable natural values into something more impressive. But I’m not sure if that is true. Certainly, there are days when I feel apathetic about everything, and if that mood sustained itself over a long period of time I’d find it difficult to care about playing the eternal recurrence game.