
Friday, April 28, 2017

The Ethics of Crash Optimisation Algorithms




Patrick Lin started it. In an article entitled ‘The Ethics of Autonomous Cars’ (published in The Atlantic in 2013), he considered the principles that self-driving cars should follow when they encountered tricky moral dilemmas on the road. We all encounter these situations from time to time. Something unexpected happens and you have to make a split-second decision. A pedestrian steps onto the road and you don’t see him until the last minute: do you slam on the brakes or swerve to avoid him? Lin made the obvious point that no matter how safe they were, self-driving cars would encounter situations like this, and so engineers would have to design ‘crash-optimisation’ algorithms that the cars would use to make those split-second decisions.

In a later article Lin explained the problem by using a variation on the famous ‘trolley problem’ thought experiment. The classic trolley problem asks you to imagine a trolley car hurtling out of control down a railroad track. If it continues on its present course, it will collide with and kill five people. You can, however, divert it onto a sidetrack. If you do so, it will kill only one person. What should you do? Ethicists have debated the appropriate choice for the last forty years. Lin’s variation on the trolley problem worked like this:

Imagine in some distant future, your autonomous car encounters this terrible choice: it must either swerve left and strike an eight-year old girl, or swerve right and strike an 80-year old grandmother. Given the car’s velocity, either victim would surely be killed on impact. If you do not swerve, both victims will be struck and killed; so there is good reason to think that you ought to swerve one way or another. But what would be the ethically correct decision? If you were programming the self-driving car, how would you instruct it to behave if it ever encountered such a case, as rare as it may be? 
(Lin 2016, 69)

There is certainly value in thinking about problems of this sort. But some people worry that framing the ethical challenges facing the designers of self-driving cars in terms of individualised moral dilemmas like this is misleading. There are important differences between the moral choice confronting the designer of a crash optimisation system (whether it is programmed from the top down with clearly prescribed rules or from the bottom up using some machine-learning system) and the choices faced by drivers in particular dilemmas. Recently, some papers have been written drawing attention to these differences. One of them is Hin-Yan Liu’s ‘Structural Discrimination and Autonomous Vehicles’. I just interviewed Hin-Yan for my podcast about this and other aspects of his research, and I want to take this opportunity to examine the argument in that paper in more detail.


1. The Structural Discrimination Problem
Liu’s argument is that the design of crash optimisation algorithms could lead to structural discrimination (note: to be fair to him, Lin acknowledged the potential discriminatory impact in his 2016 paper).

Structural discrimination is a form of indirect discrimination. Direct discrimination arises where some individual or organisation intentionally disadvantages someone because they belong to a particular race, ethnic group, gender, class (etc). Once upon a time there were, allegedly, signs displayed outside pubs, hotels and places of employment in the UK saying ‘No blacks, No Irish’. The authenticity of these signs is disputed, but if they really existed, they would provide a clear example of direct discrimination. Indirect discrimination is different. It arises where some policy or practice has a seemingly unobjectionable express intent or purpose but nevertheless has a discriminatory impact. For example, a hairdressing salon that had a policy requiring all staff to show off their hair to customers might have a discriminatory impact on (some) potential Muslim staff (I took this example from Citizen’s Advice UK).

Structural discrimination is a more generalised form of indirect discrimination whereby entire systems are set up or structured in such a way that they impose undue burdens on particular groups. How might this happen with crash optimisation algorithms? The basic argument works like this:


  • (1) If a particular rule or policy is determined with reference to factors that ignore potential forms of discrimination, and if that rule is followed in the majority of circumstances, it is likely to have an unintended structurally discriminatory impact.

  • (2) The crash optimisation algorithms followed by self-driving cars are (a) likely to be determined with reference to factors that ignore potential forms of discrimination and (b) are likely to be followed in the majority of circumstances.

  • (3) Therefore, crash optimisation algorithms are likely to have an unintended discriminatory impact.



The first premise should be relatively uncontroversial. It is making a probabilistic claim. It is saying that if so-and-so happens it is likely to have a discriminatory impact, not that it definitely will. The intuition here is that discrimination is a subtle thing. If we don’t try to anticipate it and prevent it from happening, we are likely to do things that have unintended discriminatory effects. Go back to the example of the hairdressing salon and the rule about uncovered hair. Presumably, no one designing that rule thought they were doing anything that might be discriminatory. They just wanted their staff to show off their hair so that customers would get a good impression. They didn’t consciously factor in potential forms of bias or discrimination. This is what created the potential for discrimination.

The first part of premise one is simply saying that what is true in the case of the hair salon is likely to be true more generally. Unless we consciously direct our attention to the possibility of discriminatory impact, it will be sheer luck whether we avoid it. That might not be too problematic if the rules we designed were limited in their application. For example, if the rule about uncovered hair for staff only applied to one particular hairdressing salon, then there would be a problem, but it would fall far short of structural discrimination. There would be discrimination in that particular salon, but it would not spread across society as a whole. Muslim hairdressers would not be excluded from work at all salons. It is only when the rule is followed in the majority of cases that we get the conditions in which structural discrimination can breed.

This brings us to premise two. This is the critical one. Are there any reasons to accept it? Looking first to condition (a), there are indeed some reasons to believe that this will be the case. The reasons have to do with the ‘trolley problem’-style framing of the ethical challenges facing the designers of self-driving cars. That framing encourages us to think about the morally optimal choice in a particular case, not at a societal level. It encourages us to pick the least bad option, even if that option contravenes some widely-agreed moral principle. A consequentialist, for example, might resolve the granny vs. child dilemma in favour of the child based on the quantity of harm that will result. They might say that the child has more potentially good life years ahead of them (possibly justifying this by reference to the QALY standard) and hence it does more good to save the child (or, to put it another way, less harm to kill the granny). The problem with this reasoning is that in focusing purely on the quantity of harm we ignore factors that we ought to consider (such as the potential for ageism) if we wish to avoid a discriminatory impact. As Liu puts it:

[A]nother blind spot of trolley problem ethics…is that the calculus is conducted with seemingly featureless and identical “human units”, as the variable being emphasised is the quantity of harm rather than its character or nature.

We could try to address this problem by getting the designers of the algorithms to look more closely at the characteristics of the individuals affected by the cars’ choices. But this leads to a second problem: whatever solution we hit upon is likely to be multiplied and shared across many self-driving cars, and that multiplication and sharing is likely to exacerbate any potentially discriminatory effect. Why is this? Well, presumably car manufacturers will standardise the optimisation algorithms they offer on their cars (not least because the software that actually drives the car is likely to be cloud-based, adapting and learning from the data collected across all cars). This will result in greater homogeneity in how cars respond to trolley-problem-like dilemmas, which will in turn amplify any potentially discriminatory effect. For example, if an algorithm does optimise by resolving the dilemma in favour of the child, we get a situation in which all cars using that algorithm favour children over grannies, and so an extra burden is imposed on grannies across society as a whole: they face a higher risk of being killed by a self-driving car.
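To see how quickly a standardised rule concentrates the burden, here is a minimal, purely illustrative sketch. Nothing of the sort appears in Lin's or Liu's papers: the ages, the life-expectancy figure and the expected_life_years metric are invented for the example. A fleet that uniformly applies a quantity-of-harm rule strikes the granny every time; messy, varied human intuitions (crudely modelled as a coin flip) spread the risk around.

```python
import random

def expected_life_years(age, life_expectancy=80):
    """Hypothetical quantity-of-harm metric: life-years lost if this person is struck."""
    return max(life_expectancy - age, 0)

def shared_fleet_rule(a, b):
    """One standardised rule for every car: strike whoever has fewer life-years left to lose."""
    return a if expected_life_years(a["age"]) < expected_life_years(b["age"]) else b

def messy_human_rule(a, b):
    """Crude stand-in for diverse, context-dependent human intuitions: effectively a coin flip."""
    return random.choice([a, b])

child = {"label": "child", "age": 8}
granny = {"label": "granny", "age": 80}
trials = 10_000

fleet = sum(shared_fleet_rule(child, granny)["label"] == "granny" for _ in range(trials))
humans = sum(messy_human_rule(child, granny)["label"] == "granny" for _ in range(trials))

print(f"Standardised fleet strikes the granny in {fleet / trials:.0%} of dilemmas")
print(f"Diverse 'human' drivers strike the granny in {humans / trials:.0%} of dilemmas")
```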

There are some subtleties to this argument that are worth exploring. You could reject it by arguing that there will still presumably be some diversity in how car manufacturers optimise their algorithms. So, for example, perhaps all BMWs will be consequentialist in their approach whereas all Audis will be deontological. This is likely to result in a degree of diversity, but perhaps much less than we currently have. This is what I think is most interesting about Liu’s argument. In a sense, we are all running crash-optimisation algorithms in our heads right now. We use these algorithms to resolve the moral dilemmas we face while driving. But as various experiments have revealed, the algorithms humans use are plural and messy. Most people have intuitions that make them lean in favour of consequentialist solutions in some cases and deontological solutions in others. Thus the moral choices made at an individual level can shift and change across different contexts and moods. This presumably creates great diversity at a societal level. The differences across car manufacturers are likely to be much more limited.

This is, admittedly, speculative. We don’t know whether the diversity we have right now is so great that it avoids any pronounced structural discrimination in the resolution of moral dilemmas. But this is what is interesting about Liu’s argument: it makes an assumption about the current state of affairs (namely, that there is great diversity in the resolution of moral dilemmas) that might be true but is difficult to verify until we enter a new state of affairs (one in which self-driving cars dominate the roads) and see whether there is a greater discriminatory impact or not. Right now, we are at a moment of uncertainty.

Of course, there might be technical solutions to the structural discrimination problem. Perhaps, for instance, crash optimisation algorithms could be designed with some element of randomisation, i.e. they randomly flip back and forth between different moral rules. This might prevent structural discrimination from arising. It might seem odd to advocate moral randomisation as a solution to the problem of structural discrimination, but perhaps a degree of randomisation is one of the benefits of the world in which we currently live.
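A crude sketch of what that randomisation might look like, again entirely hypothetical (the rule names and the age-based metric are my own inventions, not anything Liu proposes): the policy draws a moral rule at random for each dilemma rather than standardising one of them. How much this flattens the aggregate skew depends on which rules are in the mix; with the two below, the older victim is still struck in roughly three out of four dilemmas.

```python
import random

# Hypothetical moral rules: each takes two potential victims and returns the one struck.
def minimise_life_years_lost(a, b):
    """Consequentialist-style rule: strike the victim with fewer expected life-years remaining."""
    return a if a["age"] > b["age"] else b

def equal_chance_lottery(a, b):
    """Egalitarian-style rule: give each potential victim an equal chance of being spared."""
    return random.choice([a, b])

def randomised_crash_policy(a, b, rules=(minimise_life_years_lost, equal_chance_lottery)):
    """Instead of standardising one rule, flip between moral rules at random per dilemma."""
    return random.choice(rules)(a, b)

# Example: the same dilemma, resolved afresh each time a car encounters it.
child = {"label": "child", "age": 8}
granny = {"label": "granny", "age": 80}
print(randomised_crash_policy(child, granny)["label"])
```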


2. The Immunity Device Thought Experiment
There is another nice feature to Liu’s paper. After setting out the structural discrimination problem, he introduces a fascinating thought experiment. And unlike many philosophical thought experiments, this is one that might make the transition from thought to reality.

At the core of the crash optimisation dilemma is a simple question: how do we allocate risk in society? In this instance, it is the risk of dying in a car accident. We face many similar risk allocation decisions already. Complex systems of insurance and finance are set up with the explicit goal of spreading and reallocating these risks. We often allow people to purchase additional protection from risk through increased insurance premiums, and we sometimes allocate/gift people extra protections (e.g. certain politicians or leaders). Might we end up doing the same thing when it comes to the risk of being struck by a self-driving car? Liu asks us to imagine the following:

Immunity Device Thought Experiment: ‘It would not be implausible or unreasonable for the manufacturers of autonomous vehicles to issue what I would call here an “immunity device”: the bearer of such a device would become immune to collisions with autonomous vehicles. With the ubiquity of smart personal communication devices, it would not be difficult to develop a transmitting device to this end which signals the identity of its owner. Such an amulet would protect its owner in situations where an autonomous vehicle finds itself careening towards her, and would have the effect of deflecting the car away from that individual and thereby divert the car to engage in a new trolley problem style dilemma elsewhere.’
(Liu 2016, 169)

The thought experiment raises a few important and interesting questions. First, is such a device technically feasible? Second, should we allow for the creation of such a device? And third, if we did, how should we allocate the immunity it provides?

On the first question, I agree with what Liu says. It seems like we have the underlying technological infrastructure that could facilitate the creation of such a device. It would be much like any other smart device and would simply have to be in communication with the car. There may be technical challenges, but they would not be insurmountable. There is a practical problem if everybody managed to get their hands on an immunity device: that would, after all, defeat the purpose. But Liu suggests a workaround: have a points-based (trump card) rating system attached to the device. So people don’t get perfect immunity; they get bumped up and down a ranking order. This changes the nature of the allocation question. It’s no longer who should get such a device but, rather, how the points should be allocated.
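Purely as an illustration of the bookkeeping involved, here is a minimal sketch of the points-based version; the class, the field names and the tie-breaking rule are my own assumptions, not a design from Liu’s paper. The car deflects towards the bearer with the lower score, and the penalise helper anticipates the allocation-via-punishment idea discussed below.

```python
import random
from dataclasses import dataclass

@dataclass
class ImmunityDevice:
    owner_id: str
    points: int  # higher score = greater protection when a dilemma arises

def choose_victim(a: ImmunityDevice, b: ImmunityDevice) -> ImmunityDevice:
    """Deflect the car towards the bearer with the lower immunity score; tie-break at random."""
    if a.points == b.points:
        return random.choice([a, b])
    return a if a.points < b.points else b

def penalise(device: ImmunityDevice, deduction: int) -> None:
    """Allocation-via-punishment: push an offender down the ranking by deducting points."""
    device.points = max(device.points - deduction, 0)
```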

On the second question, I have mixed views. I feel very uncomfortable with the idea, but I can’t quite pin down my concern. I can see some arguments in its favour. We do, after all, have broadly analogous systems nowadays whereby people get additional protection through systems of social insurance. Nevertheless, there are some important disanalogies between what Liu imagines and other forms of insurance. In the case of, say, health insurance, we generally allow richer people to buy additional protection in the form of higher premiums. This can have negative redistributive consequences, but the gain to the rich person does not necessarily come at the expense of the poorer person. Indeed, in a very real sense, the rich person’s higher premium might be subsidising the healthcare of the poorer person. Furthermore, the protection that the rich person buys may never be used: it’s there as peace of mind. In the case of the immunity device, it seems like the rich person buying the device (or the points) would necessarily be doing so at the expense of someone else. After all, the device provides protection in the event of a self-driving car finding itself in a dilemma. The dilemma is such that the car has to strike someone. If you are buying immunity in such a scenario, you are necessarily paying for the car to be diverted so that it strikes someone else. This might provide the basis for an objection to the idea itself: perhaps this is something we should not allow to exist. The problem with this objection is that it effectively applies the doctrine of double effect to the scenario, which is not something I am comfortable with. Also, even if we did ban such devices, we would still have to decide how to allocate the risk: at some stage a choice would have to be made as to who should bear the burden (unless you adopt the randomisation solution).

This brings us to the last question. If we did allow such a device to be created, how would we allocate the protection it provides? The market-based solution seems undesirable, for the reasons just stated. Liu considers the possibility of allocating points as a system of social reward and punishment. So, for example, if you commit a crime you could be punished by shouldering an increased risk burden (by being pushed down the ranking system). That seems prima facie more acceptable than allocating the immunity through the market. This is for two reasons. First, we are generally comfortable with the idea of punishment (though there are those who criticise it). Second, according to most definitions, punishment involves the intentional harming of another. So the kinds of concerns I raised in the previous paragraph would not apply to allocation-via-punishment: if punishment is justified at all, it would seem to justify the intentional imposition of a risk burden on another. That said, there are reasons to think that directly harming someone through imprisonment or a fine is more morally acceptable than increasing the likelihood of their being injured or killed in a car accident. After all, if you object to corporal or capital punishment, you may have reason to object to increasing the likelihood of bodily injury or death.


Okay, that brings us to the end of this post. I want to conclude by recommending Liu's paper. We discuss the ideas in it in more detail in the podcast we recorded. It should be available in a couple of weeks. Also, I should emphasise that Liu introduces the Immunity Device as a thought experiment. He is definitely not advocating its creation. He just thinks it helps us to think through some of the tricky ethical questions raised by the introduction of self-driving cars.



