Friday, December 6, 2019

66 - Wong on Confucianism, Robots and Moral Deskilling

Pak-Hang Wong

In this episode I talk to Dr Pak-Hang Wong. Pak is a philosopher of technology and works on ethical and political issues of emerging technologies. He is currently a research associate at the Universität Hamburg. He received his PhD in Philosophy from the University of Twente in 2012, and then held academic positions in Oxford and Hong Kong. In 2017, he joined the Research Group for Ethics in Information Technology, at the Department of Informatics, Universität Hamburg. We talk about the robotic disruption of morality and how it affects our capacity to develop moral virtues. Pak argues for a distinctive Confucian approach to this topic and so provides something of a masterclass on Confucian virtue ethics in the course of our conversation.

You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).

Show Notes

  • 0:00 - Introduction
  • 2:56 - How do robots disrupt our moral lives?
  • 7:18 - Robots and Moral Deskilling
  • 12:52 - The Folk Model of Virtue Acquisition
  • 21:16 - The Confucian approach to Ethics
  • 24:28 - Confucianism versus the European approach
  • 29:05 - Confucianism and situationism
  • 34:00 - The Importance of Rituals
  • 39:39 - A Confucian Response to Moral Deskilling
  • 43:37 - Criticisms (moral silencing)
  • 46:48 - Generalising the Confucian approach
  • 50:00 - Do we need new Confucian rituals?

Relevant Links




Wednesday, December 4, 2019

Will we ever have fully autonomous vehicles? Some reasons for pessimism




What is the future of the automotive industry? If you’ve been paying attention over the past decade, you’ll know the answer: self-driving (a.k.a autonomous) vehicles. Instead of relying on imperfect, biased, lazy and reckless human beings to get us from A to B, we will rely on sophisticated and efficient computer programs. This future may not be that far away. We already rely on computers to fly planes and drive trains. All we will be doing is extending our reliance on them to the roads and public highways.

There are, of course, some technical hurdles to overcome. The public highways are more unpredictable than the skies and railways. But impressive strides have been made with driverless technology in the recent past and it doesn’t seem implausible to think that it will become widespread within the next 10-15 years. Once it does, the benefits will be great — at least if you believe the hype. There will be fewer accidents, and we will all have more time to focus on the things we love to do during our daily commutes: catch up on work or TV, post to social media and so on. There will also be other beneficial side effects. Less space will need to be allocated to carparks in our cities and towns, allowing us to create more pleasant urban living spaces; the traffic system might become more efficient and less crowded; there may even be a drop in light pollution.

Will any of this come to pass? In this article, I want to argue for a slightly unusual form of scepticism about the future of self-driving vehicles. This scepticism has two elements to it. First, I will argue that a combination of ethical, legal and strategic factors will encourage us not to make and market fully autonomous vehicles. Second, I will argue that despite this disincentive, many of us will, in fact, treat vehicles as effectively fully autonomous. This could be very bad for those of us expected to use such vehicles.

I develop this argument in three stages. I start with a quick overview of the six ‘levels’ of automated driving that have been proposed by the Society of Automotive Engineers. Second, I argue that concerns about responsibility and liability ‘gaps’ may cause us to get stuck on the middle levels of automated driving. Third, and finally, I consider some of the consequences of this.


1. Getting Stuck: The Levels of Autonomous Driving
If you have spent any time reading up about autonomous vehicles you will be familiar with the ‘levels’ of autonomy framework. First proposed and endorsed by the Society of Automotive Engineers, the framework tries to distinguish between different types of vehicle autonomy. The diagram below illustrates the framework.



This framework has been explained to me in several different ways over the years. I think it is fair to say that nobody thinks the different levels are obvious and discrete categories. The assumption is that there is probably a continuum of possible vehicles ranging from the completely non-autonomous at one end of the spectrum* to the fully autonomous at the other. But it is hard for the human mind to grasp a smooth continuum of possibility and so it helps if we divide it up into discrete categories or, in this case, levels.

What of the levels themselves? The first level — so-called ‘Level 0’ — covers all traditional vehicles: the ones where the human driver performs all the critical driving functions like steering, braking, accelerating, lane changing and so on. The second level (Level 1) covers vehicles with some driver assist technologies, e.g. enhanced or assisted braking and parking. Many of the cars we buy nowadays have such assistive features. Level 2 covers vehicles with some automated functions, e.g. automated steering, acceleration and lane changing, but in which the human driver is still expected to play an active supervisory and interventionist role. Tesla’s enhanced autopilot is often said to be an example of Level 2 automation. The contract Tesla users sign when they download the autopilot software stipulates that they must be alert and willing to take control at all times. Level 3 covers vehicles with more automated functionality than Level 2. It is sometimes said to involve ‘conditional autonomy’, which means the vehicle can do most things by itself, but a human is still expected to be an alert supervisor of the vehicle and has to intervene when requested to do so by the vehicle (usually if the vehicle encounters some situation involving uncertainty). Waymo’s vehicles are sometimes claimed to be Level 3 vehicles (though there is some dispute about this). Level 4 covers vehicles with the capacity for full automation, but with a residual role for human supervisors. Finally, Level 5 covers vehicles that involve full automation, with no role for human intervention.

The critical point is that all the levels of automation between 1 and 4 (and especially between 2-4) assume that there is an important role for human ‘drivers’ in the operation of autonomous vehicles. That is to say, until we arrive at Level 5 automation, the assumption is that humans will never be ‘off the hook’ or ‘out of the loop’ when it comes to controlling the autonomous vehicle. They can never sit back and relax. They have to be alert to the possibility of taking control. This, in turn, means that all autonomous vehicles that fall short of Level 5 will have to include some facility or protocol for handing over control from the vehicle to the human user, in at least some cases.
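To make this concrete, here is a minimal sketch in Python (my own illustration, following this post's characterisation of the levels rather than the SAE standard itself). The only feature it encodes is the one that matters for the argument: every level below 5 retains some requirement for a human fallback, and therefore some handover protocol.

```python
# A minimal sketch (my illustration, following the post's characterisation of the
# levels rather than the SAE standard itself) of the point that matters here:
# every level below 5 retains some requirement for a human fallback, and hence
# some protocol for handing control back to a human.

from dataclasses import dataclass

@dataclass(frozen=True)
class AutomationLevel:
    level: int
    description: str
    human_fallback_required: bool  # must a human be ready to take over?

SAE_LEVELS = [
    AutomationLevel(0, "No automation: human performs all driving tasks", True),
    AutomationLevel(1, "Driver assistance, e.g. assisted braking and parking", True),
    AutomationLevel(2, "Partial automation: human actively supervises and intervenes", True),
    AutomationLevel(3, "Conditional automation: human must intervene when requested", True),
    AutomationLevel(4, "High automation: residual human supervisory role", True),
    AutomationLevel(5, "Full automation: no role for human intervention", False),
]

def needs_handover_protocol(level: int) -> bool:
    """Only Level 5 dispenses with a facility for handing control to a human."""
    return SAE_LEVELS[level].human_fallback_required

if __name__ == "__main__":
    for lvl in SAE_LEVELS:
        print(f"Level {lvl.level}: {lvl.description} | handover protocol needed: {needs_handover_protocol(lvl.level)}")
```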

While this ‘levels’ of automation model has been critiqued, it is useful for present purposes. It helps me to clarify my central thesis, which is that there are important ethical, legal and strategic reasons why we may never get to Level 5 automation. This means we are most likely to get stuck somewhere around Levels 3 and 4 (most likely Level 3), at least officially. Some people will say that this is a good thing because they believe it is important for humans to exercise ‘meaningful control’ over autonomous driving systems. But I think it might be a bad thing, because people will tend to treat these vehicles as effectively fully autonomous.

Let me now explain why I think this is the case.


2. Why we might get stuck at Level 3 or 4
The argument for thinking that we might get stuck at level 3 or 4 is pretty straightforward and I am not the first to make it. In the debate about autonomous vehicles, one of the major ethical and legal concerns arising from their widespread deployment is that they might create responsibility or liability gaps. The existence, or even the perceived existence, of these gaps creates an incentive not to create fully autonomous vehicles.

Our current legal and ethical approach to driving assumes that, in almost all cases, the driver is responsible if something goes wrong. He or she can be held criminally liable for reckless or dangerous driving, and can be required to pay compensation to the victims of any crashes resulting from this. The latter is, of course, usually facilitated through a system of insurance, but, except in countries like New Zealand, the system of insurance still defaults to the assumption of individual driver responsibility. There are some exceptions to this. If there was a design defect in the car then liability may shift to the manufacturer, but it can be quite difficult to prove this in practice.

The widespread deployment of autonomous vehicles throws this existing system into disarray because it raises questions as to who or what is responsible in the event of an accident. Is the person sitting in the vehicle responsible if the autonomous driving program does something wrong? Presumably not, if they were not the ones driving the car at the time. This implies that the designers and manufacturers should be held responsible. But what if the defect in the driving program was not reasonably foreseeable or if it was acquired as a result of the learning algorithm used by the system? Would it be fair, just and reasonable to impose liability on the manufacturers in this case? Confusion as to where responsibility lies in such cases gives rise to worries about responsibility ‘gaps’.

There are all sorts of proposals to plug the gap. Some people think it is easy enough to ‘impute’ driverhood to the manufacturers or designers of the autonomous vehicle program. Jeffrey Gurney, for example, has made this argument. He points out that if a piece of software is driving the vehicle, it makes sense to treat it as the driver of the car. And since it is under the ultimate control of the manufacturer, it makes sense to impute driverhood to them, by proxy. What it doesn’t make sense to do, according to Gurney, is to treat the person sitting in the vehicle as the driver. They are really just a passenger. This proposal has the advantage of leaving much of the existing legal framework in place. Responsibility is still applied to the ‘driver’ of the vehicle; the driver just happens to no longer be sitting in the car.

There are other proposals too, of course. Some people argue that we should modify existing product liability laws to cover defects in the driving software. Some favour applying a social insurance model to cover compensation costs arising from accidents. Some like the idea of extending ‘strict liability’ rules to prevent manufacturers from absolving themselves of responsibility simply because something wasn’t reasonably foreseeable.

All these proposals have some merit but what is interesting about them is that (a) they assume that the responsibility ‘gap’ problem arises when the car is operating in autonomous mode (i.e. when the computer program is driving the car) and (b) that in such a case the most fair, just and reasonable thing to do is to apply liability to the manufacturers or designers of the vehicle. This, however, ignores the fact that most autonomous vehicles are not fully autonomous (i.e. not level 5 vehicles) and that manufacturers would have a strong incentive to push liability onto the user of the vehicle, if they could get away with it.

This is exactly what Levels 2 to 4 allow manufacturers to exploit. By designing vehicles in such a way that there is always some allowance for handover of control to a human driver, manufacturers can create systems that ‘push’ responsibility onto humans at critical junctures. To repeat the example already given, this is what Tesla did when it initially rolled out its autopilot program: it required users to sign an agreement stating that they would remain alert and ready to take control at all times.

Furthermore, it’s not just the financial and legal incentives of the manufacturers that might favour this set-up. There are also practical reasons to favour this arrangement in the long run. It is a very difficult engineering challenge to create a fully autonomous road vehicle. The road environment is too unpredictable and messy. It’s much easier to create a system that can do some (perhaps even most) driving tasks but leave others to humans. Why go to the trouble of creating a fully autonomous Level 5 vehicle when it would be such a practical challenge and when there is little financial incentive for doing so? Similarly, it might even be the case that policy-makers and legal officials favour sticking with Levels 2 to 4. Allowing for handover to humans will enable much of the existing legal framework to remain in place, perhaps with some adjustments to product liability law to cover software defects. Drivers might also like this because it allows them to maintain some semblance of control over their vehicles.

That said, there are clearly better and worse ways to manage the handover from computer to human. One of the problems with the Tesla system was that it required constant vigilance and supervision, and potentially split-second handover to a human. This is tricky since humans struggle to maintain concentration when using automated systems and may not be able to do anything with a split-second handover.

Some engineers refer to this as the ‘unsafe valley’ problem in the design of autonomous vehicles. In a recent paper on the topic, Frank Flemisch and his colleagues have proposed a way to get out of this unsafe valley by having a much slower and safer system of handover to a human. Roughly, they call for autonomous vehicles that handle the more predictable driving tasks (e.g. driving on a motorway), have a long lead-in time for warning humans when they need to take control of the vehicle, and go to a ‘safe state’ (e.g. slow down and pull in to the hard shoulder or lay-by) if the human does not heed these warnings.
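To give a rough sense of how such a handover design might work, here is an illustrative sketch (my own simplification of the general idea, not the actual specification from Flemisch and colleagues' paper): the vehicle issues a takeover request with a long lead-in time and, if the human never responds, it degrades to a safe state rather than forcing a split-second handover. The lead-in value below is a hypothetical placeholder.

```python
# An illustrative state machine for a 'slow and safe' handover (my simplification of
# the idea described above, not the authors' actual design). The vehicle warns the
# driver well in advance and, if the warning is ignored, moves to a safe state
# (e.g. slowing down and pulling into the hard shoulder) instead of demanding a
# split-second takeover.

from enum import Enum, auto

class Mode(Enum):
    AUTOMATED = auto()           # system handles the predictable driving tasks
    TAKEOVER_REQUESTED = auto()  # driver has been warned; lead-in clock is running
    MANUAL = auto()              # driver has taken control
    SAFE_STATE = auto()          # no handover occurred; vehicle slows and pulls in

LEAD_IN_SECONDS = 60.0  # hypothetical value; the proposal only requires that it be long

def resolve_takeover(seconds_since_request: float, driver_has_taken_control: bool) -> Mode:
    """Where the vehicle ends up after a takeover request, given elapsed time and driver response."""
    if driver_has_taken_control:
        return Mode.MANUAL
    if seconds_since_request >= LEAD_IN_SECONDS:
        return Mode.SAFE_STATE  # warnings ignored: slow down and pull in
    return Mode.TAKEOVER_REQUESTED  # keep warning; there is still time to respond

if __name__ == "__main__":
    print(resolve_takeover(10.0, driver_has_taken_control=False))  # Mode.TAKEOVER_REQUESTED
    print(resolve_takeover(90.0, driver_has_taken_control=False))  # Mode.SAFE_STATE
    print(resolve_takeover(15.0, driver_has_taken_control=True))   # Mode.MANUAL
```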

This model of autonomous driving is interesting. If it works, it could make Level 3-type systems much safer. But either way, the momentum seems to be building toward a world in which we never get to fully autonomous vehicles. Instead, we get stuck somewhere in between.


3. The Consequences of Getting Stuck
Lots of people will be happy if we get stuck at Level 3 or 4. Getting stuck means that we retain some illusion of meaningful human control over these systems. Even if the motives for getting stuck are not entirely benevolent, it still means that we get some of the benefits of the technology, while at the same time respecting the dignity and agency of the human beings who use these systems. Furthermore, even if we might prefer it if manufacturers took more responsibility for what happened with these systems, getting stuck at Level 3 or 4 means we still get to live in a world where some human is in charge. That sounds like a win-win.

But I’m a little more sceptical. I think getting stuck might turn out to be a bad thing. To make the case for this I will use the legal distinction between de jure and de facto realities. The de jure reality is what the law says should be the case; the de facto reality is what actually happens on the ground. For example, it might say in a statute somewhere that people who possess small quantities of recreational drugs are doing something illegal and ought to be sentenced to jail as a result. That’s the de jure reality. In practice, it might turn out that the legal authorities turn a blind eye to anyone that possesses a small quantity of such drugs. They don’t care because they have limited resources and bigger fish to fry. So the de facto reality is very different from the de jure reality.

I think a similar divergence between the official legal reality and what’s happening on the ground might arise if we get stuck at Level 3 or 4. The official position of manufacturers might be that their vehicles are not fully autonomous and require human control in certain circumstances. And the official legal and policy position might be that fully autonomous vehicles cannot exist and that manufacturers have to create ‘safe’ handover systems to allow humans to take control of the vehicles when need be. But what will the reality be on the ground? We already know that drivers using Level 2 systems flout the official rules. They sit in the back seat or watch movies on their phones when they should be paying attention to what is happening (they do similar things in non-autonomous vehicles). Is this behaviour likely to stop in a world with safer handover systems? It’s hard to see why it would. So we might end up with a de facto reality in which users treat their vehicles as almost fully autonomous, and a de jure world in which this is not supposed to happen.

Here’s the crucial point: the users might be happy with this divergence between de facto and de jure reality. They might be happy to treat the systems as if they are fully autonomous because this gives them most of the benefits of the technology: their time and attention can be taken up by something else. And they might be happy to accept the official legal position because they don’t think that they are likely to get into an accident that makes the official legal rules apply to them in a negative way. Many human drivers already do this. How many people reading this article have broken the speed limit whilst driving, or have driven while hovering around the legal limit for alcohol, or have driven when excessively tired? Officially, most drivers know that they shouldn’t do these things; but in practice they do them because they doubt that they will suffer the consequences. The same might be true in the case of autonomous vehicles. Drivers might treat them as close to fully autonomous because the systems are safe enough to allow them to get away with this most of the time. They discount the possibility that something will go wrong. What we end up with, then, is a world in which we have an official illusion of ‘meaningful control’ that disadvantages the primary users of autonomous vehicles, but only when something goes wrong.

Of course, there is nothing inevitable about the scenario I am sketching. It might be possible to design autonomous driving systems so that it is practically impossible for humans to flout the official rules (e.g. perhaps facial recognition technology could be used to ensure humans are paying attention and some electric shock system could be used to wake them up if they are falling asleep). It might also be possible to enforce the official position in a punitive way that makes it very costly for human users to flout the official rules (though we have been down this path before with speeding and drink-driving laws). The problem with doing this, however, is that we have to walk a very fine line. If we go too far, we might make using an autonomous vehicle effectively the same as using a traditionally human-driven vehicle and thus prevent us from realising the alleged benefits of these systems. If we don’t go far enough, we don’t resolve the problem.

Alternatively, we could embrace the idea of autonomous driving and try not to create incentives to get stuck at Level 3 or 4. I’m not sure which is the better outcome, but there are tradeoffs inherent in both options.

* Although I do have some qualms about referring to any car or automobile as non-autonomous since, presumably, at least some functions within the vehicle are autonomous. For example, many of the things that happen in the engine of my car happen without my direct supervision or control. Indeed, if you asked me, I wouldn’t even know how to supervise and control the engine.



Tuesday, November 26, 2019

Anticipating Automation and Utopia


On the 11th of January 2020, I will be giving a talk to the London Futurists group about my book Automation and Utopia. The talk will take place from 2 to 4pm in Birkbeck College London. The full details are available here. If you are around London on that date, then you might be interested in attending. If you know of anyone else who might be, then please spread the word.

In advance of the event, I sat down with the Chair of the London Futurists, David Wood, to chat about some of the key themes from my book. You can watch the video of our conversation above.

I can promise that the talk on the 11th won't simply be a rehash or elaboration of this conversation, nor indeed a rehash of any of my recent interviews or talks about the book. I'll be focusing on something different.



Friday, November 22, 2019

65 - Vold on How We Can Extend Our Minds With AI


Karina Vold

In this episode I talk to Dr Karina Vold. Karina is a philosopher of mind, cognition, and artificial intelligence. She works on the ethical and societal impacts of emerging technologies and their effects on human cognition. Dr Vold is currently a postdoctoral Research Associate at the Leverhulme Centre for the Future of Intelligence, a Research Fellow at the Faculty of Philosophy, and a Digital Charter Fellow at the Alan Turing Institute. We talk about the ethics of extended cognition and how it pertains to the use of artificial intelligence. This is a fascinating topic because it addresses one of the oft-overlooked effects of AI on the human mind.

You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).


Show Notes

  • 0:00 - Introduction
  • 1:55 - Some examples of AI cognitive extension
  • 13:07 - Defining cognitive extension
  • 17:25 - Extended cognition versus extended mind
  • 19:44 - The Coupling-Constitution Fallacy
  • 21:50 - Understanding different theories of situated cognition
  • 27:20 - The Coupling-Constitution Fallacy Redux
  • 30:20 - What is distinctive about AI-based cognitive extension?
  • 34:20 - The three/four different ways of thinking about human interactions with AI
  • 40:04 - Problems with this framework
  • 49:37 - The Problem of Cognitive Atrophy
  • 53:31 - The Moral Status of AI Extenders
  • 57:12 - The Problem of Autonomy and Manipulation
  • 58:55 - The policy implications of recognising AI cognitive extension
 

Relevant Links




Tuesday, November 19, 2019

The Case Against Righteous Anger




There is a lot of anger in the world right now. You hear it in people’s voices; you feel it in the air. Turn on a TV and what will you see? Journalists snapping questions at politicians; politicians snapping back with indignation. Dip your toe into social media and what will you read? People seething and roiling in rage. Anyone who disagrees with them is a ‘fucking idiot’, ‘garbage’, ‘worthless’. The time for quiet reflection and dialogue is over. We are at war. Anger is our fuel.

As someone raised to view anger as a bad thing, but who falls prey to it all the time, I find this to be an unwelcome development. There are, however, some who believe that anger is a good thing. There are moral philosophers, for example, who argue that anger is an essential foundation for our moral beliefs and practices — that it is an appropriate response to injustice. Amia Srinivasan, for instance, has argued that even if anger can be counterproductive it is, for victims of injustice, often ‘apt’ and we need to factor that into our understanding of injustice. Similarly, the philosopher Sally Haslanger has said that being angry is important because it helps her to care about certain political issues. Indeed, she cites the need for some anger as one reason why she quit doing certain Eastern meditative practices such as yoga:

Eventually I quit doing yoga because I found it left me too cut off from the world, especially from the political engagement that I cared so much about. I didn't want to be serene. I didn't want to be centered. Or at least not as much as my involvement in yoga then required. My anger and my intensity are an important part of who I am, and I couldn't find a way to combine them with the yoga I was doing at the time. 
(Haslanger - “What is it like to be a philosopher?”)

In his fascinating book, The Geography of Morals, Owen Flanagan takes a long hard look at this positive view of anger by contrasting it with the Buddhist/Stoic view of anger (which favours purging ourselves of anger). He does this as part of an effort to understand what different moral traditions can learn from each other.

In the remainder of this article I want to examine what Flanagan says. In particular, I want to clarify and analyse the argument he presents for thinking that the Western moral tradition (currently in thrall to righteous anger) should shift to become more like the Buddhist/Stoic tradition.


1. Identifying Different Moral Worlds
I will start by saying something about Flanagan’s perspective and method. In many ways, this is more interesting and important than his specific arguments about anger.

Flanagan wants to explore different possible moral worlds. He believes that we each get raised and encultured in a particular set of moral traditions. These traditions tell us how we ought to feel and behave. Over time, these feelings and behaviours become solidified. We learn to see them as natural, perhaps even necessary. We refuse to accept that other communities exist with different, but equally valid, moral traditions. Flanagan’s goal is to get us to ‘see’ these other moral possibilities and take them seriously.

Flanagan tries to tread a fine line between moral objectivism and moral relativism when staking out this view. As I read him, he is committed to some form of objectivism. Thus he thinks there are some moral ‘worlds’ that are beyond the pale (e.g. the moral world of fascist Germany). Nevertheless, he thinks that the space of moral possibility is much wider than we typically believe. We shouldn’t dismiss all alternative moral traditions off the bat. We should reflect upon them and see if there are any good reasons (consistent with at least some of our existing beliefs) to shift over to those alternative moral traditions. This means adopting a form of super-wide reflective equilibrium: a method that looks to achieve balanced, reasonable judgments across different moral traditions and not just within one.

The discussion of anger is just a case study in the application of this method. Nevertheless, it is a case study that Flanagan takes very seriously indeed, dedicating three and a half chapters of his book to its analysis. Being a Westerner, Flanagan starts within the Western moral tradition that endorses righteous anger. He argues that this tradition consists of angry feelings and behaviours that are endorsed and perpetuated by a superstructure of anger-related norms and scripts. In other words, individuals in the Western tradition experience feelings of anger (emotional hotness, rage, indignation, impulsiveness) and behave in angry ways (criticising, shaming, punishing, lashing out, violent rebuke). Some, but not all, of these feelings and behaviours are then reinforced and protected by norms (i.e. permissions and recommendations about when one ought to feel and behave angrily) and scripts (i.e. sets of patterned angry behaviours that are deemed appropriate in certain circumstances). These feelings, behaviours, norms and scripts are, in turn, supported by a deeper set of metaphysical and moral beliefs about individualism, inter-personal relationships and justice. This is what supports the view that at least one form of anger — righteous anger — is a good thing.


As someone raised in this Western tradition, Flanagan believed that it was necessary and correct for many years. He knew that other traditions saw things differently, but he couldn’t see those as viable options. If someone wrongs you, of course you should feel angry and look for retribution. How else are you supposed to behave? A couple of visits to post-apartheid South Africa helped him to reconsider. I’ll let him speak for himself on this issue:

Both times I visited…I found myself feeling in awe that Nelson Mandela and his comrades had found it in themselves not to kill all the white folk…It amazed me that apartheid ended, that it could have ended, without an even worse bloodbath than had already occurred, and that South Africa found its way to enter an era of “truth and reconciliation”…The best explanation was that I was not raised to see how ending a practice like apartheid was psychologically, morally, or practically possible without a bloodbath. I didn’t see that this was a variety of moral possibility…I was raised in a world where every tale of the victories of the forces of good over the forces of evil involved righteous fury, death and destruction.
(Flanagan 2017, 159)
This led him to look more closely at the Buddhist/Stoic view of anger.

The Buddhist/Stoic view of anger is very different from the Western one.* Both traditions think that anger is something that ought to be eliminated from human life (as much as possible). The Buddhist view is deeply metaphysical in nature. Life involves suffering, according to the Buddhist. This suffering stems from mistaken beliefs about the nature of reality and the emotional reactions that arise from these beliefs. Anger arises from egoism: a belief that individuals are wronged by the actions of others. Egoism is mistaken. There is no self: the idea of a single conscious self is an illusion that can be revealed through meditative practice. Similarly, our belief that the world is divided up into concrete categories and individuals is also mistaken. The world is a single, interconnected whole. When we appreciate this, we can see how destructive anger can really be. Each instance of anger has ripple effects across the whole. It doesn’t just affect us or a handful of others. It affects everyone. Persisting with it prolongs our suffering. (I’m greatly simplifying a long discussion in Flanagan’s book)

The Stoic view is more pragmatic. The classic Stoic text on anger comes from Seneca. He argues that anger emerges as a response to injury and is manifested by the desire to cause injury in kind. There are three problems with this. First, anger tends to overreach and overreact. This is something you probably experience yourself: when you are angry you tend to lash out in a wild manner. You are rarely measured or proportionate. You need to ‘cool down’ to do that. Second, Seneca argues that anger is practically useless. Anger leads to the breakdown of relations and the severing of bonds of trust. The perpetual cycles of anger prevent us from moving forward with our lives and getting what we want. Third, Seneca argues that anger is no spur to virtue. It tends to wither the virtuous response and block us from true happiness. It is only the non-virtuous person who takes pleasure in causing pain and suffering to others.



Flanagan sees something attractive in the Buddhist/Stoic view. A world not prey to the dark side of anger sounds like a good thing. He thinks we should consider shifting from our current embrace of righteous anger to this alternative. But there are four major objections to this suggestion. Let’s address each of them in turn.


2. The Impossibility Objection
The first objection is that the Buddhist/Stoic view asks the impossible of us:

Impossibility Objection: Anger is hard-wired into the human mind/body. It is a psychobiological necessity. We cannot eliminate it without fundamentally changing human nature (which is something we cannot, yet, do).

This is a common view. Flanagan quotes several philosophers who have endorsed a version of it. Perhaps the most well-known is Peter Strawson who wrote a famous article back in the 1960s about the ‘reactive attitudes’ (anger, resentment, indignation etc) and the central role they play in human moral life. His view has been influential in both philosophy and psychology. Followers of his view tend to see anger as an instinctual given: as part of the fundament of humanity.

Is this really the case? Flanagan spends a long time answering this question (taking up an entire chapter). But he only really makes three key points. His first point is that we need to critically scrutinise what it means to say that anger is a ‘psychobiological necessity’. Clearly, there are some things that are hard-wired into (most) humans from birth. Flanagan gives the example of crying. A newborn baby will naturally — without any instruction or learning — cry. They won’t, however, get angry. This is an emotional and behavioural trait that emerges later in childhood. This means that if anger is a psychobiological necessity it is one that emerges in the course of childhood development and not something that is there from the start. Furthermore, when it does first emerge it is not in its sophisticated adult form, with the associated norms and scripts. It is more like a raw emotion that gets expressed in various, not always consistent ways. This ‘developmental distance’ between birth and the emergence of anger should give us some pause. How do we know that something is a psychobiological necessity, and not just a strongly entrenched cultural norm, if it emerges in the course of childhood development? Flanagan argues that we have historically been too quick to assume that cultural norms are psychobiological necessities.

The second point Flanagan makes is that there are some cultures where anger, if it can be said to exist at all, gets expressed in very different ways from what we see in the West. There is, indeed, a long-standing debate about whether you can meaningfully compare emotions across different cultures, but even if we accept that you can, we must also accept that the shared emotions can be quite minimal. Flanagan gives the example of Catherine Lutz’s work on the emotional repertoire of the Ifaluk people from the South Pacific. Lutz argues that the Ifaluk have a very different emotional repertoire from what you would see in North America. Their equivalent of justifiable anger — an emotion called song — is both triggered by different moral transgressions (much more minor than what would provoke an American) and results in different behaviours (the refusal to eat being one way of expressing anger). Similarly, Lutz argues that the Ifaluk don’t have an equivalent to the Western emotion of love; instead they have fago, which combines what we might call love, compassion and sadness into a single emotional response. Cross-cultural work of this sort suggests that there is more ‘cultural plasticity’ to our reactive attitudes than we might think. Thus, even if there is some basic reactive response like anger, there is room to play around with the behavioural norms and scripts associated with that response.

This brings us to Flanagan’s third key point which is that this plasticity opens up some space in which the moral reformer can play around. We can ask the question whether our current practices and beliefs around anger are morally optimal. Maybe they are not. Maybe they were once adaptive but we now have reason to think they are less so. Flanagan makes an analogy with vegetarianism to underscore this point. He argues that the desire to eat meat may be ‘programmed’ into us (to some extent) because it was adaptive to eat meat in the past. But we have since discovered reasons to think that eating meat is not morally optimal. Thus, if we can survive without eating meat — and many people do — there may be reason to shift our moral beliefs and practices to vegetarianism. Something similar could be true for anger and the shift to the Buddhist/Stoic view. All of this leads Flanagan to conclude that:

Even if anger is original and natural in some forms, those forms are inchoate until a moral ecology speaks, forms and authorizes them. 
(Flanagan 2017, 199)

The claim then is that we should not authorize righteous anger. The persuasiveness of this, of course, depends on whether righteous anger is morally optimal or not. That’s where the next three objections come in.


3. The Attachment Objection
The second objection is that the Buddhist/Stoic view asks us to forgo the goods of attachment:

The Attachment Objection: A flourishing human life will consist of relationships involving deep attachments to others. Deep attachments to others necessitate some capacity for anger. Therefore, in order to access the good of attachment we need to allow for anger.

It is often said that love and anger go together. How many times have you felt angry at someone you love? Surprisingly often, I suspect. This might seem paradoxical but it is not. When you are attached to another person, you care deeply about them. You want them to do well and act well. If they do, you will feel the positive emotions of respect, admiration and love. Conversely, you don’t want them to step out of line and do wrong. If they do, you will feel the negative emotions of anger, resentment and indignation. The claim underlying this second objection is that you cannot break the axiological link between the positive and negative emotions. You cannot have the goods of attachment without also being open to negative emotions such as anger. This is healthy, normal and desirable. If you were completely detached from others — if you viewed their actions with equipoise — you would be inhuman, alien.

I covered a variant of this objection previously when looking at the ethics of grief. To briefly recap what I said there, one common argument about grief is that experiencing it is a good thing because it means that the person who died meant something to you. If you felt nothing after their death, that would be an indictment of the relationship you had with them. Although this might be true, there are problems when it comes to the calibration of grief. Sometimes grief is overwhelming. It dominates your conscious life. You cannot move beyond it. In these cases the grief, though perhaps initially indicative of a positive relationship with the deceased, becomes destructive. This is one reason why Buddhist and Stoic philosophers also recommend limiting and extirpating grief from our lives. This doesn’t mean completely forgoing our attachments to others. It just means moderating those attachments and ensuring they don’t become destructive.

Flanagan thinks we should adopt a similar strategy when it comes to anger and attachment. We should recognise that attachment to others comes with a whole suite of emotions (respect, love, admiration, sorrow, grief, anger, indignation, rage). It is not at all obvious that each of these emotions is essential to attachment, i.e. that we cannot feel attached without one or more of them. Indeed, it already seems to be the case that some people can live deeply attached lives without experiencing one or more of these emotions. If this is true, and if some of the emotions commonly associated with attachment are destructive, then perhaps we should look to extirpate them from our lives.

Flanagan bolsters this by arguing that, of all the emotions and passions associated with attachment, there is something especially troubling about anger. It is not just that anger tends to be miscalibrated and prone to overreach (as Seneca argued) but that there is something inherently destructive about it:

But anger is a response that marks injury and seeks to do harm. It is vengeful and spiteful. It does not seek to heal like forgiveness and sorrow. Nor does it encourage and compliment goodness as gratitude does. It is ugly and harmful, and in the business of passing pain. 
(Flanagan 2017, 203)

At its extremes, anger can sever the bonds of attachment and destroy once positive relationships. Flanagan’s suggestion then is that we redirect our emotional energies away from anger and towards sorrow, gratitude and forgiveness. These emotions are still associated with attachment and thus allow us to access the goods of attachment, but enable us to do so without the destructive consequences of anger. So when someone transgresses or wrongs us we should feel sorrow for their transgression, gratitude for the good they have done, and seek to forgive or move on.


4. The Injustice and Catharsis Objections
This idea that sorrow, gratitude and forgiveness should be our go-to emotions in the event of a moral transgression will be unsettling to anyone raised to think that anger and retribution are the appropriate responses to wrongdoing. If someone wrongs us, surely we should not roll over and forgive; surely we should meet fire with fire? This, the thought goes, is essential to the process of identifying and responding to injustice.
This is something that the third and fourth objections to the Buddhist/Stoic view try to get at. We can treat these objections as a pair since they are closely related:

The Injustice Objection: Anger is necessary, socially, if we are to properly identify and respond to injustice/moral wrongdoing.

The Catharsis Objection: Anger is necessary, personally, if we are to heal and move on from injustice/wrongdoing.

It is these kinds of objections that seem to motivate feminist and minority critics of Buddhist/Stoic passivity. I think this is apparent in the previously-mentioned work of Amia Srinivasan and Sally Haslanger. Their claim appears to be that women (and other minorities), as victims of oppression, need to embrace their anger if they are to address the conditions of their oppression. Flanagan cites other examples of this in his book, focusing in particular on work done on the appropriate response to sexual violence.

Flanagan is not as dismissive of these two objections as he is of the others. He recognises the importance of responding to injustice and accepts that some anger (minimal though it may be) might be necessary for psychological healing. Nevertheless, he thinks there are good reasons to think that anger is less important than proponents of these critiques make out. He makes three points in response to them.

First, he reemphasises that embracing the Buddhist/Stoic view does not mean giving up on all passions or emotions. It means accentuating and encouraging useful emotions and discouraging and extirpating destructive ones. This is done not by denying feelings but by moderating the beliefs, norms and scripts associated with them. This is important because it means that embracing the Buddhists/Stoic view does not entail ignoring all instances of injustice and becoming a pushover. It just means responding to injustice in a different way. To illustrate the point, Flanagan discusses a thought experiment (first proposed by Martha Nussbaum and based on the life of Elie Wiesel) involving a soldier liberating a Nazi death camp. In Nussbaum’s original formulation the soldier experiences profound rage and anger at what has happened to the people in the death camp. Nussbaum argues that this is the appropriate and desirable response to the injustices that occurred. Flanagan replies by asking us to imagine that instead of experiencing rage and anger the soldier experiences profound sorrow and compassion for the victims of the Nazis. Would this be any less appropriate and desirable a response to the injustice? Flanagan argues that it would not.

Second, he argues that anger is clearly not necessary in order to recognise and respond to injustice. To illustrate this he turns to his favoured example of the restoration movement in post-apartheid South Africa. The leaders of this movement did not deny that people felt angry at what happened, but they did work hard to ensure that anger did not play a “pivotal or sustaining role” in seeking truth and reconciliation. They saw that anger could be destructive and that there was a need to ‘let go’ of anger if the society was to heal and move forward.

Third, and specifically in response to the catharsis objection, Flanagan argues that expressing an emotion such as anger often has a psychologically destructive effect, not a healing effect. The intuition underlying the catharsis objection is that anger is something that builds up inside us and needs to be released. Once it is released we return to a more normal, less angry state. The problem is that whenever this has been tested in practice, the opposite result is usually found to occur. In other words, instead of releasing and reducing anger, the expression of anger often just begets more anger (for a review of the main studies done to date, see here). This suggests that if we want to avoid destructive cycles of anger, we should avoid too much catharsis.


5. Conclusion
To sum up, Flanagan argues that we should consider shifting our moral equilibrium. Instead of viewing righteous anger as morally necessary and occasionally positive, we should see it as potentially destructive and counter-productive. We should shift to a more Buddhist/Stoic approach to anger.

To repeat what I said above, I think Flanagan’s argument about anger is not just interesting in and of itself, but also interesting from a methodological perspective. Trying to achieve super-wide equilibrium between different moral traditions can open up the moral possibility space. Doing this allows us to imagine new moral realities.


* Yes, of course, Stoicism is a Western tradition and so it is wrong to suppose that there is a single, unchallenged Western view of anger. Flanagan focuses on what he takes to be the dominant view within the Western liberal tradition.


Saturday, November 16, 2019

The Future of Automation? Video Interview about Automation and Utopia


I recently sat down and did a video chat with Adam Ford about my new book Automation and Utopia. I've talked to Adam several times over the years. He runs a great YouTube channel where he interviews pretty much every leading figure in futurism and transhumanism. I highly recommend checking it out.

This conversation, unlike some of the others I have done about the book, focuses mainly on the automation of work. Will it happen? What's different about the current wave of automation compared to previous waves? What will be the consequences of widespread automation? We also talk about Nozick's experience machine argument towards the end.

If you are interested in the book, you might consider buying a copy or recommending it to a library (etc). If you have already read it, you might consider reviewing it, mentioning it online, or recommending it to friends or colleagues.




Wednesday, November 13, 2019

Mass Surveillance, Artificial Intelligence and New Legal Challenges



[This is the text of a talk I gave to the Irish Law Reform Commission Annual Conference in Dublin on the 13th of November 2019]


In the mid-19th century, a set of laws was created to address the menace that newly-invented automobiles and locomotives posed to other road users. One of the first such laws was the English Locomotive Act 1865, which subsequently became known as the ‘Red Flag Act’. Under this act, any user of a self-propelled vehicle had to ensure that at least two people were employed to manage the vehicle and that one of these persons:

“while any locomotive is in motion, shall precede such locomotive on foot by not less than sixty yards, and shall carry a red flag constantly displayed, and shall warn the riders and drivers of horses of the approach of such locomotives…”

The motive behind this law was commendable. Automobiles did pose a new threat to other, more vulnerable, road users. But to modern eyes the law was also, clearly, ridiculous. To suggest that every car should be preceded by a pedestrian waving a red flag would seem to defeat the point of having a car: the whole idea is that it is faster and more efficient than walking. The ridiculous nature of the law eventually became apparent to its creators and all such laws were repealed in the 1890s, approximately 30 years after their introduction.[1]

The story of the Red Flag laws shows that legal systems often get new and emerging technologies badly wrong. By focusing on the obvious or immediate risks, the law can neglect the long-term benefits and costs.

I mention all this by way of warning. As I understand it, it has been over 20 years since the Law Reform Commission considered the legal challenges around privacy and surveillance. A lot has happened in the intervening decades. My goal in this talk is to give some sense of where we are now and what issues may need to be addressed over the coming years. In doing this, I hope not to forget the lesson of the Red Flag laws.


1. What’s changed? 
 Let me start with the obvious question. What has changed, technologically speaking, since the LRC last considered issues around privacy and surveillance? Two things stand out.

First, we have entered an era of mass surveillance. The proliferation of digital devices — laptops, computers, tablets, smart phones, smart watches, smart cars, smart fridges, smart thermostats and so forth — combined with increased internet connectivity has resulted in a world in which we are all now monitored and recorded every minute of every day of our lives. The cheapness and ubiquity of data collecting devices means that it is now, in principle, possible to imbue every object, animal and person with some data-monitoring technology. The result is what some scholars refer to as the ‘internet of everything’ and with it the possibility of a perfect ‘digital panopticon’. This era of mass surveillance puts increased pressure on privacy and, at least within the EU, has prompted significant legislative intervention in the form of the GDPR.

Second, we have created technologies that can take advantage of all the data that is being collected. To state the obvious: data alone is not enough. As all lawyers know, it is easy to befuddle the opposition in a complex lawsuit by ‘dumping’ a lot of data on them during discovery. They drown in the resultant sea of information. It is what we do with the data that really matters. In this respect, it is the marriage of mass surveillance with new kinds of artificial intelligence that creates the new legal challenges that we must now tackle with some urgency.

Artificial intelligence allows us to do three important things with the vast quantities of data that are now being collected:


  • (i) It enables new kinds of pattern matching - what I mean here is that AI systems can spot patterns in data that were historically difficult for computer systems to spot (e.g. image or voice recognition), and that may also be difficult, if not impossible, for humans to spot due to their complexity. To put it another way, AI allows us to understand data in new ways.
  • (ii) It enables the creation of new kinds of informational product - what I mean here is that the AI systems don’t simply rebroadcast dispassionate and objective forms of the data we collect. They actively construct and reshape the data into artifacts that can be more or less useful to humans.
  • (iii) It enables new kinds of action and behaviour - what I mean here is that the informational products created by these AI systems are not simply inert artifacts that we observe with bemused detachment. They are prompts to change and alter human behaviour and decision-making.


On top of all this, these AI systems do these things with increasing autonomy (or, less controversially, automation). Although humans do assist the AI systems in understanding, constructing and acting on foot of the data being collected, advances in AI and robotics make it increasingly possible for machines to do things without direct human assistance or intervention.



It is these ways of using data, coupled with increasing automation, that I believe give rise to the new legal challenges. It is impossible for me to cover all of these challenges in this talk. So what I will do instead is to discuss three case studies that I think are indicative of the kinds of challenges that need to be addressed, and that correspond to the three things we can now do with the data that we are collecting.


2. Case Study: Facial Recognition Technology
The first case study has to do with facial recognition technology. This is an excellent example of how AI can understand data in new ways. Facial recognition technology is essentially like fingerprinting for the face. From a selection of images, an algorithm can construct a unique mathematical model of your facial features, which can then be used to track and trace your identity across numerous locations.
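To make the ‘fingerprinting for the face’ analogy concrete, here is a simplified sketch (my own illustration, not a description of any particular deployed system). A trained model maps each face image to a numerical vector, and identification then reduces to a threshold comparison between vectors. The embedding model itself is assumed rather than shown; the toy vectors below merely stand in for its output.

```python
# A toy illustration of the matching step: identities are compared as numerical
# vectors ('embeddings'). Real systems obtain these vectors from a trained neural
# network applied to face images; here random vectors stand in for that output.

import numpy as np
from typing import Dict, Optional

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embeddings (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, gallery: Dict[str, np.ndarray], threshold: float = 0.8) -> Optional[str]:
    """Return the best-matching identity in the gallery, or None if nothing clears the threshold.

    The threshold is where error-proneness creeps in: too loose and strangers get
    'matched'; too strict and genuine matches are missed.
    """
    best_name, best_score = None, -1.0
    for name, reference in gallery.items():
        score = cosine_similarity(probe, reference)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gallery = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
    probe = gallery["alice"] + rng.normal(scale=0.05, size=128)  # a different photo of 'alice'
    print(identify(probe, gallery))  # expected: alice
```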

The potential conveniences of this technology are considerable: faster security clearance at airports; an easy way to record and confirm attendance in schools; an end to complex passwords when accessing and using your digital services; a way for security services to track and identify criminals; a tool for locating missing persons and finding old friends. Little surprise then that many of us have already welcomed the technology into our lives. It is now the default security setting on the current generation of smartphones. It is also being trialled at airports (including Dublin Airport),[2] train stations and public squares around the world. It is cheap and easily plugged into existing CCTV surveillance systems. It can also take advantage of the vast databases of facial images collected by governments and social media companies.

Despite its advantages, facial recognition technology also poses a significant number of risks. It enables and normalises blanket surveillance of individuals across numerous environments. This makes it the perfect tool for oppressive governments and manipulative corporations. Our faces are one of our most unique and important features, central to our sense of who we are and how we relate to each other — think of the Beatles’ immortal line ‘Eleanor Rigby puts on the face that she keeps in the jar by the door’. Facial recognition technology captures this unique feature and turns it into a digital product that can be copied and traded, and used for marketing, intimidation and harassment.

Consider, for example, the unintended consequences of the FindFace app that was released in Russia in 2016. Intended by its creators to be a way of making new friends, the FindFace app matched images on your phone with images in social media databases, thus allowing you to identify people you may have met but whose names you cannot remember. Suppose you met someone at a party, took a picture together with them, but then didn’t get their name. FindFace allows you to use the photo to trace their real identity.[3] What a wonderful idea, right? Now you need never miss out on an opportunity for friendship because of oversight or poor memory. Well, as you might imagine, the app also has a dark side. It turns out to be the perfect technology for stalkers, harassers and doxxers (internet slang for those who want to out people’s real-world identities). Anyone who is trying to hide or obscure their identity can now be traced and tracked by anyone who happens to take a photograph of them.

What’s more, facial recognition technology is not perfect. It has been shown to be less reliable when dealing with non-white faces, and there are several documented cases in which it matches the wrong faces, thus wrongly assuming someone is a criminal when they are not. For example, many US drivers have had their licences cancelled because an algorithm has found two faces on a licence database to be suspiciously similar and has then wrongly assumed the people in question to be using a false identity. In another famous illustration of the problem, 28 members of the US congress (most of them members of racial minorities), were falsely matched with criminal mugshots using facial recognition technology created by Amazon.[4] As some researchers have put it, the widespread and indiscriminate use of facial recognition means that we are all now part of a perpetual line-up that is both biased and error prone.[5] The conveniences of facial recognition thus come at a price, one that often only becomes apparent when something goes wrong, and is more costly for some social groups than others.

What should be done about this from a legal perspective? The obvious answer is to carefully regulate the technology to manage its risks and opportunities. This is, in a sense, what is already being done under the GDPR. Article 9 of the GDPR stipulates that facial recognition data is a kind of biometric data that is subject to special protections. The default position is that it should not be collected, but this is subject to a long list of qualifications and exceptions. It is, for example, permissible to collect it if the data has already been made public, if you get the explicit consent of the person, if it serves some legitimate public interest, if it is medically necessary or necessary for public health reasons, if it is necessary to protect other rights and so on. Clearly the GDPR does restrict facial recognition in some ways. In a recent Swedish case, a school was fined for the indiscriminate use of facial recognition for attendance monitoring.[6] Nevertheless, the long list of exceptions makes the widespread use of facial recognition not just a possibility but a likelihood. This is something the EU is aware of, and in light of the Swedish case it has signalled an intention to introduce stricter regulation of facial recognition.

This is something we in Ireland should also be considering. The GDPR allows states to introduce stricter protections against certain kinds of data collection. And, according to some privacy scholars, we need the strictest possible protections to save us from the depredations of facial recognition. Woodrow Hartzog, one of the foremost privacy scholars in the US, and Evan Selinger, a philosopher specialising in the ethics of technology, have recently argued that facial recognition technology must be banned. As they put it (somewhat alarmingly):[7]

“The future of human flourishing depends upon facial recognition technology being banned before the systems become too entrenched in our lives. Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.”

They caution against the belief that the technology can be procedurally regulated, arguing that governmental and commercial interests will always lobby for the expansion of the technology beyond its initially prescribed remit. They also argue that attempts at informed consent will be (and already are) a ‘spectacular failure’ because people don’t understand what they are consenting to when they give away their facial fingerprint.

Some people might find this call for a categorical ban extreme, unnecessary and impractical: why throw the baby out with the bathwater, and so on. But I would like to suggest that there is something here worth taking seriously, particularly since facial recognition is just the tip of the iceberg of sensitive data collection. People are already experimenting with emotion recognition technology, which uses facial images to make real-time predictions about people’s emotional states and likely behaviour, and there are many other kinds of sensitive data being collected, digitised and traded. Genetic data is perhaps the most obvious example. Given that data is what fuels the fire of AI, perhaps we should consider cutting off some of the fuel supply entirely.


3. Case Study: Deepfakes
Let me move on to my second case study. This one has to do with how AI is used to create new informational products from data. As an illustration of this I will focus on so-called ‘deepfake’ technology. This is a machine learning technique that allows you to construct realistic synthetic media from databases of images and audio files. The most prevalent use of deepfakes is, perhaps unsurprisingly, in the world of pornography, where the faces of famous actors have been repeatedly grafted onto porn videos. This is disturbing and makes deepfakes an ideal technology for ‘synthetic’ revenge porn.

Perhaps more socially significant, however, are the potential political uses of deepfake technology. In 2017, a team of researchers at the University of Washington created a series of deepfake videos of Barack Obama, which I will now play for you.[8] The images in these videos are artificial. They haven’t been edited together from different clips. They have been synthetically constructed by an algorithm from a database of audiovisual materials. Obviously, the video isn’t entirely convincing. If you look and listen closely you can see that there is something stilted and artificial about it. In addition, it relies on pre-recorded audio clips to which the synthetic video is lip-synced. Nevertheless, if you weren’t looking too closely, you might be convinced it was real. Furthermore, there are other teams working on using the same basic technique to create synthetic audio too. So, as the technology improves, it could become very difficult for even the most discerning viewers to tell the difference between fiction and reality.

Now there is nothing new about synthetic media. With the support of the New Zealand Law Foundation, Tom Barraclough and Curtis Barnes have published one of the most detailed investigations into the legal policy implications of deepfake technology.[9] In their report, they highlight the fact that an awful lot of existing audiovisual media is synthetic: it is all processed, manipulated and edited to some degree. There is also a long history of creating artistic and satirical synthetic representations of political and public figures. Think, for example, of the caricatures in Punch magazine or in the puppet show Spitting Image. Many people who use deepfake technology to create synthetic media will, no doubt, claim a legitimate purpose in doing so. They will say they are engaging in legitimate satire or critique, or producing works of artistic significance.

Nevertheless, there does seem to be something worrying about deepfake technology. The highly realistic nature of the audiovisual material being created makes it the ideal vehicle for harassment, manipulation, defamation, forgery and fraud. Furthermore, the realism of the resultant material also poses significant epistemic challenges for society. The philosopher Regina Rini captures this problem well. She argues that deepfake technology poses a threat to our society’s ‘epistemic backstop’. What she means is that as a society we are highly reliant on testimony from others to get by. We rely on it for news and information, we use it to form expectations about the world and build trust in others. But we know that testimony is not always reliable. Sometimes people will lie to us; sometimes they will forget what really happened. Audiovisual recordings provide an important check on potentially misleading forms of testimony. They encourage honesty and competence. As Rini puts it:[10]

“The availability of recordings undergirds the norms of testimonial practice…Our awareness of the possibility of being recorded provides a quasi-independent check on reckless testifying, thereby strengthening the reasonability of relying on the words of others. Recordings do this in two distinctive ways: actively correcting errors in past testimony and passively regulating ongoing testimonial practices.”

The problem with deepfake technology is that it undermines this function. Audiovisual recordings can no longer provide the epistemic backstop that keeps us honest.

What does this mean for the law? I am not overly concerned about the impact of deepfake technology on legal evidence-gathering practices. The legal system, with its insistence on ‘chain of custody’ and testimonial verification of audiovisual materials, is perhaps better placed than most institutions to deal with the threat of deepfakes (though there will be an increased need for forensic experts who can identify deepfake recordings in court proceedings). What I am more concerned about is how deepfake technologies will be weaponised to harm and intimidate others — particularly members of vulnerable populations. The question is whether anything can be done to provide legal redress for these problems. As Barraclough and Barnes point out in their report, it is exceptionally difficult to legislate in this area. How do you define the difference between real and synthetic media (if you can at all)? How do you balance free speech rights against the potential harms to others? Do we need specialised laws, or are existing laws on defamation and fraud (say) up to the task? Furthermore, given that deepfakes can be created and distributed by unknown actors, against whom would the potential cause of action lie?

These are difficult questions to answer. The one concrete suggestion I would make is that any existing or proposed legislation on ‘revenge porn’ should be modified so that it explicitly covers the possibility of synthetic revenge porn. Ireland is currently in the midst of legislating against the nonconsensual sharing of ‘intimate images’ in the Harassment, Harmful Communications and Related Offences Bill. I note that the current wording of the offence in section 4 of the Bill covers images that have been ‘altered’, but someone might argue that synthetically constructed images are not, strictly speaking, altered. There may be plans to change the wording to cover this possibility — I know that consultations and amendments to the Bill are ongoing[11] — but if there aren’t, then I suggest that there should be.

To reiterate, I am using deepfake technology as an illustration of a more general problem. There are many other ways in which the combination of data and AI can be used to blur the distinction between fact and fiction. The algorithmic curation and promotion of fake news, for example, or the use of virtual and augmented reality to manipulate our perception of public and private spaces, both pose significant threats to property rights, privacy rights and political rights. We need to do something to legally manage this brave new (technologically constructed) world.



4. Case Study: Algorithmic Risk Prediction
Let me turn now to my final case study. This one has to do with how data can be used to prompt new actions and behaviours in the world. For this case study, I will look to the world of algorithmic risk prediction. This is where we take a collection of datapoints concerning an individual’s behaviour and lifestyle and feed them into an algorithm that makes predictions about the individual’s likely future behaviour. This is a long-standing practice in insurance, and it is now being used in credit decisions, tax auditing, child protection, and criminal justice (to name but a few examples). I’ll focus on its use in criminal justice for illustrative purposes.

Specifically, I will focus on the debate surrounding the COMPAS algorithm, which has been used in a number of US states. The COMPAS algorithm (created by a company called Northpointe, now called Equivant) uses datapoints to generate a recidivism risk score for criminal defendants. The datapoints include things like the person’s age at arrest, their prior arrest/conviction record, the number of family members who have been arrested/convicted, their address, their education and job, and so on. These are weighted together by an algorithm to generate a risk score. The exact weighting procedure is unclear, since the COMPAS algorithm is a proprietary technology, but the company that created it has released a considerable amount of information about the datapoints it uses into the public domain.
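Because the exact weighting procedure is proprietary, the most I can offer is a generic sketch of how datapoint-weighting schemes of this kind usually work: numeric features are multiplied by weights, summed, and squashed into a score, which is then thresholded into a risk bucket. The features, weights, cutoff and function names below are invented for illustration and bear no relation to COMPAS’s actual model.

```python
import math

# Purely illustrative weights and features -- NOT the (proprietary) COMPAS model.
WEIGHTS = {
    "age_at_arrest": -0.03,      # in this toy model, older defendants score lower
    "prior_convictions": 0.25,
    "family_arrests": 0.10,
    "unemployed": 0.40,          # 1 if unemployed, 0 otherwise
}
INTERCEPT = -1.0

def risk_score(defendant: dict) -> float:
    """Combine the weighted datapoints and squash the result into a 0-1 score."""
    z = INTERCEPT + sum(weight * defendant[feature] for feature, weight in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

def risk_bucket(defendant: dict, cutoff: float = 0.5) -> str:
    return "high risk" if risk_score(defendant) >= cutoff else "low risk"

example = {"age_at_arrest": 23, "prior_convictions": 2, "family_arrests": 1, "unemployed": 1}
print(round(risk_score(example), 2), risk_bucket(example))  # e.g. 0.33 low risk
```

The point of the sketch is simply that everything of consequence — which datapoints are included and how heavily each is weighted — is a design choice made by the vendor, and one that is hidden when the weights are kept secret.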

If you know anything about the COMPAS algorithm you will know that it has been controversial. The controversy stems from two features of how the algorithm works. First, the algorithm is relatively opaque. This is a problem because the fair administration of justice requires that legal decision-making be transparent and open to challenge. A defendant has a right to know how a tribunal or court arrived at its decision and to challenge or question its reasoning. If this information isn’t available — either because the algorithm is intrinsically opaque or because it has been intentionally rendered opaque for reasons of intellectual property — then this principle of fair administration is not being upheld. This was one of the grounds on which the use of the COMPAS algorithm was challenged in the Wisconsin case of State v Loomis.[12] In that case, the defendant, Loomis, challenged his sentencing decision on the basis that the trial court had relied on the COMPAS risk score in reaching its decision. His challenge was ultimately unsuccessful. The Wisconsin Supreme Court reasoned that the trial court had not relied solely on the COMPAS risk score in reaching its decision. The risk score was just one input into the court’s decision-making process, which was itself transparent and open to challenge. That said, the court did agree that judges should be wary when relying on such algorithms, and it held that warnings should be attached to the scores to highlight their limitations.



The second controversy associated with the COMPAS algorithm has to do with its apparent racial bias. To understand this controversy I need to say a little bit more about how the algorithm works. Very roughly, the COMPAS algorithm sorts defendants into two outcome ‘buckets’: a 'high risk' reoffender bucket or a 'low risk' reoffender bucket. A number of years back, a group of data journalists at ProPublica conducted an investigation into which kinds of defendants got sorted into those buckets. They discovered something disturbing. They found that the COMPAS algorithm was more likely to give black defendants a false positive high risk score and more likely to give white defendants a false negative low risk score. The exact figures are given in the table below. Put another way, the COMPAS algorithm tended to rate black defendants as being higher risk than they actually were, and white defendants as being lower risk than they actually were. This was despite the fact that the algorithm did not explicitly use race as a criterion in its risk scores.



Needless to say, the makers of the COMPAS algorithm were not happy about this finding. They defended their algorithm, arguing that it was in fact fair and non-discriminatory because it was well calibrated. In other words, they argued that it was equally accurate in scoring defendants, irrespective of their race: if it said a black defendant was high risk, it was right about 60% of the time, and if it said a white defendant was high risk, it was right about 60% of the time. This turns out to be true. The reason it doesn't immediately look equally accurate at first glance at the relevant figures is that there are far more black defendants than white defendants in the dataset -- an unfortunate feature of the US criminal justice system that is not caused by the algorithm but is, rather, something the algorithm has to work around.

So what is going on here? Is the algorithm fair or not? Here is where things get interesting. Several groups of mathematicians analysed this case and showed that the main problem is that the makers of COMPAS and the data journalists were working with different conceptions of fairness, and that these conceptions are fundamentally incompatible. This is something that can be formally proved. The clearest articulation of this proof can be found in a paper by Jon Kleinberg, Sendhil Mullainathan and Manish Raghavan.[13] To simplify their argument, there are two things you might want a fair decision algorithm to do: (i) you might want it to be well calibrated (i.e. equally accurate in its scoring irrespective of racial group); and (ii) you might want its errors to fall evenly across groups (e.g. no group should be disproportionately represented among the false positives or the false negatives). They proved that, except in two unusual cases, it is impossible to satisfy both criteria. The two unusual cases are when the algorithm is a 'perfect predictor' (i.e. it always gets things right) or when the base rates for the relevant populations are the same (e.g. black and white defendants reoffend at the same underlying rate). Since no algorithmic decision procedure is a perfect predictor, and since our world is full of base rate inequalities, no plausible real-world use of a predictive algorithm is likely to be perfectly fair and non-discriminatory. What's more, this is true of algorithmic risk prediction generally, not just of recidivism risk. For a non-mathematical illustration of the problem, I highly recommend a recent article in the MIT Technology Review, which includes a game, built around the COMPAS data, that illustrates the hard tradeoff between different conceptions of fairness.[14]
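The arithmetic behind this impossibility result can be seen in a small worked example. The sketch below assumes a score that is calibrated in the same way for two groups (a 'high risk' label is right 60% of the time, and 20% of those labelled 'low risk' go on to reoffend, in both groups) but where the groups have different underlying reoffending rates. All of the numbers are invented for illustration; they are not the real COMPAS or ProPublica figures.

```python
# Worked example of the calibration vs. error-rate tradeoff (invented numbers).
P_REOFFEND_GIVEN_HIGH = 0.6   # calibration: a 'high risk' label is right 60% of the time
P_REOFFEND_GIVEN_LOW = 0.2    # ...and 20% of those labelled 'low risk' reoffend anyway

def error_rates(base_rate):
    """Given a group's underlying reoffending rate, return the share labelled high
    risk, plus the false positive and false negative rates implied by the
    calibrated bucket accuracies above."""
    # base_rate = f_high * 0.6 + (1 - f_high) * 0.2  =>  solve for f_high
    f_high = (base_rate - P_REOFFEND_GIVEN_LOW) / (P_REOFFEND_GIVEN_HIGH - P_REOFFEND_GIVEN_LOW)
    fpr = f_high * (1 - P_REOFFEND_GIVEN_HIGH) / (1 - base_rate)   # labelled high, didn't reoffend
    fnr = (1 - f_high) * P_REOFFEND_GIVEN_LOW / base_rate          # labelled low, did reoffend
    return f_high, fpr, fnr

for group, base_rate in [("group A", 0.5), ("group B", 0.3)]:
    f_high, fpr, fnr = error_rates(base_rate)
    print(f"{group}: base rate {base_rate:.0%}, labelled high risk {f_high:.0%}, "
          f"false positives {fpr:.0%}, false negatives {fnr:.0%}")
# Identical calibration, different base rates => very different error rates per group
# (here roughly 60% vs 14% false positives). Only equal base rates or a perfect
# predictor would make the two fairness criteria line up.
```

In other words, once the underlying reoffending rates differ, a score that is equally well calibrated for both groups will necessarily distribute its mistakes unevenly between them, which is exactly the pattern ProPublica reported.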

What does all this mean for the law? Well, when it comes to the issue of transparency and challengeability, it is worth noting that the GDPR, in articles 13-15 and article 22, contains what some people refer to as a ‘right to explanation’. It states that, when automated decision procedures are used, people have a right to access meaningful information about the logic underlying those procedures. What this meaningful information looks like in practice is open to some interpretation, though there is now an increasing amount of guidance from national data protection authorities about what is expected.[15] But in some ways this misses the deeper point. Even if we make these procedures perfectly transparent and explainable, there remains the question of how we manage the hard tradeoff between different conceptions of fairness and non-discrimination. Our legal conceptions of fairness are multidimensional and require us to balance competing interests. When we rely on human decision-makers to determine what is fair, we accept that there will be some fudging and compromise involved. Right now, we let this fudging take place inside the minds of the human decision-makers, oftentimes without questioning it too much or making it explicit. The problem with algorithmic risk predictions is that they force us to make this fudging explicit and precise. We can no longer pretend that the decision has successfully balanced all the competing interests and demands. We have to pick and choose. Thus, in some ways, the real challenge with these systems is not that they are opaque and non-transparent but, rather, that when they are transparent they force us to make hard choices.

To some, this is the great advantage of algorithmic risk prediction. A paper by Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan and Cass Sunstein entitled ‘Discrimination in the Age of Algorithms’ makes this very case.[16] They argue that the real problem at the moment is that human decision-making is discriminatory, and that its discriminatory nature is often implicit and hidden from view. The widespread use of transparent algorithms will force it into the open, where it can be washed by the great disinfectant of sunlight. But I suspect others will be less sanguine about this new world of algorithmically mediated justice. They will argue that human-led decision-making, with its implicit fudging, is preferable, partly because it allows us to sustain the illusion of justice. Which world do we want to live in? The transparent and explicit world imagined by Kleinberg et al, or the murkier and more implicit world of human decision-making? This is also a key legal challenge for the modern age.


5. Conclusion
It’s time for me to wrap up. One lingering question you might have is whether any of the challenges outlined above are genuinely new. This is a topic worth debating. In one sense, there is nothing completely new about the challenges I have just discussed. We have been dealing with variations of them for as long as humans have lived in complex, literate societies. Nevertheless, there are some differences with the past. There are differences of scope and scale — mass surveillance and AI enable the collection of data at an unprecedented scale and its use on millions of people at the same time. There are differences of speed and individuation — AI systems can update their operating parameters in real time and in highly individualised ways. And, finally, there are crucial differences in the degree of autonomy with which these systems operate, which can lead to problems in how we assign legal responsibility and liability.



Endnotes

  • [1] I am indebted to Jacob Turner for drawing my attention to this story. He discusses it in his book Robot Rules: Regulating Artificial Intelligence (Palgrave Macmillan, 2018). This is probably the best currently available book about AI and law. 
  • [2] See https://www.irishtimes.com/business/technology/airport-facial-scanning-dystopian-nightmare-rebranded-as-travel-perk-1.3986321; and https://www.dublinairport.com/latest-news/2019/05/31/dublin-airport-participates-in-biometrics-trial 
  • [3] https://arstechnica.com/tech-policy/2016/04/facial-recognition-service-becomes-a-weapon-against-russian-porn-actresses/# 
  • [4] This was a stunt conducted by the ACLU. See here for the press release https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28 
  • [5] https://www.perpetuallineup.org/ 
  • [6] For the story, see here https://www.bbc.com/news/technology-49489154 
  • [7] Their original call for this can be found here: https://medium.com/s/story/facial-recognition-is-the-perfect-tool-for-oppression-bc2a08f0fe66 
  • [8] The video can be found here: https://www.youtube.com/watch?v=UCwbJxW-ZRg. For more information on the research, see: https://www.washington.edu/news/2017/07/11/lip-syncing-obama-new-tools-turn-audio-clips-into-realistic-video/; https://grail.cs.washington.edu/projects/AudioToObama/siggraph17_obama.pdf 
  • [9] The full report can be found here: https://static1.squarespace.com/static/5ca2c7abc2ff614d3d0f74b5/t/5ce26307ad4eec00016e423c/1558340402742/Perception+Inception+Report+EMBARGOED+TILL+21+May+2019.pdf 
  • [10] The paper currently exists in a draft form but can be found here: https://philpapers.org/rec/RINDAT 
  • [11] https://www.dccae.gov.ie/en-ie/communications/consultations/Pages/Regulation-of-Harmful-Online-Content-and-the-Implementation-of-the-revised-Audiovisual-Media-Services-Directive.aspx 
  • [12] For a summary of the judgment, see here: https://harvardlawreview.org/2017/03/state-v-loomis/ 
  • [13] “Inherent Tradeoffs in the Fair Determination of Risk Scores” - available here https://arxiv.org/abs/1609.05807 
  • [14] The article can be found at this link - https://www.technologyreview.com/s/613508/ai-fairer-than-judge-criminal-risk-assessment-algorithm/ 
  • [15] Casey et al, ‘Rethinking Explainable Machines’ - available here: https://scholarship.law.berkeley.edu/btlj/vol34/iss1/4/ 
  • [16] An open access version of the paper can be downloaded here https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3329669