Pages

Monday, December 30, 2019

Academic Publications 2019




Another year, another end of year review of academic productivity. As I noted in last year's entry, 2018 was the year in which modesty and self-deprecation were in vogue. I've seen less of that this year. The preference seems to be for people to announce, without noticeable shame, that they are 'thrilled' or 'humbled' to share their latest publications and related career successes.

As per usual, I try to sidestep these fashions and offer this list unapologetically for anyone who might care to read the things I have published over the past 12 months. You can access free versions of most publications (the book is the only exception) by clicking on the links provided.

The typical rules apply: I've only included items that were published for the first time in 2019. I've excluded journal articles that were previously published in an online only version and got bumped into an official journal issue this year. I've also excluded items that were accepted for publication in 2019 but haven't yet seen the light of day.


Books


Peer-reviewed Journals


Book Chapters





Friday, December 27, 2019

Some recent media and podcasts



Regular readers will know that I have been shilling for my book Automation and Utopia for the past couple of months. In that vein, I did two recent podcasts on the book and related topics.


  • The first was on Mike Hagan's 'Radio Orbit' show. This was a fun and wide-ranging interview. It was recorded via phone so my voice is a bit muffled but overall it's probably one of my better interview performances. You can download the episode here.

  • The second was on Matt Ward's 'The Disruptors' podcast. This one focuses a lot on the likelihood of automation in the workplace and Matt plays a good devil's advocate on some of my claims. You can listen to it here or watch a video version (which I was not aware was being recorded) here.

This is a bit more out of date but my lecture 'Mass Surveillance, Artificial Intelligence and New Legal Challenges' was featured in a couple of news stories in Ireland, if you are interested. Here's one report from The Irish Times and another from The Journal.ie. Unrelated to this, I was also briefly quoted in this story about the ethics (and law) of people creating 3D avatars of celebs and exes for sexual purposes.

Finally, for some unknown reason, I was featured on this list of 30 people to follow in Europe on AI. I'm not sure what the methodology was but it is nice to be featured nonetheless.






Tuesday, December 17, 2019

67 - Rini on Deepfakes and the Epistemic Backstop

Regina Rini

In this episode I talk to Dr Regina Rini. Dr Rini currently teaches in the Philosophy Department at York University, Toronto, where she holds the Canada Research Chair in Philosophy of Moral and Social Cognition. She has a PhD from NYU and, before coming to York in 2017, was an Assistant Professor / Faculty Fellow at the NYU Center for Bioethics, a postdoctoral research fellow in philosophy at Oxford University and a junior research fellow of Jesus College, Oxford. We talk about the political and epistemological consequences of deepfakes. This is a fascinating and timely conversation.

You can download this episode here or listen below. You can also subscribe to the podcast on Apple Podcasts, Stitcher and a variety of other podcasting services (the RSS feed is here).


Show Notes

  • 0:00 - Introduction
  • 3:20 - What are deepfakes?
  • 7:35 - What is the academic justification for creating deepfakes (if any)?
  • 11:35 - The different uses of deepfakes: Porn versus Politics
  • 16:00 - The epistemic backstop and the role of audiovisual recordings
  • 22:50 - Two ways that recordings regulate our testimonial practices
  • 26:00 - But recordings aren't a window onto the truth, are they?
  • 34:34 - Is the Golden Age of recordings over?
  • 39:36 - Will the rise of deepfakes lead to the rise of epistemic elites?
  • 44:32 - How will deepfakes fuel political partisanship?
  • 50:28 - Deepfakes and the end of public reason
  • 54:15 - Is there something particularly disruptive about deepfakes?
  • 58:25 - What can be done to address the problem?
 

Relevant Links




Thursday, December 12, 2019

What causes moral change? Some reflections on Appiah's Honour Code


Chinese Foot Binding


Morality changes over time. Once upon a time, racism, sexism, and torture were widely practiced and, in some cases, celebrated. None of these practices has been completely eliminated, but there has been a significant change in our moral attitudes toward them. The vast majority of people now view them as unacceptable. What causes this kind of moral change?

In his book, The Honor Code, Kwame Anthony Appiah examines three historical moral revolutions (and one ongoing revolution) and comes up with an answer. He argues that changing perceptions of honour, as opposed to changes in moral belief, do most of the work. Indeed, he argues that in each of the three cases he examines, both moral argumentation and legal norms had already condemned the practices in question. The practices persisted in spite of this. It was only when they were perceived to be dishonourable that the moral revolutions really took effect.

I recently read (well, listened to) Appiah’s book. I found it a fascinating exploration of moral change, but I couldn’t figure out whether the central thesis was interesting or not. I couldn’t shake the sense that there was something trivial about it. In what follows, I want to bring some order to my thoughts and see whether my initial impression is wrong. Is there, in fact, something insightful about Appiah’s argument? I will offer an equivocal assessment.


1. Preliminary Thoughts about the Mechanics of Moral Change
Before I get into Appiah’s argument, I want to make a few general comments about the nature of moral change. Morality can be thought of as a system of propositions and imperatives. It consists of propositions describing the value of certain actions, events and states of affairs, e.g. “pleasure is good”, “pain is bad”, “friendship is good”, “torture is bad” and so forth. It consists of imperatives telling people to do or forbear from doing certain things, e.g. “don’t torture people”, “do give money to charity” and so forth.

The system of propositions and imperatives that constitute morality can be thought of in purely intellectual terms. That is to say, you might think of a moral system as something that is offered to us in order to garner our intellectual assent: we are asked to ‘believe’ in the propositions and ‘accept’ the imperatives, or not. That said, most people agree that a moral system ought to have some practical impact as well. If it is really a system of morality, it ought to present us with reasons for action and ought to change our actual behaviour. To put it more succinctly, most people think that morality is both an intellectual and practical affair.

What then is moral change? Presumably, moral change involves changes in the collection of propositions and imperatives to which we offer our intellectual assent, i.e. changes to what we believe is good and bad or right and wrong, as well as changes in our moral behaviour. Full moral change would require both; partial moral change would involve one or the other.

The critical question then is: what causes changes in the intellectual and practical aspects of morality? Why do people no longer believe that torture is morally acceptable? Why is the practice no longer so prevalent? Broadly speaking, there are two drivers of moral change: intellectual and material. Intellectual drivers of change are ideas or concepts that change how we think about the system of morality. Perhaps someone presents a really good argument for thinking that torture is not morally permissible and this leads us to change our minds about it. That would be an intellectual driver of change at work. Material drivers of change are changes to the material or technological conditions of existence that have implications for moral beliefs and practices. For example, technology that makes it easier to extract information from people without causing tremendous pain might reduce the incentive to use certain kinds of torture, which might in turn affect our moral beliefs and practices concerning the permissibility of torture. That would be a material driver of change at work.

The distinction between intellectual and material drivers of change is not, of course, sharp. There are probably cases in which it is difficult to decide whether a given driver counts as intellectual or material. This is particularly true if you are a reductive materialist or idealist who thinks there is no ultimate distinction between mind and matter.

If we ignore this philosophical complication, however, my guess would be that most episodes of moral change involve a combination of both intellectual and material drivers of change (operating in a complex feedback loop). For present purposes, I will largely ignore material drivers of change because they do not feature heavily in Appiah’s account (although they do lurk in the background). Instead, I will focus on different kinds of intellectual drivers of moral change. Appiah’s account, it turns out, focuses on a distinction between moral and non-moral intellectual drivers of change.

What is this distinction? In the example I just gave I assumed that the intellectual driver of moral change was itself part of the system of morality. But this need not always be the case. Non-moral ideas and incentives might also affect moral beliefs and practices. Consider the following example. Suppose one day I decide to read Peter Singer’s famous essay on famine and the duty to give more money to charities in the third world. I carefully consider his arguments and come to believe they are correct. The following day I radically change my moral practices and start giving more money to charity. In this case, we have an intellectual driver of change that is clearly moral in nature: I was persuaded by reading Singer’s moral arguments. Contrast that with the following case. One of my close friends is an avowed Singerite who routinely gives half his money to charity. I really like my friend. I like the people he hangs out with and would like to win his respect. Consequently, even though I haven’t read any of Singer’s moral arguments, I decide to give half my money to charity as well. In this case, we have a non-moral intellectual driver of change: I change my behaviour because I care about winning my friend’s respect.

The central thesis of Appiah’s book is that a particular kind of non-moral driver of intellectual change — our conception of honour — plays an outsized role in changing moral behaviour.


2. Honour Worlds and Moral Revolutions
What, then, is honour? Appiah has a somewhat intricate theory that underlies his book. It starts by stipulating that honour is a form of respect. You are honourable if you are respected by a relevant cohort of your peers; you are dishonourable if you are not. Following Stephen Darwall, Appiah goes on to suggest that there are two forms of respect:

Recognition Respect: The kind of respect that comes from being recognised as an equal member of a given social or cultural group. People with recognition respect ‘belong’ to their relevant social groups and hence have equal standing among their peers.

Appraisal Respect: The kind of respect that comes from being recognised as having superior capacities to one’s peers. People with appraisal respect are esteemed in the eyes of other members of their social group for their prowess, virtue, ability and so on.

Honour attaches to both kinds of respect but they are different in nature. Recognition respect is flat and egalitarian: once you have it, you have the same amount as everyone else. Appraisal respect is hierarchical and inegalitarian: you can be more or less esteemed, depending on your capacities. This is important because it means that recognition respect is non-competitive and non-zero-sum (everyone can have the same amount) whereas appraisal respect is highly competitive and zero-sum.

Appiah goes on to argue that each of us belongs to (or would like to belong to) an ‘honour world’. Honour worlds consist of people who share basic recognition respect and compete for appraisal respect. Honour worlds can come in a variety of shapes and sizes. A family, a tribe, a religion, a profession, or even a social class (e.g. the nobility or the working class) could constitute an honour world. What is crucial about honour worlds is that they are defined by an ‘honour code’, i.e. a set of rules or norms that tells members of the honour world what they must do to win or maintain the respect of their peers.

Honour worlds are not fixed or immutable. Their boundaries are always being contested. Some people find themselves excluded from an honour world and fight for inclusion. Some people find themselves being pushed out because they failed to live up to the honour code. Honour worlds can expand and contract, depending on the circumstances.

Honour codes are also not fixed or immutable. What you have to do to win the respect of your peers can change from time to time. Indeed, it is the very fact that honour codes can change — coupled with the fact that honour worlds can expand and contract — that is at the heart of Appiah’s argument. His claim is that changing conceptions of what you must do to win honour, along with changes in the structure of given honour worlds, lie at the heart of several important moral revolutions.





3. Appiah’s Three Moral Revolutions
Appiah focuses on three moral revolutions that took place over the course of the 19th and early 20th century. These revolutions were: the abolition of duelling amongst the British nobility; the end of foot-binding in China; and the end of slavery in the British empire. The detailed discussion of each revolution, its causes and its consequences, is the highlight of Appiah’s book. I learned a lot reading about each revolution. I won’t be able to do justice to the richness of Appiah’s discussion here. All I can do is summarise the key points, highlighting what Appiah sees as the central role that honour played in facilitating all three revolutions.

Let’s start with the example of duelling. Pistol duelling was once a popular way for members of the aristocracy to resolve disputes concerning honour. If a gentleman thought his honour was being impugned by another gentleman, he would challenge him to a duel. Appiah starts his discussion of duelling with the famous case of the Duke of Wellington and the Earl of Winchilsea. The Duke, who was at the time the Prime Minister of the UK, challenged Winchilsea to a duel because the latter wrote an article accusing Wellington of being dishonest when it came to Catholic emancipation. Appiah then documents the fascinating history of duelling among members of the aristocracy in England and France. It was not uncommon at the time for members of this social group to participate in duels. Indeed, there were thousands of documented cases in France and several previous British prime ministers and ministers of state, prior to Wellington, had participated in duels whilst in office. The practice continued despite the fact that (a) the official churches spoke out against it; (b) it was illegal; and (c) many Enlightenment intellectuals argued against it on moral grounds.

Appiah then wonders: why did duelling come to an end if (a), (b) and (c) weren't enough? His claim is that changing conceptions of honour played a key role. For starters, the practice itself was faintly ridiculous. There were lots of odd rules and habits in duelling that allowed you to get out of actually killing someone (neither Winchilsea nor Wellington was injured in their duel). This ridiculousness became much more apparent in the age of mass media, when reports of duels were widely circulated among the nobility and beyond. This also drew attention to the scandalous and hypocritical fact that the aristocracy were not abiding by the law. So long as duelling was primarily a game played by the aristocracy, unknown to the masses, it could be sustained as a system for maintaining honour. But when it was exposed and discussed in the era of emerging mass media, it was much more difficult to sustain. This changed the conception of what was honourable. The duel was no longer a way of sustaining and protecting honour; it was a way of looking ridiculous and hypocritical. In short, it became dishonourable.

The next example is that of foot-binding in China. This was the horrendously painful practice of tightly binding and, essentially, breaking women’s feet in order to change their shape (specifically so that they appeared to be smaller and pointier). Appiah explores some fascinating socio-cultural explanations of why this practice took root. It seems that foot-binding began as a class/status thing. It was upper class women (consorts and members of the imperial harem) who bound their feet. This may have been because foot-binding was a way to control the sexual fidelity of these women. It is difficult for women whose feet are bound to walk without assistance. Thus, one way for the emperor to ensure the sexual fidelity of his harem was to literally prevent them from walking around. Whatever the explanation, once it became established in the upper classes, the practice of foot-binding spread ‘downwards’. Appiah argues that this was because it was a way of signalling one’s membership in an honour world.

As with duelling, foot-binding was frequently criticised by intellectuals in China and was widely recognised as being painful. Nevertheless, it persisted for hundreds of years before quickly dropping out of style in the late-19th and early 20th century. Why? Appiah argues that it was due to the impact of globalisation and the changing perception of national honour. As industrialisation sped up, and the ships and armies of other countries arrived at their door, it became apparent to the Chinese elite that China was losing global influence to other countries — Britain, America and Japan being the obvious examples. These were all cultures that did not practice foot-binding. There was also, at the same time, an influx of Western religious missionaries to China, who were keen on changing the practice. They focused their efforts on the upper classes and tried to persuade them that there was something dishonourable about the practice. They argued it brought shame to the Chinese nation. These missionaries embedded themselves in Chinese culture, and succeeded in getting members of the Chinese nobility to accept that it would be dishonourable to bind the feet of their daughters and to marry their sons to a woman whose feet were bound. This led to a rapid decline in the practice and its eventual abolition. It was, consequently, changing perceptions of honour, particularly national honour, that did for foot-binding in China.

The final historical case study is the abolition of slavery in the British empire. This took place in the early part of the 19th century. I’ll be briefer with this example. Appiah argues that the moral revolution around slavery came in two distinct phases. The first took place largely among the nobility and upper middle class, where abolitionists argued that the practice brought shame on the British Empire. The second phase, which was possibly more interesting, took place among members of the working class. One of the distinctive features of slavery as a practice was that it signalled that certain people did not belong to an honour world: that they were not owed basic recognition respect. These people were slaves and one of the reasons they were denied recognition respect was because they were manual workers. There was, consequently, the tendency to assume that there was something dishonourable about manual work. This changed in the early 19th century because of the rise of the working class. As working class identity became a thing, and as members of the working class wanted to be recognised as honour-bearing citizens, they pushed for the abolition of slavery because it brought dishonour to the kind of work they did.

In addition to these three historical revolutions, Appiah also discusses one ongoing moral revolution: the revolution in relation to honour killing in Pakistan. In Pakistan, honour killing is illegal and is often condemned by religious authorities as being contrary to Islamic teachings. Despite this, the practice persists and politicians and authorities often turn a blind eye to it. Appiah argues that this is because of the honour code that exists in certain communities. In order for this to change there will need to be a revolution in how honour is understood in those communities. Appiah documents some of the efforts to do that in the book.


4. Is Appiah’s theory an interesting one?
That’s a quick overview of Appiah’s theory. Let’s now take stock. What is Appiah really saying? As I see it, his theory boils down to two main claims:

Claim 1: Changes to moral beliefs and practices (at least in the cases Appiah reviews) are primarily driven by changing perceptions and understandings of honour.

Claim 2: Honour is not necessarily moral in nature. That is to say, what people think is honourable is not necessarily the same thing as what they think is morally right.

Are these claims interesting? Do they tell us something significant about the nature of moral revolution? Let’s take each in turn.

Claim 1 strikes me as being relatively plausible but not exactly groundbreaking. All it really says is that one of the things we care most about is how we are perceived by our peer groups. We want them to like us and think well of us and so we tend to behave in ways that will raise us in their eyes. This is what honour and the honour code boil down to (particularly since Appiah defines honour in terms of recognition respect and appraisal respect).

I’m sure this is true. Humans are a social species and we care about our social position. One of the more provocative books of recent years is Kevin Simler and Robin Hanson’s book The Elephant in the Brain. In this book, Simler and Hanson argue that the majority of our behaviour can be understood as a form of social signalling. We do things not for their intrinsic merits nor for the reasons we often state but, rather, to send signals to our peers. Although Simler and Hanson push an extreme form of this social signalling model of human behaviour, I’m confident that signalling of this sort is a major driver of human moral behaviour. But does that make it an interesting idea? Not necessarily. For one thing, it may be the case that there are many other critical drivers of moral change that are not adequately covered by Appiah’s case studies. Honour may be the catalyst in his selected cases but other factors may be more important in other cases. For another thing, even if honour is the critical variable in some cases, we can’t do anything with this information unless we can say something useful about the kinds of things that honour tends to latch onto. Are there some universal or highly stable laws of what counts as honourable or is it all arbitrary?

This is where Claim 2 becomes important. This is, in many ways, the distinctive part of Appiah’s thesis: that honour is not necessarily moral in nature. Sometimes people have moralised honour codes — i.e. codes in which what is perceived to be honourable is also understood to be moral — but sometimes they don’t. Indeed, each of the three historical case studies illustrates this. In all three cases, moral arguments had already been marshalled against the practices of duelling, foot-binding and slavery. It was the recalcitrant honour code that was the impediment to moral change.

But let’s pause for a moment and think about this in more detail. When Appiah says that honour is not necessarily moralised, is he saying this from his own perspective — i.e. that of a 21st century outsider to the honour codes he is analysing — or from some other, universally objective stance where there is a single moral code that is open to both the insiders and outsiders to a given honour world? The answer could make a big difference. For Claim 2 to be true (and interesting) it would have to be the case that insiders to a given honour code know that their duties to their honour code are in conflict with their moral duties, and I’m just not sure that this is the case. I suspect many insiders to a given honour world think that their honour code is already moralised, i.e. that following the honour code means doing the morally right thing. I suspect there are also others who think they have conflicting moral duties: those specified by the honour code and those specified by some external source (e.g. the law). But that in itself doesn’t mean that the honour code is perceived by them to be amoral or immoral. Such conflicts of duty are a standard part of everyday morality anyway. Beyond that, I would guess that it is relatively rare for people to think that their honour code is completely immoral but that they are bound to follow it regardless.

All that said, Appiah’s theory might be interesting insofar as it gives us some guidance as to how we can change moral practices. Appiah suggests that moral criticism and argumentation by itself is going to be relatively ineffective; as is top-down legal and regulatory reform, particularly when it is pushed on an honour world from the outside. So if we find a set of beliefs and practices that are morally objectionable, but honourable, we should approach its reform in a different way. Specifically, we should try doing any of the following three things (alone or in combination): (a) make moral conduct honourable (i.e. moralise the honour code), or (b) become an insider to an existing honour code and show how, within the terms of that code, some given conduct is, in fact, dishonourable (e.g. Muslim critics of honour killing can show how the practice conflicts with the superior duty to the rules of Islam) or (c) expand the honour world (i.e. the group identity-circle of respect) to include those with a moralised honour code and then try to reform the honour code to match the moralised ideal (e.g. what happened in China with foot-binding).

This might, indeed, be sound advice. Arguing from an ivory tower is unlikely to start a moral revolution.




Monday, December 9, 2019

Ten Years of Philosophical Disquisitions




Once upon a time, I used to mark yearly anniversaries on this blog. I stopped doing that a few years ago but since today is the 10th anniversary of this blog I thought I should mark the occasion. For better or worse, this blog has been a major part of my life for the last 10 years. I have published over 1100 posts on it. (This was the first one). The blog itself has received just over 4 million pageviews. At the moment it is averaging about 70,000 pageviews per month. Given the way the internet works, I'm guessing about 90% of those pageviews are robots, but in light of my own stated philosophical views, I guess I shouldn't be too concerned about that!

As I have said before, I don't do any of this in the hope of getting readers. I do it mainly as an outlet for my own curiosity and understanding. That may well sound selfish, but I believe that if I didn't focus on the intrinsically fascinating nature of what I was reading and writing I wouldn't have sustained this for 10 years. Fame and fashion are, after all, fickle things.

That said, I do appreciate the fact that so many people seem to have derived some value from the things I have written on here. It amazes me that even one person has read it, never mind hundreds of thousands.

Anyway, in light of the occasion, here are the ten most popular posts from the past ten years:




The most popular post is the one on intoxicated consent to sexual relations. I guess that says something about what gets popular on the internet. One thing that I find interesting about this list is that the philosophy of religion doesn't feature much on it. This is despite the fact that the majority of the articles I wrote in the first few years were largely focused on that topic.





Friday, December 6, 2019

66 - Wong on Confucianism, Robots and Moral Deskilling

Pak-Hang Wong

In this episode I talk to Dr Pak-Hang Wong. Pak is a philosopher of technology and works on ethical and political issues of emerging technologies. He is currently a research associate at the Universität Hamburg. He received his PhD in Philosophy from the University of Twente in 2012, and then held academic positions in Oxford and Hong Kong. In 2017, he joined the Research Group for Ethics in Information Technology, at the Department of Informatics, Universität Hamburg. We talk about the robotic disruption of morality and how it affects our capacity to develop moral virtues. Pak argues for a distinctive Confucian approach to this topic and so provides something of a masterclass on Confucian virtue ethics in the course of our conversation.

You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).

Show Notes

  • 0:00 - Introduction
  • 2:56 - How do robots disrupt our moral lives?
  • 7:18 - Robots and Moral Deskilling
  • 12:52 - The Folk Model of Virtue Acquisition
  • 21:16 - The Confucian approach to Ethics
  • 24:28 - Confucianism versus the European approach
  • 29:05 - Confucianism and situationism
  • 34:00 - The Importance of Rituals
  • 39:39 - A Confucian Response to Moral Deskilling
  • 43:37 - Criticisms (moral silencing)
  • 46:48 - Generalising the Confucian approach
  • 50:00 - Do we need new Confucian rituals?

Relevant Links




Wednesday, December 4, 2019

Will we ever have fully autonomous vehicles? Some reasons for pessimism




What is the future of the automotive industry? If you’ve been paying attention over the past decade, you’ll know the answer: self-driving (a.k.a. autonomous) vehicles. Instead of relying on imperfect, biased, lazy and reckless human beings to get us from A to B, we will rely on sophisticated and efficient computer programs. This future may not be that far away. We already rely on computers to fly planes and drive trains. All we will be doing is extending our reliance on them to the roads and public highways.

There are, of course, some technical hurdles to overcome. The public highways are more unpredictable than the skies and railways. But impressive strides have been made with driverless technology in the recent past and it doesn’t seem implausible to think that it will become widespread within the next 10-15 years. Once it does, the benefits will be great, at least if you believe the hype: there will be fewer accidents and we will all have more time to focus on the things we love to do during our daily commutes, such as catching up on work or TV and posting to social media. There will also be other beneficial side effects. Less space will need to be allocated to carparks in our cities and towns, allowing us to create more pleasant urban living spaces; the traffic system might become more efficient and less crowded; there may even be a drop in light pollution.

Will any of this come to pass? In this article, I want to argue for a slightly unusual form of scepticism about the future of self-driving vehicles. This scepticism has two elements to it. First, I will argue that a combination of ethical, legal and strategic factors will encourage us not to make and market fully autonomous vehicles. Second, I will argue that despite this disincentive, many of us will, in fact, treat vehicles as effectively fully autonomous. This could be very bad for those of us expected to use such vehicles.

I develop this argument in three stages. I start with a quick overview of the six different ‘levels’ of automated driving that have been proposed by the Society of Automotive Engineers. Second, I argue that concerns about responsibility and liability ‘gaps’ may cause us to get stuck on the middle levels of automated driving. Third, and finally, I consider some of the consequences of this.


1. Getting Stuck: The Levels of Autonomous Driving
If you have spent any time reading up about autonomous vehicles you will be familiar with the ‘levels’ of autonomy framework. First proposed and endorsed by the Society of Automotive Engineers, the framework tries to distinguish between different types of vehicle autonomy. The diagram below illustrates the framework.



This framework has been explained to me in several different ways over the years. I think it is fair to say that nobody thinks the different levels are obvious and discrete categories. The assumption is that there is probably a continuum of possible vehicles ranging from the completely non-autonomous at one end of the spectrum* to the fully autonomous at the other. But it is hard for the human mind to grasp a smooth continuum of possibility and so it helps if we divide it up into discrete categories or, in this case, levels.

What of the levels themselves? The first level (so-called ‘Level 0’) covers all traditional vehicles: the ones where the human driver performs all the critical driving functions like steering, braking, accelerating, lane changing and so on. The second level (Level 1) covers vehicles with some driver assist technologies, e.g. enhanced or assisted braking and parking. Many of the cars we buy nowadays have such assistive features. Level 2 covers vehicles with some automated functions, e.g. automated steering, acceleration and lane changing, but in which the human driver is still expected to play an active supervisory and interventionist role. Tesla’s enhanced autopilot is often said to be an example of Level 2 automation. The contract Tesla users sign when they download the autopilot software stipulates that they must be alert and willing to take control at all times. Level 3 covers vehicles with more automated functionality than Level 2. It is sometimes said to involve ‘conditional autonomy’: the vehicle can do most things by itself, but a human is still expected to be an alert supervisor of the vehicle and has to intervene when requested to do so by the vehicle (usually when the vehicle encounters some situation involving uncertainty). Waymo’s vehicles are sometimes claimed to be Level 3 vehicles (though there is some dispute about this). Level 4 covers vehicles with the capacity for full automation, but with a residual role for human supervisors. Finally, Level 5 covers vehicles that are fully automated, with no role for human intervention.
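The levels, as characterised in the paragraph above, can be summarised in a small lookup table. This is a rough sketch for illustration only: the names and role descriptions are simplified paraphrases of the essay's glosses, not the SAE's official wording.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AutomationLevel:
    level: int
    name: str
    human_role: str  # simplified paraphrase, not official SAE wording

SAE_LEVELS = [
    AutomationLevel(0, "No automation", "performs all critical driving functions"),
    AutomationLevel(1, "Driver assistance", "drives, aided by assistive features"),
    AutomationLevel(2, "Partial automation", "actively supervises, ready to intervene at all times"),
    AutomationLevel(3, "Conditional automation", "alert supervisor, intervenes when requested"),
    AutomationLevel(4, "High automation", "residual supervisory role"),
    AutomationLevel(5, "Full automation", "none"),
]

def human_off_the_hook(level: int) -> bool:
    """Only at Level 5 is the human fully out of the loop."""
    return SAE_LEVELS[level].human_role == "none"
```

Laying it out this way makes the asymmetry obvious: five of the six levels assign the human some ongoing role, and only the last removes it entirely.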

The critical point is that all the levels of automation between 1 and 4 (and especially between 2 and 4) assume that there is an important role for human ‘drivers’ in the operation of autonomous vehicles. That is to say, short of Level 5 automation, humans are never ‘off the hook’ or ‘out of the loop’ when it comes to controlling the autonomous vehicle. They can never simply sit back and relax; they have to be alert to the possibility of taking control. This, in turn, means that every autonomous vehicle that falls short of Level 5 will have to include some facility or protocol for handing over control from the vehicle to the human user, at least in some cases.

While this ‘levels of automation’ model has been critiqued, it is useful for present purposes. It helps me to clarify my central thesis, which is that there are important ethical, legal and strategic reasons why we may never create Level 5 automation. This means we are likely to get stuck somewhere around Levels 3 and 4 (most likely Level 3), at least officially. Some people will say this is a good thing, because they think humans should exercise ‘meaningful control’ over autonomous driving systems. But I think it might be a bad thing, because people will tend to treat these vehicles as effectively fully autonomous.

Let me now explain why I think this is the case.


2. Why we might get stuck at Level 3 or 4
The argument for thinking that we might get stuck at level 3 or 4 is pretty straightforward and I am not the first to make it. In the debate about autonomous vehicles, one of the major ethical and legal concerns arising from their widespread deployment is that they might create responsibility or liability gaps. The existence, or even the perceived existence, of these gaps creates an incentive not to create fully autonomous vehicles.

Our current legal and ethical approach to driving assumes that, in almost all cases, the driver is responsible if something goes wrong. He or she can be held criminally liable for reckless or dangerous driving, and can be required to pay compensation to the victims of any crashes resulting from this. The latter is, of course, usually facilitated through a system of insurance, but, except in countries like New Zealand, the system of insurance still defaults to the assumption of individual driver responsibility. There are some exceptions to this. If there was a design defect in the car then liability may shift to the manufacturer, but it can be quite difficult to prove this in practice.

The widespread deployment of autonomous vehicles throws this existing system into disarray because it raises questions as to who or what is responsible in the event of an accident. Is the person sitting in the vehicle responsible if the autonomous driving program does something wrong? Presumably not, if they were not the ones driving the car at the time. This implies that the designers and manufacturers should be held responsible. But what if the defect in the driving program was not reasonably foreseeable or if it was acquired as a result of the learning algorithm used by the system? Would it be fair, just and reasonable to impose liability on the manufacturers in this case? Confusion as to where responsibility lies in such cases gives rise to worries about responsibility ‘gaps’.

There are all sorts of proposals to plug the gap. Some people think it is easy enough to ‘impute’ driverhood to the manufacturers or designers of the autonomous vehicle program. Jeffrey Gurney, for example, has made this argument. He points out that if a piece of software is driving the vehicle, it makes sense to treat it as the driver of the car. And since it is under the ultimate control of the manufacturer, it makes sense to impute driverhood to them, by proxy. What it doesn’t make sense to do, according to Gurney, is to treat the person sitting in the vehicle as the driver. They are really just a passenger. This proposal has the advantage of leaving much of the existing legal framework in place. Responsibility is still applied to the ‘driver’ of the vehicle; the driver just happens to no longer be sitting in the car.

There are other proposals too, of course. Some people argue that we should modify existing product liability laws to cover defects in the driving software. Some favour applying a social insurance model to cover compensation costs arising from accidents. Some like the idea of extending ‘strict liability’ rules to prevent manufacturers from absolving themselves of responsibility simply because something wasn’t reasonably foreseeable.

All these proposals have some merit but what is interesting about them is that (a) they assume that the responsibility ‘gap’ problem arises when the car is operating in autonomous mode (i.e. when the computer program is driving the car) and (b) that in such a case the most fair, just and reasonable thing to do is to apply liability to the manufacturers or designers of the vehicle. This, however, ignores the fact that most autonomous vehicles are not fully autonomous (i.e. not level 5 vehicles) and that manufacturers would have a strong incentive to push liability onto the user of the vehicle, if they could get away with it.

This is exactly what the existence of Levels 2 to 4 autonomous driving enables them to exploit. By designing vehicles in such a way that there is always some allowance for handover of control to a human driver, manufacturers can create systems that ‘push’ responsibility onto humans at critical junctures. To repeat the example already given, this is exactly what Tesla did when it initially rolled out its autopilot program: it required users to sign an agreement stating that they would remain alert and ready to take control at all times.

Furthermore, it’s not just the financial and legal incentives of the manufacturers that might favour this set-up. There are also practical reasons to favour this arrangement in the long run. It is a very difficult engineering challenge to create a fully autonomous road vehicle. The road environment is too unpredictable and messy. It’s much easier to create a system that can do some (perhaps even most) driving tasks but leave others to humans. Why go to the trouble of creating a fully autonomous Level 5 vehicle when it would be such a practical challenge and when there is little financial incentive for doing so? Similarly, it might even be the case that policy-makers and legal officials favour sticking with Levels 2 to 4. Allowing for handover to humans will enable much of the existing legal framework to remain in place, perhaps with some adjustments to product liability law to cover software defects. Drivers might also like this because it allows them to maintain some semblance of control over their vehicles.

That said, there are clearly better and worse ways to manage the handover from computer to human. One of the problems with the Tesla system was that it required constant vigilance and supervision, and potentially split-second handover to a human. This is tricky, since humans struggle to maintain concentration when supervising automated systems and may not be able to do anything useful with a split-second handover.

Some engineers refer to this as the ‘unsafe valley’ problem in the design of autonomous vehicles. In a recent paper on the topic, Frank Flemisch and his colleagues have proposed a way to get out of this unsafe valley by having a much slower and safer system of handover to a human. Roughly, they call for autonomous vehicles that handle the more predictable driving tasks (e.g. driving on a motorway), have a long lead-in time for warning humans when they need to take control of the vehicle, and go to a ‘safe state’ (e.g. slow down and pull in to the hard shoulder or lay-by) if the human does not heed these warnings.
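The slow, ‘safe state’ handover just described can be sketched as a simple state machine. The state names, the lead time, and the function are my own illustrative choices, not taken from Flemisch et al.'s paper.

```python
# A long lead-in before control must pass to the human (illustrative value).
WARNING_LEAD_TIME_S = 60

def handover_step(state: str, human_responded: bool, seconds_since_warning: int) -> str:
    """One transition of a sketched safe-handover state machine."""
    if state == "AUTONOMOUS":
        # The vehicle is leaving its predictable domain (e.g. exiting a
        # motorway), so it starts warning the human well in advance.
        return "WARNING"
    if state == "WARNING":
        if human_responded:
            return "HUMAN_CONTROL"
        if seconds_since_warning >= WARNING_LEAD_TIME_S:
            # The human never took over: slow down and pull in to a safe
            # state (e.g. the hard shoulder or a lay-by).
            return "SAFE_STATE"
        return "WARNING"
    return state  # HUMAN_CONTROL and SAFE_STATE are terminal in this sketch
```

The key design point is the final branch: the vehicle never forces a split-second handover, because failing to respond leads to a safe stop rather than to an unsupervised emergency.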

This model of autonomous driving is interesting. If it works, it could make Level 3-type systems much safer. But either way, the momentum seems to be building toward a world in which we never get fully autonomous vehicles. Instead, we get stuck somewhere in between.


3. The Consequences of Getting Stuck
Lots of people will be happy if we get stuck at Level 3 or 4. Getting stuck means that we retain some illusion of meaningful human control over these systems. Even if the motives for getting stuck are not entirely benevolent, it still means that we get some of the benefits of the technology while respecting the dignity and agency of the human beings who use these systems. Furthermore, even if we might prefer it if manufacturers took more responsibility for what happened with these systems, getting stuck at Level 3 or 4 means we still get to live in a world where some human is in charge. That sounds like a win-win.

But I’m a little more sceptical. I think getting stuck might turn out to be a bad thing. To make the case for this I will use the legal distinction between de jure and de facto realities. The de jure reality is what the law says should be the case; the de facto reality is what actually happens on the ground. For example, it might say in a statute somewhere that people who possess small quantities of recreational drugs are doing something illegal and ought to be sentenced to jail as a result. That’s the de jure reality. In practice, it might turn out that the legal authorities turn a blind eye to anyone that possesses a small quantity of such drugs. They don’t care because they have limited resources and bigger fish to fry. So the de facto reality is very different from the de jure reality.

I think a similar divergence between the official, legal reality and what’s happening on the ground might arise if we get stuck at Level 3 or 4. The official position of manufacturers might be that their vehicles are not fully autonomous and require human control in certain circumstances. And the official legal and policy position might be that fully autonomous vehicles cannot exist and that manufacturers have to create ‘safe’ handover systems to allow humans to take control of the vehicles when need be. But what will the reality be on the ground? We already know that drivers using Level 2 systems flout the official rules. They sit in the back seat or watch movies on their phones when they should be paying attention to what is happening (they do similar things in non-autonomous vehicles). Is this behaviour likely to stop in a world with safer handover systems? It’s hard to see why it would. So we might end up with a de facto reality in which users treat their vehicles as almost fully autonomous, and a de jure world in which this is not supposed to happen.

Here’s the crucial point: the users might be happy with this divergence between de facto and de jure reality. They might be happy to treat the systems as if they are fully autonomous because this gives them most of the benefits of the technology: their time and attention can be taken up by something else. And they might be happy to accept the official legal position because they don’t think they are likely to get into an accident that makes the official legal rules apply to them in a negative way. Many human drivers already behave this way. How many people reading this article have broken the speed limits while driving, or have driven while borderline over the legal limit for alcohol, or have driven when excessively tired? Officially, most drivers know that they shouldn’t do these things; in practice they do them anyway, because they doubt that they will suffer the consequences. The same might be true in the case of autonomous vehicles. Drivers might treat them as close to fully autonomous because the systems are safe enough to let them get away with this most of the time. They discount the possibility that something will go wrong. What we end up with, then, is a world in which we have an official illusion of ‘meaningful control’ that disadvantages the primary users of autonomous vehicles, but only when something goes wrong.

Of course, there is nothing inevitable about the scenario I am sketching. It might be possible to design autonomous driving systems so that it is practically impossible for humans to flout the official rules (e.g. perhaps facial recognition technology could be used to ensure humans are paying attention and some electric shock system could be used to wake them up if they are falling asleep). It might also be possible to enforce the official position in a punitive way that makes it very costly for human users to flout the official rules (though we have been down this path before with speeding and drink-driving laws). The problem with doing this, however, is that we have to walk a very fine line. If we go too far, we might make using an autonomous vehicle effectively the same as using a traditionally human-driven vehicle and thus prevent us from realising the alleged benefits of these systems. If we don’t go far enough, we don’t resolve the problem.
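The enforcement idea floated in the previous paragraph could take the shape of an escalating monitoring policy. The sketch below is entirely hypothetical: every threshold and label is invented for illustration, no real vehicle API is implied, and I have swapped the (facetious) electric shock for a haptic alert.

```python
def enforcement_action(seconds_inattentive: float) -> str:
    """Escalate the response the longer the driver has been inattentive.

    The input would come from some attention monitor (e.g. facial
    recognition); the thresholds here are arbitrary illustrative values.
    """
    if seconds_inattentive < 5:
        return "none"
    if seconds_inattentive < 15:
        return "audible_warning"
    if seconds_inattentive < 30:
        return "haptic_alert"    # e.g. seat vibration to rouse the driver
    return "pull_over_safely"    # mirrors the 'safe state' fallback
```

Even a toy policy like this illustrates the fine line mentioned above: set the thresholds too low and the vehicle nags constantly, erasing the benefits of automation; set them too high and the official rules go unenforced.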

Alternatively, we could embrace the idea of fully autonomous driving and try to remove the incentives that would keep us stuck at Level 3 or 4. I’m not sure which is the best outcome; there are tradeoffs inherent in both.

* Although I do have some qualms about referring to any car or automobile as non-autonomous since, presumably, at least some functions within the vehicle are autonomous. For example, many of the things that happen in the engine of my car happen without my direct supervision or control. Indeed, if you asked me, I wouldn’t even know how to supervise and control the engine.